zlacker

[parent] [thread] 50 comments
1. ramraj+(OP)[view] [source] 2022-12-12 04:20:38
It’ll be interesting if we soon come to a day when a comment can be suspected to be from a bot because it’s too coherent and smart!
replies(7): >>dang+m >>matthb+q2 >>jfoste+d6 >>ycombo+09 >>midori+0a >>TotoHo+8f >>baanda+H01
2. dang+m[view] [source] 2022-12-12 04:23:02
>>ramraj+(OP)
I agree, but in that case we can learn from the bots instead of wincing at regurgitated material.

Basically, if it improves thread quality, I'm for it, and if it degrades thread quality, we should throw the book at it. The nice thing about this position is that comment quality is a function of the comments themselves, and little else.

replies(4): >>andsoi+J8 >>smeagu+S8 >>UweSch+6o >>mdp202+Or
3. matthb+q2[view] [source] 2022-12-12 04:44:12
>>ramraj+(OP)
There is an xkcd comic about this (of course):

#810 Constructive: https://xkcd.com/810/

replies(2): >>ramraj+P7 >>Kim_Br+6d
4. jfoste+d6[view] [source] 2022-12-12 05:27:01
>>ramraj+(OP)
Seems this isn't a widely held opinion, but some of what I've seen from ChatGPT is already better than the typical non-LLM equivalents.
replies(1): >>jfoste+hI
◧◩
5. ramraj+P7[view] [source] [discussion] 2022-12-12 05:41:30
>>matthb+q2
Of course there is, but it’s definitely weird when the joke’s only funny when it isn’t easy to think of it as a real possibility!

In some ways this thread sounds like the real first step in the rise of true AI, in a weird, banal-encroachment kind of way.

replies(2): >>amirhi+Ab >>midori+kc
◧◩
6. andsoi+J8[view] [source] [discussion] 2022-12-12 05:50:34
>>dang+m
I suggest thinking about the purpose of discussion on HN.

There’s a tension between thread quality on the one hand and the process of humans debating and learning from each other on the other hand.

replies(8): >>burnis+Mb >>Aeolun+Bd >>CGames+Nd >>komali+Me >>ckastn+zf >>nextac+Io >>tkgall+Eq >>intere+141
◧◩
7. smeagu+S8[view] [source] [discussion] 2022-12-12 05:51:40
>>dang+m
What measure do you propose for thread quality?
8. ycombo+09[view] [source] 2022-12-12 05:52:55
>>ramraj+(OP)
We need a variant that knows to link to the Relevant XKCD.
replies(1): >>Kuraj+3e
9. midori+0a[view] [source] 2022-12-12 06:07:56
>>ramraj+(OP)
From the ChatGPT-generated stuff I've seen just in the last week, I think we're already there. Most humans these days are incredibly stupid.
replies(1): >>mschne+4b
◧◩
10. mschne+4b[view] [source] [discussion] 2022-12-12 06:18:16
>>midori+0a
I would rephrase that: humans are incredibly stupid most of the time. Only when they make diligent use of ‘System 2’ are they not.
◧◩◪
11. amirhi+Ab[view] [source] [discussion] 2022-12-12 06:22:20
>>ramraj+P7
I think it would be really interesting to see threads on Hackernews start with an AI digestion of the article and surrounding discussion. This could provide a helpful summary and context for readers, and also potentially highlight important points and counterpoints in the conversation. It would be a great way to use AI to enhance the user experience on the site.

I routinely use AI to help me communicate. Like Aaron to my Moses.

replies(1): >>execut+Sy
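
A minimal sketch of the thread-digest idea above, assuming the OpenAI Python client; fetch_article and fetch_comments are hypothetical helpers standing in for whatever the site already exposes:

    # Summarize an article plus its comment thread with an LLM (illustrative sketch).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def digest_thread(article_text: str, comments: list[str]) -> str:
        # Build one prompt containing the article and the discussion so far.
        prompt = (
            "Summarize the article below, then list the main points and "
            "counterpoints raised in the comments.\n\n"
            "ARTICLE:\n" + article_text + "\n\nCOMMENTS:\n" + "\n---\n".join(comments)
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any capable chat model would do
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Usage, with the hypothetical helpers:
    # print(digest_thread(fetch_article(item_id), fetch_comments(item_id)))
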
◧◩◪
12. burnis+Mb[view] [source] [discussion] 2022-12-12 06:24:37
>>andsoi+J8
I don’t think so; at least, I find that process to be very educational, especially when someone changes their mind or an otherwise strong argument gets an unusually compelling critique.

Basically I think those two things are synonymous.

◧◩◪
13. midori+kc[view] [source] [discussion] 2022-12-12 06:29:28
>>ramraj+P7
When I compare ChatGPT-generated comments to those written by real humans on most web forums, I could easily see myself preferring to interact only with AIs in the future rather than with humans, with whom I have to deal with all kinds of stupidity and bad, rude behavior.

The AIs aren’t going to take over by force; it’ll be because they’re just nicer to deal with than real humans. Before long, we’ll let AIs govern us, because the leaders we choose for ourselves (e.g. Trump) are so awful that it’ll be easier to compromise on an AI.

Before long, we'll all be happy to line up to get installed into Matrix pods.

replies(4): >>sudoma+vd >>beebee+1h >>lonely+kr >>goatlo+2w
◧◩
14. Kim_Br+6d[view] [source] [discussion] 2022-12-12 06:36:33
>>matthb+q2
There is, of course, the famous Alan Turing paper about this [1], which is becoming more relevant by the day.

Alan Turing's paper was quite forward-thinking. At the time, most people did not yet consider men and women to be equal (let alone homosexuals).

I don't think it is so important whether a comment is written by a man, a woman, a child, or a <machine>, or some combination thereof. What is important is that the comment stands on its own, and has merit.

Pseudonyms (accounts) do have a role to play here. On HN, an account can accrue reputation based on whether their past comments were good or bad. This can help rapidly filter out certain kinds of edge cases and/or bad actors.

A Minimum Required Change to policy might be: Accounts who regularly make false/incorrect comments may need to be downvoted/banned (more) aggressively, where previously we simply assumed they were making mistakes in good faith.

This is not to catch out bots per se, but rather to deal directly with new failure modes that they introduce. This particular approach also happens to be more powerful: it immediately deals with meatpuppets and other ancillary downsides.

We're in the middle of a bit of a revolution in AI right now. And we might come up with better ideas over time too. Possibly we need to revisit our position and adjust every 6 months, or even every 3 months.

[1] https://academic.oup.com/mind/article/LIX/236/433/986238?log...

replies(2): >>Kim_Br+5e >>wruza+tj
◧◩◪◨
15. sudoma+vd[view] [source] [discussion] 2022-12-12 06:41:21
>>midori+kc
I think it's important to remember that just because something is easier to deal with, it doesn't necessarily mean it's better. The fact that AIs may be more pleasant to interact with than some humans doesn't mean that they are better equipped to govern us. In fact, I would argue that it is precisely the challenges and difficulties that come with dealing with other humans that make us better, more resilient, and more capable as a society.
replies(1): >>ramraj+Kf
◧◩◪
16. Aeolun+Bd[view] [source] [discussion] 2022-12-12 06:42:25
>>andsoi+J8
If I can’t determine whether your comment is by a bot, does it make a difference? You are just a random name on the internet.
replies(1): >>Fridge+Km
◧◩◪
17. CGames+Nd[view] [source] [discussion] 2022-12-12 06:44:33
>>andsoi+J8
Intelligent debate can happen in high-quality threads. And when we are intelligently debating subjective matters, the debate is targeted towards the reader, not the opposing party. On the other hand, when we are debating objective matters, the debate leads to the parties learning from each other. So I don't think these things are opposites.
replies(1): >>swader+LP
◧◩
18. Kuraj+3e[view] [source] [discussion] 2022-12-12 06:46:43
>>ycombo+09
Please no. We can't allow this to be a slippery slope to what is happening on reddit.
◧◩◪
19. Kim_Br+5e[view] [source] [discussion] 2022-12-12 06:47:10
>>Kim_Br+6d
Note: Alan Turing's Imitation Game pretty much directly involves Men, Women, Machines, Teletypes.

These days, of course, we use such things as IRC clients, Discord, web browsers, etc., instead of teletypes. If you substitute in these modern technologies, the Imitation Game still applies to much online interaction today.

I've often applied the lessons gleaned from this to my own online interactions with other people. I don't think I ever quite imagined it might start applying directly to <machines>!

◧◩◪
20. komali+Me[view] [source] [discussion] 2022-12-12 06:53:55
>>andsoi+J8
I like thinking about the purpose, because I doubt there is a defined purpose right now. I have absolutely no idea why whoever hosts this site (ycombinator?) wants comments - if they're like reddit or twitter, though, it's to build a community and post history, because you can put that down as an asset and, idk, do money stuff with it. Count it in valuations and whatnot. And maybe do marketing and data mining. Or sell APIs. Stuff like that. So in this case, for the host, the "purpose" is "generate content that attracts more users to register and post, that is in a format that we can pitch as having Value to the people who decide valuations, or is in a format that we can pitch as having Value to the people who may want to pay for an API to access it, or is valuable for data mining, or, gives us enough information about the users that, combined with their contact info, functions as something we can sell for targeted ads."

For me the "purpose" of discussion on HN is to fill a dopamine addiction niche that I've closed off by blocking reddit, twitter, and youtube, and, to hone ideas I have against a more-educated-than-normal and partially misaligned-against-my-values audience (I love when the pot gets stirred with stuff we aren't supposed to talk about that much such as politics and political philosophy, though I try not to be the first one to stir), and occasionally to ask a question that I'd like answered or just see what other people think about something.

Do you think there's much "learning from each other" on HN? I'm skeptical that really happens much on the chat-internet outside of huge knowledge-swaps happening on stackoverflow. I typically see confident value statements: "that's why xyz sucks," "that's not how that works," "it wasn't xyz, it was zyx," etc. Are we all doing the "say something wrong on the internet to get more answers" thing to each other? What's the purpose of discussion on HN to you? Why are you here?

The purpose of my comment is I wanna see what other people think about my reasons for posting, whether others share them, maybe some thoughts on that weird dopamine hit some of us get from posting at each other, and see why others are here.

replies(1): >>prox+3j
21. TotoHo+8f[view] [source] 2022-12-12 06:58:53
>>ramraj+(OP)
“That’s too clever, you’re one of them!”

i.e. The Simpsons Already did it.

◧◩◪
22. ckastn+zf[view] [source] [discussion] 2022-12-12 07:05:13
>>andsoi+J8
I don't think that thread quality and the process of humans debating and learning from each other are opposing concepts.

On the contrary. It's precisely when people aren't willing to learn, or to debate respectfully and with an open mind, that thread quality deteriorates.

◧◩◪◨⬒
23. ramraj+Kf[view] [source] [discussion] 2022-12-12 07:07:22
>>sudoma+vd
Isn’t this true for pretty much all democracy too? Almost all elected politicians are not the best, just the easiest or most convenient for the majority to deal and agree with.
◧◩◪◨
24. beebee+1h[view] [source] [discussion] 2022-12-12 07:20:34
>>midori+kc
Thanks for deciding for us. I greatly appreciate people who overuse "we" when expressing their own thoughts.

I would take rude, well-intentioned jerks over kindly speaking devils seeking to deceive me. Have a good one in your pod, though.

◧◩◪◨
25. prox+3j[view] [source] [discussion] 2022-12-12 07:41:53
>>komali+Me
As someone who did a lot of debating in philosophy, I find most casual commenters hilariously bad at discussing anything. It’s like a wheel that wobbles on its axis and quickly comes off. It’s not always a bad thing; some threads are just that, casual.

If the purpose for you is a dopamine hit rather than true interest (exaggerating here), it might tune you out from the matter at hand.

For me it is the more eclectic crowd, with a host of opinions yet often still respectful, that I like. Most threads give insights that are lacking in more general, less well moderated places. You get more interesting, in-depth opinions and knowledge sharing, which is what makes HN great to me.

replies(1): >>komali+VJ
◧◩◪
26. wruza+tj[view] [source] [discussion] 2022-12-12 07:46:42
>>Kim_Br+6d
> I don't think it is so important whether a comment is written by a man, a woman, a child, or a <machine>, or some combination thereof. What is important is that the comment stands on its own, and has merit.

This feels wrong for a few reasons. The generalized knowledge that an AI can express may be useful. But if it makes things up convincingly, might the result of someone following its line of thought be worse for them? For all the shit humans say, it’s their real human experience, formulated through the prism of their mood, intelligence and other states and characteristics. It’s a reflection of a real world somewhere. AI statements in this sense are minced realities cooked into something that may only look like a solid one. Maybe for some communities it would be irrelevant, because participants are expected to judge logically and to check all facts, but it would require keeping that awareness up at all times.

By “real human” I don’t mean that they are better (or worse) in a discussion, only that I am a human too: a real experience is applicable to me in principle, and I could meet it IRL. The applicability of an AI’s experience has yet to be proven, if it makes sense at all.

replies(2): >>Kim_Br+mw >>magica+rR
◧◩◪◨
27. Fridge+Km[view] [source] [discussion] 2022-12-12 08:19:07
>>Aeolun+Bd
I mean, I’d certainly prefer to be engaged in conversation with actual humans, who have actual experience and motivation. If I want to talk to the latest iteration of the gpt-parrot-robot, I’ll go to the gpt site and talk to it there.
replies(2): >>fragme+Cq >>tobtah+IT2
◧◩
28. UweSch+6o[view] [source] [discussion] 2022-12-12 08:33:21
>>dang+m
Then humans might just be on the sidelines, watching chatbots flood the forums with superbly researched mini-whitepapers with links, reasoning, humour; a flow of comments optimized like TikTok videos, unbeatable like chess engines at chess. Those bots could also collude with complementary comments, and create a background noise of opinions to fake a certain sentiment in the community.

I have no suggestion or solution, I'm just trying to wrap my head around those possibilities.

replies(1): >>techdr+zw
◧◩◪
29. nextac+Io[view] [source] [discussion] 2022-12-12 08:38:06
>>andsoi+J8
There's the quality of the written commentary (which is all that matters for anyone only reading, never posting on HN) and the quality of the engagement of the people who do write comments (which includes how much they learned, the emotions they had, and other less tangible stuff).

I think HN is optimizing for the former quality aspects and not the latter. So in that sense, if you can't tell if it's written by a bot, does it matter? (cue Westworld https://www.youtube.com/watch?v=kaahx4hMxmw)

◧◩◪◨⬒
30. fragme+Cq[view] [source] [discussion] 2022-12-12 08:58:39
>>Fridge+Km
Watching others more creative than I am trick the bot into revealing its biases, despite being programmed not to, has been highly entertaining and lets me see some of the creativity of my fellow human beings, and it definitely exceeds that of a parrot. (Not to impugn how intelligent some parrots are, but they seem to have a much more limited vocabulary.) If a curious commenter is able to come up with actually interesting content, why does it matter if there was yet another program between what they typed and what you see?
◧◩◪
31. tkgall+Eq[view] [source] [discussion] 2022-12-12 08:58:43
>>andsoi+J8
There are many types of contributions to discussions on HN, of course. But I will tell you the contributions that resonate most with me: Personal experiences and anecdotes that illuminate the general issue being discussed. Sometimes a single post is enough for that illumination, and sometimes it is the sum of many such posts that sheds the brightest light.

An example of the latter: Since March 2020, there have been many, many discussions on HN about work-from-home versus work-at-office. I myself started working from home at the same time, and articles about working from home started to appear in the media around then, too. But my own experience was a sample of one, and many of the media articles seemed to be based on samples not much larger. It was thus difficult to judge which most people preferred, what the effects on overall productivity, family life, and mental health might be, how employers might respond when the pandemic cooled down, etc. The discussions on HN revealed better and more quickly what the range of experiences with WFH was, which types of people preferred it and which types didn’t, the possible advantages and disadvantages from the point of view of employers, etc.

In contrast, discussions that focus only on general principles—freedom of this versus freedom of that, foo rights versus bar obligations, crypto flim versus fiat flam—yield less of interest, at least to me.

That’s my personal experience and/or anecdote.

◧◩◪◨
32. lonely+kr[view] [source] [discussion] 2022-12-12 09:03:45
>>midori+kc
Unlikely; at least one of the Trumpbot/Bidenbot/Polibots will be against pod entry for whatever financial/religious/gut feeling/.. reasons they've been trained on.
◧◩
33. mdp202+Or[view] [source] [discussion] 2022-12-12 09:07:38
>>dang+m
> in that case we can learn from the bots

That is the whole purpose of AGI ;)

replies(1): >>ofrzet+ys
◧◩◪
34. ofrzet+ys[view] [source] [discussion] 2022-12-12 09:14:42
>>mdp202+Or
Oh yeah? So maybe you would like to be the object of supervised-by-AI learning? :-)
replies(1): >>mdp202+eu
◧◩◪◨
35. mdp202+eu[view] [source] [discussion] 2022-12-12 09:28:38
>>ofrzet+ys
> Oh yeah?

Yes, absolutely yes. We use a tool because it "does things better"; we consult the Intelligent because "it is a better input"; we strive towards AGI "to get a better insight".

> supervised

We are all inside an interaction of reciprocal learning, Ofrzeta :)

◧◩◪◨
36. goatlo+2w[view] [source] [discussion] 2022-12-12 09:47:20
>>midori+kc
Do you trust the companies training and running the AIs to happily guide you and society along?
◧◩◪◨
37. Kim_Br+mw[view] [source] [discussion] 2022-12-12 09:50:06
>>wruza+tj
Moderators need to put up with trolls and shills (and outright strange people) a lot of the time too. While so far AIs aren't always quite helpful, they also are not actively hostile.

So as far as the spectrum of things moderation needs to deal with goes, AI contribution to discussions doesn't seem to be the worst of problems, and it doesn't seem like it would be completely unmanageable.

But while AI may not be an unmitigated disaster, you are quite correct that unsupervised AI might not be an unmitigated boon yet either.

Currently if one does want to use an AI to help participate in discussions, I'd recommend one keep a very close eye on it to make sure the activity remains constructive. This seems like common courtesy and common sense at this time. (And accounts who act unwisely should be sanctioned.)

◧◩◪
38. techdr+zw[view] [source] [discussion] 2022-12-12 09:52:14
>>UweSch+6o
If there’s a bot that can take a topic and research the argument you feed it, all without hallucinating data or making up references… please, please point me to it.
replies(1): >>jacoop+tX
◧◩◪◨
39. execut+Sy[view] [source] [discussion] 2022-12-12 10:14:15
>>amirhi+Ab
This has already started happening at Wikipedia: https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2...
◧◩
40. jfoste+hI[view] [source] [discussion] 2022-12-12 11:37:26
>>jfoste+d6
An example: I've asked it for "package delivery notification" and it generally produces something that is a better email template than communications I've seen humans put together and have many long "review sessions" on. Potentially an incredible saving of time & effort.
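
A minimal sketch of reproducing that kind of request programmatically, assuming the OpenAI Python client rather than the ChatGPT web interface; the model name and prompt wording are illustrative assumptions:

    # Ask a chat model for a "package delivery notification" email template.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model would do
        messages=[{
            "role": "user",
            "content": "Write a short, friendly 'package delivery notification' "
                       "email template with placeholders for recipient name, "
                       "tracking number, and estimated delivery date.",
        }],
    )
    print(response.choices[0].message.content)
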
◧◩◪◨⬒
41. komali+VJ[view] [source] [discussion] 2022-12-12 11:49:47
>>prox+3j
You know, out of curiosity, I just now logged into reddit for the first time in a while and made some posts on /r/changemymind just to see if I could get some good debate, and I don't know if it was always like that over there and I just didn't realize it (bringing that type of rhetoric here might be why I'm rate limited on HN lol), or if it just got worse over the last year of my "reddit break," but holy shit is it WAY better over here. I was very skeptical when people described HN as "insightful" or "well moderated" or "in depth," but compared to other places on the internet it's certainly true.

Dare I venture back to 4chan and see how my detoxxed brain sees it now...

replies(1): >>bombca+3V
◧◩◪◨
42. swader+LP[view] [source] [discussion] 2022-12-12 12:38:48
>>CGames+Nd
I agree that intelligent debate can happen in high-quality threads, regardless of whether the topic being discussed is subjective or objective. However, I think it's important to note that the approach to debating subjective and objective matters may be different. When debating subjective matters, the focus is often on persuading the reader or audience, whereas when debating objective matters, the goal is often to arrive at the truth or the most accurate understanding of the topic at hand. In either case, engaging in intelligent debate can be a valuable way to learn and expand our understanding of the world.
◧◩◪◨
43. magica+rR[view] [source] [discussion] 2022-12-12 12:53:46
>>wruza+tj
> But if it makes things up convincingly, might the result of someone following its line of thought be worse for them?

How is this different than folks getting convinced by "media" people that mass shootings didn't happen, that 9/11 was an inside job or similar?

◧◩◪◨⬒⬓
44. bombca+3V[view] [source] [discussion] 2022-12-12 13:21:59
>>komali+VJ
My gauge is how predictable it is: I can predict how a Reddit thread will go 90% of the time, it seems, and maybe even a 4chan thread 80% of the time.

The value of a community is in the unpredictability and HN has a good percentage of that, and I can choose to ignore the threads that will be predictable (though it can be fun to read them sometimes).

replies(1): >>prox+rX
◧◩◪◨⬒⬓⬔
45. prox+rX[view] [source] [discussion] 2022-12-12 13:41:34
>>bombca+3V
That’s mostly the default subs tho, like worldnews, funny and so on. The first three comments are whatever the previous three comments were in the previous thread on the same topic. Subs like r/askhistorians have brutal moderation where the only parent comments are well-sourced, informed ones.

But in general I agree on its predictability.

replies(1): >>bombca+xY
◧◩◪◨
46. jacoop+tX[view] [source] [discussion] 2022-12-12 13:41:43
>>techdr+zw
Prompt: systemd is bad
replies(1): >>techdr+551
◧◩◪◨⬒⬓⬔⧯
47. bombca+xY[view] [source] [discussion] 2022-12-12 13:50:16
>>prox+rX
Yeah, the upvote/karma system in general lends itself to "easy" replies, which quickly become recycled jokes and memes.
48. baanda+H01[view] [source] 2022-12-12 14:06:00
>>ramraj+(OP)
At that point the whole concept of a message board with humans exchanging information is probably over.

I am ultimately motivated to read this site to read smart things and find something interesting. It is quite inefficient, though. This comment is great, but most comments are not what I am looking for.

If you could spend your time talking to Von Neumann about computing, the input from thousands of random people who know far less than Von Neumann would not be interesting at all.

◧◩◪
49. intere+141[view] [source] [discussion] 2022-12-12 14:26:40
>>andsoi+J8
Yeah. Overemphasis on wanting "smart, thoughtful comments" could create a chilling effect where people might refrain from asking simple questions or posting succinct (yet valuable!) responses. Sometimes dumb questions are okay (because it's all relative).
◧◩◪◨⬒
50. techdr+551[view] [source] [discussion] 2022-12-12 14:32:37
>>jacoop+tX
I mean… if the whole “the internet is dead” conspiracy were true, then all the Linux systemd debate for like the last 5 years was entirely generated by bots…

Not that it’s true. Cause I’d know if I was a bot… unless I was programmed not to notice ;-)

◧◩◪◨⬒
51. tobtah+IT2[view] [source] [discussion] 2022-12-12 23:26:53
>>Fridge+Km
ChatGPT has the potential to make online discussions more engaging and dynamic, for example by generating additional discussion prompts or questions to keep the conversation moving.