zlacker

[parent] [thread] 14 comments
1. matthb+(OP)[view] [source] 2022-12-12 04:44:12
There is an xkcd comic about this (of course):

#810 Constructive: https://xkcd.com/810/

replies(2): >>ramraj+p5 >>Kim_Br+Ga
2. ramraj+p5[view] [source] 2022-12-12 05:41:30
>>matthb+(OP)
Of course there is, but it’s definitely weird how the joke is only funny when it’s not easy to think of it as a real possibility!

In some ways this thread sounds like the real first step in the rise of true AI, in a weirdly banal, encroaching kind of way.

replies(2): >>amirhi+a9 >>midori+U9
3. amirhi+a9[view] [source] [discussion] 2022-12-12 06:22:20
>>ramraj+p5
I think it would be really interesting to see threads on Hackernews start with an AI digestion of the article and surrounding discussion. This could provide a helpful summary and context for readers, and also potentially highlight important points and counterpoints in the conversation. It would be a great way to use AI to enhance the user experience on the site.

I routinely use AI to help me communicate. Like Aaron to my Moses.

replies(1): >>execut+sw
4. midori+U9[view] [source] [discussion] 2022-12-12 06:29:28
>>ramraj+p5
When I compare the ChatGPT-generated comments to those written by real humans on most web forums, I could easily see myself preferring to interact only with AIs in the future rather than with humans, with whom I have to deal with all kinds of stupidity and rude behavior.

The AIs aren't going to take over by force, it'll be because they're just nicer to deal with than real humans. Before long, we'll let AIs govern us, because the leaders we choose for ourselves (e.g. Trump) are so awful that it'll be easier to compromise on an AI.

Before long, we'll all be happy to line up to get installed into Matrix pods.

replies(4): >>sudoma+5b >>beebee+Be >>lonely+Uo >>goatlo+Ct
5. Kim_Br+Ga[view] [source] 2022-12-12 06:36:33
>>matthb+(OP)
There is, of course, the famous Alan Turing paper about this [1], which is becoming more relevant by the day.

Alan Turing's paper was quite forward thinking. At the time, most people did not yet consider men and women to be equal (let alone homosexuals).

I don't think it is so important whether a comment is written by a man, a woman, a child, or a <machine>, or some combination thereof. What is important is that the comment stands on its own, and has merit.

Pseudonyms (accounts) do have a role to play here. On HN, an account can accrue reputation based on whether its past comments were good or bad. This can help rapidly filter out certain kinds of edge cases and/or bad actors.

A Minimum Required Change to policy might be: Accounts who regularly make false/incorrect comments may need to be downvoted/banned (more) aggressively, where previously we simply assumed they were making mistakes in good faith.

This is not to catch out bots per se, but rather to deal directly with the new failure modes that they introduce. This particular approach also happens to be more powerful: it immediately deals with meatpuppets and other ancillary downsides.

We currently have a bit of a revolution in AI going on, and we might come up with better ideas over time too. Possibly we need to revisit our position and adjust every 6 months, or even every 3 months.

[1] https://academic.oup.com/mind/article/LIX/236/433/986238?log...

replies(2): >>Kim_Br+Fb >>wruza+3h
6. sudoma+5b[view] [source] [discussion] 2022-12-12 06:41:21
>>midori+U9
I think it's important to remember that just because something is easier to deal with, it doesn't necessarily mean it's better. The fact that AIs may be more pleasant to interact with than some humans doesn't mean that they are better equipped to govern us. In fact, I would argue that it is precisely the challenges and difficulties that come with dealing with other humans that make us better, more resilient, and more capable as a society.
replies(1): >>ramraj+kd
7. Kim_Br+Fb[view] [source] [discussion] 2022-12-12 06:47:10
>>Kim_Br+Ga
Note: Alan Turing's Imitation Game pretty much directly involves men, women, machines, and teletypes.

These days of course we use such things as IRC clients, Discord, Web Browsers etc, instead of teletypes. If you substitute in these modern technologies, the Imitation Game still applies to much online interaction today.

I've often applied the lessons gleaned from this to my own online interactions with other people. I don't think I ever quite imagined it might start applying directly to <machines>!

8. ramraj+kd[view] [source] [discussion] 2022-12-12 07:07:22
>>sudoma+5b
Isn’t this true for pretty much all democracy too? Almost all elected politicians are not the best, just the easiest or most convenient for the majority to deal and agree with.
9. beebee+Be[view] [source] [discussion] 2022-12-12 07:20:34
>>midori+U9
Thanks for deciding for us. I greatly appreciate people who overuse "we" when expressing their own thoughts.

I would take rude, well-intentioned jerks over kindly speaking devils seeking to deceive me. Have a good one in your pod, though.

10. wruza+3h[view] [source] [discussion] 2022-12-12 07:46:42
>>Kim_Br+Ga
> I don't think it is so important whether a comment is written by a man, a woman, a child, or a <machine>, or some combination thereof. What is important is that the comment stands on its own, and has merit

This feels wrong for a few reasons. The generalized knowledge an AI can express may be useful. But if it makes things up convincingly, might someone following its line of thought be worse off for it? With all the shit humans say, it’s still their real human experience, formulated through the prism of their mood, intelligence, and other states and characteristics. It’s a reflection of a real world somewhere. AI statements, in this sense, are minced realities cooked into something that may only look like a solid one. Maybe for some communities that would be irrelevant, because participants are expected to judge logically and check all facts, but it would require keeping that awareness at all times.

By “real human” I don’t mean that they are better (or worse) in a discussion, only that I am a human too, so a real human experience is applicable to me in principle and I could meet it irl. The applicability of an AI’s experience has yet to be proven, if it makes sense at all.

replies(2): >>Kim_Br+Wt >>magica+1P
11. lonely+Uo[view] [source] [discussion] 2022-12-12 09:03:45
>>midori+U9
Unlikely; at least one of the Trumpbot/Bidenbot/Polibots will be against pod entry for whatever financial/religious/gut-feeling/.. reasons they've been trained on.
12. goatlo+Ct[view] [source] [discussion] 2022-12-12 09:47:20
>>midori+U9
Do you trust the companies training and running the AIs to happily guide you and society along?
13. Kim_Br+Wt[view] [source] [discussion] 2022-12-12 09:50:06
>>wruza+3h
Moderators need to put up with trolls and shills (and outright strange people) a lot of the time too. While so far AIs aren't always quite helpful, they also are not actively hostile.

So as far as the spectrum of things moderation needs to deal with goes, AI contribution to discussions doesn't seem to be the worst of problems, and it doesn't seem like it would be completely unmanageable.

But while AI may not be an unmitigated disaster, you are quite correct that unsupervised AI currently might not be an unmitigated boon either.

Currently, if one does want to use an AI to help participate in discussions, I'd recommend keeping a very close eye on it to make sure the activity remains constructive. This seems like common courtesy and common sense at this time. (And accounts that act unwisely should be sanctioned.)

14. execut+sw[view] [source] [discussion] 2022-12-12 10:14:15
>>amirhi+a9
This has already started happening at Wikipedia: https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2...
15. magica+1P[view] [source] [discussion] 2022-12-12 12:53:46
>>wruza+3h
> But if it makes things up convincingly, the result that someone may follow its line of thought may be worse for them?

How is this different from folks getting convinced by "media" people that mass shootings didn't happen, that 9/11 was an inside job, or similar?
