zlacker

[parent] [thread] 7 comments
1. wisty+(OP)[view] [source] 2023-11-22 07:31:20
There is a middle ground, in that maybe ChatGPT shouldn't help users commit certain serious crimes. I am pretty pro free speech, and I think there's definitely a slippery slope here, but there is a bit of justification.
replies(3): >>Stanis+G8 >>hef198+Q8 >>low_te+Ye
2. Stanis+G8[view] [source] 2023-11-22 08:37:17
>>wisty+(OP)
Which users? The greatest crimes, by far, are committed by the US government (and other governments around the world) - and you can be sure that AI and/or AGI will be designed to help them commit their crimes more efficiently, effectively and to manufacture consent to do so.
3. hef198+Q8[view] [source] 2023-11-22 08:38:43
>>wisty+(OP)
I am a little less pro free speech than Americans; in Germany we have serious limitations around hate speech and Holocaust denial, for example.

Putting those restrictions into a tool like ChatGPT goes too far, though, because so far AI still needs a prompt to do anything. The problem I see is with ChatGPT, being trained on a lot of hate speech or propaganda, slipping those things in even when not prompted to. Which, and I am by no means an AI expert, not by far, seems to be a sub-problem of the hallucination problem of making stuff up.

Because we have to remind ourselves: AI so far is glorified machine learning creating content; it is not conscious. But it can be used to create a lot of propaganda and defamation content at unprecedented scale and speed. And that is the real problem.

replies(1): >>freedo+tO1
4. low_te+Ye[view] [source] 2023-11-22 09:30:33
>>wisty+(OP)
The problem here is equating AI speech with human speech. The AI doesn't "speak"; only humans speak. The real slippery slope for me is this tendency of treating ChatGPT as some kind of proto-human entity. If people are willing to do that, then we're screwed either way (whether the AI is outputting racist content or excessively PI content). If you take the output of the AI and post it somewhere, it's on you, not the AI. You're saying it; it doesn't matter where it came from.
replies(3): >>cyanyd+6z >>silvar+NM3 >>miracu+uX5
5. cyanyd+6z[view] [source] [discussion] 2023-11-22 12:21:52
>>low_te+Ye
AI will be at the forefront of multiple elections globally within a few years.

And it'll likely be doing it with very little input, and generate entire campaigns.

You can claim that "people" are the ones responsible for that, but it's going to overwhelm any attempts to stop it.

So yeah, there's a purpose in examining how these machines are built, not just what the output is.

6. freedo+tO1[view] [source] [discussion] 2023-11-22 18:28:05
>>hef198+Q8
Apologies this is very off topic, but I don't know anyone from Germany that I can ask and you opened the door a tiny bit by mentioning the holocaust :-)

I've been trying to really understand the situation and how Hitler was able to rise to power. The horrendous conditions placed on Germany after WWI and the Weimar Republic for example have really enlightened me.

Have you read any of the big books on the subject that you could recommend? I'm reading Ian Kershaw's two-part series on Hitler, and William Shirer's "Collapse of the Third Republic" and "Rise and Fall of the Third Reich". Have you read any of those, or do you have books you would recommend?

7. silvar+NM3[view] [source] [discussion] 2023-11-23 08:21:41
>>low_te+Ye
You're saying that the problem will be people using AI to persuade other people that the AI is 'super smart' and should be held in high esteem.

It's already being done now with actors and celebrities. We live in this world already. AI will just extend this trend so that even a kid in his room can anonymously lead some cult for nefarious ends. And it will allow big companies to scale their propaganda without relying on so many 'troublesome human employees'.

8. miracu+uX5[view] [source] [discussion] 2023-11-23 23:34:31
>>low_te+Ye
Yes, but this distinction will not be possible in the future some people are working toward. That future will be one in which whatever their "safe" AI says is not OK will lead to prosecution as "hate speech". They tried it with political correctness, and it failed because people spoke up. Once AI makes the decision, they will claim that to be the absolute standard. Beware.