zlacker

[parent] [thread] 6 comments
1. g42gre+(OP)[view] [source] 2023-05-16 19:03:20
I understand the idea behind it: the risks are high and we want to ensure that the AI cannot be used for purposes that threaten the survival of human civilization. Unfortunately, there is a high probability that this agency will be abused from day one: instead of (or in addition to) focusing on humanity's survival, the agency could be used as thought police. Any AI that allows 'wrongthink' will be banned; only 'correctthink' AI will be licensed to the public.
replies(2): >>curiou+h1 >>diputs+F2
2. curiou+h1[view] [source] 2023-05-16 19:10:04
>>g42gre+(OP)
The risks are not high. I see this as simply a power play to convince people that OpenAI is better than they actually are. I'm not saying they're stupid, but I wouldn't consider Sam Altman an AI expert just by virtue of his being OpenAI's CEO.
replies(1): >>comp_t+ng1
3. diputs+F2[view] [source] 2023-05-16 19:16:09
>>g42gre+(OP)
I mean, yeah, that sounds good. It wouldn't affect your ability to think for yourself and spread your ideas; it would just put boundaries on AI.

I've seen a lot of people completely misunderstand what ChatGPT is doing and what it's capable of. They treat it as an oracle that reveals "hidden truths" or makes infallible decisions based on pure cold logic, both of which are completely wrong. It's just a text jumbler that jumbles text well. Sometimes that text reflects facts, sometimes it doesn't.

But if it has the capability to confidently express lies and convince the general public that those lies are true because "the smart computer said so", then maybe we should be really careful about what we let the "smart computer" say.

Personally, I don't want my kids learning that "Hitler did nothing wrong" because the public model ingested too much garbage from 4chan. People will use ChatGPT as a vector for propaganda if we let them; we don't need to make it any easier for them.

replies(1): >>g42gre+y4
4. g42gre+y4[view] [source] [discussion] 2023-05-16 19:23:05
>>diputs+F2
But would you like your kids to learn that there are no fat people, only the "differently weight-abled"? That being overweight is not bad for you, it just makes you a victim of oppression who deserves, no, actually requires sympathy? That there are no smart people, only the "mentally privileged", who deserve, no, actually require public condemnation? These are all examples of 'wrongthink'. It's a long list, but you get the idea.
replies(1): >>diputs+Vb
5. diputs+Vb[view] [source] [discussion] 2023-05-16 19:56:02
>>g42gre+y4
I think you have a bad media diet if you think any of those are actual problems in the real world and not just straw men made by provocateurs stirring the pot.

Honestly though, I would prefer an AI that was strictly neutral about anything other than purely factual information. That isn't really possible with the tech we have now, though. I think we need to loudly change the public perception of what ChatGPT and similar tools actually are. They are fancy programs that create convincing hallucinations, directed by your input. We need to think of them as brainstorming tools, not knowledge engines.

6. comp_t+ng1[view] [source] [discussion] 2023-05-17 03:46:59
>>curiou+h1
So the fact that Geoffrey Hinton, Stuart Russell, Dario Amodei, Shane Legg, Demis Hassabis, Paul Christiano, and Jürgen Schmidhuber, among many others, think that there's a non-trivial chance of human extinction from AI in the next few decades should be a reason to actually evaluate the arguments for x-risk, yeah?
replies(1): >>curiou+Gt1
7. curiou+Gt1[view] [source] [discussion] 2023-05-17 06:20:33
>>comp_t+ng1
Nope. All you need to know is that there are billions of people whose lives have been only slightly impacted even by the internet.