zlacker

1. cthalu+(OP) 2023-11-18 23:30:07
I do not believe AGI poses an existential threat. I honestly don't believe we're particularly close to anything resembling AGI, and I certainly don't think transformers are going to get us there.

But this is a bad argument. No one is saying ChatGPT is going to turn evil and start killing people. The argument is that an AGI would be so far beyond anything we have experience with that there are plausible reasons to think such an entity would be dangerous. And of course no one has been able to demonstrate this unsafe AGI - we don't have AGI to begin with.

replies(1): >>sho_hn+u4
2. sho_hn+u4 2023-11-18 23:53:40
>>cthalu+(OP)
I don't think we need AIs to possess superhuman intelligence to cause us a lot of work - legislatively regulating and policing good old limited humans already requires a lot of infrastructure.
replies(1): >>cthalu+ic
3. cthalu+ic 2023-11-19 00:33:48
>>sho_hn+u4
Certainly. I think current "AI" just enables us to continue making the same bad decisions we were already making, albeit at a faster pace. It's existential in the sense that those same bad decisions might lead to existential threats - climate change, continued inter-nation aggression and warfare, etc. But I don't think the majority of the AI safety crowd is worried about the LLMs of today bringing about the end of the world, and talking about ChatGPT in that context is, to me, a misrepresentation of what they are actually most worried about.