zlacker

[parent] [thread] 3 comments
1. dns_sn+(OP)[view] [source] 2023-07-06 08:50:01
This is such a ridiculous take. Make up a hand-waving doomsday scenario, assign an arbitrarily large probability to it happening and demand that people take it seriously because we're talking about human extinction, after all. If it looks like a cult and quacks like a cult, it's probably a cult.

If nothing else, it's a great distraction from the very real societal issues that AI is going to create in the medium to long term, for example inscrutable black box decision-making and displacement of jobs.

replies(2): >>goneho+Ks >>arisAl+jK
2. goneho+Ks[view] [source] 2023-07-06 12:37:50
>>dns_sn+(OP)
Low-probability events do happen sometimes, though, and a heuristic that says they never happen can let you down, especially when the outcome is very bad.

Most of the time a new virus is not a pandemic, but sometimes it is.

Nothing in our (human) history has caused an extinction-level event for us, but such events do happen and have happened on Earth a handful of times.

The arguments about superintelligent AGI and alignment risk are not that complex: if we can make an AGI, the rest follows, and an extinction-level event from an unaligned superintelligent AGI looks like the most likely default outcome.

I’d love to read a persuasive argument about why that’s not the case, but frankly the dismissals of this have been really bad and don’t hold up to 30 seconds of scrutiny.

People are also very bad at predicting when something like this will arrive. Right before the first nuclear detonation, those closest to the problem thought it was decades away; the same was true of flight.

What we’re seeing right now doesn’t look like failure to me; it looks like what you might expect to see right before AGI is developed. That isn’t good while alignment is unsolved.

3. arisAl+jK[view] [source] 2023-07-06 14:04:12
>>dns_sn+(OP)
What are you on about? The technology we are talking about is created by 3 labs, and all 3 assign it a large probability. With what credentials or science can you refute that?
replies(1): >>dns_sn+GR1
4. dns_sn+GR1[view] [source] [discussion] 2023-07-06 18:05:48
>>arisAl+jK
Unfortunately for you, that's not how the whole "science" thing works. The burden of proof lies with the people who are dreaming about these doomsday scenarios.

So far we haven't seen any proof or even a coherent hypothesis, just garden variety paranoia, mixed with opportunistic calls for regulation that just so happen to align with OpenAI's commercial interests.

[go to top]