zlacker

[return to "Thousands of AI Authors on the Future of AI"]
1. bbor+D6 2024-01-08 21:58:40
>>treebr+(OP)
Very interesting, especially the huge jump forward in the first figure and the possibility that a majority of AI researchers give >10% to the Human Extinction outcome.

To AI skeptics bristling at these numbers, I’ve got a potentially controversial question: what’s the difference between this and the scientific consensus on Climate Change? Why heed the latter and not the former?

2. michae+V9 2024-01-08 22:13:31
>>bbor+D6
We have extremely detailed and well-tested models of climate. It's worth reading the IPCC report - it's extremely interesting, and quite accessible. I was somewhat skeptical of climate work before I began reading, but I spent hundreds of hours understanding it and came away impressed by the depth of the work. By contrast, our models of future AI are very weak. Papers like the scaling-laws paper or the Chinchilla paper are far less convincing than the best climate work, and arguments like those in Nick Bostrom's or Stuart Russell's books are much more conjectural and qualitative (& less well-tested) than the climate argument.

I say this as someone who has written several pieces about xrisk from AI, and who is concerned. The models and reasoning are simply not nearly as detailed or well-tested as in the case of climate.
