zlacker

[return to "Thousands of AI Authors on the Future of AI"]
1. bbor+D6[view] [source] 2024-01-08 21:58:40
>>treebr+(OP)
Very interesting, especially the huge jump forward in the first figure and a possible majority of AI researchers giving >10% to the Human Extinction outcome.

To AI skeptics bristling at these numbers, I’ve got a potentially controversial question: what’s the difference between this and the scientific consensus on Climate Change? Why heed the latter and not the former?

◧◩
2. michae+V9[view] [source] 2024-01-08 22:13:31
>>bbor+D6
We have extremely detailed and well-tested models of climate. It's worth reading the IPCC report - it's genuinely interesting, and quite accessible. I was somewhat skeptical of climate work before I began reading, but I spent hundreds of hours understanding it and was quite impressed by the depth of the work. By contrast, our models of future AI are very weak. Papers like the scaling-laws paper or the Chinchilla paper are far less convincing than the best climate work. And arguments like those in Nick Bostrom's or Stuart Russell's books are much more conjectural and qualitative (& less well-tested) than the climate argument.
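
To make the contrast concrete: if I'm remembering the Chinchilla paper correctly, its core "model" is essentially one fitted curve, roughly of the form

L(N, D) \approx E + A N^{-\alpha} + B D^{-\beta}

where N is parameter count, D is training tokens, and E, A, B, \alpha, \beta are a handful of constants fit to empirical training runs. Extrapolating a curve like that is a very different epistemic exercise from running physically grounded earth-system models validated against decades of observation.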

I say this as someone who has written several pieces about xrisk from AI, and who is concerned. The models and reasoning are simply not nearly as detailed or well-tested as in the case of climate.

◧◩◪
3. bbor+Qm[view] [source] 2024-01-08 23:12:03
>>michae+V9
Amazing response, thanks for taking the time - concise, clear, and I don’t think I’ll be using that comparison again because of you. I see now how much more convincing mathematical models are than philosophical arguments in this context, and why that allows modern climate-change-believing scientists to dismiss this (potential, very weak, uncertain) cogsci consensus.

In this light, widespread acknowledgement of xrisk will only come once we have a statistical model that shows the risk is real. And at that point, it seems like it would be too late… Perhaps “Intelligence Explosion Modeling” should be a new sub-field under “AI Safety & Alignment” - a grim but useful line of work.

FAKE_EDIT: In fact, after looking it up, it sorta is! After a few minutes of skimming, I recommend Intelligence Explosion Microeconomics (Yudkowsky 2013) to anyone interested in the above. On the pile of to-read lit it goes…
