To AI skeptics bristling at these numbers, I’ve got a potentially controversial question: what’s the difference between this and the scientific consensus on Climate Change? Why heed the latter and not the former?
You might as well roll a ball down an incline and then ask me whether Keynes was right.
I say this as someone who has written several pieces about xrisk from AI, and who is concerned. The models and reasoning are simply nowhere near as detailed or well-tested as in the case of climate.
What’s the progression that leads from AI to human extinction?
In this light, widespread acknowledgement of xrisk will only come once we have a statistical model that shows it coming. And at that point, it seems like it would be too late… Perhaps “Intelligence Explosion Modeling” should be a new sub-field under “AI Safety & Alignment”: a grim but useful line of work.
EDIT: In fact, after looking it up, it sorta is! After a few minutes of skimming, I recommend Intelligence Explosion Microeconomics (Yudkowsky 2013) to anyone interested in the above. On the pile of to-read lit it goes…
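For anyone curious what such modeling might even look like, here’s a minimal toy sketch in Python. To be clear: the functional form and every parameter value below are my own illustrative assumptions, not anything taken from the paper. It just Euler-integrates the textbook returns-on-reinvestment equation dI/dt = k·I^p: with p = 1, capability grows exponentially; with p > 1, the continuous solution blows up in finite time.

```python
# Toy sketch of the kind of model an "Intelligence Explosion Modeling"
# sub-field might start from. All parameter values are illustrative
# assumptions, not estimates. If returns on cognitive reinvestment scale
# linearly with current capability (dI/dt = k * I), growth is merely
# exponential; if they compound superlinearly (dI/dt = k * I**p, p > 1),
# the solution diverges in finite time: a crude stand-in for "hard takeoff".

def simulate(k=0.05, p=1.0, i0=1.0, dt=0.01, t_max=100.0, cap=1e12):
    """Euler-integrate dI/dt = k * I**p; return the (time, capability) path."""
    t, i, path = 0.0, i0, [(0.0, i0)]
    while t < t_max and i < cap:
        i += k * (i ** p) * dt
        t += dt
        path.append((t, i))
    return path

if __name__ == "__main__":
    for p in (1.0, 1.5):  # linear vs. superlinear returns (assumed values)
        t_end, i_end = simulate(p=p)[-1]
        print(f"p={p}: capability {i_end:.3g} at t={t_end:.1f}")
```

Obviously a real model would need far more structure (hardware vs. software returns, investment feedbacks, diminishing returns regimes, and so on). The point of the sketch is only that the qualitative outcome hinges on a single exponent we have no good way to measure, which is exactly the “not nearly as detailed or well-tested” problem above.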
If I remember an article from a few days ago correctly, this would make the AI threat an “uncertain” one, rather than merely “risky” like climate change (with risk, we know what might happen; we just need to figure out how likely it is).
EDIT: Never mind the fact that in that article, climate change was actually the example of a quintessentially uncertain problem… which makes me chuckle. A lesson in relative uncertainty.
In many ways AI risk looks like the opposite. It might actually cause extinction, but we have no idea how likely that is, nor how likely any bad not-quite-extinction outcome is. The outcome might even be very positive. We have no idea when anything will happen, and the only realistic plan that’s sure to avoid the bad outcome is to stop building AI, which also means we forgo the potential good outcome. And there’s no scientific consensus on that (or anything else) being a good plan, because it’s almost impossible to gather concrete empirical evidence about the risk. By the time such evidence is available, it might be too late (this could have happened with climate change too; we got lucky there...)