To AI skeptics bristling at these numbers, I’ve got a potentially controversial question: what’s the difference between this and the scientific consensus on Climate Change? Why heed the latter and not the former?
I say this as someone who has written several pieces about xrisk from AI, and who is concerned. The models and reasoning are simply not nearly as detailed or well-tested as they are for climate.
In this light, widespread acknowledgement of xrisk will only come once we have a statistical model that shows it will happen. And at that point, it seems like it would be too late… Perhaps “Intelligence Explosion Modeling” should be a new sub-field under “AI Safety & Alignment”: a grim but useful line of work.
EDIT: In fact, after looking it up, it sorta is! After a few minutes of skimming, I recommend Intelligence Explosion Microeconomics (Yudkowsky 2013) to anyone interested in the above. On the pile of to-read lit it goes…