In many ways AI risk looks like the opposite. It might actually cause extinction, but we have no idea how likely that is, nor do we have any idea how likely any bad not-quite-extinction outcome is. The outcome might even be very positive. We have no idea when anything will happen, and the only realistic plan that's sure to avoid the bad outcome is to stop building AI, which also means giving up the potential good outcome. And there's no scientific consensus on that (or anything else) being a good plan, because it's almost impossible to gather concrete empirical evidence about the risk. By the time such evidence is available, it might be too late (this could also have happened with climate change; we got lucky there...)