It doesn't matter at all if experts disagree. Even a 30% chance that we all die is enough to treat the problem as if it were certain. We should not care at all whether 51% of experts think it's a non-issue.
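To make the arithmetic behind that explicit: when the downside is that large, the expected-loss comparison is lopsided regardless of the exact probability. A minimal back-of-envelope sketch, where every number is a placeholder picked purely for illustration, not an estimate:

```python
# Back-of-envelope expected-loss comparison.
# All numbers are made-up placeholders for illustration, not estimates.
p_doom = 0.30              # the hypothetical 30% chance from the comment above
loss_if_ignored = 1e9      # stand-in for "we all die"; any very large number works
loss_if_overreacted = 1.0  # cost of taking the risk seriously and being wrong

expected_loss_ignore = p_doom * loss_if_ignored         # 3e8
expected_loss_act = (1 - p_doom) * loss_if_overreacted  # 0.7

print(expected_loss_ignore > expected_loss_act)  # True for any remotely plausible inputs
```

The point isn't the specific numbers; it's that no plausible choice of them flips the comparison.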
If nothing else, it's a great distraction from the very real societal issues that AI is going to create in the medium to long term, such as inscrutable black-box decision-making and the displacement of jobs.
Most of the time a new virus doesn't cause a pandemic, but sometimes it does.
Nothing in our (human) history has caused an extinction-level event for us, but such events have happened on Earth a handful of times.
The arguments about superintelligent AGI and alignment risk are not that complex: if we can build an AGI, the rest follows, and an extinction-level event from an unaligned superintelligent AGI looks like the most likely default outcome.
I’d love to read a persuasive argument for why that’s not the case, but frankly the dismissals of this have been really bad and don’t hold up to 30 seconds of scrutiny.
People are also very bad at predicting when something like this will arrive. Right before the first nuclear detonation, those closest to the problem thought it was decades away; the same was true of powered flight.
What we’re seeing right now doesn’t look like failure to me; it looks like what you might expect to see right before AGI is developed. That isn’t good when alignment is unsolved.
So far we haven't seen any proof or even a coherent hypothesis, just garden-variety paranoia mixed with opportunistic calls for regulation that just so happen to align with OpenAI's commercial interests.