It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/
It's worth looking at the underlying arguments earnestly; you can start with some initial skepticism, but I was persuaded. Alignment has also been something MIRI and others have been worried about since as early as 2007 (maybe earlier?), so it's a called shot, not a recent reaction to hype or new LLM capabilities.
Others have also changed their mind when they looked, for example:
- https://twitter.com/repligate/status/1676507258954416128?s=2...
- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...
For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...
And in other fields, being alarmist has paid off too, with little penalty for bad predictions -- how many times have we heard that there will be huge climate disasters ending humanity, the extinction of bees, mass starvation, etc.? (Not to diminish the dangers of climate change, which is obviously very real.) I think alarmism is generally rewarded, at least in media.
Most of the AI concern that's high status to believe has been the bias, misinformation, and safety stuff. Until very recently, talk about x-risk was dismissed and mocked without really engaging with the underlying arguments. That may be changing now, but on net I still mostly see people mocked and dismissed for it.
The set of people alarmed by AGI x-risk is also pretty different from the set alarmed about a lot of these other issues, which aren't really existential risks (though they still might have bad outcomes). At least EY, Bostrom, and Toby Ord are not worried about all these other things to nearly the same extent - the extinction risk of unaligned AGI is different in severity.