It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/
It's worth looking at the underlying arguments earnestly; you can approach them with initial skepticism, but I was persuaded. Alignment has also been something MIRI and others have worried about since as early as 2007 (maybe earlier?), so it's a case of a called shot, not a recent reaction to hype or new LLM capabilities.
Others have also changed their mind when they looked, for example:
- https://twitter.com/repligate/status/1676507258954416128?s=2...
- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...
For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...
If a team of leading cardiac surgeons declared tomorrow that Coca-Cola is a leading cause of heart attacks, and devoted 20% of their income to fighting it, would you ignore their warnings as well?