It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/
It's worth looking at the underlying arguments earnestly; you can start with initial skepticism, but I was persuaded. Alignment has also been something MIRI and others have worried about since as early as 2007 (maybe earlier?), so it's a case of a called shot, not a recent reaction to hype or new LLM capabilities.
Others have also changed their mind when they looked, for example:
- https://twitter.com/repligate/status/1676507258954416128?s=2...
- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...
For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...
Yes, but for a reason no one seems to be looking at: skill atrophy. As more and more people buy into the premise that AI is "superintelligent," they will cede more and more cognitive work to it.
On that curve, ~10-20 years out, AI doesn't kill us; rather, because it took over all of our work, people just got too lazy (read: over-dependent on AI doing "all the things") and subsequently too dumb to do the work themselves. Idiocracy, but the M. Night Shyamalan version.
As we approach that point, systems that require some form of conscious human input will begin to fail, and the bubble will burst.