I think it’s to everyone’s benefit if we start planning for a world where a significant portion of the experts are stubbornly wrong about AGI. As a technology, generally intelligent ML has the potential to change many aspects of our world. The danger of dismissing the possibility of AGI emerging in the next 5-10 years is enormous.
Again, I think we should give "The Human Alignment Problem" more consideration in this context. The transformers in question are large, computationally heavy, and not really prone to "recursive self-improvement".
If ML-based AGI works out in a few years, who gets to enter the prompts?
... ... ...
Obviously "/s", obviously joking, but it's meant to highlight that there are several parties who would each answer "me" and truly mean it, often not in a positive way.
We can worry about two things at once. We can be especially worried that at some point (maybe decades away, potentially years away) we'll have to contend with both nuclear weapons and rampant AGI at the same time.