Idk. Folks much smarter than I seem worried so maybe I should be too but it just seems like such a long shot.
So yes, the insiders very likely know a thing or two that the rest of us don’t.
What we're going to see over the next year seems mostly pretty obvious - a lot of productization (tool use, history, etc), and a lot of effort on multimodality, synthetic data, and post-training to add knowledge, reduce brittleness, and increase benchmark scores. None of which will do much to advance core intelligence.
The major short-term unknown seems to be how these companies will attempt to improve planning/reasoning, and how successful that will be. OpenAI's Schulman just talked about post-training RL over longer (multi-step reasoning) time horizons, and another approach is external tree-of-thoughts-style scaffolding. Both seem more about maximizing what you can get out of the base model than about fundamentally extending its capabilities.