So yes, the insiders very likely know a thing or two that the rest of us don’t.
The most obvious reason is cost - if it costs many millions to train foundation models, they don't have a ton of experiments sitting around on a shelf waiting to be used. They may only get one shot at the base-model training. Sure, productization isn't instant, but no one is throwing out that investment or delaying it longer than necessary. I can't fathom that you could train an LLM at, say, 1% of the size/tokens/parameters to experiment on hyperparameters, architecture, etc. and come away with a strong idea of end performance or marketability.
Additionally, I've been part of many product launches - both hyped-up big-news events and unheard-of flops. Every time, I'd say that 25-50% of the product is built/polished in the mad rush between the press event and launch day. For an ML model this might be different, but again, see the point above.
Sure, products may be planned months or years out, but OpenAI didn't even know LLMs were going to be this big a deal in May 2022. They had GPT-2 and GPT-3, thought they were fun toys at the time, and had an idea for a cool tech demo. I think OpenAI (and Google, etc.) are living day-to-day with this tech just like those of us on the outside.
If you've been working on AI, you've seen everything go up and to the right for a while - who really benefits from pointing out that a slowdown is occurring? Who is incentivized to talk about how the gains from scaling are slowing down, or how the publicly available internet-scale corpora are running out? Not anyone who trains models and needs compute, I can tell you that much. And not anyone with a financial interest in these companies either.
What we're going to see over the next year seems mostly obvious - a lot of productization (tool use, history, etc.), and a lot of effort on multimodality, synthetic data, and post-training to add knowledge, reduce brittleness, and boost benchmark scores. None of which will do much to advance core intelligence.
The major short-term unknown seems to be how these companies will attempt to improve planning/reasoning, and how successful that will be. OpenAI's Schulman just talked about post-training RL over longer time horizons (multiple reasoning steps), and another approach is external tree-of-thoughts-style scaffolding. Both seem more about maximizing what you can get out of the base model than fundamentally extending its capabilities.
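To make the scaffolding point concrete, here's a minimal sketch (in Python) of what a tree-of-thoughts-style loop around a frozen base model could look like - the propose() and score() functions here are hypothetical stand-ins for prompts to the model, not any real API, and all the search logic lives outside the model itself:

    # Hypothetical stand-ins for base-model calls; in practice each would be a prompt.
    def propose(partial, k=3):
        # Ask the model for k candidate next reasoning steps.
        return [f"{partial} -> step{i}" for i in range(k)]

    def score(partial):
        # Ask the model to rate how promising this partial solution looks.
        return len(partial)  # placeholder heuristic

    def tree_of_thoughts(problem, depth=3, beam_width=2):
        frontier = [problem]
        for _ in range(depth):
            candidates = [c for node in frontier for c in propose(node)]
            # Beam search: keep only the most promising partial solutions.
            frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
        return frontier[0]

Note that nothing here touches the model's weights - it just spends more inference-time compute steering the same underlying model, which is why it feels like squeezing the base model rather than extending it.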
I agree, and they're also living in a groupthink bubble of AI/AGI hype. I don't think you'd be too welcome at OpenAI as a developer if you didn't believe they're on the path to AGI.