If that's the foundation your bluster is built on, then it's not really ridiculous.
OpenAI popularized LLMs to the world with GPT-3, not long before GPT-4 came out. They made a lot of big, cool changes shortly after GPT-4, and everyone and their mother announced LLM projects and integrations in that time.
It's been about 9 months now, and not a whole lot has happened in the space.
It's almost as if the law of diminishing returns has kicked in.
Keep in mind GPT-3.5 was not an overnight craze. It took months before normal people even knew what it was.
To the general public, sure, but not to research, which is what produces the models.
The idea that diminishing returns have kicked in because there hasn't been a new SOTA model in 9 months is ridiculous. Models take months just to train. OpenAI sat on GPT-4 for over half a year after training was done, just red-teaming it.