Is there any theoretical substance or empirical evidence to suggest that the story doesn't just end here? Perhaps OpenBrain sees no significant gains over the previous iteration and implodes under the financial pressure of exorbitant compute costs. I'm not rooting for an AI winter 2.0, but I fail to understand how people seem so sure of the outcome of experiments that haven't even been performed yet. Help, am I missing something here?
And when the first murmurings appeared that maybe we're finally hitting a wall, the labs published ways to harness inference-time compute to get better results, which can then be fed back into more training.
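To make that feedback loop concrete, here is a minimal sketch of the idea (in the spirit of rejection-sampling fine-tuning or STaR-style self-training): spend extra compute at inference time to sample many candidate answers, keep the highest-scoring one, and use those winners as new training targets. The `sample_answer` and `score` functions below are hypothetical placeholders standing in for a real LLM sampler and a verifier/reward model.

```python
import random

# Hypothetical stand-ins: a real system would call an LLM sampler and
# a verifier or reward model here; these toys just make the loop runnable.
def sample_answer(prompt: str) -> str:
    """Pretend to sample one candidate answer from a model."""
    return f"candidate-{random.randint(0, 9)} for {prompt!r}"

def score(prompt: str, answer: str) -> float:
    """Pretend verifier: higher means more likely to be correct."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Spend inference-time compute: sample n candidates, keep the best."""
    candidates = [sample_answer(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

# The feedback step: best-of-n outputs become new fine-tuning pairs,
# converting extra inference compute into extra training data.
prompts = ["What is 17 * 24?", "Prove sqrt(2) is irrational."]
training_data = [(p, best_of_n(p)) for p in prompts]
print(training_data)  # pairs you would fine-tune the base model on
```

The point of the sketch is just the shape of the loop: inference-time search produces outputs better than the model's average sample, and training on those outputs can raise the average, which is why "hitting a wall" on pretraining alone didn't end the scaling story.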