And when the first murmurings arose that maybe we were finally hitting a wall, the labs published ways to harness inference-time compute to get better results, which can in turn be fed back into more training.
But let's take for granted that we are putting the exponential growth in compute resources to good use. Even so, the performance improvements we see on actual benchmarks look sublinear[1]. It seems optimistic at best to conclude that 1000x more compute would yield even 10x better results in most domains.
[1] Fig. 1, AI performance relative to the human baseline, Stanford HAI 2025 AI Index Report (https://hai.stanford.edu/ai-index/2025-ai-index-report)
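
To put rough numbers on that "1000x compute, 10x results" claim: scaling-law studies generally find that error (or loss) falls off as a power law in training compute, and for plausible exponents the payoff from 1000x more compute stays well under 10x. The sketch below is just that arithmetic; the exponent values are illustrative assumptions, not measurements from any particular study.

```python
# Back-of-the-envelope sketch, not a claim about any specific model:
# assume benchmark error falls as a power law in training compute,
#   error ~ k * C^(-alpha),
# with alpha well below 1. The alphas below are illustrative assumptions.

def error_ratio(compute_multiplier: float, alpha: float) -> float:
    """Factor by which error shrinks when compute grows by `compute_multiplier`,
    assuming error ~ C^(-alpha)."""
    return compute_multiplier ** (-alpha)

for alpha in (0.05, 0.1, 0.3):
    improvement = 1 / error_ratio(1000, alpha)  # "x times better" in error terms
    print(f"alpha={alpha:>4}: 1000x compute -> error shrinks by ~{improvement:.1f}x")

# alpha=0.05: 1000x compute -> error shrinks by ~1.4x
# alpha= 0.1: 1000x compute -> error shrinks by ~2.0x
# alpha= 0.3: 1000x compute -> error shrinks by ~7.9x
```

Even with a generous exponent, three orders of magnitude more compute buys less than one order of magnitude of improvement under this assumed relationship.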