Based on a similar understanding, the idea that transformer models will lead to AGI seems obviously incorrect. As impressive as they are, they are just statistical pattern matchers over tokens, not systems that understand the world from first principles. And in case you're among those who believe that "humans are just pattern matchers" too, that might be true, but humans model the world from real-time, integrated sensory input, not from the statistical patterns of a selection of text posted online. There's simply no reason to believe that AGI can come out of that.