It's still too strong a claim, given that matrix multiplication also describes quantum mechanics, and by extension chemistry, and by extension biology, and by extension our own brains… but I frequently encounter examples of two related concepts being mistaken for synonyms, so I assume in this case it is meant as a weaker claim about LLMs not being conscious.
Me, I think the word "intuition" is fine, just like I'd say that a tree falling in a forest with no one to hear it does produce a sound, because sound is the vibration of the air rather than the qualia of hearing it.
It's the active, iterative thinking and planning that is more critical for AGI and, while obviously theoretically possible, much harder to imagine a neural network performing.
That is literally, literally, what it does.
One may argue that it does so wrongly, but that's a different claim entirely.
> there’s no reason to imply they do
The predictions matching reality to the best of our collective abilities to test them is such a reason.
The saying that "all models are wrong, but some are useful" is a reason against that.