I found this out when repeatedly attempting to transform wiki pages into blog-specific speak.
If we define "understanding" the way we define "useful" - not as an innate attribute, but as something relative to a goal - then again, a good imitation or a rudimentary model can get very far. ChatGPT has "understood" a lot of things I have thrown at it, be that algorithms, nutrition, basic calculations, transformations between text formats, where I'm stuck in my personal development journey, or how to politely address people in the email I'm about to write.
>What if our „understanding“ is just unlocking another level in a model?
I believe that it is - that understanding is basically an illusion. Impressions are assembled from perception and thinking, then extrapolated over the unknown. And just look how far that has gotten us!
He alludes to quite a bit here - impossible languages, intrinsic rules that aren’t actually expressed in the language, etc. - which leads me to believe there’s a pretty specific sense in which he means “understanding,” and I’d expect there’s a decent literature in linguistics covering what he’s referring to. If it’s a topic of interest to you, chasing down some of those leads might be a good start.
(I’ll note, as several others here have, that most of his language seems to use specific linguistics terms of art - “language” for “human language” is a big tell, as is the focus on the mechanisms of language and how humans understand and generate it. I’m not sure the critique here is aimed specifically at LLMs so much as at their ability to teach us anything about how humans understand language.)
I would say that it is the extent to which your mental model of a given system can make accurate predictions about that system's behavior.