zlacker

Chomsky on what ChatGPT is good for (2023)
1. atdt+SZ 2025-05-26 01:19:36
>>mef+(OP)
The level of intellectual engagement with Chomsky's ideas in the comments here is shockingly low. Surely, we are capable of holding these two thoughts: one, that the facility of LLMs is fantastic and useful, and two, that the major breakthroughs of AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution.

That may change, particularly if the intelligence of LLMs proves to be analogous to our own in some deep way—a point that is still very much undecided. However, if the similarities are there, so is the potential for knowledge. We have complete mechanical access to LLMs and can pry apart their structure, which we cannot yet do with the brain. And some of the smartest people in the world are engaged in making LLMs smaller and more efficient; it seems possible that the push for miniaturization will rediscover some tricks also discovered by the blind watchmaker. But none of this is a given.

2. xron+IT1 2025-05-26 11:43:48
>>atdt+SZ
Chomsky's central criticism of LLMs is that they can learn impossible languages just as easily as they learn possible languages; he returns to this point repeatedly in the linked interview. Because they cannot tell the two apart, he argues, they can teach us nothing about our own intelligence.

However, a paper published last year (Mission: Impossible Language Models, Kallini et al., ACL 2024) showed that LLMs do NOT learn impossible languages as easily as they learn possible languages. This undermines everything that Chomsky says about LLMs in the linked interview.

3. Compos+QH2 2025-05-26 17:30:18
>>xron+IT1
I'm not that convinced by this paper. The "impossible languages" it tests are all English with some sort of transformation applied, such as shuffling the word order. Learning such a language would seem to require first learning English and then learning the transformation, so it's not surprising that systems would be worse at learning them than at learning English on its own.

But I don't think these sorts of languages are what Chomsky is talking about. When Chomsky says "impossible languages," he means languages that have a coherent and learnable structure but which aren't compatible with what he thinks are the innate grammatical faculties of the human mind. For instance, x86 assembly language is reasonably structured and can express anything that C++ can, but unlike C++, it doesn't have a recursive tree-based syntax. Chomsky believes that any natural language you find will be structured more like C++ than like assembly, because he thinks humans have an innate mental faculty for using tree-based languages.

I actually think a better test of whether LLMs learn languages like humans would be to see whether they learn assembly as well as they learn C++. That would be incomplete, of course, but it would get at what Chomsky is talking about.
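For contrast, here is roughly the kind of transformation the paper does test. This is my own loose paraphrase of two of the perturbations, not the authors' code; the paper's exact definitions differ in details like marker tokens and how the shuffle is seeded:

    import random

    def local_shuffle(tokens, window=3, seed=0):
        # Deterministically shuffle tokens within fixed-size windows,
        # loosely modeled on the paper's local-shuffle perturbation.
        rng = random.Random(seed)
        out = []
        for i in range(0, len(tokens), window):
            chunk = tokens[i:i + window]
            rng.shuffle(chunk)
            out.extend(chunk)
        return out

    def full_reverse(tokens):
        # Emit the sentence back to front: a rule no attested natural
        # language uses, but one that is perfectly systematic.
        return list(reversed(tokens))

    words = "the cat sat on the mat".split()
    print(local_shuffle(words))  # e.g. ['cat', 'the', 'sat', 'mat', 'the', 'on']
    print(full_reverse(words))   # ['mat', 'the', 'on', 'sat', 'cat', 'the']

Note that both "languages" here are just English plus a deterministic post-processing rule, which is exactly the sense in which learning them amounts to learning English plus a transformation.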

Also, GPT-2 actually seems to do quite well on some of the tested languages, including word-hop, partial-reverse, and local-shuffle. It doesn't do quite as well as on plain English, but GPT-2 was designed to learn English, so it's not surprising that English comes out a little ahead. For instance, the tokenization seems biased towards English: they show "bookshelf" becoming the tokens "book", "sh", and "lf", which in many of the tested languages get spread throughout a sentence. I don't think a system designed to learn shuffled English would tokenize this way!
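If you want to see that bias directly, here's a quick check, assuming the Hugging Face transformers package is installed (the exact token boundaries are whatever GPT-2's pretrained BPE vocabulary produces; I haven't pinned them here):

    from transformers import GPT2TokenizerFast

    # Load GPT-2's pretrained byte-pair-encoding tokenizer.
    tok = GPT2TokenizerFast.from_pretrained("gpt2")

    sentence = "She put the novel back on the bookshelf"
    print(tok.tokenize(sentence))        # long, word-like English subwords

    # Reverse the characters: GPT-2's English-trained BPE merges no longer
    # apply, so the same content fragments into short, rare tokens.
    print(tok.tokenize(sentence[::-1]))

Character reversal isn't one of the paper's perturbations; it just makes it obvious how English-specific the learned merges are. Comparing the token counts of the two outputs gives a crude measure of how much the vocabulary itself favors plain English over a transformed version of it.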

https://aclanthology.org/2024.acl-long.787.pdf
