zlacker

[return to "Chomsky on what ChatGPT is good for (2023)"]
1. paulsu+x5[view] [source] 2025-05-25 17:51:42
>>mef+(OP)
"Expert in (now-)ancient arts draws strange conclusion using questionable logic" is the most generous description I can muster.

Quoting Chomsky:

> These considerations bring up a minor problem with the current LLM enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.

> One is that the LLM systems are designed in such a way that they cannot tell us anything about language, learning, or other aspects of cognition, a matter of principle, irremediable... The reason is elementary: The systems work just as well with impossible languages that infants cannot acquire as with those they acquire quickly and virtually reflexively.

Response from o3:

LLMs do surface real linguistic structure:

• Hidden syntax: Attention heads in GPT-style models line up with dependency trees and phrase boundaries—even though no parser labels were ever provided. Researchers have used these heads to recover grammars for dozens of languages.

• Typology signals: In multilingual models, languages that share word-order or morphology cluster together in embedding space, letting linguists spot family relationships and outliers automatically.

• Limits shown by contrast tests: When you feed them “impossible” languages (e.g., mirror-order or random-agreement versions of English), perplexity explodes and structure heads disappear—evidence that the models do encode natural-language constraints.

• Psycholinguistic fit: The probability spikes LLMs assign to next-words predict human reading-time slow-downs (garden-paths, agreement attraction, etc.) almost as well as classic hand-built models.

These empirical hooks are already informing syntax, acquisition, and typology research—hardly “nothing to say about language.”
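
For concreteness, the contrast test in the third bullet can be sketched in a few lines. This is a minimal, illustrative version only: it assumes an off-the-shelf GPT-2 from Hugging Face transformers and uses simple word reversal as a stand-in "mirror-order" transform, whereas a real experiment would use purpose-built possible/impossible corpora rather than a single scrambled sentence.

    # Toy contrast test: perplexity of normal vs. "mirror-order" English
    # under a pretrained GPT-2. Illustrative sketch, not the actual studies.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # exp of the mean next-token loss, the standard causal-LM measure
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    sentence = "the cat that chased the dog was sleeping on the mat"
    mirrored = " ".join(reversed(sentence.split()))  # crude "impossible" variant

    print("normal  :", perplexity(sentence))
    print("mirrored:", perplexity(mirrored))

On a model trained on ordinary English, the mirrored variant should come out with markedly higher perplexity, which is the kind of contrast the bullet is pointing at.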

◧◩
2. foobar+ta[view] [source] 2025-05-25 18:27:45
>>paulsu+x5
> LLMs do surface real linguistic structure...

It's completely irrelevant, because the point he's making is that LLMs operate differently from the human language faculty, as evidenced by the fact that they can learn language structures that humans cannot. Put another way, I'm sure you can point out an infinitude of similarities between the human language faculty and LLMs, but it's the critical differences that make LLMs not useful models of human language ability.

> When you feed them “impossible” languages (e.g., mirror-order or random-agreement versions of English), perplexity explodes and structure heads disappear—evidence that the models do encode natural-language constraints.

This is confused. You can pre-train an LLM on English or on an impossible language and it does equally well on either. Humans, on the other hand, can't do that; ergo LLMs aren't useful models of human language, because they lack this critical distinctive feature.

◧◩◪
3. paulsu+ij[view] [source] 2025-05-25 19:32:23
>>foobar+ta
> You can pre-train an LLM on English or an impossible language and they do equally well

It's impressive that LLMs can learn languages that humans cannot. In what frame is this a negative?

Separately, "impossible language" is a pretty clear misnomer. If an LLM can learn it, it's possible.

◧◩◪◨
4. foobar+qk[view] [source] 2025-05-25 19:42:18
>>paulsu+ij
The latter. Moro showed that you can construct simple language rules, in particular linear rules (e.g., "the third word of every sentence modifies the noun"), that humans have a hard time learning (specifically, they use different parts of their brain in MRI scans and take longer to process such rules than control languages) and that differ from conventional human language structure (which is structure-dependent, i.e., roughly, words are interpreted according to their position in a parse tree, not their linear order).

That's what "impossible language" means in this context, not something like computationally impossible or random.
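
To make "linear rule" concrete, here is a toy illustration in the spirit of those experiments; the specific rule (negation placed by counting words) is made up for illustration and is not Moro's actual stimulus set.

    # A structure-blind "linear rule": the negation marker always goes
    # after the third word, whatever that word happens to be. The rule
    # counts words instead of consulting the parse tree.
    def negate_by_position(sentence: str) -> str:
        words = sentence.split()
        return " ".join(words[:3] + ["not"] + words[3:])

    print(negate_by_position("the boy is reading a book"))
    # -> the boy is not reading a book   (accidentally looks natural)

    print(negate_by_position("the boy who smiles is reading a book"))
    # -> the boy who not smiles is reading a book
    # Here the marker lands inside the relative clause, because the rule
    # ignores structure; no attested human language works this way, which
    # is what makes it "impossible" in this sense.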

◧◩◪◨⬒
5. paulsu+Rk[view] [source] 2025-05-25 19:45:15
>>foobar+qk
Ok then... what makes that a negative? You're describing a human limitation and a strength of LLMs.

◧◩◪◨⬒⬓
6. foobar+qm[view] [source] 2025-05-25 19:56:30
>>paulsu+Rk
It's not a negative; it's just not what humans do, which is Chomsky's point (he is, after all, a person studying what humans do).

As I said in another comment, this whole dispute would be put to bed if people understood that they don't care about what humans do (and that Chomsky does).

[go to top]