Comments on "Chomsky on what ChatGPT is good for (2023)"

1. lanfeu+f6 2025-05-25 17:56:36
>>mef+(OP)
I'm noticing that leftists overwhelmingly toe the same line on AI skepticism, which suggests to me an ideological motivation.

2. thomas+AU 2025-05-26 00:26:28
>>lanfeu+f6
Chomsky's problem here has nothing to do with his politics, but unfortunately a lot to do with his long-held position in the Nature/Nurture debate - a position that is undermined by the ability of LLMs to learn language without hardcoded grammatical rules:

  Chomsky introduced his theory of language acquisition, according to which children have an inborn quality of being biologically encoded with a universal grammar
https://psychologywriting.com/skinner-and-chomsky-on-nature-...

3. js8+Au1 2025-05-26 07:27:34
>>thomas+AU
I don't see how the two things are related. Whether the acquisition of human language is nature or nurture, it is still learning of some sort.

Yes, maybe we can reproduce that learning process in LLMs, but that doesn't mean LLMs imitate only the nurture part (that might as well be just the finetuning) and not the nature part.

An airplane is not an explanation for a bird's flight.

4. thomas+3U1 2025-05-26 11:47:25
>>js8+Au1
The great breakthrough in AI turned out to be LLMs.

Nature, for an LLM, is its design: the network graph, the starting weights, and so on.

Environment, for an LLM, is what happens during training.

LLMs are capable of learning grammar entirely from their environment, which suggests that infants can too. That is bad for Chomsky's position that the basics of grammar are baked into human DNA.
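
To make the analogy concrete, here is a toy sketch in PyTorch (the TinyLM class, corpus, and hyperparameters are all invented for illustration and are nothing like a real LLM training setup). The class definition and the random seed play the role of nature; the corpus and the training loop play the role of environment:

  import torch
  import torch.nn as nn

  class TinyLM(nn.Module):
      """Nature: a general-purpose architecture with random starting
      weights. No grammatical rule is hardcoded anywhere in here."""
      def __init__(self, vocab_size=128, d_model=64):
          super().__init__()
          self.embed = nn.Embedding(vocab_size, d_model)
          layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
          self.encoder = nn.TransformerEncoder(layer, num_layers=2)
          self.head = nn.Linear(d_model, vocab_size)

      def forward(self, ids):
          # Causal mask: each position may only attend to earlier ones.
          causal = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
          return self.head(self.encoder(self.embed(ids), mask=causal))

  torch.manual_seed(0)   # a fixed "genome"
  model = TinyLM()

  # Environment: everything the model is exposed to during training.
  corpus = "the cat sat on the mat . the dog sat on the rug ."
  ids = torch.tensor([[ord(c) for c in corpus]])

  opt = torch.optim.Adam(model.parameters(), lr=1e-3)
  loss_fn = nn.CrossEntropyLoss()
  for step in range(200):   # "childhood"
      logits = model(ids[:, :-1])   # predict each next character
      loss = loss_fn(logits.reshape(-1, 128), ids[:, 1:].reshape(-1))
      opt.zero_grad()
      loss.backward()
      opt.step()

Whatever regularities the trained weights end up encoding were induced from the corpus; the architecture supplied only a generic learning mechanism.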

5. sudden+DY1 2025-05-26 12:22:40
>>thomas+3U1
LLMs require vastly more data than humans (trillions of training tokens, against the tens of millions of words a child might hear) and still struggle with some of the more esoteric grammatical rules, like parasitic gaps (e.g. "Which papers did you file without reading?"). The fact that grammar can be approximated given trillions of words doesn't explain how babies learn language from a much more modest dataset.

6. thomas+v22 2025-05-26 12:52:49
>>sudden+DY1
It's not that the invention of LLMs conclusively disproves Chomsky's position.

However, we now have a proof-of-concept that a computer can learn grammar in a sophisticated way, from the ground up.

We have yet to code something procedural that approaches the same calibre via a hard-coded universal grammar.

That may not obliterate Chomsky's position, but it looks bad.
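
For contrast, a toy sketch of what the hand-coded route looks like at its most basic (the grammar below is invented for illustration; a real rule-based system would be far more elaborate, but the limitation is the same in kind):

  import random

  # Every rule is written by hand; the system can only ever produce
  # constructions its author anticipated.
  GRAMMAR = {
      "S":   [["NP", "VP"]],
      "NP":  [["Det", "N"]],
      "VP":  [["V", "NP"]],
      "Det": [["the"], ["a"]],
      "N":   [["cat"], ["dog"], ["mat"]],
      "V":   [["sat on"], ["chased"]],
  }

  def generate(symbol="S"):
      # Recursively expand a symbol using the hardcoded rules.
      if symbol not in GRAMMAR:
          return symbol
      return " ".join(generate(s) for s in random.choice(GRAMMAR[symbol]))

  print(generate())  # e.g. "the dog chased a cat"

Coverage ends exactly where the rule list ends, while the learned model induces its rules from data instead.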

7. sudden+h62 2025-05-26 13:25:47
>>thomas+v22
That's not the goal of generative linguistics though; it's not an engineering project.

8. thomas+A82 2025-05-26 13:45:14
>>sudden+h62
The problem encompasses not just biology and information technology, but also linguistics. Even if LLMs say nothing about biology, they do tell us something about the nature of language itself.

Again, that LLMs can learn to compose sophisticated texts from training alone does not close the case on Chomsky's position.

However, it is a piece of evidence against it. By Occam's razor, it suggests that a hardwired universal grammar is the weaker hypothesis: if general-purpose learning suffices to acquire grammar, the extra innate machinery is an assumption we no longer need.

9. sudden+Kv2 2025-05-26 16:16:30
>>thomas+A82
How do LLMs explain how 5-year-olds respect island constraints (e.g. knowing that "What did you hear the rumor that John bought?" is ill-formed)?

10. thomas+uz2 2025-05-26 16:41:04
>>sudden+Kv2
I don't have the domain knowledge to discuss that.

11. sudden+BL2 2025-05-26 17:52:40
>>thomas+uz2
If you don't know what a syntactic island is, perhaps you're not the best judge of the plausibility of a linguistic theory.