zlacker

[return to "Chomsky on what ChatGPT is good for (2023)"]
1. caliba+cd[view] [source] 2025-05-25 18:48:51
>>mef+(OP)
The fact that we have figured out how to translate language into something a computer can "understand" should thrill linguists. Taking a word (token) and abstracting its "meaning" as a 1,000-dimension vector seems like something that should revolutionize the field of linguistics. A whole new tool for analyzing and understanding the underlying patterns of all language!
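
To make that concrete, here's a minimal sketch of pulling a token's vector out of a model, using the Hugging Face transformers library (the model choice and its 768 dimensions are just an example, not anything from the essay):

    import torch
    from transformers import AutoTokenizer, AutoModel

    # Model choice (and its 768 dimensions) is illustrative only.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    model.eval()

    inputs = tokenizer("The bank raised interest rates.", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # One dense vector per token; nearby vectors tend to carry related "meanings",
    # and the same surface word gets different vectors in different contexts.
    token_vectors = outputs.last_hidden_state[0]
    print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))
    print(token_vectors.shape)   # (number of tokens, 768)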

And there's a fact here that's very hard to dispute: this method works. I can give a computer instructions and it "understands" them in a way that wasn't possible before LLMs. The main debate now is over the semantics of words like "understanding" and whether or not an LLM is conscious in the same way as a human being (it isn't).

◧◩
2. kracke+AG[view] [source] 2025-05-25 22:26:39
>>caliba+cd
Restricted to linguistics, LLMs' supposed lack of understanding should be beside the point. If the question is whether LLMs have formed a coherent ability to parse human languages, the answer is obviously yes. In fact, not just human languages: as multimodal models show, the same transformer architecture seems to work well for modeling and generating anything with inherent structure.

I'm surprised that he doesn't mention "universal grammar" once in that essay. Maybe it so happens that humans do have some innate "universal grammar" wired in by instinct, but it's clearly not _necessary_ for parsing. You don't need to set up explicit language rules or a generative structure; given enough data, the model learns to produce it. I wonder if anyone has gone back and tried to see whether you can extract explicit generative rules from the learned representations, though.
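
There is a line of "probing" work that tries roughly this: freeze the model's representations and check whether a simple linear classifier can read syntactic categories straight out of them. A toy sketch, assuming the Hugging Face transformers and scikit-learn libraries (the sentences, tags, and model choice are made up for illustration):

    import torch
    from transformers import AutoTokenizer, AutoModel
    from sklearn.linear_model import LogisticRegression

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    model.eval()

    def token_vectors(sentence):
        # One vector per token, dropping the [CLS]/[SEP] markers.
        enc = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]
        tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
        return tokens[1:-1], hidden[1:-1].numpy()

    # Tiny hand-labeled training set (DET / ADJ / NOUN / VERB), purely illustrative.
    train = [
        ("the dog runs",      ["DET", "NOUN", "VERB"]),
        ("a cat runs",        ["DET", "NOUN", "VERB"]),
        ("the big dog runs",  ["DET", "ADJ", "NOUN", "VERB"]),
        ("a small bird runs", ["DET", "ADJ", "NOUN", "VERB"]),
    ]
    X, y = [], []
    for sentence, tags in train:
        tokens, vectors = token_vectors(sentence)
        if len(tokens) != len(tags):
            continue  # a word got split into wordpieces; skip it in this toy demo
        X.extend(vectors)
        y.extend(tags)

    probe = LogisticRegression(max_iter=1000).fit(X, y)

    # If syntactic categories are linearly decodable from the vectors,
    # the probe generalizes to a sentence it never saw labeled.
    tokens, vectors = token_vectors("the red cat runs")
    print(list(zip(tokens, probe.predict(vectors))))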

Since the "universal grammar" hypothesis isn't really falsifiable, at best you can hope for some generalized equivalent that's isomorphic to the platonic representation hypothesis and claim that all human language is aligned in some given latent representation, and that our brains have been optimized to be able to work in this subspace. That's at least a testable assumption, by trying to reverse engineer the geometry of the space LLMs have learned.

◧◩◪
3. 0xbadc+GS[view] [source] 2025-05-26 00:10:10
>>kracke+AG
Can LLMs actually parse human languages? Or do they just react to stimuli with a trained behavioral response? Dogs can learn to sit when you say "sit", and learn to roll over when you say "roll over". But the dog doesn't parse human language; it reacts to stimuli with a trained behavioral response.

(I'm not that familiar with LLMs/ML, but it seems like a trained behavioral response rather than intelligent parsing. I believe this is part of why it hallucinates? It doesn't understand concepts, it just spits out words - perhaps a parrot is a better metaphor?)

◧◩◪◨
4. K0balt+921[view] [source] 2025-05-26 01:42:27
>>0xbadc+GS
Animals definitely parse human language, some to a significant extent.

Like an airplane taking off, things that seem like “emergent behavior” and hard lines between human and animal ability are really matters of degree that, like the airplane, we don’t notice until it actually takes flight… then we think there is a clean line between flying and not flying, but there isn’t. The airplane is gradually becoming weightless until it breaks contact with the ground. In the same way, animals use and understand language, but we only notice when it seems human.

◧◩◪◨⬒
5. vright+lU1[view] [source] 2025-05-26 11:49:46
>>K0balt+921
There actually is a clean line between flying and not flying: it's the point where the lift generated is greater than the pull of earth's gravity. The fact that it "feels" gradually weightless doesn't change the fact that if lift < weight, the plane is not flying; if lift > weight, the plane is flying. There is no "semi-flying". If it's already airborne and lift drops below weight, then it stops flying and starts gliding.

The lift is an emergent behavior of molecules interacting (mostly) with the wings. But there is a hard clean cutoff between "flying" and "not flying".
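
For concreteness, that cutoff with the standard lift equation, L = 0.5 * rho * v^2 * S * C_L, against weight W = m * g (all aircraft numbers below are made up, roughly light-aircraft scale):

    # Illustrative parameters only; the point is the boolean, not the numbers.
    RHO  = 1.225     # air density at sea level, kg/m^3
    S    = 16.2      # wing area, m^2
    C_L  = 1.5       # lift coefficient, dimensionless
    MASS = 1000.0    # aircraft mass, kg
    G    = 9.81      # gravitational acceleration, m/s^2

    def lift(v):
        return 0.5 * RHO * v**2 * S * C_L

    weight = MASS * G
    for v in (20.0, 30.0):   # airspeed, m/s
        print(f"v = {v:4.1f} m/s   lift = {lift(v):7.0f} N   flying = {lift(v) > weight}")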

◧◩◪◨⬒⬓
6. K0balt+MG2[view] [source] 2025-05-26 17:23:54
>>vright+lU1
Of course, but the cutoff is one of perception more than physics. The airplane is “not flying” right up until the lift generated is infinitesimally more than the weight of the aircraft. Likewise, during “flight” there are times when the lift is less than the weight, such as during descent. So the line seems clear, but it is a matter of degree. The aircraft is not doing anything fundamentally different during the takeoff roll than during flight; it is all a matter of degree. There is no magical change in physics or process.
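
To put numbers on “matter of degree”: with the same made-up parameters as the sketch above, the lift-to-weight ratio climbs smoothly through the takeoff roll, and only the label flips at one instant:

    # Same illustrative parameters as the earlier sketch; only the speed sweep is new.
    RHO, S, C_L, MASS, G = 1.225, 16.2, 1.5, 1000.0, 9.81
    weight = MASS * G

    for v in range(0, 36, 5):   # airspeed, m/s
        lift = 0.5 * RHO * v**2 * S * C_L
        ratio = lift / weight
        status = "airborne" if ratio > 1.0 else "on the runway"
        print(f"v = {v:2d} m/s   lift/weight = {ratio:4.2f}   {status}")
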
[go to top]