zlacker

[return to "Chomsky on what ChatGPT is good for (2023)"]
1. caliba+cd 2025-05-25 18:48:51
>>mef+(OP)
The fact that we have figured out how to translate language into something a computer can "understand" should thrill linguists. Taking a word (token) and abstracting its "meaning" as a 1,000-dimensional vector seems like something that should revolutionize the field of linguistics. A whole new tool for analyzing and understanding the underlying patterns of all language!
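
To make the vector idea concrete, here's a toy sketch (a made-up four-word vocabulary, random vectors standing in for learned ones, 8 dimensions instead of ~1,000):

    import numpy as np

    # Toy vocabulary; real models use tens of thousands of tokens
    # and hundreds or thousands of dimensions.
    vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}
    dim = 8

    # In a trained model these vectors are learned from data;
    # here they are random placeholders.
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(len(vocab), dim))

    def embed(word):
        """Look up the vector that stands in for a word's 'meaning'."""
        return embeddings[vocab[word]]

    def cosine(a, b):
        """Similar meanings should end up as nearby directions."""
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(embed("cat").shape)                   # (8,)
    print(cosine(embed("cat"), embed("mat")))   # random here; meaningful after training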

And there's a fact here that's very hard to dispute: this method works. I can give a computer instructions and it "understands" them in a way that wasn't possible before LLMs. The main debate now is over the semantics of words like "understanding" and whether or not an LLM is conscious in the same way a human being is (it isn't).

2. kracke+AG 2025-05-25 22:26:39
>>caliba+cd
Restricted to linguistics, LLMs' supposed lack of understanding should be a non sequitur. If the question is whether LLMs have formed a coherent ability to parse human languages, the answer is obviously yes. In fact, not just human languages: as multimodality shows, the same transformer architecture seems to work well for modeling and generating anything with inherent structure.

I'm surprised that he doesn't mention "universal grammar" once in that essay. Maybe it so happens that humans do have some innate "universal grammar" wired in by instinct, but it's clearly not _necessary_ for parsing: you don't need to set up explicit language rules or a generative structure, because with enough data the model learns to produce language anyway. I wonder whether anyone has gone back and tried to extract explicit generative rules from the learned representation, though.

Since the "universal grammar" hypothesis isn't really falsifiable, at best you can hope for some generalized equivalent that's isomorphic to the platonic representation hypothesis: the claim that all human language is aligned in some shared latent representation, and that our brains have been optimized to work in that subspace. That, at least, is a testable assumption: you can try to reverse-engineer the geometry of the space LLMs have learned.
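
As a toy illustration of that kind of geometry test (synthetic vectors standing in for real model embeddings of translation pairs, with an orthogonal Procrustes fit as the alignment method): learn a rotation from some anchor pairs and check whether it also lines up held-out pairs. If two languages really share one latent space, the map should transfer.

    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_pairs = 64, 500

    # Synthetic stand-ins for embeddings of translation pairs
    # (language A vs. language B); a real test would pull these
    # from an actual model.
    X = rng.normal(size=(n_pairs, dim))                          # language A
    hidden_rot = np.linalg.qr(rng.normal(size=(dim, dim)))[0]    # unknown rotation
    Y = X @ hidden_rot + 0.05 * rng.normal(size=(n_pairs, dim))  # language B

    # Fit an orthogonal map on half the pairs (orthogonal Procrustes).
    train, test = slice(0, 250), slice(250, None)
    U, _, Vt = np.linalg.svd(X[train].T @ Y[train])
    W = U @ Vt

    # If the two spaces are geometrically aligned, the map learned on
    # the anchor pairs should also line up the held-out pairs.
    pred = X[test] @ W
    cos = np.sum(pred * Y[test], axis=1) / (
        np.linalg.norm(pred, axis=1) * np.linalg.norm(Y[test], axis=1))
    print("mean held-out cosine:", cos.mean())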

3. 0xbadc+GS 2025-05-26 00:10:10
>>kracke+AG
Can LLMs actually parse human languages? Or do they just react to stimuli with a trained behavioral response? Dogs can learn to sit when you say "sit" and to roll over when you say "roll over". But the dog doesn't parse human language; it reacts to stimuli with a trained behavioral response.

(I'm not that familiar with LLMs/ML, but it seems like a trained behavioral response rather than intelligent parsing. I believe this is part of why they hallucinate? The model doesn't understand concepts, it just spits out words - perhaps a parrot is a better metaphor?)

4. GolfPo+8X 2025-05-26 00:52:27
>>0xbadc+GS
>Can LLMs actually parse human languages?

IMHO, no, they have nothing approaching understanding. It's Chinese Rooms[1] all the way down, just with lots of bells and whistles. Spicy autocomplete.

1. https://en.wikipedia.org/wiki/Chinese_room

5. Camper+pX 2025-05-26 00:55:25
>>GolfPo+8X
Go ask the operator of a Chinese room to do some math they weren't taught in school, and see if the translation guide helps.

The analogy I've used before is a bright first-grader named Johnny. Johnny stumbles across a high school algebra book. Unless Johnny's last name is von Neumann, he isn't going to get anything out of that book. An LLM will.

So much for the Chinese Room.

6. dTal+5O2 2025-05-26 18:06:32
>>Camper+pX
A "Chinese Room" absolutely will, because the original thought experiment proposed no performance limits on the setup - the Room is said to pass the Turing Test flawlessly.

People keep using "Chinese Room" to mean something it isn't, and it's getting annoying. It is nothing more than a (flawed) intuition pump and should not be used as an analogy for anything, let alone LLMs. "It's a Chinese Room" is nonsensical unless there is literally an ACTUAL HUMAN in the setup somewhere - the argument, invalid as it is, is meaningless without one.

7. Camper+913 2025-05-26 19:27:15
>>dTal+5O2
A Chinese Room has no attention model. The operator can look up symbolic and syntactical equivalences in both directions, English to Chinese and Chinese back to English, but they can't associate Chinese words with each other or arrive at broader inferences from doing so. An LLM can.
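
For concreteness, here's a minimal sketch of scaled dot-product attention (random weights standing in for learned ones). The interesting part is the weights matrix: every token gets a graded association with every other token in the context, which is exactly what a static lookup table doesn't give you.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    # Toy "embeddings" for a 4-token sentence; in a real model both
    # these and the projection matrices below are learned.
    rng = np.random.default_rng(0)
    seq_len, d = 4, 16
    x = rng.normal(size=(seq_len, d))
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)       # how strongly each token attends to each other token
    weights = softmax(scores, axis=-1)  # rows sum to 1: a graded association pattern
    out = weights @ v                   # each token's output mixes in the others

    print(weights.round(2))  # pairwise associations, not a fixed lookup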

If I were to ask a Chinese room operator "What would happen if gravity suddenly became half as strong while I'm drinking tea?", what would you expect as an answer?

Another question: if I were to ask "What would be an example of something a Chinese room's operator could not handle, that an actual Chinese human could?", what would you expect in response?

Claude gave me the first question in response to the second. That alone takes Chinese Rooms out of the realm of any discussion regarding LLMs, and vice versa. The thought experiment didn't prove anything when Searle came up with it, and it hasn't exactly aged well. Neither Searle nor Chomsky had any earthly idea that language was this powerful.

8. dTal+0h3 2025-05-26 21:14:09
>>Camper+913
Where are you getting all this (wrong) detail about the internals of the Chinese Room? The thought experiment merely says that the operator consults "books" and follows "instructions" (no doubt Turing-complete but otherwise unspecified) for manipulating symbols that they explicitly DO NOT understand - they do NOT have access to "symbolic and syntactical equivalences" - that is the POINT of the thought experiment. But the instructions in the books of a Chinese Room could perfectly well have an attention model. The details are irrelevant, because - I stress again - Searle's Chinese Room is not cognitively limited, by definition. Its hypothetical output is indistinguishable from a Chinese human's.

I tend to agree that Chinese Rooms should be kept out of LLM discussions. In addition to it being a flawed thought experiment, of all the dozens of times I've seen them brought up, not a single example has demonstrated understanding of what a Chinese Room is anyway.
