zlacker

[return to "Chomsky on what ChatGPT is good for (2023)"]
1. caliba+cd[view] [source] 2025-05-25 18:48:51
>>mef+(OP)
The fact that we have figured out how to translate language into something a computer can "understand" should thrill linguists. Taking a word (token) and abstracting its "meaning" as a 1,000-dimensional vector seems like something that should revolutionize the field of linguistics. A whole new tool for analyzing and understanding the underlying patterns of all language!
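
(To make that concrete, here's a toy sketch - made-up numbers, not a real model - of what "a word as a vector" means in code: a lookup table from token ids to dense vectors, with cosine similarity as a crude proxy for relatedness.)

    import numpy as np

    # Toy vocabulary and embedding table. Real models use ~100k tokens and
    # vectors with ~1,000 dimensions; here it's 5 tokens x 8 dimensions.
    vocab = {"dog": 0, "cat": 1, "sits": 2, "runs": 3, "the": 4}
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(len(vocab), 8))  # learned during training

    def vector(word):
        return embeddings[vocab[word]]

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # In a trained model, cosine(dog, cat) would come out higher than
    # cosine(dog, the); with random vectors the number is meaningless.
    print(cosine(vector("dog"), vector("cat")))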

And there's a fact here that's very hard to dispute: this method works. I can give a computer instructions and it "understands" them in a way that wasn't possible before LLMs. The main debate now is over the semantics of words like "understanding" and whether or not an LLM is conscious in the same way as a human being (it isn't).

2. kracke+AG[view] [source] 2025-05-25 22:26:39
>>caliba+cd
Restricted to linguistics, LLMs' supposed lack of understanding should be a non sequitur. If the question is whether LLMs have formed a coherent ability to parse human languages, the answer is obviously yes. In fact not just human languages: as seen with multimodality, the same transformer architecture seems to work well for modeling and generating anything with inherent structure.

I'm surprised that he doesn't mention "universal grammar" once in that essay. Maybe it so happens that humans do have some innate "universal grammar" wired in by instinct, but it's clearly not _necessary_ for parsing things. You don't need to set up explicit language rules or a generative structure; given enough data, the model learns to produce it. I wonder if anyone has gone back and tried to see whether you can extract explicit generative rules from the learned representation, though.

Since the "universal grammar" hypothesis isn't really falsifiable, at best you can hope for some generalized equivalent that's isomorphic to the platonic representation hypothesis: claim that all human language is aligned in some shared latent representation, and that our brains have been optimized to work in this subspace. That's at least a testable assumption: you can try to reverse-engineer the geometry of the space LLMs have learned.
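
(One very rough way to poke at that, as a sketch: embed parallel sentences from different languages and check whether translations land close together in a shared space. This assumes the sentence-transformers library; the model name below is just one example of a multilingual embedding model, any comparable one would do.)

    # pip install sentence-transformers
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    pairs = [
        ("The dog chased the ball.", "Der Hund jagte den Ball."),
        ("I drank coffee this morning.", "Ich habe heute Morgen Kaffee getrunken."),
    ]

    # If different languages really are aligned in one latent space,
    # translation pairs should score much higher than unrelated sentences.
    for en, de in pairs:
        sim = util.cos_sim(model.encode(en, convert_to_tensor=True),
                           model.encode(de, convert_to_tensor=True))
        print(f"{en} <-> {de}: {float(sim):.3f}")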

3. 0xbadc+GS[view] [source] 2025-05-26 00:10:10
>>kracke+AG
Can LLMs actually parse human languages? Or are they merely reacting to stimuli with a trained behavioral response? Dogs can learn to sit when you say "sit", and learn to roll over when you say "roll over". But the dog doesn't parse human language; it reacts to stimuli with a trained behavioral response.

(I'm not that familiar with LLMs/ML, but it seems like a trained behavioral response rather than intelligent parsing. I believe this is part of why they hallucinate? An LLM doesn't understand concepts; it just spits out words - perhaps a parrot is a better metaphor?)
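
(For what it's worth, the "spits out words" part is roughly right mechanically. Here's a toy sketch - made-up numbers, not any real model - of the next-token step: the model outputs a probability distribution over possible next tokens and one gets sampled. Nothing in that loop checks facts, which is one commonly cited ingredient of hallucination.)

    import numpy as np

    # Toy next-token step after a prompt like "The capital of Australia is".
    # A real model scores ~100k tokens; the probabilities come from patterns
    # in training text, not from a database of facts.
    candidates = ["Canberra", "Sydney", "Melbourne", "a"]
    probs = np.array([0.55, 0.30, 0.10, 0.05])

    rng = np.random.default_rng()
    print(rng.choice(candidates, p=probs))  # sometimes "Sydney": fluent, wrong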

4. geyser+B01[view] [source] 2025-05-26 01:27:36
>>0xbadc+GS
The terms are too unclear here. Can you define what it means to "be able to parse human language"? I'm sure contemporary chatbots score higher on typical reading comprehension tests than most humans. You're certainly correct that LLMs "only" react to stimuli with a trained response, but I suppose anything that isn't conscious necessarily fits that description.
5. 0xbadc+8l1[view] [source] 2025-05-26 05:41:41
>>geyser+B01
Good point, thanks for calling that out. I'm honestly not sure myself! On further reflection, it's probably a matter of degree?

So for example, a soldier is trained, and then does what they are told. But the soldier also has a deep trove of contextual information and "decision weights" which can change their decisions, often in ways they weren't trained for. Or to put it another way: they are capable of operating outside the parameters they were given, "if they feel like it", because the information the soldier processes at any given time may lead them not to follow their training.

A dog may also disobey an order after being trained, but it has a much smaller range of information to work off of, and fewer things influence its decision-making process. (Genetics is a big player in that process, since dogs were literally bred to do what we want and defend our interests.)

So perhaps a chat AI, a dog, and a soldier are just degrees along the same spectrum. I remember reading something about how we can get AI to be about as intelligent as a 2-year-old, and that dogs are about that smart. If that's the case (and I don't know that it is; I also don't know whether chat AI is actually capable of "disobeying", much less "learning" anything it isn't explicitly trained to learn), then the next question I'd have is: why isn't the AI able to act and think like a dog yet?

If we put an AI in a robot dog body and told it to act like a dog, would it? Or would it only act the way we tell it dogs act? Could or would it have emergent dog-like traits and spawn new dog lineages? Because as far as I'm aware, that's not how AI works yet; so to me, that would mean it's not actually doing the things we're talking about above (re: dogs/soldiers).
