
[return to "Chomsky on what ChatGPT is good for (2023)"]
1. caliba+cd 2025-05-25 18:48:51
>>mef+(OP)
The fact that we have figured out how to translate language into something a computer can "understand" should thrill linguists. Taking a word (token) and abstracting its "meaning" as a 1,000-dimensional vector seems like something that should revolutionize the field of linguistics. A whole new tool for analyzing and understanding the underlying patterns of all language!

And there's a fact here that's very hard to dispute: this method works. I can give a computer instructions and it "understands" them in a way that wasn't possible before LLMs. The main debate now is over the semantics of words like "understanding" and whether or not an LLM is conscious in the same way as a human being (it isn't).
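
For what it's worth, the token-to-vector step itself is easy to sketch. Here's a toy PyTorch example; the vocabulary, the 1,000-dimensional size, and the random initialization are all illustrative assumptions, not a real trained model:

    # Toy sketch: each token id maps to a high-dimensional vector.
    # Vocabulary and dimensions are made up; a real model learns these weights.
    import torch
    import torch.nn as nn

    vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
    embed = nn.Embedding(num_embeddings=len(vocab), embedding_dim=1000)

    token_ids = torch.tensor([vocab[w] for w in ["the", "cat", "sat"]])
    vectors = embed(token_ids)  # shape (3, 1000), one vector per token
    print(vectors.shape)

    # In a trained model, distributional "meaning" shows up as geometry:
    # related words end up close together under cosine similarity.
    print(float(nn.functional.cosine_similarity(vectors[0], vectors[1], dim=0)))

With random weights the similarity number is meaningless, of course; the interesting part is that after training, that geometry is exactly what linguists would get to study.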

2. kracke+AG 2025-05-25 22:26:39
>>caliba+cd
Restricted to linguistics, LLMs' supposed lack of understanding should be a non sequitur. If the question is whether LLMs have formed a coherent ability to parse human languages, the answer is obviously yes. In fact not just human languages: as multimodality shows, the same transformer architecture seems to work well for modeling and generating anything with inherent structure.

I'm surprised that he doesn't mention "universal grammar" once in that essay. Maybe it so happens that humans do have some innate "universal grammar" wired in by instinct, but it's clearly not _necessary_ for parsing: you don't need to set up explicit language rules or a generative structure, because given enough data the model learns to produce it. I wonder if anyone has gone back and tried to extract explicit generative rules from the learned representation, though.

Since the "universal grammar" hypothesis isn't really falsifiable, at best you can hope for some generalized equivalent that's isomorphic to the platonic representation hypothesis and claim that all human language is aligned in some given latent representation, and that our brains have been optimized to be able to work in this subspace. That's at least a testable assumption, by trying to reverse engineer the geometry of the space LLMs have learned.

3. 0xbadc+GS 2025-05-26 00:10:10
>>kracke+AG
Can LLMs actually parse human languages? Or can they react to stimuli with a trained behavioral response? Dogs can learn to sit when you say "sit", and learn to roll over when you say "roll over". But the dog doesn't parse human language; it reacts to stimuli with a trained behavioral response.

(I'm not that familiar with LLM/ML, but it seems like trained behavioral response rather than intelligent parsing. I believe this is part of why it hallucinates? It doesn't understand concepts, it just spits out words - perhaps a parrot is a better metaphor?)

4. kracke+wU 2025-05-26 00:25:51
>>0xbadc+GS
You can train LLMs on the output of very complex CFGs, and they successfully learn the grammar and hierarchy needed to complete any novel prefix. That task is more recursive and difficult than human language, so there's no reason to believe that LLMs aren't able to parse human languages in the formal sense as well.
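
If anyone wants to play with this, the synthetic-data side is trivial to set up. A toy sketch of sampling strings from an explicit CFG (the grammar here is an arbitrary arithmetic-expression example, not the one from any particular paper):

    # Toy CFG sampler: the kind of synthetic training data used in those
    # grammar-learning experiments. Grammar and depth cap are illustrative.
    import random

    GRAMMAR = {
        "EXPR": [["TERM"], ["TERM", "+", "EXPR"]],
        "TERM": [["NUM"], ["(", "EXPR", ")"]],
        "NUM":  [["1"], ["2"], ["3"]],
    }

    def generate(symbol="EXPR", depth=0, max_depth=6):
        """Recursively expand a nonterminal by sampling productions."""
        if symbol not in GRAMMAR:
            return symbol  # terminal symbol
        # Force the shortest production past the depth cap so strings stay finite.
        options = GRAMMAR[symbol] if depth < max_depth else [GRAMMAR[symbol][0]]
        return "".join(generate(s, depth + 1, max_depth)
                       for s in random.choice(options))

    for _ in range(5):
        print(generate())  # e.g. (1+2)+3 -- nested, fully grammatical strings

Train a small transformer on enough of these and it learns to respect the bracket nesting and operator placement, which is what I mean by parsing in the formal sense.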

And of course, empirically, LLMs do generate valid English sentences. They may not necessarily be _correct_ sentences in a propositional truth-value sense (as seen in so-called "hallucinations"), but they are semantically "well-formed", in contrast to Chomsky's famous example of the failure of probabilistic grammar models, "Colorless green ideas sleep furiously."
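
As an aside, you can actually put numbers on that famous example now. A quick sketch scoring Chomsky's sentence against its scrambled reverse (GPT-2 is just an arbitrary small-model choice here, for illustration):

    # Score Chomsky's pair with a small LM; lower perplexity = more probable.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text):
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(**inputs, labels=inputs["input_ids"]).loss
        return torch.exp(loss).item()

    print(perplexity("Colorless green ideas sleep furiously."))
    print(perplexity("Furiously sleep ideas green colorless."))  # the scrambled version

A modern LM gives the grammatical ordering a much lower perplexity than the scrambled one, which is exactly the distinction the 1957 argument said statistical models couldn't draw.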

I'm not a linguist, but I don't think linguistics has ever cared about the truth value of a sentence; that's more in the realm of logic.

5. what-t+x11 2025-05-26 01:36:12
>>kracke+wU
I’ve seen ChatGPT generate bad English, and I’ve seen the logic/UI layer re-render the page; I think there is a simple spell checker that kicks in and tells the API to recheck and re-render.

I don’t believe for one second that LLMs reason, understand, or know anything.

There are plenty of times LLMs fail to generate correct sentences, and plenty of times they fail to generate correct words.

Around the time ChatGPT rolled out web search inside actions, you’d get really funky stuff back and watch other code clearly try to catch the runaway output.

o3 can be hot garbage if you ask it to expand a specific point inside a 3-paragraph memo; the reasoning models perform very, very poorly when they are not summarizing.

There are times when the thing works like magic; other times, asking it to write me a PowerShell script that gets users by first and last name has it inventing commands and flags that don’t exist.

If the model ‘understood’ or ‘followed’ some sort of structure beyond parroting stuff it already knows about, it would be easy to spot and guide it via prompts. That is not the case even with the most advanced models today.

It’s clear that LLMs work best at specific small tasks that have a well-established pattern defined in a strict language or API.

I’ve broken o3 trying to have it lift working Python code into formal Python code. How? The person who wrote the code didn’t code it the way a developer would structure a program. 140 lines of basic grab-some-data, generate-a-table code broke the AI, and it had the ‘informal’ solution right there in the prompt. So no, there is zero chance LLMs do more than predict.

And to be clear, it one-shot a whole thing for me last night using the GitHub/Codex/agent thing in VS Code, and probably saved me 30 minutes, but god forbid you start from a bad / edge-case / poorly structured thing that doesn’t fit the mould.
