Chomsky on what ChatGPT is good for (2023)

1. papave+591 2025-05-26 03:01:56
>>mef+(OP)
There was an interesting debate in which Chomsky took the position that intelligence is rooted in symbolic reasoning, while Asimov asserted a statistical foundation (ah, that was not intentional ;).

LLM designs to date are purely statistical models: a morass of floating-point numbers and their weighted relationships, along with the software and hardware that animate them and the user input and output that make them valuable to us. In effect they are an index of the data fed into them, though a different kind of index from a Lucene or SQL database index built from classic compsci algorithms and data-structure primitives. Recognizable under Asimov's definition.

And these LLMs feature no symbolic reasoning whatsoever within their computational substrate. What they do feature is a simple autoregressive loop: given the input so far, what is the next token? That is all they are enabled to do after training on huge amounts of input material. No inherent reasoning capability, no primordial ability to apply logic, or even to infer the basic axioms of logic and thought. And therefore unrecognizable under Chomsky's definition.
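
To make the point concrete, here is a minimal sketch of that loop. A bigram count table stands in for the billions of learned weights; the corpus and every name here are purely illustrative, not any real model's API:

    import random
    from collections import Counter, defaultdict

    # Toy stand-in for an LLM: a bigram table built from a tiny corpus.
    # A real model replaces this table with billions of learned weights,
    # but the generation loop -- "given the input so far, pick the next
    # token" -- has the same shape.
    corpus = "the cat sat on the mat the cat ate the rat".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_token(context):
        """Sample the next token from P(next | last token)."""
        dist = counts[context[-1]]
        if not dist:  # dead end: token never seen mid-corpus
            return random.choice(corpus)
        words, weights = zip(*dist.items())
        return random.choices(words, weights=weights)[0]

    tokens = ["the"]
    for _ in range(8):
        tokens.append(next_token(tokens))
    print(" ".join(tokens))

Scaling that table up to a transformer conditioned on the whole context, rather than just the last word, is the vast complication; the loop itself stays this simple.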

So our LLMs are a mere parlor trick, a one-trick pony. But the trick they do is oh-so vastly complicated, very appealing to us, and of practical application and real value. It harks back to the question: what is the nature of intelligence, and how do we define it?

And I say this while thinking of the marked contrast in apparent intelligence between an LLM and, say, a 2-year-old child.

2. sdwr+ka1 2025-05-26 03:14:50
>>papave+591
That's not true; symbols emerge out of the statistics. Just look at the ImageNet analyses that identified distinct concepts in different layers, or the ablation experiments in LLMs.

They may not be doing strict formal logic, but they are definitely compressing information into symbols and operating on them.
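
A minimal sketch of the ablation idea, with a toy numpy network of my own standing in for one block of a real transformer (the weights here are random, not trained):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 2-layer network standing in for one MLP block of a transformer.
    # W1/W2 are random here; in a real ablation study they are trained weights.
    W1 = rng.normal(size=(16, 8))   # input -> hidden
    W2 = rng.normal(size=(8, 4))    # hidden -> output
    x = rng.normal(size=16)

    def forward(ablate_unit=None):
        h = np.maximum(W1.T @ x, 0.0)   # ReLU hidden activations
        if ablate_unit is not None:
            h[ablate_unit] = 0.0        # "ablate": silence one hidden unit
        return W2.T @ h

    baseline = forward()
    # Ablate each hidden unit in turn and see how much the output moves.
    # In interpretability work, a unit whose removal selectively breaks one
    # behavior is evidence that it encodes a discrete, symbol-like feature.
    for unit in range(8):
        delta = np.linalg.norm(forward(ablate_unit=unit) - baseline)
        print(f"unit {unit}: output shift {delta:.3f}")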

3. ggm+Yn1 2025-05-26 06:15:41
>>sdwr+ka1
Which suggests that much symbolic manipulation is formulaic, and not inductive reasoning or indicative of intelligence.

Sentence parsing with multiple potential verb-noun-adjective interpretations is an old example; Chomsky made "fruit flies like a banana" famous for a reason.

(Without the weights and that specific sentence programmed in, I would be interested in exactly how the symbolic models cope with that, and with the myriad other examples.)
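
For what it's worth, a classic symbolic parser copes by enumerating every licensed reading rather than committing to one. A minimal sketch with NLTK's chart parser, using a toy grammar of my own construction (nothing from Chomsky, just enough rules to license both readings):

    import nltk

    # Toy grammar, purely illustrative: it admits both readings of
    # "fruit flies like a banana".
    grammar = nltk.CFG.fromstring("""
    S   -> NP VP
    NP  -> N | Adj N | Det N
    VP  -> V NP | V PP
    PP  -> P NP
    Adj -> 'fruit'
    N   -> 'fruit' | 'flies' | 'banana'
    V   -> 'flies' | 'like'
    P   -> 'like'
    Det -> 'a'
    """)

    parser = nltk.ChartParser(grammar)
    sentence = "fruit flies like a banana".split()

    # The parser surfaces the ambiguity explicitly: one tree where
    # "fruit flies" is the subject, one where "fruit" flies like a banana.
    for tree in parser.parse(sentence):
        tree.pretty_print()

Without the statistics, of course, the parser has no opinion about which reading is the sensible one; that is exactly the gap the weights fill.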
