zlacker

Chomsky on what ChatGPT is good for (2023)
1. titzer+S5 2025-05-25 17:53:56
>>mef+(OP)
All this interview proves is that Chomsky has fallen far, far behind how AI systems work today and is retreating into scoffing at all the progress machine learning has achieved. Machine learning has given rise to AI now. It can't explain itself from principles or from its architecture. But you couldn't explain your brain from principles or its architecture either; you'd need all of neuroscience to do that. Because the machine is digital and (probably) does not reason the way our brains do, it somehow falls short?

While there are some things in this I find myself nodding along to, I can't help but feel it's a really old take that is super vague and hand-wavy. The truth is that all of the progress in machine learning is absolutely science. We understand extremely well how to make neural networks learn efficiently; it's why throwing all that data at them leads anywhere at all. Backpropagation and gradient descent are extraordinarily powerful. Not to mention all the "just engineering" of making chips crunch incredible amounts of numbers.
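
To make that concrete, here's a minimal sketch of the loop I mean: plain NumPy, a toy linear model, with the data and learning rate made up purely for illustration. The forward pass, the gradient, and the update are the whole trick:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                 # 100 samples, 3 features
    true_w = np.array([1.5, -2.0, 0.5])           # weights we hope to recover
    y = X @ true_w + 0.1 * rng.normal(size=100)   # noisy linear targets

    w = np.zeros(3)      # parameters, starting from nothing
    lr = 0.1             # learning rate (arbitrary for this toy)
    for step in range(200):
        pred = X @ w                     # forward pass
        err = pred - y
        loss = (err ** 2).mean()         # mean squared error
        grad = 2 * X.T @ err / len(y)    # backprop: d(loss)/d(w)
        w -= lr * grad                   # gradient descent update

    print(w)  # ends up close to true_w

Scale that same loop up to billions of parameters, with the chain rule carrying the gradient through many layers, and you have the training runs we're all arguing about.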

Chomsky is extremely ungenerous to the progress and also pretty flippant about what this stuff can do.

I think we should probably stop listening to Chomsky; he hasn't said anything here that he hasn't already said a thousand times over the decades.

2. lxgr+Q6 2025-05-25 18:00:30
>>titzer+S5
> [...] I can't help but feel it's a really old take [...]

To be fair, the article is from two years ago, which, when we're talking about LLMs, arguably does count as "old", maybe even "really old".

3. lostms+Bd 2025-05-25 18:51:38
>>lxgr+Q6
I think GPT-2 (2019) was already a strong enough argument for the possibility of modeling knowledge and language, a possibility Chomsky rejected.

4. gf000+bg 2025-05-25 19:09:20
>>lostms+Bd
Though the fact that LLMs fundamentally can't know whether they know something or not (without a later pass of fine-tuning on what they should know) is a pretty good argument against them being good knowledge bases.

5. lostms+Mc1 2025-05-26 03:44:34
>>gf000+bg
No, it is not. In the mathematical limit this applies to literally everything. In practice you are not going to store video with a lossless codec, for example.

6. gf000+bf1 2025-05-26 04:20:57
>>lostms+Mc1
Me forgetting, or never having "recorded", what necklace the other person wore during an important event is not at all similar to statistical text generation.

If they ask me that question, I can introspect/query my memory and tell with 100% certainty whether I know it or not, lossy compression aside. An LLM will just reply based on how likely a yes answer is, with no regard to whether it actually has that knowledge.
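
As a sketch of what I mean (the numbers are invented stand-ins for a model's final softmax, not any real model's output): the decoding step is just a comparison over next-token probabilities, and nothing in it checks whether the knowledge is actually there:

    def answer(next_token_probs):
        # Decoding just picks the most probable token; there is no
        # "do I actually know this?" check anywhere in this step.
        return max(next_token_probs, key=next_token_probs.get)

    # "Was she wearing a necklace?" -> "yes" wins simply because
    # affirmative continuations happened to be slightly more probable.
    print(answer({"yes": 0.55, "no": 0.45}))   # -> yes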
