zlacker

[return to "Chomsky on what ChatGPT is good for (2023)"]
1. atdt+SZ[view] [source] 2025-05-26 01:19:36
>>mef+(OP)
The level of intellectual engagement with Chomsky's ideas in the comments here is shockingly low. Surely, we are capable of holding these two thoughts: one, that the facility of LLMs is fantastic and useful, and two, that the major breakthroughs of AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution.

That may change, particularly if the intelligence of LLMs proves to be analogous to our own in some deep way—a point that is still very much undecided. However, if the similarities are there, so is the potential for knowledge. We have a complete mechanical understanding of LLMs and can pry apart their structure, which we cannot yet do with the brain. And some of the smartest people in the world are engaged in making LLMs smaller and more efficient; it seems possible that the push for miniaturization will rediscover some tricks also discovered by the blind watchmaker. But these things are not a given.

2. lovepa+v61[view] [source] 2025-05-26 02:29:36
>>atdt+SZ
> AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution

I would push back on this a little bit. While it has not helped us understand our own intelligence, it has made me question whether such a thing even exists. Perhaps there are no simple and beautiful natural laws, like those that exist in physics, that can explain how humans think and make decisions. When CNNs learned to recognize faces through a series of hierarchical abstractions that make intuitive sense, it became hard to deny the similarity to what we're doing as humans. Perhaps it's all just emergent properties of some messy evolved substrate.

The big lesson of the last 10 years of AI development, for me, has been "I guess humans really aren't so special after all," which is similar to what we've been through in physics: theories often made the mistake of giving human observers some kind of special importance, which later turned out to be why they failed to generalize.

3. _glass+VA1[view] [source] 2025-05-26 08:35:20
>>lovepa+v61
I think it is important to realize that we need to understand language on our own terms; the logic of LLMs is not unlike alien technology to us. That being said, Chomsky's minimalist program led nowhere, because, much like a programming project, it hit edge case after edge case and was reduced further and further until no program resembling a real theory remained. But it is wrong to conclude that the big progress in linguistics was in vain, for the same reason it is wrong to conclude that Prolog, theorem provers, type theory, and category theory are in vain just because we have LLMs that can produce everything in C++. We can use the machinery of linguistics to ground our knowledge, and in some dark corner the LLM may already have integrated it. I think the old divide between the sciences and the humanities might be deeper and more fundamental than we assume. We need linguistics as a discipline of the humanities, and maybe huge swaths of computer science are really just that.
4. nz+JT2[view] [source] 2025-05-26 18:43:35
>>_glass+VA1
By way of analogy, the result of a theorem prover is usually actionable (i.e. we can replace one kind of expression with its proven equivalent for some end, like reducing code size or run time), but mathematicians _still_ endeavor to translate unwieldy, verbose machine-generated proofs into concise human-readable ones, because those readable proofs remain useful to our understanding of mathematics long after the "productive action" has been taken.
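To make the contrast concrete, here is a minimal Lean 4 sketch (assuming a recent toolchain, or Mathlib, so the `omega` decision procedure is available). The first proof is machine-checkable and immediately actionable but opaque; the second is the concise, readable version that simply names the lemma a person would cite.

    -- The same trivial fact, proved twice.

    -- "Machine" style: the `omega` decision procedure closes the goal,
    -- but the proof term it generates is not meant for human readers.
    example (a b : Nat) : a + b = b + a := by
      omega

    -- "Human" style: a one-line, readable proof that cites the
    -- standard library lemma by name.
    example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

Both are equally valid to the kernel; only the second works as an explanation.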

In a way, this collaboration between the machine and the human is better than what came before, because productive actions can now be taken sooner, and mathematicians no longer have to wonder whether the proof they are searching for exists at all.
