zlacker

[return to "Chomsky on what ChatGPT is good for (2023)"]
1. atdt+SZ[view] [source] 2025-05-26 01:19:36
>>mef+(OP)
The level of intellectual engagement with Chomsky's ideas in the comments here is shockingly low. Surely, we are capable of holding these two thoughts: one, that the facility of LLMs is fantastic and useful, and two, that the major breakthroughs of AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution.

That may change, particularly if the intelligence of LLMs proves to be analogous to our own in some deep way—a point that is still very much unsettled. But if the similarities are there, so is the potential for knowledge. We have a complete mechanical understanding of LLMs and can pry apart their structure, which we cannot yet do with the brain. And some of the smartest people in the world are working on making LLMs smaller and more efficient; it seems possible that the push for miniaturization will rediscover some of the tricks the blind watchmaker found first. But none of this is a given.

2. AfterH+5B3[view] [source] 2025-05-27 00:52:26
>>atdt+SZ
What exactly do you mean by "analogous to our own" and "in a deep way," without appealing to magic or to not-yet-discovered fields of science? I understand what you're saying, but when you scrutinize these claims you end up in a place that's less scientific than one might think. That seems to be one of Chomsky's salient points: we really, really need to get a handle on when we're doing science in the contemporary Kuhnian sense and when we're doing philosophy.

The AI works on English, C++, Smalltalk, Klingon, nonsense, and gibberish alike. Like Turing's paper, this illustrates the difference between "machines being able to think" and "machines being able to demonstrate some well-understood mathematical process like pattern matching."

https://en.wikipedia.org/wiki/Computing_Machinery_and_Intell...
