zlacker

[return to "Chomsky on what ChatGPT is good for (2023)"]
1. next_x+Y2[view] [source] 2025-05-25 17:32:06
>>mef+(OP)
Chomsky is always saying that LLMs and the like can only imitate language, not understand it. But I wonder if there is a degree of sophistication at which he would concede these machines exceed "imitation". If his point is that LLMs arrive at language in a way different from how humans do... fine. But I'm not sure how he can argue that some kind of extremely sophisticated understanding of natural language is not embedded in these models, in a way that at this point exceeds the average human's. In fairness, this was written in 2023, but given his longstanding stubbornness on this topic, I doubt it would make a difference.
2. icedri+H3[view] [source] 2025-05-25 17:38:08
>>next_x+Y2
From what I've read and watched of Chomsky, he's holding out for something that truly cannot be distinguished from a human no matter how hard you try.
3. f30e3d+t31[view] [source] 2025-05-26 01:56:18
>>icedri+H3
I think that misses the point entirely. Even if you constructed a system whose output could not be distinguished from human-produced language, it would not be of much interest to him if it either (1) clearly operated according to principles other than those that govern human language, or (2) operated according to principles that its creators could not adequately explain.

He wants to understand how human language works. If I get him right — and I'm absolutely sure that I don't in important ways — then LLMs are not that interesting because both (1) and (2) above are true of them.
