
[return to "Chomsky on what ChatGPT is good for (2023)"]
1. caliba+cd[view] [source] 2025-05-25 18:48:51
>>mef+(OP)
The fact that we have figured out how to translate language into something a computer can "understand" should thrill linguists. Taking a word (token) and abstracting its "meaning" as a 1,000-dimensional vector seems like something that should revolutionize the field of linguistics. A whole new tool for analyzing and understanding the underlying patterns of all language!
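
To make the "meaning as a vector" idea concrete, here's a toy sketch in Python. The words, the numbers, and the tiny 4-dimensional size are all made up for illustration; real models learn these vectors during training and use hundreds or thousands of dimensions, but the basic operation (look up a vector, compare directions) is the same:

    import numpy as np

    # Toy "embedding table": each token maps to a vector.
    # The values below are invented purely for illustration.
    embeddings = {
        "king":  np.array([0.9, 0.7, 0.1, 0.3]),
        "queen": np.array([0.9, 0.7, 0.8, 0.3]),
        "apple": np.array([0.1, 0.2, 0.3, 0.9]),
    }

    def cosine_similarity(a, b):
        """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Words with related "meaning" end up pointing in similar directions.
    print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
    print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low

That geometric structure, learned from text alone, is the new tool I mean.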

And there's a fact here that's very hard to dispute: this method works. I can give a computer instructions and it "understands" them in a way that wasn't possible before LLMs. The main debate now is over the semantics of words like "understanding" and whether or not an LLM is conscious in the same way a human being is (it isn't).

2. gerdes+471[view] [source] 2025-05-26 02:37:31
>>caliba+cd
"The fact that we have figured out how to translate language into something a computer can "understand" should thrill linguists."

No, there is no understanding at all. Please don't confuse codifying with understanding or translation. LLMs don't understand their input; they simply act on it according to the patterns they were trained on.

"And there's a fact here that's very hard to dispute, this method works. I can give a computer instructions and it "understands" them "

No, it really does not understand those instructions. It is at best what used to be called an "idiot savant". Mind you, people used to describe other people that way, so who is really the idiot?

Ask your favoured LLM to write a programme in a less-used language - ooh, let's try VMware's PowerCLI (it's built on PowerShell, which is quite popular) and get it to do something useful. It won't, because it can't, but it will still spit out something. PowerCLI is barely represented on Stack Overflow and co, but because it is PowerShell-based the LLM will hallucinate madder than a hippie on a new super weed.
