zlacker

[return to "Chomsky on what ChatGPT is good for (2023)"]
1. caliba+cd 2025-05-25 18:48:51
>>mef+(OP)
The fact that we have figured out how to translate language into something a computer can "understand" should thrill linguists. Taking a word (token) and abstracting its "meaning" as a 1,000-dimension vector seems like something that should revolutionize the field of linguistics. A whole new tool for analyzing and understanding the underlying patterns of all language!
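To make the idea concrete: each token maps to a dense vector, and geometric closeness between vectors stands in for relatedness in meaning. A minimal sketch, with made-up 4-dimensional vectors purely for illustration (real models learn embeddings with ~1,000+ dimensions):

```python
import math

# Hypothetical toy embeddings; real embeddings are learned, not hand-written.
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.3],
    "queen": [0.9, 0.2, 0.1, 0.3],
    "apple": [0.1, 0.4, 0.9, 0.7],
}

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Related words end up closer together than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # lower
```

The toy numbers are chosen by hand, but the mechanism is the same one real models use: "meaning" becomes a point in a vector space you can measure distances in.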

And there's a fact here that's very hard to dispute: this method works. I can give a computer instructions and it "understands" them in a way that wasn't possible before LLMs. The main debate now is over the semantics of words like "understanding" and whether or not an LLM is conscious in the same way as a human being (it isn't).

2. qwery+EH3 2025-05-27 02:20:55
>>caliba+cd
Why would that thrill linguists? I'm not saying it hasn't, wouldn't, or shouldn't, but I don't see why this technology would have the dramatic impact you imagine.

Is/was the same true for ASCII/Smalltalk/binary? They are all another way to translate language into something the computer "understands".

Perhaps the fact that it hasn't should lead some to question the validity of these claims. When a scientist makes a claim about how something works, they're expected to prove it.

If the technology is as you say, show us.
