zlacker

[return to "Chomsky on what ChatGPT is good for (2023)"]
1. caliba+cd[view] [source] 2025-05-25 18:48:51
>>mef+(OP)
The fact that we have figured out how to translate language into something a computer can "understand" should thrill linguists. Taking a word (token) and abstracting its "meaning" as a 1,000-dimension vector seems like something that should revolutionize the field of linguistics. A whole new tool for analyzing and understanding the underlying patterns of all language!
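The core idea here — words mapped to vectors, with related meanings landing near each other — can be sketched in a few lines. This is a toy illustration, not any real model: the 4-dimensional vectors below are made up (real embeddings have hundreds or thousands of dimensions and are learned from data), but the distance-based comparison is the same in principle.

```python
import math

# Hypothetical toy embeddings for illustration only; real models learn
# these vectors from large corpora and use far more dimensions.
EMBEDDINGS = {
    "king":  [0.9, 0.8, 0.1, 0.3],
    "queen": [0.9, 0.1, 0.8, 0.3],
    "apple": [0.1, 0.2, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Angle-based similarity: 1.0 means same direction, 0.0 orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Semantically related words should score higher than unrelated ones.
print(cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["queen"]))
print(cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["apple"]))
```

In this made-up example, "king" and "queen" come out more similar to each other than either is to "apple" — which is the kind of structure that makes embeddings interesting as a tool for studying language.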

And there's a fact here that's very hard to dispute: this method works. I can give a computer instructions and it "understands" them in a way that wasn't possible before LLMs. The main debate now is over the semantics of words like "understanding" and whether or not an LLM is conscious in the same way as a human being (it isn't).

2. automa+Z31[view] [source] 2025-05-26 02:00:59
>>caliba+cd
Word embeddings (that 1,000-dimension vector you mention) are not new. No comment on the rest of your comment, but that aspect of LLMs is "old" tech - word2vec was published over a decade ago, in 2013.