zlacker

1. gortok+(OP)[view] [source] 2024-05-15 16:13:31
it requires calculating how frequently words appear next to each other, given the other surrounding words. If you want to call that 'understanding', you can, but it's not semantic understanding.
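
To make that concrete, here is a minimal sketch of what "frequency given surrounding words" looks like in the simplest possible case, a toy bigram count table (the corpus and function name are made up for illustration; real LLMs learn dense neural representations rather than literal count tables):

    from collections import Counter, defaultdict

    # Toy illustration: count how often each word follows each
    # preceding word, then "predict" the most frequent follower.
    corpus = "the king of england divorced the queen of england".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word`, or None."""
        if word not in counts:
            return None
        return counts[word].most_common(1)[0][0]

    print(predict_next("of"))   # -> 'england'
    print(predict_next("the"))  # -> 'king' ('king' and 'queen' tie; first seen wins)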

If it were, these LLMs wouldn't hallucinate so much.

Semantic understanding is still a ways off, and requires much more intelligence than we can give machines at this moment. Right now the machines are really good at frequency analysis, and in our fervor we mistake that for intelligence.

replies(1): >>Diogen+x6
2. Diogen+x6[view] [source] 2024-05-15 16:44:36
>>gortok+(OP)
> it requires calculating how frequently words appear next to each other, given the other surrounding words

To do that effectively, you need a very significant understanding of the world. The texts LLMs learn from describe a wide range of human knowledge, and to accurately predict which words will appear where, you have to build an internal representation of that knowledge.

ChatGPT knows who Henry VIII was, who his wives were, the reasons he divorced/offed them, what a divorce is, what a king is, that England has kings, etc.

> If it were, these LLMs wouldn't hallucinate so much.

I don't see how this follows. First, humans hallucinate too. Second, why would hallucination prove that LLMs don't understand anything? To me it just means they're trained to always produce an answer, and when they don't know it, they BS it.
