zlacker

[parent] [thread] 3 comments
1. bilsbi+(OP)[view] [source] 2025-06-08 13:20:17
In this case it found generic advice and was confusing itself.
replies(1): >>jqpabc+L2
2. jqpabc+L2[view] [source] 2025-06-08 13:51:52
>>bilsbi+(OP)
That's one explanation.

Another could be that it simply has no real *understanding* of anything. It just did a statistical comparison of the question to the available advice and picked the best match --- kinda what a search engine might do.

Expecting *understanding* from a synthetic, statistical process will often end in disappointment.
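
To illustrate what I mean by "best match", here's a toy sketch using bag-of-words cosine similarity. The advice strings and the cosine helper are made up for illustration; real systems use learned embeddings rather than raw word counts, and none of this is the actual internals of any model:

    from collections import Counter
    import math

    def cosine(a, b):
        # Bag-of-words cosine similarity between two strings.
        ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(ca[w] * cb[w] for w in ca)
        norm = math.sqrt(sum(v * v for v in ca.values())) * \
               math.sqrt(sum(v * v for v in cb.values()))
        return dot / norm if norm else 0.0

    question = "how do I improve my pickleball serve"
    advice = [
        "generic tips for improving your tennis serve",
        "pickleball serve drills for beginners",
        "how to restring a racquet",
    ]
    # "Best match" = the advice text most similar to the question.
    print(max(advice, key=lambda doc: cosine(question, doc)))
    # -> pickleball serve drills for beginners

No understanding required to get a plausible-looking answer; just similarity scoring.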

replies(1): >>naijab+46
3. naijab+46[view] [source] [discussion] 2025-06-08 14:26:50
>>jqpabc+L2
Yup. It’s time for us to just accept that LLMs are “similar in meaning” machines, not “thinking/understanding” machines.
replies(1): >>jqpabc+u7
4. jqpabc+u7[view] [source] [discussion] 2025-06-08 14:41:58
>>naijab+46
If you think about it --- an LLM that could really *grasp* "pickleball" from a text description without ever seeing, playing, or "experiencing" the game is not just human-level intelligence --- it's superhuman.

And the same applies to a lot of real world situations.
