1. jqpabc+(OP)[view] [source] 2025-06-08 13:51:52
That's one explanation.

Another could be that it simply has no real *understanding* of anything. It just did a statistical comparison of the question to the available advice and picked the best match --- kinda what a search engine might do.

Expecting *understanding* from a synthetic, statistical process will often end in disappointment.
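
To make "statistical best match" concrete, here's a deliberately crude sketch. The question and advice strings are made up, and real systems use learned embeddings rather than raw word counts, but the "pick the closest match, no understanding involved" idea is the same:

    from collections import Counter
    import math

    def cosine_similarity(a: str, b: str) -> float:
        """Crude bag-of-words cosine similarity between two strings."""
        va, vb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
        norm = (math.sqrt(sum(c * c for c in va.values()))
                * math.sqrt(sum(c * c for c in vb.values())))
        return dot / norm if norm else 0.0

    question = "what are the rules of pickleball serving"
    advice = [
        "Pickleball serves must be made underhand below the waist.",
        "In tennis, the serve may be overhand and must land in the service box.",
        "Badminton rallies are scored on every serve.",
    ]

    # Pick the advice whose wording is most similar to the question --
    # a purely statistical match on surface features.
    best = max(advice, key=lambda text: cosine_similarity(question, text))
    print(best)

It gets the "right" answer here without grasping anything about the game.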

replies(1): >>naijab+j3
2. naijab+j3[view] [source] 2025-06-08 14:26:50
>>jqpabc+(OP)
Yup. It’s time for us to just accept that LLMs are “similar in meaning” machines, not “thinking/understanding” machines.
replies(1): >>jqpabc+J4
3. jqpabc+J4[view] [source] [discussion] 2025-06-08 14:41:58
>>naijab+j3
If you think about it --- an LLM that could really *grasp* "pickleball" from a text description without ever seeing, playing, or "experiencing" the game wouldn't be just human-level intelligence --- it would be superhuman.

And the same applies to a lot of real world situations.
