zlacker

4 comments
1. jqpabc+(OP) 2025-06-08 13:00:22
Hey --- if the internet says it, it can't be wrong.
replies(1): >>bilsbi+z1
2. bilsbi+z1 2025-06-08 13:20:17
>>jqpabc+(OP)
In this case it found generic advice and was confusing itself.
replies(1): >>jqpabc+k4
3. jqpabc+k4 2025-06-08 13:51:52
>>bilsbi+z1
That's one explanation.

Another could be that it has no real *understanding* of anything. It simply did a statistical comparison of the question against the available advice and picked the best match --- kinda what a search engine might do.

Expecting *understanding* from a synthetic, statistical process will often end in disappointment.
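
For a concrete (if oversimplified) picture of that "compare the question to the advice and pick the best match" idea, here's a toy Python sketch using TF-IDF and cosine similarity. A real LLM works on learned embeddings rather than word counts, and the question and advice snippets below are invented for illustration:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Invented question and pool of generic advice snippets.
    advice = [
        "Serve underhand, below the waist, diagonally into the opposite service box.",
        "Keep your paddle up and stay out of the kitchen unless the ball has bounced.",
        "Communicate with your partner and cover the middle of the court together.",
    ]
    question = "How do I serve in pickleball?"

    # "Statistical comparison": vectorize question + advice, rank the advice by
    # cosine similarity to the question, and pick the best match.
    vectors = TfidfVectorizer().fit_transform(advice + [question])
    scores = cosine_similarity(vectors[-1], vectors[:-1])[0]
    best = scores.argmax()
    print(f"best match ({scores[best]:.2f}): {advice[best]}")

There's no model of pickleball anywhere in there, just word overlap, which is roughly the point.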

replies(1): >>naijab+D7
4. naijab+D7 2025-06-08 14:26:50
>>jqpabc+k4
Yup. It’s time for us to just accept that LLMs are “similar in meaning” machines, not “thinking/understanding” machines.
replies(1): >>jqpabc+39
5. jqpabc+39 2025-06-08 14:41:58
>>naijab+D7
If you think about it --- an LLM that could really *grasp* "pickleball" from a text description without ever seeing, playing, or "experiencing" the game is not just human-level intelligence --- it's superhuman.

And the same applies to a lot of real-world situations.
