zlacker

1. gf000+(OP) 2025-05-25 19:09:20
Though the fact that LLMs fundamentally can't know whether they know something or not (without a later fine-tuning pass on what they should know) is a pretty good argument against them being good knowledge bases.
2. lostms+BW 2025-05-26 03:44:34
>>gf000+(OP)
No, it is not. In the mathematical limit this applies to literally everything. In practice you are not going to store video compressed with a lossless codec, for example.
3. gf000+0Z 2025-05-26 04:20:57
>>lostms+BW
Me forgetting (or never having "recorded") what necklace the other person had during an important event is not at all similar to statistical text generation.

If they ask me the previous question, I can introspect/query my memory and tell with 100% certainty whether I know it or not - lossy compression aside. An LLM will just reply based on how likely a "yes" answer is, with no regard to whether it actually has that knowledge.
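In concrete terms, here is a minimal sketch of that failure mode, assuming a Hugging Face causal LM ("gpt2" as a stand-in; the model and prompt are illustrative, not anyone's actual setup). The "answer" is just whichever of " Yes" / " No" carries more next-token probability mass:

    # Hypothetical illustration: reading a yes/no "answer" off the
    # next-token distribution of a causal LM, with no lookup of whether
    # the underlying fact is actually stored anywhere.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # stand-in; any causal LM behaves the same way
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "Question: Do you know what necklace she wore? Answer:"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token

    probs = torch.softmax(logits, dim=-1)
    yes_id = tokenizer.encode(" Yes")[0]
    no_id = tokenizer.encode(" No")[0]

    # The reply tracks the training distribution's plausibility of "Yes"
    # vs. "No", not any introspective check of stored knowledge.
    print(f"P(' Yes') = {probs[yes_id]:.4f}  P(' No') = {probs[no_id]:.4f}")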

4. lostms+m11 2025-05-26 04:56:05
>>gf000+0Z
You obviously forgot that you previously heard about false memories, and/or never thought that they happen to you (which would be very ironic).