zlacker

[parent] [thread] 5 comments
1. lxgr+(OP)[view] [source] 2025-05-25 18:00:30
> [...] I can't help but feel it's a really old take [...]

To be fair, the article is from two years ago, which, in the age of LLMs, arguably does count as "old", maybe even "really old".

replies(1): >>lostms+L6
2. lostms+L6[view] [source] 2025-05-25 18:51:38
>>lxgr+(OP)
I think GPT-2 (2019) was already a strong enough argument for the possibility of modeling knowledge and language, a possibility that Chomsky rejected.
replies(1): >>gf000+l9
3. gf000+l9[view] [source] [discussion] 2025-05-25 19:09:20
>>lostms+L6
Though the fact that LLMs fundamentally can't know whether they know something or not (without a later fine-tuning pass on what they should know) is a pretty good argument against them being good knowledge bases.
replies(1): >>lostms+W51
4. lostms+W51[view] [source] [discussion] 2025-05-26 03:44:34
>>gf000+l9
No, it is not. In the mathematical limit this applies to literally everything. In practice, you are not going to store video compressed with a lossless codec, for example.
replies(1): >>gf000+l81
5. gf000+l81[view] [source] [discussion] 2025-05-26 04:20:57
>>lostms+W51
Me forgetting (or never having "recorded") what necklace the other person wore during an important event is not at all similar to statistical text generation.

If they ask me that question, I can introspect/query my memory and tell with 100% certainty whether I know it or not, lossy compression aside. An LLM will just reply based on how likely a "yes" answer is, with no regard to whether it actually has that knowledge.
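A minimal sketch of what I mean, using GPT-2 via the Hugging Face transformers library (the prompt and the "Yes"/"No" token choices are made up purely for illustration): the reply is whichever continuation gets the higher next-token probability, and nothing in that computation checks whether the underlying fact was ever stored.

    # Sketch: a yes/no "answer" from an LLM is just a comparison of
    # next-token probabilities; no step checks whether the fact is known.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    # Hypothetical prompt, not from any real dataset.
    prompt = "Question: Was she wearing a pearl necklace at the wedding? Answer:"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]  # scores for the next token
    probs = torch.softmax(logits, dim=-1)

    yes_id = tokenizer.encode(" Yes")[0]
    no_id = tokenizer.encode(" No")[0]
    print(f"P(' Yes') = {probs[yes_id]:.4f}  P(' No') = {probs[no_id]:.4f}")
    # Whichever probability is larger drives the reply, regardless of
    # whether any "memory" of the event exists.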

replies(1): >>lostms+Ha1
6. lostms+Ha1[view] [source] [discussion] 2025-05-26 04:56:05
>>gf000+l81
You have obviously forgotten that you previously heard about false memories, and/or never thought that they happen to you too (which would be very ironic).