zlacker

1. akomtu+(OP) 2024-10-19 20:34:19
Imo, that's the essence of reasoning. Limited memory and slow communication channels force us to build compact but expressive models of reality. LLMs, on the other hand, have all the memory in the world, and their model of reality is a piecewise interpolation of a huge training dataset. Why invent grammar rules if you can keep the entire dictionary in mind?
replies(1): >>mcswel+hs
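[A minimal Python sketch of the grammar-versus-dictionary contrast the comment draws; the names and the plural-forming example are illustrative, not from the thread. One function memorizes a lookup table, the other compresses the same data into a short rule that generalizes to unseen inputs but makes systematic errors on irregular forms.]

    # Strategy 1: "keep the entire dictionary in mind" -- pure memorization.
    # Works only for inputs seen before; storage grows with the data.
    PLURALS = {"cat": "cats", "dog": "dogs", "box": "boxes", "church": "churches"}

    def pluralize_memorized(noun: str) -> str:
        return PLURALS[noun]  # raises KeyError on anything unseen

    # Strategy 2: "invent grammar rules" -- a compact, expressive model.
    # A few lines cover unseen nouns, at the cost of errors on irregulars
    # (mouse -> mouses): compression trades perfect recall for reach.
    def pluralize_rule(noun: str) -> str:
        if noun.endswith(("s", "x", "z", "ch", "sh")):
            return noun + "es"
        return noun + "s"

    print(pluralize_rule("fox"))    # "foxes" -- never seen, handled by the rule
    print(pluralize_rule("mouse"))  # "mouses" -- where the compact model fails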
2. mcswel+hs 2024-10-20 02:01:30
>>akomtu+(OP)
Why do LLMs (or rather, similar models that draw pictures) keep getting the number of fingers on a human hand wrong, or show two people's arms or legs merging? Or, in computer-generated videos, fail at object permanence? It seems to me they do not have a model of the world, only an imperfect model of the pictures they've seen.