zlacker

[parent] [thread] 1 comment
1. woodru+(OP)[view] [source] 2024-05-15 16:17:13
To my understanding (ha!), none of these language models have demonstrated the "recursive" ability that's basic to human consciousness and language: they've managed to iteratively refine their internal world model, but that model implodes as the user performs recursive constructions.

This results in the appearance of an arms race between world model refinement and user cleverness, but it's really a fundamental expressive limitation: the user can always recurse, but the model can only predict tokens.

(There are a lot of contexts in which this distinction doesn't matter, but I would argue that it does matter for a meaningful definition of human-like understanding.)
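To make the "user can always recurse" point concrete, here's a toy sketch (my own illustration, not from any paper): a recursive grammar that center-embeds relative clauses to arbitrary depth. The construction is unbounded, so any fixed-depth pattern matcher, and arguably a token predictor's internal model, breaks down past some nesting depth even though a human can always add one more level.

```python
# Toy recursive grammar: center-embedded relative clauses of arbitrary depth.
# "the cat slept" -> "the cat that the dog chased slept" -> ...
# Purely illustrative; the word lists are hypothetical inputs.

def center_embed(nouns, verbs):
    """Build 'the N1 [that the N2 ... V2] V1' with len(nouns) levels of embedding."""
    assert len(nouns) == len(verbs)
    if len(nouns) == 1:
        return f"the {nouns[0]} {verbs[0]}"
    inner = center_embed(nouns[1:], verbs[1:])  # recurse one level deeper
    return f"the {nouns[0]} that {inner} {verbs[0]}"

print(center_embed(["cat"], ["slept"]))
print(center_embed(["cat", "dog"], ["slept", "chased"]))
print(center_embed(["cat", "dog", "boy"], ["slept", "chased", "fed"]))
```

Each extra noun/verb pair adds one more level of embedding, with no upper bound on the user's side.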

replies(1): >>johnth+CQ
2. johnth+CQ[view] [source] 2024-05-15 20:49:48
>>woodru+(OP)
Supposedly that's what Q* was all about: search recursively, backtrack on hitting a dead end. Who knows, really, but the technology is still very new, and I personally don't see why a sufficiently good world model couldn't be used in this manner.
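The "search recursively, backtrack if dead end" idea can be sketched as plain depth-first search with backtracking. Everything here is hypothetical, since nothing about Q* is published: `propose` stands in for a model suggesting candidate next steps, and `is_goal`/`is_dead_end` stand in for its value judgments.

```python
# Minimal sketch of recursive search with backtracking, as the comment
# describes. `propose`, `is_goal`, and `is_dead_end` are stand-ins for
# whatever a learned model would supply; none of this reflects Q* itself.

def backtracking_search(state, propose, is_goal, is_dead_end,
                        depth=0, max_depth=10):
    """Depth-first search: try candidates in order, backtrack on dead ends."""
    if is_goal(state):
        return [state]
    if depth >= max_depth or is_dead_end(state):
        return None  # dead end: caller moves on to its next candidate
    for nxt in propose(state):
        path = backtracking_search(nxt, propose, is_goal, is_dead_end,
                                   depth + 1, max_depth)
        if path is not None:
            return [state] + path  # a candidate worked; keep the path
    return None  # all candidates failed; backtrack

# Toy usage: reach 10 from 1 via steps of +1 or *2, pruning overshoots.
path = backtracking_search(
    1,
    propose=lambda s: [s + 1, s * 2],
    is_goal=lambda s: s == 10,
    is_dead_end=lambda s: s > 10,
)
print(path)
```

The recursion gives the backtracking for free: when a branch returns `None`, control unwinds to the parent, which simply tries its next candidate.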