zlacker

[parent] [thread] 1 comments
1. barrke+(OP)[view] [source] 2025-07-07 09:41:30
I believe saying the LLM has a plan is a useful anthropomorphism for the fact that it does have hidden state that predicts future tokens, and this state conditions the tokens it produces earlier in the stream.
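A minimal sketch of the structural point, using a toy causal self-attention layer with untrained random weights (nothing here is from a real model): each position's output is computed from the hidden states of all earlier positions, so whatever an early hidden state encodes (including anything predictive of future tokens) conditions every token emitted after it.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8   # hidden size
T = 5   # sequence length
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

def causal_attention(h):
    # Single-head attention with a causal mask: position t only
    # attends to positions <= t.
    q, k, v = h @ W_q, h @ W_k, h @ W_v
    scores = q @ k.T / np.sqrt(d)
    mask = np.tril(np.ones((len(h), len(h)), dtype=bool))
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

h = rng.normal(size=(T, d))   # stand-in hidden states
out = causal_attention(h)

# Perturb only the hidden state at position 0: the outputs at every
# later position change, because they all read from it.
h2 = h.copy()
h2[0] += 1.0
out2 = causal_attention(h2)
print(not np.allclose(out[1:], out2[1:]))
```

This only shows the direction of information flow, not that a trained model actually stores plan-like state there; that part is an empirical claim about the weights.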
replies(1): >>godsha+8Y
2. godsha+8Y[view] [source] 2025-07-07 16:50:01
>>barrke+(OP)
Are the devs behind the models adding their own state somehow? Do they have code that figures out a plan, uses the LLM on pieces of it, and stitches them together? If they do, then there is a plan; it's just not output from a magical black box. Unless they are using a neural net to figure out what the plan should be first, I guess.

I know nothing about how things work at that level, so these might not even be reasonable questions.
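For what it's worth, the "code that figures out a plan, runs the LLM on pieces, and stitches them together" pattern does exist as an application-layer scaffold, separate from anything inside the model itself. A minimal sketch, where `call_llm` is a hypothetical stand-in for any completion API (stubbed here so the sketch runs):

```python
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a model API instead.
    if prompt.startswith("PLAN:"):
        return "outline\ndraft\nrevise"
    return f"[done: {prompt}]"

def plan_and_execute(task: str) -> str:
    # 1. One call produces an explicit plan as a list of steps...
    steps = call_llm(f"PLAN: {task}").splitlines()
    # 2. ...then each step gets its own call, and the results are
    #    stitched together by ordinary code.
    results = [call_llm(f"STEP ({task}): {step}") for step in steps]
    return "\n".join(results)

print(plan_and_execute("write an essay"))
```

Whether any particular vendor does this inside their product is a separate question; this is just one common way "the plan" can live in ordinary code around the model rather than in the model's weights.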
