zlacker

[parent] [thread] 15 comments
1. barrke+(OP)[view] [source] 2025-07-06 23:56:59
There is hidden state as plain as day merely in the fact that logits for token prediction exist. The selected token doesn't give you information about how probable other tokens were. That information, that state which is recalculated in autoregression, is hidden. It's not exposed. You can't see it in the text produced by the model.

There is plenty of state that is not visible when an LLM starts a sentence and only becomes partly visible once it completes the sentence. The LLM has a plan, if you will, for how the sentence might end, and you don't get to see an instance of that plan unless you run autoregression far enough to get those tokens.

Similarly, it has a plan for paragraphs, for whole responses, for interactive dialogues, plans that include likely responses by the user.
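
A minimal sketch of what "not exposed" means, assuming Hugging Face transformers with gpt2 as a stand-in model: the full next-token distribution exists at every step, but the emitted text only ever shows the single token that was selected.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("There is plenty of state", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]          # scores over the whole vocabulary
    probs = torch.softmax(logits, dim=-1)

    top = torch.topk(probs, 5)
    for p, i in zip(top.values, top.indices):
        print(f"{tok.decode([int(i)])!r}: {p:.3f}")        # the "hidden" alternatives
    print("emitted:", tok.decode([int(top.indices[0])]))   # only this one appears in the text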

replies(2): >>8note+t1 >>gpm+5a
2. 8note+t1[view] [source] 2025-07-07 00:14:02
>>barrke+(OP)
this sounds like a fun research area. do LLMs have plans about future tokens?

how do we get 100 tokens of completion, and not just one output layer at a time?

are there papers you've read that you can share that support the hypothesis? vs that the LLM doesn't have ideas about the future tokens when it's predicting the next one?

replies(2): >>Zee2+f3 >>Xenoph+p3
3. Zee2+f3[view] [source] [discussion] 2025-07-07 00:31:14
>>8note+t1
This research has been done, it was a core pillar of the recent Anthropic paper on token planning and interpretability.

https://www.anthropic.com/research/tracing-thoughts-language...

See the section “Does Claude plan its rhymes?”.

4. Xenoph+p3[view] [source] [discussion] 2025-07-07 00:32:38
>>8note+t1
Lol... Try building systems off them and you will very quickly learn concretely that they "plan".

It may not be as evident now as it was with earlier models. The model will fabricate the preconditions needed to output the final answer it "wanted".

I ran into this when using quasi least-to-most style structured output.

5. gpm+5a[view] [source] 2025-07-07 01:44:29
>>barrke+(OP)
The LLM does not "have" a plan.

Arguably there's reason to believe it comes up with a plan when it is computing token probabilities, but it does not store it between tokens. I.e. it doesn't possess or "have" it. It simply comes up with a plan, emits a token, and throws away all its intermediate thoughts (including any plan) to start again from scratch on the next token.
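
A sketch of the picture I'm describing (gpt2 as a stand-in, with cache reuse deliberately disabled for illustration): each step re-runs the model over the whole prefix, and nothing from the previous step survives except the token that was emitted.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The model emits", return_tensors="pt").input_ids
    for _ in range(10):
        with torch.no_grad():
            logits = model(ids, use_cache=False).logits[0, -1]  # full recompute each step
        next_id = logits.argmax().view(1, 1)
        ids = torch.cat([ids, next_id], dim=-1)                 # only the token carries over
    print(tok.decode(ids[0]))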

replies(4): >>NiloCK+0b >>lostms+vg >>yorwba+YL >>barrke+eP
6. NiloCK+0b[view] [source] [discussion] 2025-07-07 01:53:20
>>gpm+5a
I don't think that the comment above you made any suggestion that the plan is persisted between token generations. I'm pretty sure you described exactly what they intended.
replies(2): >>gpm+Ib >>gugago+1U
7. gpm+Ib[view] [source] [discussion] 2025-07-07 02:00:24
>>NiloCK+0b
I agree. I'm suggesting that the language they are using is unintentionally misleading, not that they are factually wrong.
8. lostms+vg[view] [source] [discussion] 2025-07-07 02:55:52
>>gpm+5a
This is wrong; intermediate activations are preserved when going forward.
replies(1): >>ACCoun+OJ
9. ACCoun+OJ[view] [source] [discussion] 2025-07-07 08:49:25
>>lostms+vg
Within a single forward pass, but not from one emitted token to another.
replies(1): >>andy12+Er1
10. yorwba+YL[view] [source] [discussion] 2025-07-07 09:11:34
>>gpm+5a
It's true that the last layer's output for a given input token only affects the corresponding output token and is discarded afterwards. But the penultimate layer's output affects the computation of the last layer for all future tokens, so it is not discarded, but stored (in the KV cache). Similarly for the antepenultimate layer affecting the penultimate layer and so on.

So there's plenty of space in intermediate layers to store a plan between tokens without starting from scratch every time.
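
A rough illustration (gpt2 as a stand-in; exact cache objects vary across transformers versions): the per-layer key/value tensors are carried from one step to the next, and later tokens attend to them.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("Roses are red,", return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, use_cache=True)
        past = out.past_key_values                  # one (key, value) pair per layer
        print(len(past), past[0][0].shape)          # layers, (batch, heads, seq_len, head_dim)

        next_id = out.logits[0, -1].argmax().view(1, 1)
        out = model(next_id, past_key_values=past, use_cache=True)  # feed only the new token
    print(out.past_key_values[0][0].shape)          # seq_len grew by one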

11. barrke+eP[view] [source] [discussion] 2025-07-07 09:41:30
>>gpm+5a
I believe saying the LLM has a plan is a useful anthropomorphism for the fact that it does have hidden state that predicts future tokens, and this state conditions the tokens it produces earlier in the stream.
replies(1): >>godsha+mN1
12. gugago+1U[view] [source] [discussion] 2025-07-07 10:36:25
>>NiloCK+0b
The concept of "state" conveys two related ideas.

- the information sufficient to evolve the system. The state of a pendulum is its position and velocity (or momentum). If you take a single picture of a pendulum, you do not have a representation that lets you make predictions.

- information that is persisted through time. A stateful protocol is one where you need to know the history of the messages to understand what will happen next. (Or, equivalently, it's enough to keep track of a sufficient state.) A procedure with some hidden state isn't a pure function; you can make it a pure function by making the state explicit, as in the sketch below.
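
A toy contrast, with hypothetical names, to pin down that second sense:

    class Counter:
        def __init__(self):
            self._n = 0                   # hidden state persisted across calls
        def bump(self):
            self._n += 1
            return self._n

    def bump_pure(n: int) -> tuple[int, int]:
        """Pure version: the caller carries the state and passes it back in."""
        return n + 1, n + 1

    c = Counter()
    print(c.bump(), c.bump())             # 1 2 -- the call history matters

    state = 0
    out1, state = bump_pure(state)
    out2, state = bump_pure(state)
    print(out1, out2)                      # 1 2 -- same behaviour, state made explicit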

13. andy12+Er1[view] [source] [discussion] 2025-07-07 14:39:32
>>ACCoun+OJ
What? No. The intermediate hidden states are preserved from one token to another. A token that is 100k tokens into the future will be able to attend to the present token's hidden state through the attention mechanism. This is why the KV cache is so big.
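
Back-of-the-envelope, with assumed Llama-7B-ish numbers (32 layers, 32 KV heads, head_dim 128, fp16) rather than any specific deployment:

    layers, heads, head_dim = 32, 32, 128
    seq_len, bytes_per_value = 100_000, 2                            # fp16
    kv = 2 * layers * heads * head_dim * seq_len * bytes_per_value   # keys + values
    print(f"{kv / 2**30:.1f} GiB per sequence")                      # ~48.8 GiB at 100k tokens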
replies(1): >>ACCoun+5s3
14. godsha+mN1[view] [source] [discussion] 2025-07-07 16:50:01
>>barrke+eP
Are the devs behind the models adding their own state somehow? Do they have code that figures out a plan, uses the LLM on pieces of it, and stitches them together? If they do, then there is a plan, it's just not output from a magical black box. Unless they are using a neural net to figure out what the plan should be first, I guess.

I know nothing about how things work at that level, so these might not even be reasonable questions.

15. ACCoun+5s3[view] [source] [discussion] 2025-07-08 09:55:37
>>andy12+Er1
KV cache is just that: a cache.

The inference logic of an LLM remains the same. There is no difference in outcomes between recalculating everything and caching. The only difference is in the amount of memory and computation required to do it.
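
A quick way to check that claim, with gpt2 as a stand-in: the next-token logits come out the same (up to float noise) whether you recompute the whole prefix or reuse the cache.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    ids = tok("State is just a cache of", return_tensors="pt").input_ids
    with torch.no_grad():
        full = model(ids, use_cache=False).logits[0, -1]             # recompute everything

        prefix = model(ids[:, :-1], use_cache=True)                  # cache the prefix
        cached = model(ids[:, -1:], past_key_values=prefix.past_key_values).logits[0, -1]

    print(torch.allclose(full, cached, atol=1e-4))                   # True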

replies(1): >>andy12+B84
16. andy12+B84[view] [source] [discussion] 2025-07-08 15:57:35
>>ACCoun+5s3
The same can be said about any recurrent network. To predict the token n+1 you could recalculate the hidden state up to token n, or reuse the hidden state of token n from the previous forward pass. The only difference is the amount of memory and computation.

The thing is that, fundamentally, an auto-regressive transformer is a model whose state grows linearly with each token, without compression, which is what gives it (theoretically) perfect recall.
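
Here's the RNN side of that analogy as a sketch (a GRU standing in for any recurrent model): the carried state is a fixed-size vector that never grows, whereas a transformer's KV cache adds a fresh entry per token.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    gru = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
    x = torch.randn(1, 5, 8)                          # 5 "tokens"

    _, h_full = gru(x)                                # recompute over the whole prefix
    _, h_prefix = gru(x[:, :4])                       # ...or reuse the state after token 4
    _, h_incremental = gru(x[:, 4:], h_prefix)        #    and feed only token 5

    print(torch.allclose(h_full, h_incremental, atol=1e-6))  # True
    print(h_full.shape)                               # (1, 1, 16) -- constant size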
