zlacker

[parent] [thread] 5 comments
1. Comput+(OP)[view] [source] 2026-02-03 09:03:15
> Phrases like “think”, “think hard”, “ultrathink”, and “think more” are interpreted as regular prompt instructions and don’t allocate thinking tokens.
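
In other words, the thinking budget is an explicit request parameter, not something prompt wording controls. A minimal sketch with the Anthropic Python SDK (the model name is illustrative):

  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  # The budget is allocated here, in the request itself; writing
  # "ultrathink" in the prompt text below would add nothing extra.
  response = client.messages.create(
      model="claude-sonnet-4-5",  # illustrative model name
      max_tokens=16000,           # must exceed the thinking budget
      thinking={"type": "enabled", "budget_tokens": 8000},
      messages=[{"role": "user", "content": "Plan a database migration."}],
  )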
replies(1): >>prodig+c
2. prodig+c[view] [source] 2026-02-03 09:04:05
>>Comput+(OP)
They don't allocate thinking tokens, but they do change model behavior.
replies(1): >>Comput+s
3. Comput+s[view] [source] [discussion] 2026-02-03 09:06:21
>>prodig+c
I was getting this in my Claude Code app; it seems clear that they didn't want users doing that anymore and that it was deprecated. https://i.redd.it/jvemmk1wdndg1.jpeg
replies(1): >>prodig+E
4. prodig+E[view] [source] [discussion] 2026-02-03 09:08:05
>>Comput+s
Thx for the correction. It changed a couple of weeks ago. https://decodeclaude.com/ultrathink-deprecated/
replies(1): >>sunaoo+Sl
5. sunaoo+Sl[view] [source] [discussion] 2026-02-03 11:50:53
>>prodig+E
Nice blog; this post is interesting: https://decodeclaude.com/compaction-deep-dive/ I didn't know about Microcompaction!
replies(1): >>prodig+oz
6. prodig+oz[view] [source] [discussion] 2026-02-03 13:21:31
>>sunaoo+Sl
If you're a big context/compaction fan and want another fun fact: instead of doing regular compaction (prompting the agent to summarize the conversation in a particular way and starting a new conversation from that summary), Codex passes around a compressed, encrypted object that supposedly preserves the latent space of the previous conversation in the new one.
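
Regular compaction is simple enough to sketch. A toy version in Python, where chat(messages) is a stand-in for whatever model call you use and the threshold is made up:

  # Toy sketch of regular compaction: once the transcript gets too long,
  # ask the model for a structured summary and restart from that summary.
  # `chat` and `max_chars` are placeholders, not any real agent's API.
  SUMMARY_PROMPT = (
      "Summarize the conversation so far: goals, decisions, open "
      "questions, and any file paths or identifiers mentioned."
  )

  def compact(messages: list[dict], chat, max_chars: int = 100_000) -> list[dict]:
      if sum(len(m["content"]) for m in messages) < max_chars:
          return messages  # still fits in context, nothing to do
      summary = chat(messages + [{"role": "user", "content": SUMMARY_PROMPT}])
      # Seed a fresh conversation with the summary instead of the full history.
      return [{"role": "user", "content": f"Context from earlier:\n{summary}"}]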

https://openai.com/index/unrolling-the-codex-agent-loop/ https://platform.openai.com/docs/guides/conversation-state#c...
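
The second link boils down to something like this, if I'm reading it right; a sketch with the OpenAI Python SDK (model name is a placeholder):

  from openai import OpenAI

  client = OpenAI()

  # With store=False the server keeps no conversation state; instead you
  # ask for the encrypted reasoning items and carry them yourself.
  first = client.responses.create(
      model="o4-mini",  # placeholder; any reasoning-capable model
      input=[{"role": "user", "content": "Start refactoring utils.py."}],
      store=False,
      include=["reasoning.encrypted_content"],
  )

  # Pass the prior output items (including the opaque encrypted blobs)
  # back as input to continue the conversation statelessly.
  second = client.responses.create(
      model="o4-mini",
      input=first.output + [{"role": "user", "content": "Now add tests."}],
      store=False,
      include=["reasoning.encrypted_content"],
  )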

Context management is the new frontier for these labs.
