zlacker

[return to "Scaling long-running autonomous coding"]
1. jphowa+E4[view] [source] 2026-01-14 22:36:08
>>samwil+(OP)
Take the browser it built: obviously the context of the entire project is huge. They mention loads of parallel agents in the blog post, so I guess each agent is given a module to work on, plus some tests? And then a 'manager' agent plugs these in without reading the code? Otherwise I can't see how, even with ChatGPT 5.2/Gemini 3, you could do this. In retrospect it seems an obvious approach, akin to how humans work in teams, but it's still interesting.
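
Very roughly, the kind of manager/worker split I'm imagining (a sketch only; run_agent and the module specs are made up for illustration, not any real agent API):

    from concurrent.futures import ThreadPoolExecutor

    # Each module gets its own spec; workers never see the rest of the project.
    MODULES = {
        "html_parser": "parse(text) -> DomNode, plus unit tests",
        "css_engine":  "style(dom, sheet) -> StyledTree, plus unit tests",
        "net_stack":   "fetch(url) -> bytes, plus unit tests",
    }

    def run_agent(task: str, context: str = "") -> dict:
        """Made-up stand-in for spawning one coding agent and collecting its report."""
        return {"interface": task.split(":")[0], "tests_passed": True}

    # Worker agents build their modules in parallel.
    with ThreadPoolExecutor(max_workers=len(MODULES)) as pool:
        reports = list(pool.map(
            lambda item: run_agent(f"Implement {item[0]}: {item[1]}"),
            MODULES.items(),
        ))

    # The manager integrates from interfaces and test results alone,
    # without ever reading the workers' code.
    if all(r["tests_passed"] for r in reports):
        run_agent("Wire modules together via their public interfaces",
                  context="\n".join(r["interface"] for r in reports))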
2. simonw+x5[view] [source] 2026-01-14 22:39:34
>>jphowa+E4
GPT-5.2-Codex has a 400,000-token window. Claude 4.5 Opus is half that, at 200,000 tokens.

It turns out to matter a whole lot less than you'd expect. Coding agents are really good at using grep and writing out plans to files, which means they can operate successfully against way more code than fits in their context at any one time.
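
A minimal sketch of that grep-and-plan pattern (the names, the 50-line cap, and the commented-out model call are illustrative, not any specific agent's internals; assumes a Unix grep on PATH):

    import pathlib, subprocess

    PLAN = pathlib.Path("PLAN.md")  # durable scratchpad that lives outside the window

    def grep(pattern: str, root: str = ".") -> list[str]:
        """Return matching file:line:text hits instead of whole files."""
        out = subprocess.run(["grep", "-rn", pattern, root],
                             capture_output=True, text=True)
        return out.stdout.splitlines()[:50]  # cap what ever reaches the model

    def note(step: str) -> None:
        """Persist progress so a fresh context can pick up where this one left off."""
        with PLAN.open("a") as f:
            f.write(f"- {step}\n")

    # One turn of the loop: search, hand the model only the hits, record the plan.
    hits = grep("TODO")
    note(f"found {len(hits)} TODO sites; next: fix the parser ones first")
    # model(prompt, context=hits)  # hypothetical call; the point is that `hits`
    # is a few dozen lines, never the whole repo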

3. jagged+vq[view] [source] 2026-01-15 00:32:52
>>simonw+x5
The other issue with "a huge token window" is that once you fill it, relevance for any specific part of the window seems to be diminished, which makes it hard to override default model behavior.

Interestingly, it seems to me like Codex has recently been compacting early and often so that it stays in the smarter-feeling reasoning zone of the first third of the window, which is a neat solution, albeit with the caveat that post-compaction behavior differences crop up more often.
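
A toy version of that compact-early idea (the one-third threshold, the 4-chars-per-token estimate, and summarize() are my guesses at the behavior, not Codex internals):

    WINDOW = 400_000           # tokens, using the GPT-5.2-Codex figure upthread
    THRESHOLD = WINDOW // 3    # try to stay inside the smarter-feeling first third

    def tokens(msgs: list[str]) -> int:
        return sum(len(m) // 4 for m in msgs)  # crude 4-chars-per-token estimate

    def summarize(msgs: list[str]) -> str:
        """Stand-in for an LLM call that compresses older turns into one message."""
        return f"[summary of {len(msgs)} earlier turns]"

    def append(history: list[str], msg: str) -> list[str]:
        history.append(msg)
        if tokens(history) > THRESHOLD:
            keep = history[-4:]  # keep the freshest turns verbatim
            history = [summarize(history[:-4])] + keep
        return history

The caveat above maps to summarize() being lossy: anything the summary drops is behavior the agent can no longer recover after compaction.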
