1. simonw+(OP)[view] [source] 2026-01-27 20:25:16
GPT-5.2 has a 400,000-token context window. Claude Opus 4.5 is just 200,000 tokens. To my surprise, this doesn't seem to limit their ability to work with much larger codebases: the coding agent harnesses have gotten really good at grepping for just the code they need in context, similar to how a human engineer can make changes to a million lines of code without having to hold it all in their head at once.
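A minimal sketch of what that kind of search tool might look like, in Python. The grep_tool name, the result cap, and the subprocess wiring are my assumptions for illustration, not any particular harness's implementation:

    import subprocess

    def grep_tool(pattern: str, repo_dir: str, max_results: int = 50) -> str:
        """Run grep over the repo and return file:line matches.

        The agent calls this instead of reading whole files, so only
        the matching snippets ever enter the model's context window.
        """
        # -r: recurse into directories, -n: prefix line numbers,
        # -I: skip binary files
        result = subprocess.run(
            ["grep", "-rnI", pattern, repo_dir],
            capture_output=True,
            text=True,
        )
        lines = result.stdout.splitlines()[:max_results]
        return "\n".join(lines) if lines else "(no matches)"

Capping the result count is where the aggressive filtering happens: the model gets at most a screenful of matches back and issues follow-up searches or targeted file reads from there.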
replies(1): >>storys+hq
2. storys+hq[view] [source] 2026-01-27 22:03:48
>>simonw+(OP)
That explains the coherence, but I'm curious about the mechanics of the retrieval. Is it AST-based dependency mapping, or are you just using vector search? I assume you still have to filter pretty aggressively to keep the token costs viable for a commercial tool.
replies(1): >>simonw+FA
3. simonw+FA[view] [source] [discussion] 2026-01-27 22:50:18
>>storys+hq
No vector search, just grep.
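To make that concrete: a hedged sketch of how a bare grep tool might be declared to the model, using the common function-calling shape (the field names follow Anthropic's tool-use convention; the exact declaration is an assumption, not a confirmed detail of any product):

    # Hypothetical tool declaration. The harness executes the search
    # (e.g. with a helper like grep_tool above) and feeds the output
    # back to the model as a tool result. No embeddings or vector
    # index anywhere in the loop.
    GREP_TOOL = {
        "name": "grep",
        "description": "Search the repository for a regex; returns "
                       "matching lines as file:line:text, capped to "
                       "keep token costs down.",
        "input_schema": {
            "type": "object",
            "properties": {
                "pattern": {
                    "type": "string",
                    "description": "Regex to search for",
                },
            },
            "required": ["pattern"],
        },
    }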