
[return to "Scaling long-running autonomous coding"]
1. light_+0M 2026-01-20 08:41:14
>>srames+(OP)
Browsers are pretty much the best-case scenario for autonomous coding agents: a totally unique situation that mostly doesn't occur in the real world.

At a minimum:

1. You've got an incredibly clearly defined problem at the high level.

2. Extremely thorough tests for every part, which build up in complexity.

3. Libraries, APIs, and tooling that are all compatible with one another because all of these technologies are built to work together already.

4. It's inherently a soft problem: you can make partial progress on it.

5. There's a reference implementation you can compare against.

6. You've got extremely detailed documentation and design docs.

7. It's a problem that inherently decomposes into separate components in a clear way.

8. The models are already trained not just on examples for every module, but on example browsers as a whole.

9. The done condition here isn't a working browser; it's displaying something.

This isn't a realistic setup for anything that 99.99% of people work on. It's not even realistic for what actual browser developers do, since they must implement new or fuzzy things that aren't in the specs.

Note point 9. That's critical. Getting to the point where you can show simple pages is one thing; getting to a working production browser engine isn't just 80% more work, it's probably considerably more than 100x more work.

2. maleld+py4 2026-01-21 11:06:41
>>light_+0M
It's a good benchmark for how well agents can write very complex code. Browsers are likely among the most complex programs we have today (arguably more complex than many OSs). Even if the problem is well-defined, many sceptics would still say the complexity is beyond what agents can handle.