[return to "Scaling long-running autonomous coding"]
1. simonw+35 2026-01-14 22:37:31
>>samwil+(OP)
"To test this system, we pointed it at an ambitious goal: building a web browser from scratch."

I shared my LLM predictions last week, and one of them was that by 2029 "Someone will build a new browser using mainly AI-assisted coding and it won’t even be a surprise" https://simonwillison.net/2026/Jan/8/llm-predictions-for-202... and https://www.youtube.com/watch?v=lVDhQMiAbR8&t=3913s

This project from Cursor is the second attempt I've seen at this now! The other is this one: https://www.reddit.com/r/Anthropic/comments/1q4xfm0/over_chr...

2. carles+4O2 2026-01-15 16:48:38
>>simonw+35
It's impressive, but how sure are we that the code for current open source browsers isn't part of the model's training data?
3. simonw+Y43 2026-01-15 17:51:00
>>carles+4O2
It turns out the Cursor one is stitching together a ton of open source components already.

That said, I don't really find the critique that models have browser source code in their training data particularly interesting.

If they spat out a full, working implementation in response to a single prompt, then sure, I'd be suspicious they were just regurgitating their training data.

But if you watch the transcripts for these kinds of projects, you'll see them make thousands of independent changes, reacting to test failures and iterating towards an implementation that matches the overall goals of the project.
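
To illustrate the difference, here's a rough sketch of that loop in Python. Every name in it (ask_model, apply_patch) is a made-up placeholder rather than Cursor's actual tooling, and a real harness is far more elaborate, but the shape is the same: run the tests, feed the failure back to the model, apply its patch, repeat.

    import subprocess

    def ask_model(goal: str, test_output: str) -> str:
        # Hypothetical LLM call; a real agent would send the failing
        # test output plus repo context to a model API here.
        raise NotImplementedError("placeholder for illustration")

    def apply_patch(patch: str) -> None:
        # Hypothetical: write the model's proposed edit to disk.
        raise NotImplementedError("placeholder for illustration")

    def run_tests() -> tuple[bool, str]:
        # Run the project's test suite and capture its output.
        result = subprocess.run(["pytest", "-x", "-q"],
                                capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def agent_loop(goal: str, max_iterations: int = 10_000) -> bool:
        # Iterate until the tests pass or we give up. Each failure
        # becomes fresh context for the next change -- this is where
        # the thousands of independent edits come from.
        for _ in range(max_iterations):
            passed, output = run_tests()
            if passed:
                return True
            apply_patch(ask_model(goal, output))
        return False

An implementation produced by tens of thousands of trips through a loop like that is a very different artifact from a verbatim dump of Chromium.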

The fact that Firefox and Chrome and WebKit are likely buried in the training data somewhere might help them a bit, but it still looks to me more like an independent implementation that's influenced by those and many other sources.
