It implies that the agents could only do this because they could regurgitate previous browsers from their training data.
Anyone who's watched a coding agent work will see why that's unlikely to be what's happening. If that's all they were doing, why did it take three days and thousands of changes and tool calls to get to a working result?
I also know that AI labs treat regurgitation of training data as a bug and invest a lot of effort into making it unlikely to happen.
I recommend avoiding the temptation to look at things like this and say "yeah, that's not impressive, it saw that in the training data already". It's not a useful mental model to hold.
But yes, with enough prodding they will eventually build you something that's been built before. I don't see why that's particularly impressive. It's in the training data.
But if even the AI agent seems to struggle, you may be doing something unprecedented.
That said, I think some credit is due. This is still a nice weekend project as far as LLMs go, and I respect that you had a specific goal in mind (showing a better approach than Cursor's nonsense, one that gets better results in less time and at lower cost) and achieved it quickly and decisively. It hasn't really changed my priors on LLMs, though. If anything it confirms them, particularly that the "agent swarm" stuff is a complete non-starter, and it demonstrates how ridiculous that avenue of hype is.
They're equally useful for novel tasks because they don't work by copying large-scale patterns from their training data - recent models can break down virtually any programming task into a set of functions and components and cobble together working code.
If you can clearly define the task, they can work towards a solution with you.
The main benefit of a concept already being in the training data is that it lets you slack off on clearly defining the task. At that point it's not the model "cheating", it's you.
I'd find it very interesting to see some compelling examples along those lines.
Yeah, that's obviously a lot harder, but doable. I've built that kind of thing for clients, since they pay me, but I haven't launched or made public anything of my own where I could share the code. I guess that might be a useful next project now.
> This is just, yet another, proof-of-concept.
It's not even a PoC; it's a demonstration of how far off the mark Cursor are with their "experiment", where they were amazed by what "hundreds of agents" built over week(s).
> there's no telling how closely the code mirrors existing open-source implementations if you aren't versed on the subject
This is absolutely true, I tried to get some better answers on how one could even figure that out here: >>46784990
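One crude approach, to sketch what "figuring that out" could even look like (this is my own assumption of a method, not something from that thread, and the paths are hypothetical): fingerprint the token n-grams of the generated file and check how many appear verbatim in a local checkout of a suspected upstream project.

```python
import re
from pathlib import Path

def fingerprints(text: str, n: int = 8) -> set[int]:
    """Hash every n-gram of code tokens; a crude copying fingerprint."""
    tokens = re.findall(r"\w+|[^\w\s]", text)
    return {hash(tuple(tokens[i:i + n])) for i in range(len(tokens) - n + 1)}

def overlap(generated: Path, upstream_dir: Path) -> float:
    """Fraction of the generated file's n-grams found verbatim upstream."""
    gen = fingerprints(generated.read_text(errors="ignore"))
    ups: set[int] = set()
    for f in upstream_dir.rglob("*.py"):  # adjust the glob to the language at hand
        ups |= fingerprints(f.read_text(errors="ignore"))
    return len(gen & ups) / len(gen) if gen else 0.0

# hypothetical paths, for illustration only:
# overlap(Path("agent_output/parser.py"), Path("some-oss-browser/src"))
```

A high ratio suggests near-verbatim copying; a low one doesn't prove novelty, since a model can paraphrase structure without reusing any literal token sequence.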
That transcript viewer is itself a pretty fun, novel piece of software; see https://github.com/simonw/claude-code-transcripts
Denobox https://github.com/simonw/denobox is another recent agent project which I consider novel: https://orphanhost.github.io/?simonw/denobox/transcripts/ses...
Agent engineering seems (from the outside!) to be converging on the quality of lived experience. Compared to Stone Age manual coding, it's less about technical arguments and more about intuition.
Vibes, in short.
You can’t explain sex to someone who has not had sex.
Any interaction with tools is partly about intuition. It’s a difference of degree.
You need to see the big picture and a vision of the future state in order to ensure that what is being built will be able to grow and breathe into it. This requires an engineer. An agent doesn't think much about the future; it thinks about right now.
This browser toy built by the agent has NO future. Once the agent has written the code, the story is over.