It implies that the agents could only do this because they were regurgitating previous browsers from their training data.
Anyone who's watched a coding agent work will see why that's unlikely to be what's happening. If that's all they were doing, why did it take three days and thousands of changes and tool calls to get to a working result?
I also know that AI labs treat regurgitation of training data as a bug and invest a lot of effort into making it unlikely to happen.
I recommend avoiding the temptation to look at things like this and say "yeah, that's not impressive, it saw that in the training data already". It's not a useful mental model to hold.
But yes, with enough prodding they will eventually build you something that's been built before. I don't see why that's particularly impressive. It's in the training data.
But if even the AI agent seems to struggle, you may be doing something unprecedented.
They're equally useful for novel tasks because they don't work by copying large-scale patterns from their training data - the recent models can break down virtually any programming task into a set of functions and components and cobble together working code.
If you can clearly define the task, they can work towards a solution with you.
The main benefit of building on concepts that are already in the training data is that it lets you slack off on clearly defining the task. At that point it's not the model that's "cheating", it's you.
You need to see the big picture and a vision of the future state in order to ensure that what is being built will be able to grow into it. That requires an engineer. An agent doesn't think much about the future; it thinks about right now.
This browser toy built by the agent has NO future. Once the agent has written the code, the story is over.