zlacker

14 comments
1. deadba+(OP) 2026-01-27 23:45:00
This is not that impressive; there are numerous examples of browsers in the training data to reference.
replies(3): >>embedd+05 >>simonw+D5 >>usef-+si
2. embedd+05 2026-01-28 00:15:31
>>deadba+(OP)
Damn, ok, what should I attempt instead that could impress even you?
replies(1): >>anonym+vD
3. simonw+D5 2026-01-28 00:18:55
>>deadba+(OP)
I don't buy this.

It implies that the agents could only do this because they could regurgitate previous browsers from their training data.

Anyone who's watched a coding agent work will see why that's unlikely to be what's happening. If that's all they were doing, why did it take three days and thousands of changes and tool calls to get to a working result?

I also know that AI labs treat regurgitation of training data as a bug and invest a lot of effort into making it unlikely to happen.

I recommend avoiding the temptation to look at things like this and say "yeah, that's not impressive, it saw that in the training data already". It's not a useful mental model to hold.

replies(1): >>deadba+rv
4. usef-+si 2026-01-28 01:49:45
>>deadba+(OP)
What would be impressive to you?
replies(1): >>deadba+lv
5. deadba+lv 2026-01-28 03:43:09
>>usef-+si
A browser so unique and strange it is literally unlike anything we've ever seen to date, using entirely new UI patterns and paradigms.
6. deadba+rv 2026-01-28 03:43:51
>>simonw+D5
It took three days because... agents suck.

But yes, with enough prodding they will eventually build you something that's been built before. Don't see why that's particularly impressive. It's in the training data.

replies(1): >>simonw+DB
7. simonw+DB 2026-01-28 04:49:04
>>deadba+rv
Not a useful mental model.
replies(1): >>deadba+XC
8. deadba+XC 2026-01-28 05:03:23
>>simonw+DB
It is useful. If you can whip up something complex fairly quickly with an AI agent, it’s likely because it’s already been done before.

But if even the AI agent seems to struggle, you may be doing something unprecedented.

replies(1): >>simonw+kF
9. anonym+vD 2026-01-28 05:08:11
>>embedd+05
Actually good software that is suitable for mass adoption would go a long way to convincing a lot of people. This is just, yet another, proof-of-concept. Something which LLMs obviously can do, and which never seems to translate to real-world software people use. Parsing and rendering text is really not the hard part of building a browser, and there's no telling how closely the code mirrors existing open-source implementations if you aren't versed in the subject.

That said, I think some credit is due. This is still a nice weekend project as far as LLMs go, and I respect that you had a specific goal in mind (showing a better approach than Cursor's nonsense, one that gets better results in less time at lower cost) and achieved it quickly and decisively. It has not really changed my priors on LLMs in any way, though. If anything it just confirms them, particularly that the "agent swarm" stuff is a complete non-starter, and demonstrates how ridiculous that avenue of hype is.

replies(1): >>embedd+551
10. simonw+kF 2026-01-28 05:25:06
>>deadba+XC
Except if you spend quality time with coding agents you realize that's not actually true.

They're equally useful for novel tasks because they don't work by copying large-scale patterns from their training data - the recent models can break down virtually any programming task into a bunch of functions and components and cobble together working code.

If you can clearly define the task, they can work towards a solution with you.

The main benefit of concepts already in the training data is that it lets you slack off on clearly defining the task. At that point it's not the model "cheating", it's you.

replies(3): >>aix1+RK >>keybor+T71 >>deadba+Nn2
11. aix1+RK 2026-01-28 06:23:44
>>simonw+kF
Simon, do you happen to have some concrete examples of a model doing a great job at a clearly novel, clearly non-trivial coding task?

I'd find it very interesting to see some compelling examples along those lines.

replies(1): >>simonw+P61
12. embedd+551 2026-01-28 09:24:26
>>anonym+vD
> Actually good software that is suitable for mass adoption would go a long way to convincing a lot of people.

Yeah, that's obviously a lot harder, but doable. I've built that for clients, since they pay me, but I haven't launched/made public something of my own where I could share the code. I guess that might be a useful next project now.

> This is just, yet another, proof-of-concept.

It's not even a PoC; it's a demonstration of how far off the mark Cursor are with their "experiment", where they were amazed by what "hundreds of agents" built over week(s).

> there's no telling how closely the code mirrors existing open-source implementations if you aren't versed in the subject

This is absolutely true. I tried to get some better answers on how one could even figure that out here: >>46784990

13. simonw+P61 2026-01-28 09:35:45
>>aix1+RK
I think datasette-transactions https://github.com/datasette/datasette-transactions is pretty novel. Here's the transcript where Claude Code built it: https://gisthost.github.io/?a41ce6304367e2ced59cd237c576b817...

That transcript viewer itself is a pretty fun novel piece of software, see https://github.com/simonw/claude-code-transcripts

Denobox https://github.com/simonw/denobox is another recent agent project which I consider novel: https://orphanhost.github.io/?simonw/denobox/transcripts/ses...

14. keybor+T71 2026-01-28 09:46:04
>>simonw+kF
> Except if you spend quality time with coding agents you realize that's not actually true.

Agent engineering seems to be (from the outside!) converging on quality lived experience. Compared to Stone Age manual coding, it’s less about technical arguments and more about intuition.

Vibes in short.

You can’t explain sex to someone who has not had sex.

Any interaction with tools is partly about intuition. It’s a difference of degree.

15. deadba+Nn2 2026-01-28 16:57:34
>>simonw+kF
Good long-lived software is not a bunch of functions and components cobbled together.

You need to see the big picture and a vision of the future state in order to ensure that what is being built will be able to grow and breathe into it. This requires an engineer. An agent doesn’t think much about the future; it thinks about right now.

This browser toy built by the agent has NO future. Once the agent has written the code, the story is over.
