zlacker

Scaling long-running autonomous coding
1. simonw+vf 2026-01-20 02:56:02
>>srames+(OP)
One of the big open questions for me right now concerns how library dependencies are used.

Most of the big ones are things like skia, harfbuzz, wgpu - all totally reasonable IMO.

The two that stand out for me as more notable are html5ever for parsing HTML and taffy for handling CSS grid and flexbox layout - the latter is vendored with an explanation of some minor changes here: https://github.com/wilsonzlin/fastrender/blob/19bf1036105d4e...
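
For anyone who hasn't used it, html5ever operates at about this level: you feed it bytes and it hands a tree to whatever "sink" you plug in. A minimal sketch using the markup5ever_rcdom companion crate as the sink - I haven't checked what FastRender actually uses for its DOM:

    // deps: html5ever, markup5ever_rcdom
    use html5ever::parse_document;
    use html5ever::tendril::TendrilSink;
    use markup5ever_rcdom::{Handle, NodeData, RcDom};

    fn main() {
        let html = r#"<html><body><p class="intro">Hello</p></body></html>"#;

        // html5ever drives the parse; the sink type (RcDom here)
        // decides what kind of tree you end up with.
        let dom = parse_document(RcDom::default(), Default::default())
            .from_utf8()
            .read_from(&mut html.as_bytes())
            .unwrap();

        print_elements(&dom.document, 0);
    }

    // Walk the tree and print element names, indented by depth.
    fn print_elements(node: &Handle, depth: usize) {
        if let NodeData::Element { ref name, .. } = node.data {
            println!("{}{}", "  ".repeat(depth), name.local);
        }
        for child in node.children.borrow().iter() {
            print_elements(child, depth + 1);
        }
    }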

Taffy is a solid library choice, but it's probably the strongest ammunition for anyone who wants to argue that this shouldn't count as a "from scratch" rendering engine.
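
To make that concrete, here's roughly the entire surface a layout consumer touches: build a tree of styled nodes, compute, read boxes back. A minimal sketch patterned on taffy's own docs (the ~0.4-era TaffyTree API - the vendored fork may well differ):

    // dep: taffy (0.4-era API)
    use taffy::prelude::*;

    fn main() {
        let mut tree: TaffyTree<()> = TaffyTree::new();

        // Two flex children: a fixed-width sidebar and a main
        // area that grows to fill the rest of the row.
        let sidebar = tree.new_leaf(Style {
            size: Size { width: length(200.0), height: auto() },
            ..Default::default()
        }).unwrap();
        let main = tree.new_leaf(Style {
            flex_grow: 1.0,
            ..Default::default()
        }).unwrap();

        let root = tree.new_with_children(Style {
            display: Display::Flex,
            size: Size { width: length(800.0), height: length(600.0) },
            ..Default::default()
        }, &[sidebar, main]).unwrap();

        // All of the flexbox/grid spec complexity lives behind this call.
        tree.compute_layout(root, Size::MAX_CONTENT).unwrap();

        let l = tree.layout(main).unwrap();
        println!("main: {}x{} at ({}, {})",
                 l.size.width, l.size.height, l.location.x, l.location.y);
    }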

I don't think it detracts much, if at all, from FastRender as an example of what an army of coding agents can help a single engineer achieve in a few weeks of work.

2. janoel+Zh 2026-01-20 03:18:26
>>simonw+vf
Any views on whether the nature of "maintainability" is shifting now? If a fleet of agents can demonstrably bootstrap a project like this, is that enough indication to you that orchestration could also carry the codebase forward? I've seen fully-LLM'd codebases hit a certain critical weight where agents struggled to maintain coherent feature development and keep patterns aligned, and spiralled into quick fixes.
3. brianj+Qm 2026-01-20 04:11:51
>>janoel+Zh
I think there's a somewhat valid perspective that the (N+1)th model can simply clean up the previous model's mess.

Essentially a bet that the rate of model improvement is going to be faster than the rate of decay from bad coding.

Now this hurts me personally to see, as someone who actually enjoys having quality code, but I don't see why that bet doesn't have a decent chance of holding.
