zlacker

[return to "Show HN: One Human + One Agent = One Browser From Scratch in 20K LOC"]
1. embedd+B4[view] [source] 2026-01-27 13:44:20
>>embedd+(OP)
I set some rules for myself: three days of total time, no third-party Rust crates, commonly available OS libraries allowed, and it has to support X11/Windows/macOS and render some websites.

After three days, I have it working at around 20K LOC, of which ~14K is the browser engine itself plus X11 support, and the remaining ~6K is just Windows+macOS support.

Source code + CI built binaries are available here if you wanna try it out: https://github.com/embedding-shapes/one-agent-one-browser

2. chatma+jI1[view] [source] 2026-01-27 20:30:11
>>embedd+B4
Did you use Claude Code? How many tokens did you burn? What'd it cost? What model did you use?
3. embedd+KK1[view] [source] 2026-01-27 20:40:14
>>chatma+jI1
Codex; no idea about tokens. I'll probably upload the session data tomorrow so you can see exactly what was done. I pay ~200 EUR/month for the ChatGPT Pro plan; prorating by days, that's ~19 EUR for three days. The model used for everything was gpt-5.2 with reasoning effort set to xhigh.
4. storys+Fd2[view] [source] 2026-01-27 22:34:27
>>embedd+KK1
That 19 EUR figure is basically subscription arbitrage. If you ran that volume through the API with xhigh reasoning the cost would be significantly higher. It doesn't seem scalable for non-interactive agents unless you can stay on the flat-rate consumer plan.
5. embedd+Rs2[view] [source] 2026-01-27 23:57:25
>>storys+Fd2
Yeah, no way I'd do this if I paid per token. The next experiment will probably be local-only with GPT-OSS-120b, which according to my own benchmarks is still the strongest local model I can run myself. It'll be even cheaper then (as long as we don't count the money it took to acquire the hardware).
6. mercut+UP2[view] [source] 2026-01-28 02:48:56
>>embedd+Rs2
What toolchain are you going to use with the local model? I agree it's a strong model, but it's so slow for me with large contexts that I've stopped using it for coding.