zlacker

[parent] [thread] 14 comments
1. chatma+(OP)[view] [source] 2026-01-27 20:30:11
Did you use Claude Code? How many tokens did you burn? What’d it cost? What model did you use?
replies(1): >>embedd+r2
2. embedd+r2[view] [source] 2026-01-27 20:40:14
>>chatma+(OP)
Codex; no idea about tokens. I'll upload the session data probably tomorrow so you could see exactly what was done. I pay ~200 EUR/month for the ChatGPT Pro plan; prorating by days, I guess it'll be ~19 EUR for the three days. The model used for everything was gpt-5.2 with reasoning effort set to xhigh.
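The back-of-the-envelope proration, assuming a ~31-day billing month (the exact figure shifts a bit with month length):

    monthly_fee_eur = 200      # ChatGPT Pro plan
    days_in_month = 31         # assumption; depends on the billing month
    days_used = 3

    prorated = monthly_fee_eur / days_in_month * days_used
    print(f"~{prorated:.2f} EUR for {days_used} days")   # ~19.35 EUR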
replies(5): >>soilty+3b >>forgot+Jb >>storys+mv >>onenep+C23 >>ASalaz+hA3
3. soilty+3b[view] [source] [discussion] 2026-01-27 21:10:32
>>embedd+r2
Thank you in advance for that! I barely use AI to generate code so I feel pretty lost looking at projects like this.
4. forgot+Jb[view] [source] [discussion] 2026-01-27 21:13:16
>>embedd+r2
>I'll upload the session data probably tomorrow so you could see exactly what was done.

That'll be dope. The tokens used (input, output, total) are actually saved in Codex's JSONL files.
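For anyone who wants to tally them, something like this should work. The session path and field names are guesses (they follow the usual OpenAI usage shape), so check them against your own logs, and watch out for double counting if the files repeat cumulative totals:

    # Sketch: sum token usage across Codex session logs.
    # Assumed layout: ~/.codex/sessions/**/*.jsonl with usage counters named
    # "input_tokens" / "output_tokens" / "total_tokens" somewhere in each line.
    import json
    from pathlib import Path

    totals = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}

    def walk(obj):
        # Recursively pick up any usage-style counters, wherever they're nested.
        if isinstance(obj, dict):
            for key in totals:
                if isinstance(obj.get(key), int):
                    totals[key] += obj[key]
            for value in obj.values():
                walk(value)
        elif isinstance(obj, list):
            for item in obj:
                walk(item)

    for path in Path.home().glob(".codex/sessions/**/*.jsonl"):
        for line in path.read_text().splitlines():
            if not line.strip():
                continue
            try:
                walk(json.loads(line))
            except json.JSONDecodeError:
                continue  # skip truncated or non-JSON lines

    print(totals)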

5. storys+mv[view] [source] [discussion] 2026-01-27 22:34:27
>>embedd+r2
That 19 EUR figure is basically subscription arbitrage. If you ran that volume through the API with xhigh reasoning, the cost would be significantly higher. It doesn't seem scalable for non-interactive agents unless you can stay on the flat-rate consumer plan.
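To put rough numbers on that (all placeholders for illustration; not actual gpt-5.2 pricing or the OP's real token volume):

    input_tokens = 50_000_000       # placeholder volume for a multi-day agent run
    output_tokens = 5_000_000       # placeholder
    eur_per_mtok_in = 2.00          # placeholder API price per million input tokens
    eur_per_mtok_out = 8.00         # placeholder API price per million output tokens

    api_cost = input_tokens / 1e6 * eur_per_mtok_in + output_tokens / 1e6 * eur_per_mtok_out
    flat_rate_share = 200 / 31 * 3  # prorated slice of the monthly plan

    print(f"API: ~{api_cost:.0f} EUR vs flat-rate share: ~{flat_rate_share:.0f} EUR")

The gap is large either way with these made-up numbers.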
replies(1): >>embedd+yK
6. embedd+yK[view] [source] [discussion] 2026-01-27 23:57:25
>>storys+mv
Yeah, no way I'd do this if I paid per token. The next experiment will probably be local-only with GPT-OSS-120B, which according to my own benchmarks still seems to be the strongest local model I can run myself. It'll be even cheaper then (as long as we don't count the money it took to acquire the hardware).
replies(1): >>mercut+B71
7. mercut+B71[view] [source] [discussion] 2026-01-28 02:48:56
>>embedd+yK
What toolchain are you going to use with the local model? I agree that it’s a strong model, but it’s so slow for me with large contexts that I’ve stopped using it for coding.
replies(1): >>embedd+AG1
8. embedd+AG1[view] [source] [discussion] 2026-01-28 08:36:11
>>mercut+B71
I have my own agent harness, and the inference backend is vLLM.
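Roughly the shape of it, for anyone curious (a sketch only, not the real harness): vLLM exposes an OpenAI-compatible API on http://localhost:8000/v1 by default, so the loop is just the standard client plus tool execution:

    # Minimal agent-loop skeleton against a local vLLM backend (sketch only).
    # Start the server first, e.g.:  vllm serve openai/gpt-oss-120b
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

    messages = [
        {"role": "system", "content": "You are a coding agent. Propose one shell command per turn."},
        {"role": "user", "content": "Run the test suite and summarize any failures."},
    ]

    for _ in range(5):  # cap agent turns
        resp = client.chat.completions.create(
            model="openai/gpt-oss-120b",
            messages=messages,
        )
        reply = resp.choices[0].message.content
        print(reply)
        # A real harness parses the reply, runs the tool call in a sandbox and
        # feeds the observation back; this sketch just appends a stub.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": "Observation: (tool output would go here)"})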
replies(2): >>storys+7W1 >>mercut+4e4
9. storys+7W1[view] [source] [discussion] 2026-01-28 10:32:51
>>embedd+AG1
Curious how you handle sharding and KV cache pressure for a 120b model. I guess you are doing tensor parallelism across consumer cards, or is it a unified memory setup?
replies(1): >>embedd+7Y1
10. embedd+7Y1[view] [source] [discussion] 2026-01-28 10:49:58
>>storys+7W1
I don't; it fits on my card with the full context. I think the native MXFP4 weights take ~70GB of VRAM (out of 96GB available on an RTX Pro 6000), so I still have room to spare to run GPT-OSS-20B alongside for smaller tasks too, and Wayland+Gnome :)
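A rough VRAM budget for that setup, with the 20B footprint and desktop overhead as guesses; if both models run under vLLM on the one card, you'd cap each server's share with --gpu-memory-utilization:

    # Approximate VRAM budget on the 96GB card (all figures are estimates).
    total_gb = 96
    gpt_oss_120b = 70   # ~70GB reported above for the MXFP4 weights + context
    gpt_oss_20b = 13    # rough guess for the 20B model's MXFP4 weights
    desktop = 2         # rough allowance for Wayland/GNOME

    headroom = total_gb - gpt_oss_120b - gpt_oss_20b - desktop
    print(f"~{headroom} GB left over for the 20B KV cache and everything else")

    # If both run under vLLM on the same GPU, cap each server's share, e.g.:
    #   vllm serve openai/gpt-oss-120b --gpu-memory-utilization 0.75 --port 8000
    #   vllm serve openai/gpt-oss-20b  --gpu-memory-utilization 0.18 --port 8001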
replies(1): >>storys+Z92
11. storys+Z92[view] [source] [discussion] 2026-01-28 12:24:54
>>embedd+7Y1
I thought the RTX 6000 Ada was 48GB? If you have 96GB available that implies a dual setup, so you must be relying on tensor parallelism to shard the model weights across the pair.
replies(1): >>embedd+cc2
12. embedd+cc2[view] [source] [discussion] 2026-01-28 12:40:08
>>storys+Z92
RTX Pro 6000 (the Blackwell-generation card, not the RTX 6000 Ada) - 96GB VRAM - single card
13. onenep+C23[view] [source] [discussion] 2026-01-28 16:44:31
>>embedd+r2
Thanks in advance, I can't wait to see your prompts and how you architected this...
14. ASalaz+hA3[view] [source] [discussion] 2026-01-28 18:52:21
>>embedd+r2
> I'll upload the session data probably tomorrow so you could see exactly what was done.

I've been very skeptical of the real usefulness of code assistants, in large part from my own experience. They work great for brand-new code bases but struggle with maintenance. Seeing your final result, I'm eager to see the process, especially the iteration.

15. mercut+4e4[view] [source] [discussion] 2026-01-28 22:16:04
>>embedd+AG1
Can you tell me more about your agent harness? If it’s open source, I’d love to take it for a spin.

I would happily use local models if I could get them to perform, but they’re super slow if I bump their context window high, and I haven’t seen good orchestrators that keep context limited enough.
