zlacker

[return to "Tell HN: I cut Claude API costs from $70/month to pennies"]
1. LTL_FT+mC 2026-01-26 07:03:37
>>ok_orc+(OP)
It sounds like you don't need immediate LLM responses and can batch-process your data nightly. Have you considered running a local LLM? You may not need to pay for API calls at all. Today's local models are quite good. I started off on CPU, and even that was fine for my pipelines.
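
For anyone curious, here is a minimal sketch of what that kind of nightly batch job could look like. It assumes a local server exposing an OpenAI-compatible API (e.g. llama.cpp's llama-server on localhost:8080) and the openai Python client; the model name, file names, and prompt are placeholders, not OP's actual pipeline:

    # Nightly batch job against a local OpenAI-compatible endpoint.
    # Assumes llama-server (llama.cpp) is already running on localhost:8080;
    # model name and file paths are placeholders.
    import json
    from openai import OpenAI

    # Local servers ignore the API key, but the client requires one.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

    with open("records.jsonl") as src, open("summaries.jsonl", "w") as out:
        for line in src:
            record = json.loads(line)
            resp = client.chat.completions.create(
                model="gpt-oss-20b",  # whatever model the server has loaded
                messages=[
                    {"role": "system", "content": "Summarize the record in one sentence."},
                    {"role": "user", "content": record["text"]},
                ],
            )
            out.write(json.dumps({"id": record["id"],
                                  "summary": resp.choices[0].message.content}) + "\n")

Since nothing is latency-sensitive, you can kick this off from cron and let it grind through the queue overnight.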
2. ok_orc+om2 2026-01-26 18:16:02
>>LTL_FT+mC
I hadn't thought about that, but now I really want to dig in. Anywhere you'd recommend starting?
3. LTL_FT+xS9 2026-01-28 17:13:26
>>ok_orc+om2
I started off running gpt-oss-120b on CPU. It uses about 60-65 GB of memory, but my workstation has 128 GB of RAM. If I had less RAM, I would start with the gpt-oss-20b model and go from there. Look for MoE models: only a fraction of their parameters are active per token, so they are more efficient to run.

My old Threadripper Pro was seeing about 15 tokens/sec, which was quite acceptable for the background tasks I was running.
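
If you want to sanity-check throughput on your own hardware, here is a rough tokens-per-second measurement against the same kind of local endpoint (again assuming llama-server and the openai Python client; nothing rigorous):

    # Rough tokens/sec check against a local OpenAI-compatible server.
    # Assumes llama-server on localhost:8080; model name is a placeholder.
    import time
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

    start = time.perf_counter()
    resp = client.chat.completions.create(
        model="gpt-oss-20b",
        messages=[{"role": "user", "content": "Explain mixture-of-experts models in a few paragraphs."}],
        max_tokens=256,
    )
    elapsed = time.perf_counter() - start

    # llama-server reports token usage; some local servers omit this field.
    generated = resp.usage.completion_tokens
    print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")

Note this lumps prompt processing in with generation, so it slightly understates the steady-state rate.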
