
[return to "Claude Code: connect to a local model when your quota runs out"]
1. paxys+c7c 2026-02-04 21:59:44
>>fugu2+(OP)
> Reduce your expectations about speed and performance!

Wildly understating this part.

Even the best local models (the ones you run on beefy 128GB+ RAM machines) get nowhere close to the sheer intelligence of Claude/Gemini/Codex. At worst, these models will move you backwards and just increase the amount of work Claude has to do when your limits reset.

2. zozbot+i8c 2026-02-04 22:05:11
>>paxys+c7c
The best open models such as Kimi 2.5 are about as smart today as the big proprietary models were one year ago. That's not "nothing" and is plenty good enough for everyday work.
3. reilly+A9c 2026-02-04 22:11:11
>>zozbot+i8c
Which takes a $20k Thunderbolt cluster of two 512GB RAM Mac Studio Ultras to run at full quality…
4. teaear+Y9c 2026-02-04 22:13:37
>>reilly+A9c
Which, while expensive, is dirt cheap compared to a comparable Nvidia or AMD system.
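
(A rough back-of-envelope sketch of the memory and cost math behind the two comments above. The parameter count, bits per weight, and hardware prices are illustrative assumptions, not quoted figures.)

```python
# Back-of-envelope: memory needed to host a ~1T-parameter MoE locally,
# and roughly what that memory costs on Apple vs. discrete-GPU hardware.
# All figures below are assumptions for illustration only.

params = 1.0e12            # assumed total parameters (Kimi-class MoE)
bits_per_weight = 8        # assumed ~8-bit weights for "full quality"

weights_gb = params * bits_per_weight / 8 / 1e9    # bytes of weights, in GB
print(f"weights alone: ~{weights_gb:.0f} GB (before KV cache and activations)")

mac_ram_gb, mac_price = 512, 9_500                 # 512GB Mac Studio Ultra, approx. price
macs = -(-weights_gb // mac_ram_gb)                # ceiling division
print(f"Mac Studios: {macs:.0f} x ${mac_price:,} = ~${macs * mac_price:,.0f}")

gpu_vram_gb, gpu_price = 80, 25_000                # 80GB datacenter GPU, approx. price
gpus = -(-weights_gb // gpu_vram_gb)
print(f"80GB GPUs:   {gpus:.0f} x ${gpu_price:,} = ~${gpus * gpu_price:,.0f}")
```

Under those assumptions, two 512GB machines land near the $20k figure, while matching that memory with 80GB datacenter GPUs runs an order of magnitude more, which is the comparison being made here.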
5. Schema+2fc 2026-02-04 22:39:29
>>teaear+Y9c
It's still very expensive compared to using the hosted models, which are currently massively subsidised. Have to wonder what the fair market price for these hosted models will be after the free money dries up.
6. cactus+Nic 2026-02-04 23:01:22
>>Schema+2fc
Inference is profitable. Maybe we hit a limit and we don't need as many expensive training runs in the future.
7. teaear+vtc 2026-02-05 00:08:37
>>cactus+Nic
For sure, Claude Code isn’t profitable.
8. bdangu+7zc 2026-02-05 00:50:04
>>teaear+vtc
Neither was Uber and … and …