zlacker

Claude Code: connect to a local model when your quota runs out
1. paxys+c7c 2026-02-04 21:59:44
>>fugu2+(OP)
> Reduce your expectations about speed and performance!

Wildly understating this part.

Even the best local models (ones you run on beefy 128GB+ RAM machines) get nowhere close to the sheer intelligence of Claude/Gemini/Codex. At worst these models will move you backwards and just increase the amount of work Claude has to do when your limits reset.

2. zozbot+i8c 2026-02-04 22:05:11
>>paxys+c7c
The best open models such as Kimi 2.5 are about as smart today as the big proprietary models were one year ago. That's not "nothing" and is plenty good enough for everyday work.
3. paxys+ecc 2026-02-04 22:24:43
>>zozbot+i8c
LOCAL models. No one is running Kimi 2.5 on their MacBook or RTX 4090.
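
To put numbers on it, some napkin math (the ~1T parameter count for Kimi 2.5 is my assumption, extrapolating from the roughly 1T-parameter Kimi K2; the quantization levels are illustrative):

    # Memory needed just to hold the weights, ignoring KV cache and
    # activations (which only make the picture worse).
    def weights_gb(params_billions: float, bits_per_weight: int) -> float:
        return params_billions * bits_per_weight / 8  # 1B params at 8 bits = 1 GB

    models = [("Kimi 2.5 (assumed ~1T params)", 1000),
              ("70B dense model", 70)]
    for name, params_b in models:
        for bits in (16, 4):
            print(f"{name} @ {bits}-bit: {weights_gb(params_b, bits):,.0f} GB")

An RTX 4090 has 24GB of VRAM and a maxed-out MacBook Pro tops out around 128GB of unified memory, while a ~1T-parameter model wants ~500GB at 4-bit for the weights alone.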
4. Dennis+6uc 2026-02-05 00:12:02
>>paxys+ecc
On MacBooks, no. But there are a few lunatics like this guy:

https://www.youtube.com/watch?v=bFgTxr5yst0

5. HarHar+RQd 2026-02-05 12:52:41
>>Dennis+6uc
Wow!

I've never heard of this guy before, but I see he's got 5M YouTube subscribers, which I guess is the clout you need to have Apple (I assume) loan you $50K worth of Mac Studios!

It'll be interesting to see how model sizes, capabilities, and local compute prices evolve.

A bit off topic, but I was in Best Buy the other day and was shocked to see 65" TVs selling for $300 ... I can remember the first large flat-screen TVs (plasma?) selling for 100x that ($30K) when they first came out.
