Wildly understating this part.
Even the best local models (ones you run on beefy 128GB+ RAM machines) get nowhere close to the sheer intelligence of Claude/Gemini/Codex. At worst these models will move you backwards and just increase the amount of work Claude has to do when your limits reset.
Whether it's a giant corporate model or something you run locally, there is no intelligence there. It's still just a lying engine. It will tell you the string of tokens most likely to come after your prompt based on training data that was stolen and used against the wishes of its original creators.
That might incentivize it to perform slightly better from the get-go.
I'll add on https://unsloth.ai/docs/models/qwen3-coder-next
The full model is supposedly comparable to Sonnet 4.5. But you can run the 4-bit quant on consumer hardware as long as your RAM + VRAM has room to hold 46GB. The 8-bit needs 85GB.
But as a counterpoint: there are whole communities of people in this space who get significant value from models they run locally. I am one of them.
It is probably enough to handle a lot of what people use the big-3 closed models for. Somewhat slower and somewhat dumber, granted, but still extraordinarily capable. It punches way above its weight class for an 80B model.
That said, Claude Code is designed to work with Anthropic's models. Agents have a buttload of custom work going on in the background to massage specific models to do things well.
I was using GLM on the ZAI coding plan (jerry-rigged Claude Code for $3/month), but found myself asking Sonnet to rewrite 90% of the code GLM was giving me. At some point I was like "what the hell am I doing" and just switched.
To clarify, the code I was getting before mostly worked, it was just a lot less pleasant to look at and work with. Might be a matter of taste, but I found it had a big impact on my morale and productivity.
Are there a lot of options for how far you quantize? How much VRAM does it take to get the 92-95% you are speaking of?
PC or Mac? A PC, yeah, no way, not without beefy GPUs with lots of VRAM. A Mac? Depends on the chip: an M3 Ultra with 128GB of unified RAM is going to get closer, at least. You can have a decent experience with a Max chip + 64GB of unified RAM (well, that's my setup at least).
So many: https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overvie...
> How much VRAM does it take to get the 92-95% you are speaking of?
For inference, it's heavily dependent on the size of the weights (plus context). Quantizing to q4/mxfp4 won't quite save you 92-95% of the VRAM — it's roughly an 8x (~87%) cut on the weights from f32, or 4x from f16 — but it gets reasonably close for smaller contexts, since the KV cache adds on top.
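A back-of-envelope sketch of those ratios (nominal bits per weight; real quant formats carry a little extra for scales and zero-points):

```python
# Rough weight-memory savings from quantization.
# Ignores KV cache and per-block scales/zero-points, which add a few percent.
def savings(from_bits: float, to_bits: float) -> float:
    """Fraction of weight memory saved going from one precision to another."""
    return 1 - to_bits / from_bits

print(f"f32 -> q4: {savings(32, 4):.0%} less weight memory")  # 88%
print(f"f16 -> q4: {savings(16, 4):.0%} less weight memory")  # 75%
print(f"f16 -> q8: {savings(16, 8):.0%} less weight memory")  # 50%
```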
Kimi K2.5 is a trillion parameter model. You can't run it locally on anything other than extremely well equipped hardware. Even heavily quantized you'd still need 512GB of unified memory, and the quantization would impact the performance.
Also the proprietary models a year ago were not that good for anything beyond basic tasks.
This is a very common sequence of events.
The frontier hosted models are so much better than everything else that it's not worth messing around with anything lesser if doing this professionally. The $20/month plans go a long way if context is managed carefully. For a professional developer or consultant, the $200/month plan is peanuts relative to compensation.
I've also been testing OpenClaw. It burned 8M tokens during my half hour of testing, which would have been like $50 with Opus on the API. (Which is why everyone was using it with the sub, until Anthropic apparently banned that.)
I was using GLM on Cerebras instead, so it was only $10 per half hour ;) Tried to get their Coding plan ("unlimited" for $50/mo) but it's sold out...
(My fallback is I got a whole year of GLM from ZAI for $20 for the year, it's just a bit too slow for interactive use.)
Unless you include it in "frontier", but that term has usually been used to refer to the "Big 3".
Now as the other replies say, you should very likely run a quantized version anyway.
No one's running Sonnet/Gemini/GPT-5 locally though.
I have Claude Pro ($20/mo) and sometimes run out. I just set ANTHROPIC_BASE_URL to a local LLM API endpoint that connects to a cheaper OpenAI model. I can continue with smaller tasks with no problem. People have been doing this for a long time.
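For anyone who hasn't tried it, a minimal sketch of the switch, assuming you already run a local proxy (e.g. LiteLLM or a llama.cpp wrapper) that speaks the Anthropic API — the URL, token, and prompt below are placeholders, not real defaults:

```python
# Sketch only: point Claude Code at a local Anthropic-compatible proxy.
import os
import subprocess

env = os.environ.copy()
env["ANTHROPIC_BASE_URL"] = "http://localhost:4000"  # local proxy speaking the Anthropic API
env["ANTHROPIC_AUTH_TOKEN"] = "local-dummy-key"      # many local proxies accept any token

# Run Claude Code in print mode against the local endpoint for a small task.
subprocess.run(["claude", "-p", "add a docstring to utils.py"], env=env)
```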
(At today's RAM prices, upgrading to that would pay for a _lot_ of tokens...)
Kimi K2.5 is good, but it's still behind the main models like Claude's offerings and GPT-5.2. Yes, I know what the benchmarks say, but the benchmarks for open weight models have been overpromising for a long time and Kimi K2.5 is no exception.
Kimi K2.5 is also not something you can easily run locally without investing $5-10K or more. There are hosted options you can pay for, but like the parent commenter observed: By the time you're pinching pennies on LLM costs, what are you even achieving? I could see how it could make sense for students or people who aren't doing this professionally, but anyone doing this professionally really should skip straight to the best models available.
Unless you're billing hourly and looking for excuses to generate more work I guess?
I need to do more testing before I can agree that it is performing at a Sonnet-equivalent level (it was never claimed to be Opus-class). But it is pretty cool to get beaten in a programming contest by my own video card. For those who get it, no explanation is necessary; for those who don't, no explanation is possible.
And unlike the hosted models, the ones you run locally will still work just as well several years from now. No ads, no spying, no additional censorship, no additional usage limits or restrictions. You'll get no such guarantee from Google, OpenAI and the other major players.
Number of params == “variables” in memory
VRAM footprint ~= number of params * size of a param
A 4B model at 8 bits works out to about 4GB of VRAM, give or take — the same number as the parameter count. At 4 bits it's ~2GB, and so on. Kimi is about 512GB at 4 bits.
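The rule of thumb as a quick estimator (weights only — KV cache, activations, and quantization block overhead come on top; the 4.5 bits/param figure for a "q4" quant is my assumption, since real 4-bit formats average a bit above 4):

```python
# Approximate weight footprint: params * bits per param, converted to bytes.
def weight_gb(params_billion: float, bits_per_param: float) -> float:
    """Weights-only footprint in GB for a given parameter count and precision."""
    return params_billion * bits_per_param / 8

print(weight_gb(4, 8))      # ~4 GB   -- 4B model at 8-bit
print(weight_gb(4, 4))      # ~2 GB   -- 4B model at 4-bit
print(weight_gb(80, 4.5))   # ~45 GB  -- roughly the 46GB Qwen3-coder-next quant mentioned above
print(weight_gb(1000, 4))   # ~500 GB -- Kimi-scale (1T params) at 4-bit
```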
For VS Code code completion I use Continue with a Qwen3-coder 7B model; for CLI work and the sidebar, Qwen coder 32B. 8-bit quants for both.
I need to take a look at Qwen3-coder-next; it's supposed to be much faster despite being a larger model.
I expect it'll come along but I'm not gonna spend the $$$$ necessary to try to DIY it just yet.
https://buildai.substack.com/i/181542049/the-mac-mini-moment
If you’re worried about others cloning your business processes if you share them with a frontier provider, then the cost of a Mac Studio to run Kimi is probably a justifiable tax write-off.
Instead have Claude know when to offload work to local models and what model is best suited for the job. It will shape the prompt for the model. Then have Claude review the results. Massive reduction in costs.
btw, at least on MacBooks you can run good models with just an M1 and 32GB of memory.
Although I'm starting to like LM Studio more, as it has features that Ollama is missing.
You can then get Claude to create the MCP server to talk to either. Then a CLAUDE.md that tells it to read the models you have downloaded, determine their use and when to offload. Claude will make all that for you as well.
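A hypothetical sketch of the kind of MCP server Claude might generate for this: one tool that forwards a prompt to a local Ollama model and returns its text. It assumes the official `mcp` Python package plus `requests` and Ollama on its default port; the model name is just an example.

```python
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-offload")

@mcp.tool()
def ask_local_model(prompt: str, model: str = "qwen3-coder:30b") -> str:
    """Run a prompt against a local Ollama model and return the response text."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    mcp.run()  # Claude Code connects to this via its MCP config
```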
I'm not looking for a vibe coding "one-shot" full project model. I'm not looking to replace GPT 5.2 or Opus 4.5. But having a local instance running some Ralph loop overnight on a specific aspect for the price of electricity is alluring.
Before that I used Qwen3-30B which is good enough for some quick javascript or Python, like 'add a new endpoint /api/foobar which does foobaz'. Also very decent for a quick summary of code.
It does 530 tok/s prompt processing (PP) and 50 tok/s token generation (TG). If you have it spit out lots of code that's just a copy of the input — e.g. 'add a new endpoint /api/foobar which does foobaz and return the whole file' — it does around 200 tok/s.
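For a sense of what those rates mean in wall-clock time, a quick back-of-envelope calculation (the 4k-token input / 1k-token output split is a made-up example):

```python
# Rough wall-clock estimate from the rates quoted above.
PP_TOKS_PER_S = 530  # prompt processing
TG_TOKS_PER_S = 50   # token generation

def eta_seconds(prompt_tokens: int, output_tokens: int) -> float:
    """Prefill time plus generation time."""
    return prompt_tokens / PP_TOKS_PER_S + output_tokens / TG_TOKS_PER_S

print(f"{eta_seconds(4000, 1000):.1f}s")  # ~27.5s to read a 4k-token file and emit a 1k-token patch
```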
> Thermal throttling: Thunderbolt 5 cables get hot under sustained 15GB/s load. After 10 minutes, bandwidth drops to 12GB/s. After 20 minutes, 10GB/s. Your 5.36 tokens/sec becomes 4.1 tokens/sec. Active cooling on cables helps but you’re fighting physics.
Thermal throttling of network cables is a new thing to me…
I still have to occasionally switch to Opus in Opencode planning mode, but not having to rely on Sonnet anymore makes my Claude subscription last much longer.
I've never heard of this guy before, but I see he's got 5M YouTube subscribers, which I guess is the clout you need to have Apple loan (I assume) you $50K worth of Mac Studios!
It'll be interesting to see how model sizes, capability, and local compute prices evolve.
A bit off topic, but I was in Best Buy the other day and was shocked to see 65" TVs selling for $300... I can remember the first large flat-screen TVs (plasma?) selling for 100x that ($30K) when they first came out.
The big powerful models think about tasks, then offload some stuff to a drastically cheaper cloud model or the model running on your hardware.
Great demo video though. Nice to see some benchmarks of Exo with this cluster across various models.