zlacker

[parent] [thread] 81 comments
1. paxys+(OP)[view] [source] 2026-02-04 21:59:44
> Reduce your expectations about speed and performance!

Wildly understating this part.

Even the best local models (ones you run on beefy 128GB+ RAM machines) get nowhere close to the sheer intelligence of Claude/Gemini/Codex. At worst these models will move you backwards and just increase the amount of work Claude has to do when your limits reset.

replies(17): >>zozbot+61 >>nik282+N7 >>bicx+7a >>dheera+ra >>bityar+ii >>DANmod+Rm >>richst+Fo >>0xbadc+xs >>andai+zu >>mycall+Ov >>seanmc+ow >>anon37+ND >>mlrtim+uI >>majorm+DY >>acchow+411 >>EagnaI+Na1 >>ameliu+rz1
2. zozbot+61[view] [source] 2026-02-04 22:05:11
>>paxys+(OP)
The best open models such as Kimi 2.5 are about as smart today as the big proprietary models were one year ago. That's not "nothing" and is plenty good enough for everyday work.
replies(6): >>reilly+o2 >>teaear+E2 >>paxys+25 >>corysa+Lh >>0xbadc+8r >>Aurorn+eB
◧◩
3. reilly+o2[view] [source] [discussion] 2026-02-04 22:11:11
>>zozbot+61
Which takes a $20k Thunderbolt cluster of two 512GB Mac Studio Ultras to run at full quality…
replies(5): >>teaear+M2 >>bigyab+ko >>0xbadc+mr >>deaux+7H >>polyno+A11
◧◩
4. teaear+E2[view] [source] [discussion] 2026-02-04 22:12:59
>>zozbot+61
Having used K2.5 I’d judge it to be a little better than that. Maybe as good as proprietary models from last June?
◧◩◪
5. teaear+M2[view] [source] [discussion] 2026-02-04 22:13:37
>>reilly+o2
Which, while expensive, is dirt cheap compared to a comparable Nvidia or AMD system.
replies(2): >>blharr+d7 >>Schema+Q7
◧◩
6. paxys+25[view] [source] [discussion] 2026-02-04 22:24:43
>>zozbot+61
LOCAL models. No one is running Kimi 2.5 on their MacBook or RTX 4090.
replies(2): >>Dennis+Um >>deaux+kH
◧◩◪◨
7. blharr+d7[view] [source] [discussion] 2026-02-04 22:35:19
>>teaear+M2
What speed are you getting at that level of hardware though?
8. nik282+N7[view] [source] 2026-02-04 22:38:52
>>paxys+(OP)
> intelligence

Whether it's a giant corporate model or something you run locally, there is no intelligence there. It's still just a lying engine. It will tell you the string of tokens most likely to come after your prompt based on training data that was stolen and used against the wishes of its original creators.

◧◩◪◨
9. Schema+Q7[view] [source] [discussion] 2026-02-04 22:39:29
>>teaear+M2
It's still very expensive compared to using the hosted models which are currently massively subsidised. Have to wonder what the fair market price for these hosted models will be after the free money dries up.
replies(2): >>cactus+Bb >>whatsu+UJ
10. bicx+7a[view] [source] 2026-02-04 22:53:05
>>paxys+(OP)
Exactly. The comparison benchmark in the local LLM community is often GPT _3.5_, and most home machines can’t achieve that level.
11. dheera+ra[view] [source] 2026-02-04 22:54:46
>>paxys+(OP)
Maybe add to the Claude system prompt that it should work efficiently or else its unfinished work will be handed off to a stupider junior LLM when its limits run out, and it will be forced to deal with the fallout the next day.

That might incentivize it to perform slightly better from the get-go.

replies(1): >>kridsd+Oh
◧◩◪◨⬒
12. cactus+Bb[view] [source] [discussion] 2026-02-04 23:01:22
>>Schema+Q7
Inference is profitable. Maybe we've hit a limit and won't need as many expensive training runs in the future.
replies(2): >>teaear+jm >>paxys+Rn
◧◩
13. corysa+Lh[view] [source] [discussion] 2026-02-04 23:38:21
>>zozbot+61
The article mentions https://unsloth.ai/docs/basics/claude-codex

I'll add on https://unsloth.ai/docs/models/qwen3-coder-next

The full model is supposedly comparable to Sonnet 4.5. But you can run the 4-bit quant on consumer hardware as long as your RAM + VRAM has room to hold 46GB; the 8-bit needs 85GB.

◧◩
14. kridsd+Oh[view] [source] [discussion] 2026-02-04 23:38:29
>>dheera+ra
"You must always take two steps forward, for when you are off the clock, your adversary will take one step back."
15. bityar+ii[view] [source] 2026-02-04 23:40:55
>>paxys+(OP)
Correct, nothing that fits on your desk or lap is going to compete with a rack full of datacenter equipment. Well spotted.

But as a counterpoint: there are whole communities of people in this space who get significant value from models they run locally. I am one of them.

replies(2): >>Gravey+zj >>kamov+Rj
◧◩
16. Gravey+zj[view] [source] [discussion] 2026-02-04 23:50:19
>>bityar+ii
Would you mind sharing your hardware setup and use case(s)?
replies(2): >>Camper+9k >>dust42+Oi1
◧◩
17. kamov+Rj[view] [source] [discussion] 2026-02-04 23:53:37
>>bityar+ii
What do you use local models for? I'm asking generally about possible applications of these smaller models
replies(1): >>Lio+9a1
◧◩◪
18. Camper+9k[view] [source] [discussion] 2026-02-04 23:55:28
>>Gravey+zj
Not the GP but the new Qwen-Coder-Next release feels like a step change, at 60 tokens per second on a single 96GB Blackwell. And that's at full 8-bit quantization and 256K context, which I wasn't sure was going to work at all.

It is probably enough to handle a lot of what people use the big-3 closed models for. Somewhat slower and somewhat dumber, granted, but still extraordinarily capable. It punches way above its weight class for an 80B model.

replies(3): >>zozbot+gl >>redwoo+Nl >>paxys+LI
◧◩◪◨
19. zozbot+gl[view] [source] [discussion] 2026-02-05 00:01:22
>>Camper+9k
IIRC, that new Qwen model has 3B active parameters so it's going to run well enough even on far less than 96GB VRAM. (Though more VRAM may of course help wrt. enabling the full available context length.) Very impressive work from the Qwen folks.
◧◩◪◨
20. redwoo+Nl[view] [source] [discussion] 2026-02-05 00:04:15
>>Camper+9k
Agree, these new models are a game changer. I switched from Claude to Qwen3-Coder-Next for day-to-day on dev projects and don't see a big difference. Just use Claude when I need comprehensive planning or review. Running Qwen3-Coder-Next-Q8 with 256K context.
◧◩◪◨⬒⬓
21. teaear+jm[view] [source] [discussion] 2026-02-05 00:08:37
>>cactus+Bb
For sure Claude Code isn’t profitable
replies(1): >>bdangu+Vr
22. DANmod+Rm[view] [source] 2026-02-05 00:11:52
>>paxys+(OP)
and you really should be measuring based on the worst-case scenario for tools like this.
◧◩◪
23. Dennis+Um[view] [source] [discussion] 2026-02-05 00:12:02
>>paxys+25
On Macbooks, no. But there are a few lunatics like this guy:

https://www.youtube.com/watch?v=bFgTxr5yst0

replies(2): >>HarHar+FJ1 >>danw19+4L1
◧◩◪◨⬒⬓
24. paxys+Rn[view] [source] [discussion] 2026-02-05 00:18:23
>>cactus+Bb
Inference APIs are probably profitable, but I doubt the $20-$100 monthly plans are.
◧◩◪
25. bigyab+ko[view] [source] [discussion] 2026-02-05 00:23:00
>>reilly+o2
"Full quality" being a relative assessment, here. You're still deeply compute constrained, that machine would crawl at longer contexts.
26. richst+Fo[view] [source] 2026-02-05 00:25:24
>>paxys+(OP)
This. It's a false economy if you value your time even slightly; pay for the extra tokens and use the premium models.
◧◩
27. 0xbadc+8r[view] [source] [discussion] 2026-02-05 00:43:51
>>zozbot+61
Kimi K2.5 is fourth place for intelligence right now. And it's not as good as the top frontier models at coding, but it's better than Claude 4.5 Sonnet. https://artificialanalysis.ai/models
◧◩◪
28. 0xbadc+mr[view] [source] [discussion] 2026-02-05 00:46:00
>>reilly+o2
Most benchmarks show very little improvement of "full quality" over a quantized lower-bit model. You can shrink the model to a fraction of its "full" size and keep 92-95% of the performance, with less VRAM use.
replies(1): >>Muffin+Rv
◧◩◪◨⬒⬓⬔
29. bdangu+Vr[view] [source] [discussion] 2026-02-05 00:50:04
>>teaear+jm
Neither was Uber and … and …
replies(1): >>plagia+6z
30. 0xbadc+xs[view] [source] 2026-02-05 00:52:55
>>paxys+(OP)
The best local models are literally right behind Claude/Gemini/Codex. Check the benchmarks.

That said, Claude Code is designed to work with Anthropic's models. Agents have a buttload of custom work going on in the background to massage specific models to do things well.

replies(1): >>girvo+OA
31. andai+zu[view] [source] 2026-02-05 01:12:06
>>paxys+(OP)
Yeah this is why I ended up getting Claude subscription in the first place.

I was using GLM on the ZAI coding plan (jerry-rigged Claude Code for $3/month), but found myself asking Sonnet to rewrite 90% of the code GLM was giving me. At some point I was like "what the hell am I doing" and just switched.

To clarify, the code I was getting before mostly worked, it was just a lot less pleasant to look at and work with. Might be a matter of taste, but I found it had a big impact on my morale and productivity.

replies(5): >>Muffin+Mv >>Aurorn+KB >>icedch+tM >>davidw+Za1 >>PeterS+Ng1
◧◩
32. Muffin+Mv[view] [source] [discussion] 2026-02-05 01:22:34
>>andai+zu
Did you eventually move to a $20/mo Claude plan, a $100/mo plan, $200/mo, or API-based? If API-based, how much are you averaging a month?
replies(2): >>andai+zC >>holodu+We1
33. mycall+Ov[view] [source] 2026-02-05 01:22:54
>>paxys+(OP)
There are tons of improvements coming in the near future. Even the Claude Code developer said he aimed to deliver a product built for future models he bet would improve enough to fulfill his assumptions. Parallel vLLM MoE local LLMs on a Strix Halo 128GB have some life in them yet.
◧◩◪◨
34. Muffin+Rv[view] [source] [discussion] 2026-02-05 01:23:17
>>0xbadc+mr
> You can shrink the model to a fraction of its "full" size and get 92-95% same performance, with less VRAM use.

Are there a lot of options for how far you quantize? How much VRAM does it take to get the 92-95% you are speaking of?

replies(1): >>bigyab+8x
35. seanmc+ow[view] [source] 2026-02-05 01:27:40
>>paxys+(OP)
> (ones you run on beefy 128GB+ RAM machines)

PC or Mac? A PC, yeah, no way, not without beefy GPUs with lots of VRAM. A Mac? Depends on the CPU; an M3 Ultra with 128GB of unified RAM is going to get closer, at least. You can have decent experiences with a Max CPU + 64GB of unified RAM (well, that's my setup at least).

replies(1): >>Quantu+qx
◧◩◪◨⬒
36. bigyab+8x[view] [source] [discussion] 2026-02-05 01:33:54
>>Muffin+Rv
> Are there a lot of options for how far you quantize?

So many: https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overvie...

> How much VRAM does it take to get the 92-95% you are speaking of?

For inference, it's heavily dependent on the size of the weights (plus context). Quantizing an f32 or f16 model to q4/mxfp4 won't necessarily use 92-95% less VRAM, but it's pretty close for smaller contexts.

replies(1): >>Muffin+HA
◧◩
37. Quantu+qx[view] [source] [discussion] 2026-02-05 01:36:12
>>seanmc+ow
Which models do you use, and how do you run them?
replies(1): >>seanmc+PU
◧◩◪◨⬒⬓⬔⧯
38. plagia+6z[view] [source] [discussion] 2026-02-05 01:50:19
>>bdangu+Vr
Businesses will desire me for my insomnia once Anthropic starts charging congestion pricing.
◧◩◪◨⬒⬓
39. Muffin+HA[view] [source] [discussion] 2026-02-05 02:01:59
>>bigyab+8x
Thank you. Could you give a tl;dr rough estimate: "the full model needs ____ VRAM, and with ____ (the most common quantization method) it will run in ____ VRAM"?
replies(1): >>omneit+VT
◧◩
40. girvo+OA[view] [source] [discussion] 2026-02-05 02:03:15
>>0xbadc+xs
The benchmarks simply do not match my experience though. I don’t put that much stock in them anymore.
replies(1): >>Balina+hL1
◧◩
41. Aurorn+eB[view] [source] [discussion] 2026-02-05 02:07:07
>>zozbot+61
> The best open models such as Kimi 2.5 are about as smart today as the big proprietary models were one year ago

Kimi K2.5 is a trillion parameter model. You can't run it locally on anything other than extremely well equipped hardware. Even heavily quantized you'd still need 512GB of unified memory, and the quantization would impact the performance.

Also the proprietary models a year ago were not that good for anything beyond basic tasks.

◧◩
42. Aurorn+KB[view] [source] [discussion] 2026-02-05 02:10:02
>>andai+zu
> but finding myself asking Sonnet to rewrite 90% of the code GLM was giving me. At some point I was like "what the hell am I doing" and just switched.

This is a very common sequence of events.

The frontier hosted models are so much better than everything else that it's not worth messing around with anything lesser if doing this professionally. The $20/month plans go a long way if context is managed carefully. For a professional developer or consultant, the $200/month plan is peanuts relative to compensation.

replies(3): >>deaux+LG >>bushba+9X >>undeve+Z21
◧◩◪
43. andai+zC[view] [source] [discussion] 2026-02-05 02:16:50
>>Muffin+Mv
The $20 one, but it's hobby use for me, would probably need the $200 one if I was full time. Ran into the 5 hour limit in like 30 minutes the other day.

I've also been testing OpenClaw. It burned 8M tokens during my half hour of testing, which would have been like $50 with Opus on the API. (Which is why everyone was using it with the sub, until Anthropic apparently banned that.)

I was using GLM on Cerebras instead, so it was only $10 per half hour ;) Tried to get their Coding plan ("unlimited" for $50/mo) but sold out...

(My fallback is I got a whole year of GLM from ZAI for $20 for the year, it's just a bit too slow for interactive use.)

44. anon37+ND[view] [source] 2026-02-05 02:29:05
>>paxys+(OP)
It's true that open models are a half-step behind the frontier, but I can't say that I've seen "sheer intelligence" from the models you mentioned. Just a couple of days ago Gemini 3 Pro was happily writing naive graph traversal code without any cycle detection or safety measures. If nothing else, I would have thought these models could nail basic algorithms by now?
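
To be clear about what I expected: nothing exotic, just a visited set. A rough sketch (the adjacency-list dict is made up for illustration):

    def reachable(graph, start):
        """Iterative DFS that tolerates cycles.
        graph: dict mapping node -> list of neighbours (adjacency list)."""
        visited = set()
        stack = [start]
        while stack:
            node = stack.pop()
            if node in visited:
                continue  # the cycle guard: never expand a node twice
            visited.add(node)
            stack.extend(n for n in graph.get(node, []) if n not in visited)
        return visited

    # A cyclic graph (a -> b -> c -> a) terminates fine:
    print(reachable({"a": ["b"], "b": ["c"], "c": ["a"]}, "a"))  # {'a', 'b', 'c'}
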
replies(1): >>cracki+Jb1
◧◩◪
45. deaux+LG[view] [source] [discussion] 2026-02-05 02:55:18
>>Aurorn+KB
Until last week, you would've been right. Kimi K2.5 is absolutely competitive for coding.

Unless you include it in "frontier", but that has usually been used to refer to "Big 3".

replies(3): >>bigiai+oJ >>Aurorn+EK >>mirolj+uI1
◧◩◪
46. deaux+7H[view] [source] [discussion] 2026-02-05 02:59:05
>>reilly+o2
And that's at unusable speeds - it takes about triple that amount to run it decently fast at int4.

Now as the other replies say, you should very likely run a quantized version anyway.

◧◩◪
47. deaux+kH[view] [source] [discussion] 2026-02-05 03:01:40
>>paxys+25
Some people spend $50k on a new car, others spend it on running Kimi K2.5 at good speeds locally.

No one's running Sonnet/Gemini/GPT-5 locally though.

48. mlrtim+uI[view] [source] 2026-02-05 03:11:35
>>paxys+(OP)
The local ones yeah...

I have Claude Pro ($20/mo) and sometimes run out. I just set ANTHROPIC_BASE_URL to a local LLM API endpoint that connects to a cheaper OpenAI model. I can continue with smaller tasks without a problem. People have been doing this for a long time.
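
For anyone curious about the shape of it, here's a minimal sketch of the same redirect in Python; the localhost:4000 proxy and the model name are placeholders for whatever you run locally, not anything Anthropic or Claude Code ships:

    import os
    from anthropic import Anthropic  # pip install anthropic

    # Point the SDK at a local Anthropic-compatible proxy instead of api.anthropic.com.
    # The proxy, its port and the model name below are placeholders for whatever
    # you run locally (e.g. something LiteLLM-like that forwards to a cheaper model).
    client = Anthropic(
        base_url=os.environ.get("ANTHROPIC_BASE_URL", "http://localhost:4000"),
        api_key=os.environ.get("ANTHROPIC_API_KEY", "not-needed-locally"),
    )

    resp = client.messages.create(
        model="local-cheap-model",  # the proxy maps this to its backend
        max_tokens=512,
        messages=[{"role": "user", "content": "Summarize this diff: ..."}],
    )
    print(resp.content[0].text)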

◧◩◪◨
49. paxys+LI[view] [source] [discussion] 2026-02-05 03:13:51
>>Camper+9k
"Single 96GB Blackwell" is still $15K+ worth of hardware. You'd have to use it at full capacity for 5-10 years to break even when compared to "Max" plans from OpenAI/Anthropic/Google. And you'd still get nowhere near the quality of something like Opus. Yes there are plenty of valid arguments in favor of self hosting, but at the moment value simply isn't one of them.
replies(1): >>Camper+HP
◧◩◪◨
50. bigiai+oJ[view] [source] [discussion] 2026-02-05 03:19:32
>>deaux+LG
Looks like you need at least a quarter terabyte or so of RAM to run that though?

(At today's RAM prices, upgrading to that would pay for a _lot_ of tokens for me...)

◧◩◪◨⬒
51. whatsu+UJ[view] [source] [discussion] 2026-02-05 03:23:38
>>Schema+Q7
I wonder if the "distributed AI computing" touted by some of the new crypto projects [0] works and is relatively cheaper.

0. https://www.daifi.ai/

◧◩◪◨
52. Aurorn+EK[view] [source] [discussion] 2026-02-05 03:28:29
>>deaux+LG
> Kimi K2.5 is absolutely competitive for coding.

Kimi K2.5 is good, but it's still behind the main models like Claude's offerings and GPT-5.2. Yes, I know what the benchmarks say, but the benchmarks for open weight models have been overpromising for a long time and Kimi K2.5 is no exception.

Kimi K2.5 is also not something you can easily run locally without investing $5-10K or more. There are hosted options you can pay for, but like the parent commenter observed: By the time you're pinching pennies on LLM costs, what are you even achieving? I could see how it could make sense for students or people who aren't doing this professionally, but anyone doing this professionally really should skip straight to the best models available.

Unless you're billing hourly and looking for excuses to generate more work I guess?

replies(2): >>deaux+bU >>triage+Cc1
◧◩
53. icedch+tM[view] [source] [discussion] 2026-02-05 03:44:41
>>andai+zu
Same. I messed around with a bunch of local models on a box with 128GB of VRAM and the code quality was always meh. Local AI is a fun hobby though. But if you want to just get stuff done it’s not the way to go.
◧◩◪◨⬒
54. Camper+HP[view] [source] [discussion] 2026-02-05 04:20:05
>>paxys+LI
Eh, they can be found in the $8K neighborhood, $9K at most. As zozbot234 suggests, a much cheaper card would probably be fine for this particular model.

I need to do more testing before I can agree that it is performing at a Sonnet-equivalent level (it was never claimed to be Opus-class.) But it is pretty cool to get beaten in a programming contest by my own video card. For those who get it, no explanation is necessary; for those who don't, no explanation is possible.

And unlike the hosted models, the ones you run locally will still work just as well several years from now. No ads, no spying, no additional censorship, no additional usage limits or restrictions. You'll get no such guarantee from Google, OpenAI and the other major players.

◧◩◪◨⬒⬓⬔
55. omneit+VT[view] [source] [discussion] 2026-02-05 05:07:23
>>Muffin+HA
It’s a trivial calculation to make (+/- 10%).

Number of params == “variables” in memory

VRAM footprint ~= number of params * size of a param

A 4B model at 8 bits works out to roughly 4GB of VRAM, the same number as the params. At 4 bits, roughly 2GB, and so on. Kimi is about 512GB at 4 bits.
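
Same arithmetic as a throwaway script (weights only; KV cache, context and runtime overhead not included, so treat it as a floor):

    def vram_estimate_gb(params_billions, bits_per_param):
        """Weights-only footprint: params * bytes per param.
        Ignores KV cache / context and runtime overhead, so it's a floor."""
        return params_billions * (bits_per_param / 8)  # billions of bytes ~ GB

    print(vram_estimate_gb(4, 8))     # 4B model at 8-bit  -> ~4 GB
    print(vram_estimate_gb(4, 4))     # 4B model at 4-bit  -> ~2 GB
    print(vram_estimate_gb(1000, 4))  # ~1T-param Kimi at 4-bit -> ~500 GB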

◧◩◪◨⬒
56. deaux+bU[view] [source] [discussion] 2026-02-05 05:10:36
>>Aurorn+EK
I disagree, based on having used it extensively over the last week. I find it to be at least as strong as Sonnet 4.5 and 5.2-Codex on the majority of tasks, often better. Note that even among the big 3, each of them has a domain where they're better than the other two. It's not better than Codex (x-)high at debugging non-UI code - but neither is Opus or Gemini. It's not better than Gemini at UI design - but neither is Opus or Codex. It's not better than Opus at tool usage and delegation - but neither is Gemini or Codex.
replies(1): >>ianlev+Fb1
◧◩◪
57. seanmc+PU[view] [source] [discussion] 2026-02-05 05:16:56
>>Quantu+qx
I have an M3 Max with 64GB.

For VS Code code completion in Continue, a Qwen3-Coder 7B model. For CLI and sidebar work, Qwen Coder 32B. 8-bit quant for both.

I need to take a look at Qwen3-Coder-Next; it's supposed to be much faster even with a larger model.

◧◩◪
58. bushba+9X[view] [source] [discussion] 2026-02-05 05:39:08
>>Aurorn+KB
For many companies, they'd be better off paying $200/month and laying off 1% of the workforce to pay for it.
replies(1): >>apercu+mE1
59. majorm+DY[view] [source] 2026-02-05 05:54:33
>>paxys+(OP)
The amount of "prompting" stuff (meta-prompting?) the "thinking" models do behind the scenes even beyond what the harnesses do is massive; you could of course rebuild it locally, but it's gonna make it just that much slower.

I expect it'll come along but I'm not gonna spend the $$$$ necessary to try to DIY it just yet.

60. acchow+411[view] [source] 2026-02-05 06:19:43
>>paxys+(OP)
I agree. You could spin for 100 hours on a sub-par model or get it done in 10 minutes with a frontier model
◧◩◪
61. polyno+A11[view] [source] [discussion] 2026-02-05 06:23:52
>>reilly+o2
Depending on what your usage requirements are, Mac Minis running UMA over RDMA are becoming a feasible option. At roughly 1/10 of the cost you're getting much, much more than 1/10 of the performance. (YMMV)

https://buildai.substack.com/i/181542049/the-mac-mini-moment

replies(1): >>danw19+mG1
◧◩◪
62. undeve+Z21[view] [source] [discussion] 2026-02-05 06:38:07
>>Aurorn+KB
What tools/processes do you use to manage context?
◧◩◪
63. Lio+9a1[view] [source] [discussion] 2026-02-05 07:49:03
>>kamov+Rj
Well, for starters you get a real guarantee of privacy.

If you're worried about others being able to clone your business processes if you share them with a frontier provider, then the cost of a Mac Studio to run Kimi is probably a justifiable tax write-off.

64. EagnaI+Na1[view] [source] 2026-02-05 07:54:57
>>paxys+(OP)
The secret is to not run out of quota.

Instead have Claude know when to offload work to local models and what model is best suited for the job. It will shape the prompt for the model. Then have Claude review the results. Massive reduction in costs.

btw, at least on MacBooks you can run good models with just an M1 and 32GB of memory.

replies(2): >>BuildT+Ic1 >>kilroy+QK1
◧◩
65. davidw+Za1[view] [source] [discussion] 2026-02-05 07:55:43
>>andai+zu
Similar experience for me. I tend to let GLM-4.7 have a go at the problem; then, if it keeps struggling, I'll switch to Sonnet or Opus to solve it. GLM is good for the low-hanging fruit and planning.
◧◩◪◨⬒⬓
66. ianlev+Fb1[view] [source] [discussion] 2026-02-05 08:03:28
>>deaux+bU
Yeah Kimi-K2.5 is the first open weights model that actually feels competitive with the closed models, and I've tried a lot of them now.
◧◩
67. cracki+Jb1[view] [source] [discussion] 2026-02-05 08:03:47
>>anon37+ND
Did it have reason to assume the graph to be a certain type, such as directed or acyclic?
◧◩◪◨⬒
68. triage+Cc1[view] [source] [discussion] 2026-02-05 08:11:34
>>Aurorn+EK
Disagree that it's behind GPT's top models. It's just slightly behind Opus.
◧◩
69. BuildT+Ic1[view] [source] [discussion] 2026-02-05 08:12:43
>>EagnaI+Na1
I don't suppose you could point to any resources on where I could get started? I have an M2 with 64GB of unified memory and it'd be nice to make it work rather than burning GitHub credits.
replies(1): >>EagnaI+xf1
◧◩◪
70. holodu+We1[view] [source] [discussion] 2026-02-05 08:32:42
>>Muffin+Mv
I now have 3 x $100 plans. Only then am I able to use it full time; otherwise I hit the limits. I am a heavy user, often working on 5 apps at the same time.
replies(1): >>auggie+Ln1
◧◩◪
71. EagnaI+xf1[view] [source] [discussion] 2026-02-05 08:36:15
>>BuildT+Ic1
https://ollama.com

Although I'm starting to like LM Studio more, as it has some features that Ollama is missing.

https://lmstudio.ai

You can then get Claude to create an MCP server to talk to either, plus a CLAUDE.md that tells it which models you have downloaded, what they're good for, and when to offload. Claude will make all of that for you as well.
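
If you just want to see the shape of the offload call itself, here's a minimal sketch against the OpenAI-compatible endpoint both of those expose; the port and model name are placeholders for whatever your local setup uses:

    from openai import OpenAI  # pip install openai

    # Ollama (default port 11434) and LM Studio (default 1234) both expose an
    # OpenAI-compatible endpoint, so one client covers either. The model name
    # is just whatever you've pulled locally -- "qwen3-coder" is a placeholder.
    local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    def offload(prompt, model="qwen3-coder"):
        """Send a small, well-scoped task to the local model; Claude (via an
        MCP tool or a wrapper script) reviews whatever comes back."""
        resp = local.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(offload("Write a docstring for: def slugify(title): ..."))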

◧◩
72. PeterS+Ng1[view] [source] [discussion] 2026-02-05 08:43:33
>>andai+zu
My very first tests of local Qwen-coder-next yesterday found it quite capable of acceptably improving Python functions when given clear objectives.

I'm not looking for a vibe coding "one-shot" full project model. I'm not looking to replace GPT 5.2 or Opus 4.5. But having a local instance running some Ralph loop overnight on a specific aspect for the price of electricity is alluring.

◧◩◪
73. dust42+Oi1[view] [source] [discussion] 2026-02-05 08:59:11
>>Gravey+zj
The brand new Qwen3-Coder-Next runs at 300 Tok/s PP and 40 Tok/s TG on an M1 64GB with a 4-bit MLX quant. Together with Qwen Code (a fork of Gemini CLI) it is actually pretty capable.

Before that I used Qwen3-30B which is good enough for some quick javascript or Python, like 'add a new endpoint /api/foobar which does foobaz'. Also very decent for a quick summary of code.

It is 530 Tok/s PP and 50 Tok/s TG. If you have it spit out lots of code that is just a copy of the input, then it does 200 Tok/s, e.g. 'add a new endpoint /api/foobar which does foobaz and return the whole file'.

◧◩◪◨
74. auggie+Ln1[view] [source] [discussion] 2026-02-05 09:38:54
>>holodu+We1
Shouldn't the 200 plan give you 4x?? Why 3 x 100 then?
75. ameliu+rz1[view] [source] 2026-02-05 11:17:35
>>paxys+(OP)
And at best?
◧◩◪◨
76. apercu+mE1[view] [source] [discussion] 2026-02-05 12:08:05
>>bushba+9X
The issue is they often choose the wrong 1%.
◧◩◪◨
77. danw19+mG1[view] [source] [discussion] 2026-02-05 12:26:54
>>polyno+A11
I did not expect this to be a limiting factor in the Mac Mini RDMA setup:

> Thermal throttling: Thunderbolt 5 cables get hot under sustained 15GB/s load. After 10 minutes, bandwidth drops to 12GB/s. After 20 minutes, 10GB/s. Your 5.36 tokens/sec becomes 4.1 tokens/sec. Active cooling on cables helps but you’re fighting physics.

Thermal throttling of network cables is a new thing to me…

◧◩◪◨
78. mirolj+uI1[view] [source] [discussion] 2026-02-05 12:43:07
>>deaux+LG
I've been using MiniMax-M2.1 lately. Although benchmarks show it comparable with Kimi 2.5 and Sonnet 4.5, I find it more pleasant to use.

I still have to occasionally switch to Opus in Opencode planning mode, but not having to rely on Sonnet anymore makes my Claude subscription last much longer.

◧◩◪◨
79. HarHar+FJ1[view] [source] [discussion] 2026-02-05 12:52:41
>>Dennis+Um
Wow!

I've never heard of this guy before, but I see he's got 5M YouTube subscribers, which I guess is the clout you need to have Apple loan (I assume) you $50K worth of Mac Studios!

It'll be interesting to see how model sizes, capability, and local compute prices evolve.

A bit off topic, but I was in Best Buy the other day and was shocked to see 65" TVs selling for $300... I can remember the first large flat-screen TVs (plasma?) selling for 100x that ($30K) when they first came out.

◧◩
80. kilroy+QK1[view] [source] [discussion] 2026-02-05 13:00:48
>>EagnaI+Na1
I strongly think you're on to something here. I wish Apple would invest heavily in something like this.

The big powerful models think about tasks, then offload some stuff to a drastically cheaper cloud model or the model running on your hardware.

◧◩◪◨
81. danw19+4L1[view] [source] [discussion] 2026-02-05 13:02:48
>>Dennis+Um
He must be mad, accepting $50k of free (probably loaned?) hardware from Apple!

Great demo video though. Nice to see some benchmarks of Exo with this cluster across various models.

◧◩◪
82. Balina+hL1[view] [source] [discussion] 2026-02-05 13:04:44
>>girvo+OA
I've repeatedly seen Opus 4.5 manufacture malpractice and then disable the checks complaining about it in order to be able to declare the job done, so I would agree with you about benchmarks versus experience.