1. Camper+(OP)[view] [source] 2026-02-04 23:55:28
Not the GP but the new Qwen-Coder-Next release feels like a step change, at 60 tokens per second on a single 96GB Blackwell. And that's at full 8-bit quantization and 256K context, which I wasn't sure was going to work at all.

It is probably enough to handle a lot of what people use the big-3 closed models for. Somewhat slower and somewhat dumber, granted, but still extraordinarily capable. It punches way above its weight class for an 80B model.
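
If anyone wants to sanity-check the memory math, here's the rough sketch I used (the headroom figure is back-of-the-envelope, not a published number):

    # Why 8-bit weights + 256K context is a tight fit on a 96 GB card (rough sketch)
    total_params = 80e9                     # 80B model
    weights_gb = total_params * 1.0 / 1e9   # 8-bit quant ~= 1 byte/param -> ~80 GB
    headroom_gb = 96 - weights_gb           # what's left for KV cache + activations
    print(f"weights ~{weights_gb:.0f} GB, ~{headroom_gb:.0f} GB left for 256K of KV cache")

That ~16 GB of headroom is what the entire 256K-token KV cache has to squeeze into, hence my surprise that it worked.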

replies(3): >>zozbot+71 >>redwoo+E1 >>paxys+Co
2. zozbot+71[view] [source] 2026-02-05 00:01:22
>>Camper+(OP)
IIRC, that new Qwen model has 3B active parameters, so it's going to run well enough even on far less than 96GB of VRAM; the inactive expert weights can sit in system RAM, and only the small active slice has to move per token. (Though more VRAM may of course help wrt. enabling the full available context length.) Very impressive work from the Qwen folks.
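
Back-of-the-envelope on why the active-parameter count is what matters for speed (the bandwidth figures below are assumptions, not specs):

    # Decode is roughly memory-bandwidth-bound:
    # tok/s ceiling ~= bandwidth / bytes touched per token
    active_params = 3e9      # active parameters per token
    bytes_per_param = 1.0    # 8-bit quantization
    bytes_per_token = active_params * bytes_per_param

    gpu_bw = 1.8e12          # assumed GPU memory bandwidth, bytes/s
    ram_bw = 1e11            # assumed system-RAM bandwidth if experts live in RAM
    print(f"GPU-resident ceiling: ~{gpu_bw / bytes_per_token:.0f} tok/s")
    print(f"expert-offload ceiling: ~{ram_bw / bytes_per_token:.0f} tok/s")

Even the offloaded case lands in usable territory for a coding assistant, which is why far less VRAM can still work.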
3. redwoo+E1[view] [source] 2026-02-05 00:04:15
>>Camper+(OP)
Agreed, these new models are a game changer. I switched from Claude to Qwen3-Coder-Next for day-to-day work on dev projects and don't see a big difference. I just fall back to Claude when I need comprehensive planning or review. I'm running Qwen3-Coder-Next-Q8 with 256K context.
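
In case it helps anyone replicate the setup: anything that speaks the OpenAI-compatible API can just be pointed at the local server. A sketch; the port and model name are whatever your server exposes, not canonical:

    # Sketch: talking to a local OpenAI-compatible server (port/model name hypothetical)
    from openai import OpenAI

    local = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
    resp = local.chat.completions.create(
        model="qwen3-coder-next-q8",  # name depends on how the server registered the model
        messages=[{"role": "user", "content": "Write a unit test for parse_config()."}],
    )
    print(resp.choices[0].message.content)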
4. paxys+Co[view] [source] 2026-02-05 03:13:51
>>Camper+(OP)
"Single 96GB Blackwell" is still $15K+ worth of hardware. You'd have to use it at full capacity for 5-10 years to break even when compared to "Max" plans from OpenAI/Anthropic/Google. And you'd still get nowhere near the quality of something like Opus. Yes there are plenty of valid arguments in favor of self hosting, but at the moment value simply isn't one of them.
replies(1): >>Camper+yv
5. Camper+yv[view] [source] [discussion] 2026-02-05 04:20:05
>>paxys+Co
Eh, they can be found in the $8K neighborhood, $9K at most. As zozbot234 suggests, a much cheaper card would probably be fine for this particular model.
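
At those prices the break-even math above shifts a fair bit (same assumed $200/month plan):

    # Re-running the break-even sketch at the lower hardware prices
    for hardware in (8_000, 9_000):
        print(f"${hardware}: ~{hardware / (200 * 12):.1f} years to break even")  # ~3.3-3.8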

I need to do more testing before I can agree that it is performing at a Sonnet-equivalent level (it was never claimed to be Opus-class). But it is pretty cool to get beaten in a programming contest by my own video card. For those who get it, no explanation is necessary; for those who don't, no explanation is possible.

And unlike the hosted models, the ones you run locally will still work just as well several years from now. No ads, no spying, no additional censorship, no additional usage limits or restrictions. You'll get no such guarantee from Google, OpenAI and the other major players.