zlacker

[return to "Qwen3-Coder-Next"]
1. simonw+l3 2026-02-03 16:15:21
>>daniel+(OP)
This GGUF is 48.4GB - https://huggingface.co/Qwen/Qwen3-Coder-Next-GGUF/tree/main/... - which should be usable on higher-end laptops.

I still haven't experienced a local model that fits on my 64GB MacBook Pro and can run a coding agent like Codex CLI or Claude Code well enough to be useful.

Maybe this will be the one? This Unsloth guide from a sibling comment suggests it might be: https://unsloth.ai/docs/models/qwen3-coder-next
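
For anyone who wants to poke at the GGUF directly before wiring it into an agent, here's a minimal llama-cpp-python sketch. It assumes the file above is already downloaded and that llama.cpp supports this architecture; the filename, quant, and settings are illustrative, not confirmed:

```python
from llama_cpp import Llama

# Illustrative filename/settings - adjust to whichever quant you actually downloaded.
llm = Llama(
    model_path="Qwen3-Coder-Next-Q4_K_M.gguf",  # hypothetical local path
    n_ctx=32768,       # context window; larger costs more RAM
    n_gpu_layers=-1,   # offload all layers to Metal/GPU if it fits
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a linked list."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```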

2. dehrma+Rg 2026-02-03 17:09:35
>>simonw+l3
I wonder if the future in ~5 years is almost all local models? High-end computers and GPUs can already run decent models, but not SOTA ones. Five years is enough time to ramp up memory production, for consumers to level up their hardware, and for models to be optimized down to lower-end hardware while still being really good.
3. johnsm+CO 2026-02-03 19:19:49
>>dehrma+Rg
Open-source and local models will always lag heavily behind the frontier.

Who pays for a free model? GPU training isn't free!

I remember people early on saying 100B+ models would be running on your phone by about now. They were completely wrong, and I don't think that's ever really going to change.

People always will want the fastest, best, easiest setup method.

"Good enough" massively changes when your marketing team is managing k8s clusters with frontier systems in the near future.

4. kybern+Ja1 2026-02-03 20:58:54
>>johnsm+CO
GPT-3.5, as used in the first commercially available ChatGPT, is believed to have had hundreds of billions of parameters. There are now models I can run on my phone that feel like they have similar levels of capability.

Phones are never going to run the largest models locally because they simply don't have the memory, but capability at small sizes keeps improving: you can run a model on your phone now that matches what required hundreds of billions of parameters less than six years ago.

5. onion2+mg1 2026-02-03 21:29:48
>>kybern+Ja1
GPT-style models are general-purpose by design. You don't need that generality for specialist models, so the size can be much smaller. Even coding models are quite general, since they don't focus on a single language or domain. I imagine a model specifically for something like React could be very effective with a couple of billion parameters, especially if it were a distill of a more general model.
6. christ+aB1 2026-02-03 23:22:37
>>onion2+mg1
That's what I want: an orchestrator model that operates with a small context, and then very specialized small models for React etc.
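
Something like this, except with a small orchestrator model doing the routing instead of keywords. The endpoints and names below are made up for illustration; any OpenAI-compatible local server (e.g. llama.cpp's llama-server) could sit behind them:

```python
import json
import urllib.request

# Hypothetical local endpoints, one specialist model per domain.
SPECIALISTS = {
    "react": "http://localhost:8001/v1/chat/completions",
    "sql":   "http://localhost:8002/v1/chat/completions",
}
GENERALIST = "http://localhost:8000/v1/chat/completions"

def route(task: str) -> str:
    """Keyword routing as a stand-in for the small-context orchestrator model."""
    lowered = task.lower()
    for domain, url in SPECIALISTS.items():
        if domain in lowered:
            return url
    return GENERALIST

def ask(task: str) -> str:
    body = json.dumps({
        "model": "local",  # most local servers ignore or loosely match this field
        "messages": [{"role": "user", "content": task}],
    }).encode()
    req = urllib.request.Request(
        route(task), data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

print(ask("Refactor this React component to use hooks."))
```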