zlacker

[return to "Qwen3-Coder-Next"]
1. simonw+l3[view] [source] 2026-02-03 16:15:21
>>daniel+(OP)
This GGUF is 48.4GB - https://huggingface.co/Qwen/Qwen3-Coder-Next-GGUF/tree/main/... - which should be usable on higher end laptops.

I still haven't experienced a local model that fits on my 64GB MacBook Pro and can run a coding agent like Codex CLI or Claude Code well enough to be useful.

Maybe this will be the one? This Unsloth guide from a sibling comment suggests it might be: https://unsloth.ai/docs/models/qwen3-coder-next

2. vessen+H3[view] [source] 2026-02-03 16:16:46
>>simonw+l3
I'm thinking the next step would be to include this as a 'junior dev' and let Opus farm simple stuff out to it. It could run locally, but if it's hosted on Cerebras it could be really fast.
3. ttoino+j4[view] [source] 2026-02-03 16:19:31
>>vessen+H3
Cerebras already offers GLM 4.7 in its Code plans.
4. vessen+Z4[view] [source] 2026-02-03 16:22:21
>>ttoino+j4
Yep. But this is like 10x faster; it has only 3B active parameters.