zlacker

1. simonw+l3[view] [source] 2026-02-03 16:15:21
>>daniel+(OP)
This GGUF is 48.4GB - https://huggingface.co/Qwen/Qwen3-Coder-Next-GGUF/tree/main/... - which should be usable on higher-end laptops.

I still haven't experienced a local model that fits on my 64GB MacBook Pro and can run a coding agent like Codex CLI or Claude Code well enough to be useful.

Maybe this will be the one? This Unsloth guide from a sibling comment suggests it might be: https://unsloth.ai/docs/models/qwen3-coder-next

2. embedd+R7[view] [source] 2026-02-03 16:33:40
>>simonw+l3
> I still haven't experienced a local model that fits on my 64GB MacBook Pro and can run a coding agent like Codex CLI or Claude Code well enough to be useful

I've had mild success with GPT-OSS-120b (MXFP4, ends up taking ~66GB of VRAM for me with llama.cpp) and Codex.
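
If anyone wants to sanity-check that kind of setup, here's a minimal smoke test against llama-server's OpenAI-compatible endpoint. The port and model name are placeholders for whatever you launched it with, not anything specific to my setup:

    # Minimal smoke test, assuming llama-server is already running with the
    # MXFP4 GGUF loaded and listening on port 8080 (port and model name
    # below are assumptions).
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8080/v1",  # llama-server's OpenAI-compatible API
        api_key="none",                       # llama-server ignores this by default
    )

    resp = client.chat.completions.create(
        model="gpt-oss-120b",  # the server generally serves whatever model it loaded
        messages=[{"role": "user", "content": "Print hello world in Python."}],
    )
    print(resp.choices[0].message.content)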

I'm wondering whether one could crowdsource chat logs from GPT-OSS-120b running with Codex, then use the good runs to seed another post-training pass that fine-tunes the 20b variant, and whether that would make a big difference. With reasoning_effort set to high, both models are actually quite good compared to other downloadable models, but the 120b is just about out of reach for 64GB, so tuning the 20b for specific use cases seems like it'd be useful.
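
The filtering step could be pretty simple. A rough sketch, assuming the crowdsourced sessions land as JSONL with a per-session success flag and a standard chat-format messages list - the format and every field name here are made up for illustration:

    # Keep only successful 120b trajectories as chat-format SFT rows for the
    # 20b. "model", "success", and "messages" are hypothetical field names
    # for whatever the crowdsourced log format ends up being.
    import json

    def keep(session: dict) -> bool:
        return session.get("model") == "gpt-oss-120b" and session.get("success")

    with open("codex_sessions.jsonl") as src, open("sft_for_20b.jsonl", "w") as dst:
        for line in src:
            session = json.loads(line)
            if keep(session):
                dst.write(json.dumps({"messages": session["messages"]}) + "\n")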

3. andai+DP[view] [source] 2026-02-03 19:23:25
>>embedd+R7
Are you running 120B agentic? I tried using it in a few different setups and it failed hard in every one. It would just give up after a second or two every time.

I wonder if it has to do with the message format, since it should be able to do tool use afaict.

4. nekita+Sh2[view] [source] 2026-02-04 04:35:41
>>andai+DP
This is a common problem for people trying to run the GPT-oss models themselves. Reposting my comment here:

GPT-oss-120B was also completely failing for me until someone on Reddit pointed out that you need to pass the reasoning tokens back in when generating a response. One way to do this is described here:

https://openrouter.ai/docs/guides/best-practices/reasoning-t...

Once I did that it started functioning extremely well, and it's the main model I use for my homemade agents.

Many LLM libraries/services/frontends don't pass these reasoning tokens back to the model correctly, which is why people complain about this model so much. It also highlights the importance of rolling these things yourself and understanding what's going on under the hood, because there are so many broken implementations floating around.
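
For anyone hitting this: the fix is just to append the assistant message to your history with its reasoning field intact, instead of stripping it down to content only. A minimal sketch against a local OpenAI-compatible server - the exact field name varies by implementation (llama.cpp and vLLM-style servers use "reasoning_content", OpenRouter uses "reasoning"), and the URL and model name are placeholders:

    # Two-turn example: replay the assistant message with its reasoning
    # field intact, so the model sees its own prior chain of thought.
    import requests

    API = "http://localhost:8080/v1/chat/completions"
    history = [{"role": "user", "content": "What's 12 * 34? Think it through."}]

    msg = requests.post(API, json={"model": "gpt-oss-120b", "messages": history}) \
        .json()["choices"][0]["message"]

    # The crucial step: append the whole message dict, reasoning field and
    # all, rather than {"role": "assistant", "content": msg["content"]}.
    history.append(msg)

    history.append({"role": "user", "content": "Now halve it."})
    print(requests.post(API, json={"model": "gpt-oss-120b", "messages": history})
          .json()["choices"][0]["message"]["content"])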

5. andai+RK5[view] [source] 2026-02-05 02:19:33
>>nekita+Sh2
I used it with OpenAI's Codex, which had official support for it, and it was still ass. (Maybe they screwed up this part too? Haha)