zlacker

Qwen3-Coder-Next
1. simonw+l3 2026-02-03 16:15:21
>>daniel+(OP)
This GGUF is 48.4GB - https://huggingface.co/Qwen/Qwen3-Coder-Next-GGUF/tree/main/... - which should be usable on higher-end laptops.

I still haven't found a local model that fits on my 64GB MacBook Pro and can run a coding agent like Codex CLI or Claude Code well enough to be useful.

Maybe this will be the one? This Unsloth guide from a sibling comment suggests it might be: https://unsloth.ai/docs/models/qwen3-coder-next
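A minimal sketch of what trying it could look like with llama.cpp, which is what the Unsloth guide covers (the quant tag and context size below are illustrative guesses, not details from the guide):

    # Fetch and serve the GGUF straight from Hugging Face.
    # The :Q4_K_M quant tag and the 32768-token context are assumptions.
    llama-server -hf Qwen/Qwen3-Coder-Next-GGUF:Q4_K_M -c 32768 --jinja
    # A coding agent can then target the local OpenAI-compatible
    # endpoint at http://localhost:8080/v1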

2. mark_l+lv1 2026-02-03 22:47:46
>>simonw+l3
I configured Claude Code to use a local model (ollama run glm-4.7-flash) that runs really well on a 32GB M2 Pro Mac mini. Maybe my standards are too low, but I was using that combination to clean up code, make improvements, and add docs and tests to a bunch of old experimental git repos.
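A sketch of one way to wire this up (hypothetical, not necessarily the exact setup described here): Claude Code reads its documented ANTHROPIC_* environment variables, and since it speaks Anthropic's Messages API, something that translates that for Ollama (e.g. a LiteLLM proxy) is assumed to be listening locally.

    # Hypothetical wiring; port and token values are placeholders.
    ollama pull glm-4.7-flash
    export ANTHROPIC_BASE_URL=http://localhost:4000  # the translating proxy, not Ollama itself
    export ANTHROPIC_AUTH_TOKEN=local-dummy          # any non-empty value for a local endpoint
    export ANTHROPIC_MODEL=glm-4.7-flash
    claude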
3. redund+1X1 2026-02-04 01:33:27
>>mark_l+lv1
Did you have to do anything special to get it to work? When I tried, it would just bug out: it would respond with JSON strings summarizing what I had asked of it, or get things wrong outright. For example, I asked it to summarize what a specific .js file did, and it gave me new code it had made up based on the file name...
4. mark_l+JX1 2026-02-04 01:38:35
>>redund+1X1
Yes, I had to set the Ollama context size to 32K tokens.
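That symptom (confident answers invented from a file name) usually means the agent's long prompt overflowed Ollama's small default context window and got silently truncated. A sketch of the usual ways to raise it, depending on Ollama version (the exact commands below are illustrative, not necessarily the setup used here):

    # Server-wide, via an environment variable (recent Ollama releases):
    OLLAMA_CONTEXT_LENGTH=32768 ollama serve

    # Per session, inside the `ollama run` REPL:
    /set parameter num_ctx 32768

    # Or baked into a model variant via a Modelfile:
    #   FROM glm-4.7-flash
    #   PARAMETER num_ctx 32768
    # then (the -32k name is arbitrary):
    ollama create glm-4.7-flash-32k -f Modelfile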
5. redund+N92 2026-02-04 03:13:41
>>mark_l+JX1
Thank you, it's working as expected now!