zlacker

[return to "Qwen3-Coder-Next"]
1. SamDc7+n32[view] [source] 2026-02-04 02:18:48
>>daniel+(OP)
This is yet another model that claims to rival SOTA models while not even being in the same league.

In terms of intelligence per compute, it’s probably the best model I can realistically run locally on my laptop for coding. It’s solid for scripting and small projects.

I tried it on a mid-size codebase (~50k LOC), and the context window filled up almost immediately, making it basically unusable unless you're extremely explicit about which files to touch. I tested it with an 8k context window but will try again with 32k and see if it becomes more practical.

I think the main blocker for using local coding models more is the context window. A lot of work is going into making small models “smarter,” but for agentic coding that only gets you so far. No matter how smart the model is, an agent will blow through the context as soon as it reads a handful of files.
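To make the "blow through the context" point concrete, here is a back-of-the-envelope sketch. The per-line and per-token figures are rough heuristics I'm assuming (roughly 40 characters per line of code, roughly 4 characters per token), not numbers from the thread:

```python
# Rough estimate: how quickly agentic file reads exhaust a context window.
# Assumptions (heuristics, not measured): ~40 chars per LOC, ~4 chars per token.
CHARS_PER_LINE = 40
CHARS_PER_TOKEN = 4

def tokens_for(lines_of_code):
    """Approximate token cost of reading a file or codebase into context."""
    return lines_of_code * CHARS_PER_LINE // CHARS_PER_TOKEN

# A single 800-line file is ~8k tokens -- the entire 8k window in one read.
print(tokens_for(800))       # 8000
# Reading the full ~50k LOC codebase would need ~500k tokens,
# beyond even a 256k window.
print(tokens_for(50_000))    # 500000
```

Under these assumptions an agent exhausts an 8k window after reading a single medium-sized file, which matches the experience described above.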

2. mirekr+E73[view] [source] 2026-02-04 11:49:09
>>SamDc7+n32
What are you talking about? Qwen3-Coder-Next supports 256k context. Did you mean that you don't have enough memory to run it locally yourself?
3. SamDc7+uI5[view] [source] 2026-02-05 02:00:38
>>mirekr+E73
Yes!

I tried to push the context window as far as 32k, but beyond that it won't be usable on my laptop (Ryzen AI 365, 32 GB RAM and 6 GB of VRAM).
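For a sense of why 32k is about the ceiling on that hardware, here is a rough KV-cache size estimate. The model dimensions below are hypothetical placeholders, not the real Qwen3-Coder-Next configuration (the actual values live in the model's config.json):

```python
# Back-of-the-envelope KV-cache memory at a given context length.
# All model dimensions here are ASSUMED placeholders, not Qwen3-Coder-Next's
# real config: 48 layers, 4 KV heads (GQA), head_dim 128, fp16 cache.
def kv_cache_bytes(ctx_len, n_layers=48, n_kv_heads=4,
                   head_dim=128, bytes_per_val=2):
    # Factor of 2 covers both keys and values; fp16 = 2 bytes per value.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_val * ctx_len

gib = kv_cache_bytes(32_768) / 2**30
print(f"{gib:.1f} GiB")   # 3.0 GiB at 32k context
```

Even under these modest assumptions, the cache alone at 32k takes about half of a 6 GB VRAM budget before weights and activations, so spilling to system RAM (and the slowdown that comes with it) is hard to avoid.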
