zlacker

[return to "Qwen3-Coder-Next"]
1. simonw+l3[view] [source] 2026-02-03 16:15:21
>>daniel+(OP)
This GGUF is 48.4GB - https://huggingface.co/Qwen/Qwen3-Coder-Next-GGUF/tree/main/... - which should be usable on higher end laptops.

I still haven't experienced a local model that fits on my 64GB MacBook Pro and can run a coding agent like Codex CLI or Claude Code well enough to be useful.

Maybe this will be the one? This Unsloth guide from a sibling comment suggests it might be: https://unsloth.ai/docs/models/qwen3-coder-next
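
If it does pan out, the simplest way I know to poke at a GGUF like this from Python is llama-cpp-python. A minimal sketch, where the file name, context size, and offload settings are placeholders rather than anything Qwen actually ships:

    # Minimal sketch: load a local GGUF with llama-cpp-python and ask for some code.
    # Assumes the GGUF has already been downloaded; the filename below is a
    # placeholder, not the real shard name.
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen3-coder-next-q4_k_m.gguf",  # placeholder path
        n_ctx=8192,          # coding agents usually want far more context
        n_gpu_layers=-1,     # offload everything to Metal/GPU if it fits
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Write a Python function that parses a unified diff."}],
        max_tokens=512,
    )
    print(out["choices"][0]["message"]["content"])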

◧◩
2. kristo+R51[view] [source] 2026-02-03 20:36:33
>>simonw+l3
We need a new word: not "local model" but "my own computer's model", i.e. CapEx-based.

This distinction is important because some "we support local models" tools really just mean things like Ollama orchestration, or using the llama.cpp libraries to connect to models on the same physical machine.

That's not my definition of local. Mine is "local network", so call it the "LAN model" until we come up with something better. "Self-host" exists, but that usually means "open-weights" rather than putting any cap on the hardware the model has to run on.

It should be defined as roughly sub-$10k, using Steve Jobs' megapenny unit.

Essentially, classify a model by how many megapennies of spend it takes to get a machine that won't OOM on it.

That's what I mean when I say local: running inference for 'free' somewhere on hardware I control that costs at most single-digit thousands of dollars. And, if I were feeling fancy, that I could potentially fine-tune on a scale of days.

A modern 5090 build-out with a Threadripper, NVMe, and 256GB of RAM will run you about $10k +/- $1k. The MLX route is about $6,000 out the door after tax (M3 Ultra, 60-core, with 256GB).
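
To make the unit concrete: a megapenny is a million pennies, i.e. $10,000, so the classification is one line of arithmetic. A tiny sketch using the two build prices above:

    # One megapenny = 1,000,000 pennies = $10,000.
    # Classify a machine by how many megapennies of spend it represents.
    def megapennies(cost_usd: float) -> float:
        return cost_usd / 10_000

    for name, cost in [("5090 + Threadripper, 256GB", 10_000),
                       ("M3 Ultra 60-core, 256GB", 6_000)]:
        mp = megapennies(cost)
        verdict = "within" if mp <= 1 else "over"
        print(f"{name}: {mp:.1f} megapennies ({verdict} the ~1 megapenny 'local' budget)")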

Lastly, it's not just "number of parameters". Not all 32B Q4_K_M models load at the same rate or use the same amount of memory. The internal architecture matters, and active parameter count + quantization is becoming a poorer approximation given the SOTA innovations.

What might be needed is a standardized eval benchmark against standardized hardware classes, with basic real-world tasks like tool calling, code generation, and document processing. There are plenty of "good enough" models out there for a large category of everyday tasks; now I want to find out what runs the best.

Take a Gen 6 ThinkPad P14s / MacBook Pro and a 5090 / Mac Studio, run the benchmark, and then we can report something like time-to-first-token / tokens-per-second / memory-used / total-test-time, rated independently of how accurate the model was.
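
Something like the sketch below is roughly what I have in mind: point it at whatever OpenAI-compatible server is under test (llama.cpp's llama-server, Ollama, LM Studio all expose one), stream a fixed prompt set, and record time-to-first-token, rough tokens-per-second, and total time. The endpoint URL, model name, prompt list, and the chunk-count-as-token-count approximation are all assumptions; memory use would still have to be read off the server process separately.

    # Rough benchmark-harness sketch against an OpenAI-compatible local endpoint.
    # Assumes a server such as llama-server or Ollama listening on localhost:8080;
    # adjust the URL and model name to taste. Streamed chunks are used as a
    # crude proxy for generated tokens.
    import json, time, requests

    URL = "http://localhost:8080/v1/chat/completions"   # assumed endpoint
    PROMPTS = [
        "Write a Python function that parses a unified diff.",
        "Summarise in one sentence: LAN-hosted models trade capability for control.",
    ]

    for prompt in PROMPTS:
        start = time.perf_counter()
        first_token_at = None
        chunks = 0
        with requests.post(URL, json={
            "model": "local",                    # many local servers ignore this
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,
            "max_tokens": 256,
        }, stream=True, timeout=600) as resp:
            for line in resp.iter_lines():
                if not line or not line.startswith(b"data: "):
                    continue
                payload = line[len(b"data: "):]
                if payload == b"[DONE]":
                    break
                delta = json.loads(payload)["choices"][0]["delta"]
                if delta.get("content"):
                    if first_token_at is None:
                        first_token_at = time.perf_counter()
                    chunks += 1
        total = time.perf_counter() - start
        ttft = (first_token_at - start) if first_token_at else float("nan")
        toks_per_s = chunks / max(total - ttft, 1e-6)
        print(f"ttft={ttft:.2f}s  ~tok/s={toks_per_s:.1f}  total={total:.1f}s")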

◧◩◪
3. echelo+lg1[view] [source] 2026-02-03 21:29:46
>>kristo+R51
I don't even need "open weights" to run on hardware I own.

I am fine renting an H100 (or whatever), as long as I theoretically have access to and own everything running.

I do not want my career to become dependent upon Anthropic.

Honestly, the best thing for "open" might be for us to build open pipes and services and models for which we can rent cloud compute. Large models will outpace small models: LLMs, video models, "world" models, etc.

I'd even be fine time-sharing a running instance of a large model in a large cloud. As long as all the constituent pieces are open where I could (in theory) distill it, run it myself, spin up my own copy, etc.

I do not deny that big models are superior. But I worry about the power the large hyperscalers are getting while we focus on small "open" models that really can't match the big ones.

We should focus on competing with large models, not artisanal homebrew stuff that is irrelevant.

◧◩◪◨
4. Aurorn+lt1[view] [source] 2026-02-03 22:37:46
>>echelo+lg1
> I do not want my career to become dependent upon Anthropic

As someone who switches between Anthropic and ChatGPT depending on the month and has dabbled with other providers and some local LLMs, I think this fear is unfounded.

It's really easy to switch between models. The different models have some differences that you notice over time, but the techniques you learn in one place aren't going to lock you into any one provider.

◧◩◪◨⬒
5. airstr+ev1[view] [source] 2026-02-03 22:47:06
>>Aurorn+lt1
Right, but ChatGPT might not exist at some point, and if we don't force-feed the open inference ecosystem and infrastructure back into the mouths of the AI devourer that is this hype cycle, we'll simply be accepting our inevitable, painful death.
◧◩◪◨⬒⬓
6. Aurorn+602[view] [source] 2026-02-04 01:54:31
>>airstr+ev1
> right, but ChatGPT might not exist at some point

There are multiple frontier models to choose from.

They’re not all going to disappear.

◧◩◪◨⬒⬓⬔
7. Bukhma+Kh2[view] [source] 2026-02-04 04:33:41
>>Aurorn+602
This seems absurdly naive to me given the path big tech has taken in the last 5 years. There's literally infinite upside and almost no downside for the big players in constraining the ecosystem.

You don’t think that eventually Google/OpenAI are going to go to the government and say, "It's really dangerous to have all these foreign/unregulated models being used everywhere, could you please get rid of them?" Suddenly they have an oligopoly on the market.

[go to top]