zlacker

[return to "Qwen3-Coder-Next"]
1. alexel+t9 2026-02-03 16:39:12
>>daniel+(OP)
Is this going to need 1x or 2x of those RTX PRO 6000s to allow for a decent KV for an active context length of 64-100k?

It's one thing running the model without any context, but coding agents build it up close to the max and that slows down generation massively in my experience.

2. redrov+GT 2026-02-03 19:41:39
>>alexel+t9
I have a 3090 and a 4090 and it all fits in VRAM with Q4_0 weights and quantized KV at 96k context. ~1400 t/s prompt processing, ~80 t/s generation.
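For a rough sense of why 96k of quantized KV can fit alongside the weights: KV-cache memory grows linearly with context length and with bytes per element. A minimal sketch, using a hypothetical GQA configuration (layer count, KV-head count, and head dim below are illustrative placeholders, not the actual Qwen3-Coder-Next config, which isn't given in the thread):

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   ctx_len: int, bytes_per_elem: float) -> int:
    # K and V each store n_kv_heads * head_dim values per token per layer,
    # hence the factor of 2.
    return int(2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem)

# Hypothetical GQA config for illustration only.
LAYERS, KV_HEADS, HEAD_DIM = 48, 8, 128

for ctx in (64_000, 96_000):
    fp16 = kv_cache_bytes(LAYERS, KV_HEADS, HEAD_DIM, ctx, 2)  # fp16 KV
    q8 = kv_cache_bytes(LAYERS, KV_HEADS, HEAD_DIM, ctx, 1)    # ~8-bit KV
    print(f"{ctx // 1000}k ctx: fp16 ~{fp16 / 2**30:.1f} GiB, "
          f"8-bit ~{q8 / 2**30:.1f} GiB")
```

Under these placeholder dimensions, 96k of fp16 KV is on the order of 17-18 GiB, and 8-bit KV quantization halves that, which is the kind of saving that makes a two-card 48 GB setup workable.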