zlacker

[parent] [thread] 2 comments
1. alexel+(OP)[view] [source] 2026-02-03 16:39:12
Is this going to need one or two of those RTX PRO 6000s to fit a decent KV cache for an active context length of 64-100k?

It's one thing running the model without any context, but coding agents build it up close to the max and that slows down generation massively in my experience.

replies(2): >>segmon+Zh >>redrov+dK
2. segmon+Zh[view] [source] 2026-02-03 17:51:52
>>alexel+(OP)
One 6000 should be fine; a Q6_K_XL GGUF will be almost on par with the raw weights and should still leave room for 128k-256k of context.
3. redrov+dK[view] [source] 2026-02-03 19:41:39
>>alexel+(OP)
I have a 3090 and a 4090 and it all fits in VRAM with Q4_0 weights and a quantized KV cache at 96k ctx: ~1400 t/s prompt processing, ~80 t/s generation.
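The trade-off everyone above is weighing is that KV-cache VRAM grows linearly with context length, so quantizing the cache (or picking a model with fewer KV heads) buys context directly. A rough back-of-envelope sketch; the model dimensions here are illustrative placeholders, not the actual model's (read the real values from its config), and the 8-bit figure ignores per-block scale overhead:

```python
# Sketch: estimate KV-cache VRAM for a given context length.
# The dimensions below are HYPOTHETICAL example values, not the
# thread's model -- substitute the real layer/head counts.

def kv_cache_bytes(ctx_len: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int) -> int:
    """Leading 2x covers the K and V tensors; one element per
    layer x KV head x head dim x cached token."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

if __name__ == "__main__":
    # Example GQA-style shape: 48 layers, 8 KV heads, head_dim 128.
    for ctx in (65_536, 131_072):
        for label, b in (("fp16 KV", 2), ("~8-bit KV", 1)):
            gib = kv_cache_bytes(ctx, 48, 8, 128, b) / 2**30
            print(f"ctx={ctx:>7}  {label}: {gib:5.1f} GiB")
```

With these example dimensions, fp16 KV at 64k context is 12 GiB and doubles at 128k, which is why an 8-bit (or 4-bit) cache is what makes 96k-256k plausible on one or two consumer cards.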