zlacker

1. storys+(OP) 2026-01-28 10:32:51
Curious how you handle sharding and KV cache pressure for a 120B model. Are you doing tensor parallelism across consumer cards, or is it a unified memory setup?
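To make the question concrete, here's the kind of two-card setup I was picturing (a sketch assuming vLLM; the tensor_parallel_size and model name are my guesses, not something you've confirmed):

    # Hypothetical dual-GPU serving config: tensor parallelism shards
    # each weight matrix across both cards, so every GPU holds roughly
    # half the model plus its slice of the KV cache.
    from vllm import LLM

    llm = LLM(
        model="openai/gpt-oss-120b",  # HF repo id for the 120B model
        tensor_parallel_size=2,       # split across 2 GPUs
    )
    print(llm.generate("Hello")[0].outputs[0].text)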
replies(1): >>embedd+02
2. embedd+02 2026-01-28 10:49:58
>>storys+(OP)
I don't; it fits on one card with the full context. I think the native MXFP4 weights take ~70GB of VRAM (out of 96GB available on an RTX Pro 6000), so I still have room to spare to run GPT-OSS-20B alongside for smaller tasks, plus Wayland+GNOME :)
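Rough math on why it fits, if you're curious (my own back-of-envelope; the split between MoE expert params and the rest is an assumption, and MXFP4's block scales add roughly 0.25 bits/param):

    # Ballpark VRAM budget for GPT-OSS-120B in native MXFP4.
    moe_params = 110e9    # assumed params in MoE expert weights (MXFP4)
    other_params = 7e9    # assumed attention/embedding/norm params (bf16)
    mxfp4_bits = 4.25     # 4-bit values + one shared scale per 32-elem block
    bf16_bits = 16

    weights_gb = (moe_params * mxfp4_bits + other_params * bf16_bits) / 8 / 1e9
    print(f"weights ≈ {weights_gb:.0f} GB")  # ≈ 72 GB, near the ~70GB I see
    # leaves ~96 - 72 ≈ 24 GB of headroom for KV cache, activations,
    # and the 20B model on the side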
replies(1): >>storys+Sd
3. storys+Sd 2026-01-28 12:24:54
>>embedd+02
I thought the RTX 6000 Ada was 48GB? If you have 96GB available, that implies a dual setup, so you must be relying on tensor parallelism to shard the model weights across the pair.
replies(1): >>embedd+5g
4. embedd+5g 2026-01-28 12:40:08
>>storys+Sd
RTX Pro 6000 (Blackwell), not the RTX 6000 Ada. 96GB VRAM, single card.
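Easy to verify from Python, for the record (assuming PyTorch with CUDA available):

    # Print the device name and total VRAM of GPU 0.
    import torch

    props = torch.cuda.get_device_properties(0)
    print(props.name, f"{props.total_memory / 2**30:.0f} GiB")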