zlacker

1. cmrdpo+ (OP) 2026-02-03 23:59:54
I tried the FP8 in vLLM on my Spark, and although it fit in memory, I started swapping as soon as I actually ran any queries and, yeah, couldn't get a context larger than 8k.

I figured out later that this is apparently because vLLM de-quantizes to BF16 at runtime, so it's pointless to run the FP8?

I get about 30-35 tok/second using llama.cpp and a 4-bit quant, with a 200k+ context, using only 50GB of RAM.
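
Back-of-the-envelope weight-size arithmetic (the ~100B parameter count is a guess picked to roughly line up with the 50GB figure above, not anything stated in the thread):

    # Rough weight footprints for a hypothetical ~100B-parameter model.
    # Weights only -- ignores KV cache, activations and runtime overhead.
    N_PARAMS = 100e9  # assumed, not the actual model's spec

    def weights_gb(bits_per_weight):
        return N_PARAMS * bits_per_weight / 8 / 1e9

    print(weights_gb(16))   # ~200 GB: BF16, what the FP8 weights would occupy if fully de-quantized
    print(weights_gb(8))    # ~100 GB: FP8 as stored
    print(weights_gb(4.5))  # ~56 GB: a 4-bit-class GGUF quant (~4.5 bits/weight effective)

On a 128GB Spark, only the FP8 and 4-bit rows leave real headroom for a long-context KV cache, and a full BF16 copy doesn't fit at all.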

replies(1): >>justab+59
2. justab+59 2026-02-04 00:55:34
>>cmrdpo+(OP)
Running llama.cpp rather than vLLM, it's happy enough to run the FP8 variant with a 200k+ context, using about 90GB of VRAM.
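
The KV cache is the other big line item at 200k context. A generic sizing sketch (the 48-layer / 8-KV-head / 128-dim shape is a placeholder, not the actual model's config):

    # KV cache bytes = 2 (K and V) * layers * kv_heads * head_dim * ctx * bytes/elem
    def kv_cache_gb(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
        return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 1e9

    print(kv_cache_gb(48, 8, 128, 200_000))     # ~39 GB with an FP16 cache
    print(kv_cache_gb(48, 8, 128, 200_000, 1))  # ~20 GB with an 8-bit cache

llama.cpp allocates the cache for the full -c value up front, and quantizing it (--cache-type-k / --cache-type-v) is part of how a 200k+ context stays affordable on a box like this.
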
replies(1): >>cmrdpo+Aj
3. cmrdpo+Aj 2026-02-04 02:08:06
>>justab+59
Yeah, but what did you get for tok/sec there? Memory bandwidth is the limitation with these devices. With a 4-bit quant I didn't get over 35-39 tok/sec, and averaged more like 30 when doing actual tool use with opencode. I can't imagine FP8 being faster.
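
Crude roofline for that: decode has to stream the active weights from RAM once per generated token, so tok/s is capped at bandwidth divided by bytes read per token. The 273 GB/s figure is the Spark's published memory bandwidth; the per-token traffic is the assumption (a dense model reads all of its weights, a MoE only the active experts):

    # Decode ceiling: tokens/s <= memory bandwidth / weight bytes read per token
    BANDWIDTH_GBPS = 273  # DGX Spark's published LPDDR5X bandwidth

    def decode_ceiling(weight_gb_per_token):
        return BANDWIDTH_GBPS / weight_gb_per_token

    print(decode_ceiling(8))   # ~34 tok/s if ~8 GB of weights are touched per token
    print(decode_ceiling(50))  # ~5.5 tok/s if all 50 GB were streamed every token
    print(decode_ceiling(90))  # ~3 tok/s ceiling for streaming a full 90 GB FP8 model

30-35 tok/s pencils out to only ~8 GB of weights read per token; FP8 nearly doubles that traffic versus a ~4.5-bit quant, so all else equal you'd expect it to be slower, not faster.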