zlacker

1. justab+(OP) 2026-02-04 00:55:34
Running llama.cpp rather than vLLM, it's happy enough to run the FP8 variant with 200k+ context using about 90 GB of VRAM.
replies(1): >>cmrdpo+va
2. cmrdpo+va 2026-02-04 02:08:06
>>justab+(OP)
yeah, but what tok/sec did you get there? Memory bandwidth is the limitation with these devices. With 4-bit I didn't get over 35-39 tok/sec, and averaged more like 30 when doing actual tool use with opencode. I can't imagine FP8 being faster.
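The bandwidth argument above can be sketched as a back-of-envelope calculation: during decode, every generated token has to stream the active weights from memory, so tokens/sec is roughly capped at bandwidth divided by active weight bytes. All numbers below are illustrative assumptions (the thread does not name the device, its bandwidth, or the model's active parameter size); doubling weight precision from 4-bit to FP8 roughly halves the ceiling, which is why FP8 shouldn't be faster.

```python
# Rough decode-speed ceiling for a memory-bandwidth-bound LLM.
# tok/sec <= memory bandwidth / bytes of weights read per token.
# These figures are assumptions for illustration, not measurements
# from the thread.

def decode_ceiling_tok_s(bandwidth_gb_s: float, active_weights_gb: float) -> float:
    """Upper bound on tokens/sec when weight streaming dominates."""
    return bandwidth_gb_s / active_weights_gb

bandwidth_gb_s = 273.0      # assumed unified-memory bandwidth, GB/s
active_4bit_gb = 7.0        # assumed active weight bytes/token at 4-bit
active_fp8_gb = 14.0        # FP8 doubles the bytes per weight

print(decode_ceiling_tok_s(bandwidth_gb_s, active_4bit_gb))  # 39.0 tok/s ceiling
print(decode_ceiling_tok_s(bandwidth_gb_s, active_fp8_gb))   # 19.5 tok/s ceiling
```

Under these assumed numbers the 4-bit ceiling lands near the 35-39 tok/sec reported above, and the FP8 ceiling is about half of it; real throughput sits below the ceiling because KV-cache reads and compute add overhead on top of weight streaming.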