1. mitter+(OP)
2026-02-03 19:51:17
What do you run this on, if I may ask? LM Studio, Ollama, llama.cpp? And which CLI?
2. MrDrMc+XM
2026-02-04 00:03:07
>>mitter+(OP)
Can't speak for parent, but I've had decent luck with llama.cpp on my triple Ryzen AI Pro 9700 XTs.
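For anyone following along, here is a minimal sketch of what a multi-GPU llama-server launch can look like, shown via Python's subprocess. The model path and split ratios are placeholders, not the commenter's exact setup; llama-server is assumed to be on PATH. The flags (-m, -ngl, -ts, -c, --port) are standard llama.cpp options.

    import subprocess

    # Hypothetical launch of llama.cpp's llama-server across three GPUs.
    # GGUF path and split ratios are placeholders.
    subprocess.run(
        [
            "llama-server",
            "-m", "/models/model.gguf",  # placeholder model path
            "-ngl", "99",                # offload all layers to the GPUs
            "-ts", "1,1,1",              # split tensors evenly across 3 GPUs
            "-c", "8192",                # context size
            "--port", "8080",
        ],
        check=True,
    )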
3. redwoo+941
2026-02-04 01:53:50
>>mitter+(OP)
I run Qwen3-Coder-Next (Qwen3-Coder-Next-UD-Q4_K_XL) on a custom build around the Framework Desktop ITX board (Ryzen AI Max+ 395, 128GB). Average eval is 200-300 t/s and output 35-40 t/s with llama.cpp on ROCm. I prefer Claude Code for the CLI.
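For reference, llama-server exposes an OpenAI-compatible HTTP API, so a setup like this can be driven from any client. A minimal sketch, assuming the server is running on its default 127.0.0.1:8080; the prompt is illustrative.

    import requests

    # Query llama-server's OpenAI-compatible chat endpoint.
    # Host and port assume the server's defaults; match your launch flags.
    resp = requests.post(
        "http://127.0.0.1:8080/v1/chat/completions",
        json={
            "messages": [
                {"role": "user", "content": "Reverse a string in Python."}
            ],
            "temperature": 0.2,
            "max_tokens": 256,
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])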