zlacker

1. LTL_FT+(OP) 2026-01-23 19:12:02
I mean, you could put together a cluster of DGX Sparks (8 of them) and hit 100 tps with high concurrency:

https://forums.developer.nvidia.com/t/6x-spark-setup/354399/...

Or a single user at about 10 tps.

This is probably around $30k if you go with the 1 TB models.
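
Rough math on that price, assuming roughly $3k per 1 TB unit plus a couple grand for networking (my guesses, not quoted prices):

    # back-of-envelope cluster cost - all figures are assumptions
    units = 8
    price_per_spark = 3_000   # ~USD for a partner 1 TB DGX Spark (Founders 4 TB is ~$4k)
    networking = 2_000        # switch + cabling, very rough
    print(f"~${units * price_per_spark + networking:,}")   # -> ~$26,000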

replies(2): >>Camper+q6 >>bayind+27
2. Camper+q6 2026-01-23 19:43:01
>>LTL_FT+(OP)
10 tps, maybe, given the Spark's hobbled memory bandwidth. That's too slow, though. That thread is all about training, which is more compute-intensive.
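
That ~10 tps ceiling falls out of the bandwidth math alone, if you assume Kimi K2's ~32B active parameters at 8-bit, ~273 GB/s per Spark, and layers pipelined across the nodes (all assumed numbers, not from that thread):

    # decode is memory-bandwidth-bound: every token streams the active weights once
    active_params = 32e9      # Kimi K2 active params per token (MoE) - assumed
    bytes_per_param = 1       # 8-bit quantization - assumed
    bandwidth = 273e9         # bytes/s of LPDDR5X per Spark - assumed
    # pipeline parallelism: nodes read their slice in turn, so one node's bandwidth at a time
    seconds_per_token = active_params * bytes_per_param / bandwidth
    print(f"~{1 / seconds_per_token:.1f} tokens/s single-stream")   # ~8.5 tps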

A couple of DGX Stations are more likely to work well for what I have in mind. But at this point, I'd be pleasantly surprised if those ever ship. If they do, they will be more like $200K each than $100K.

replies(1): >>LTL_FT+mG
3. bayind+27 2026-01-23 19:45:38
>>LTL_FT+(OP)
I'd love for more people to try to run local LLMs at the speeds they wish to use and face the music of the fans, heat, and power bills.

When people talk about the cost and requirements of AI, anyone who hasn't run it themselves can't really grasp what they're describing.

4. LTL_FT+mG 2026-01-23 22:53:32
>>Camper+q6
I linked results where the user ran Kimi K2 across his 8-node cluster. Inference results are listed for 1, 10, and 100 concurrent requests.

Edit to add:

Yeah, those stations with the GB300 look more along the lines of what I would want as well, but I agree, they’re probably way beyond my reach.
