zlacker

[parent] [thread] 17 comments
1. reilly+(OP)[view] [source] 2026-02-04 22:11:11
Which takes a $20k Thunderbolt cluster of two 512GB-RAM Mac Studio Ultras to run at full quality…
replies(5): >>teaear+o >>bigyab+Wl >>0xbadc+Yo >>deaux+JE >>polyno+cZ
2. teaear+o[view] [source] 2026-02-04 22:13:37
>>reilly+(OP)
Which, while expensive, is dirt cheap compared to a comparable Nvidia or AMD system.
replies(2): >>blharr+P4 >>Schema+s5
3. blharr+P4[view] [source] [discussion] 2026-02-04 22:35:19
>>teaear+o
What speed are you getting at that level of hardware though?
4. Schema+s5[view] [source] [discussion] 2026-02-04 22:39:29
>>teaear+o
It's still very expensive compared to using the hosted models which are currently massively subsidised. Have to wonder what the fair market price for these hosted models will be after the free money dries up.
replies(2): >>cactus+d9 >>whatsu+wH
5. cactus+d9[view] [source] [discussion] 2026-02-04 23:01:22
>>Schema+s5
Inference is profitable. Maybe we hit a limit and we don't need as many expensive training runs in the future.
replies(2): >>teaear+Vj >>paxys+tl
6. teaear+Vj[view] [source] [discussion] 2026-02-05 00:08:37
>>cactus+d9
For sure Claude Code isn’t profitable
replies(1): >>bdangu+xp
7. paxys+tl[view] [source] [discussion] 2026-02-05 00:18:23
>>cactus+d9
Inference APIs are probably profitable, but I doubt the $20-$100 monthly plans are.
8. bigyab+Wl[view] [source] 2026-02-05 00:23:00
>>reilly+(OP)
"Full quality" being a relative assessment, here. You're still deeply compute constrained, that machine would crawl at longer contexts.
9. 0xbadc+Yo[view] [source] 2026-02-05 00:46:00
>>reilly+(OP)
Most benchmarks show very little improvement from "full quality" over a quantized lower-bit model. You can shrink the model to a fraction of its "full" size and keep 92-95% of the performance, with less VRAM use.
replies(1): >>Muffin+tt
10. bdangu+xp[view] [source] [discussion] 2026-02-05 00:50:04
>>teaear+Vj
Neither was Uber and … and …
replies(1): >>plagia+Iw
11. Muffin+tt[view] [source] [discussion] 2026-02-05 01:23:17
>>0xbadc+Yo
> You can shrink the model to a fraction of its "full" size and keep 92-95% of the performance, with less VRAM use.

Are there a lot of options for "how far" you quantize? How much VRAM does it take to get the 92-95% you are speaking of?

replies(1): >>bigyab+Ku
12. bigyab+Ku[view] [source] [discussion] 2026-02-05 01:33:54
>>Muffin+tt
> Are there a lot of options for "how far" you quantize?

So many: https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overvie...

> How much VRAM does it take to get the 92-95% you are speaking of?

For inference, it's heavily dependent on the size of the weights (plus context). Quantizing an f32 or f16 model to q4/mxfp4 won't necessarily use 92-95% less VRAM, but it's pretty close for smaller contexts.
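
A back-of-the-envelope sketch of that, weights plus a naive full-attention KV-cache term (the 70B model shape below is purely illustrative, and GQA/MQA shrink the KV cache a lot in most recent models):

    # Rough VRAM estimate: weight memory + KV cache (illustrative only)
    def estimate_vram_gb(params_billion, weight_bits,
                         n_layers, hidden_dim, context_len, kv_bits=16):
        weights_gb = params_billion * weight_bits / 8  # billions of params -> GB
        # Full-attention KV cache: K and V per layer, per token, per hidden dim.
        # Ignores GQA/MQA, which cut this down considerably in newer models.
        kv_gb = 2 * n_layers * hidden_dim * context_len * (kv_bits / 8) / 1e9
        return weights_gb + kv_gb

    # Hypothetical 70B dense model, 4-bit weights, 32k context:
    print(estimate_vram_gb(70, 4, n_layers=80, hidden_dim=8192, context_len=32768))

At short contexts the weights dominate, so the savings roughly track the weight quantization ratio; at long contexts the KV cache adds a non-trivial chunk on top.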

replies(1): >>Muffin+jy
13. plagia+Iw[view] [source] [discussion] 2026-02-05 01:50:19
>>bdangu+xp
Businesses will desire me for my insomnia once Anthropic starts charging congestion pricing.
14. Muffin+jy[view] [source] [discussion] 2026-02-05 02:01:59
>>bigyab+Ku
Thank you. Could you give a tl;dr rough estimate along the lines of "the full model needs ____ this much VRAM, and if you use ____, the most common quantization method, it will run in ____ this much VRAM"?
replies(1): >>omneit+xR
15. deaux+JE[view] [source] 2026-02-05 02:59:05
>>reilly+(OP)
And that's at unusable speeds - it takes about triple that amount to run it decently fast at int4.

Now as the other replies say, you should very likely run a quantized version anyway.

16. whatsu+wH[view] [source] [discussion] 2026-02-05 03:23:38
>>Schema+s5
I wonder if the "distributed AI computing" touted by some of the new crypto projects [0] works and is relatively cheaper.

0. https://www.daifi.ai/

17. omneit+xR[view] [source] [discussion] 2026-02-05 05:07:23
>>Muffin+jy
It’s a trivial calculation to make (+/- 10%).

Number of params == “variables” in memory

VRAM footprint ~= number of params * size of a param

A 4B model at 8 bits works out to about 4GB of VRAM, give or take, the same number as the parameter count. At 4 bits, ~2GB, and so on. Kimi is about 512GB at 4 bits.
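
The same rule of thumb in a few lines of Python, weights only, ignoring context/KV cache and runtime overhead (the ~1T parameter count for Kimi is an assumption):

    # Weights-only rule of thumb: VRAM (GB) ~= params (billions) * bits / 8
    def weights_gb(params_billion, bits):
        return params_billion * bits / 8

    print(weights_gb(4, 8))     # 4B model,  8-bit -> ~4 GB
    print(weights_gb(4, 4))     # 4B model,  4-bit -> ~2 GB
    print(weights_gb(1000, 4))  # ~1T model, 4-bit -> ~500 GB, roughly the 512GB above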

18. polyno+cZ[view] [source] 2026-02-05 06:23:52
>>reilly+(OP)
Depending on what your usage requirements are, Mac Minis running UMA over RDMA are becoming a feasible option. At roughly 1/10 of the cost you're getting much, much more than 1/10 of the performance. (YMMV)

https://buildai.substack.com/i/181542049/the-mac-mini-moment
