zlacker

[parent] [thread] 6 comments
1. gigate+(OP)[view] [source] 2026-02-03 16:48:18
I have a 128GB M3 Max MacBook Pro. Running the GPT-OSS model on it via LM Studio, once the context gets large enough the fans spin up to 100% and it’s unbearable.
replies(2): >>pixelp+3d >>embedd+ai
2. pixelp+3d[view] [source] 2026-02-03 17:41:37
>>gigate+(OP)
Laptops are fundamentally a poor form factor for high performance computing.
3. embedd+ai[view] [source] 2026-02-03 18:01:00
>>gigate+(OP)
Yeah, Apple hardware doesn’t seem ideal for large LLMs. Give it a go with a dedicated GPU if you’re inclined and you’ll see a big difference :)
replies(2): >>polite+bU >>marci+Mx2
4. polite+bU[view] [source] [discussion] 2026-02-03 20:37:14
>>embedd+ai
What are some good GPUs to look for if you're getting started?
replies(1): >>wincy+ac2
5. wincy+ac2[view] [source] [discussion] 2026-02-04 05:36:43
>>polite+bU
If you want to actually run models on a computer at home? The RTX 6000 Blackwell Pro Workstation, hands down. 96GB of VRAM, and it fits into a standard case (it’s big, but it’s essentially the same form factor as an RTX 5090, just with much denser VRAM).

My RTX 5090 can fit OSS-20B but it’s a bit underwhelming, and for $3000 if I didn’t also use it for gaming I’d have been pretty disappointed.
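A rough way to sanity-check whether a model fits on a given card (my own back-of-envelope sketch, not from this thread — the bytes-per-weight and overhead numbers are assumptions, and it ignores the KV cache, which grows with context):

```python
def fits_in_vram(params_billions, bits_per_weight, vram_gb, overhead_gb=2.0):
    """Weights-only estimate: params * bytes per weight, plus a fixed
    overhead allowance. KV cache and activations would add more on top."""
    weight_gb = params_billions * bits_per_weight / 8  # billions of params -> GB
    return weight_gb + overhead_gb <= vram_gb

# A 120B model at 4-bit is ~60GB of weights: fits in 96GB, not in 32GB.
print(fits_in_vram(120, 4, 96))  # True
print(fits_in_vram(120, 4, 32))  # False
# A 20B model at 4-bit is ~10GB: fits easily on a 32GB RTX 5090.
print(fits_in_vram(20, 4, 32))   # True
```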

replies(1): >>gigate+qp4
6. marci+Mx2[view] [source] [discussion] 2026-02-04 08:45:34
>>embedd+ai
Their issue with the Mac was the sound of the fans spinning. I doubt a dedicated GPU will resolve that.
7. gigate+qp4[view] [source] [discussion] 2026-02-04 19:23:08
>>wincy+ac2
At anywhere from 9–12k euros [1], I’d be better off paying 200 a month (2,400 a year) for the super-duper lots-of-tokens tier, getting model and token improvements “for free”, than buying such a card only for it to be obsolete on purchase, since newer, better cards are always coming out.

[1] https://www.idealo.de/preisvergleich/OffersOfProduct/2063285...
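The break-even arithmetic above works out roughly as follows (numbers from the comment; ignores electricity, resale value, and rate limits on the subscription):

```python
# Years of subscription you could buy for the price of the card.
card_cost_low, card_cost_high = 9_000, 12_000  # quoted €9-12k range
subscription_per_year = 2_400                  # €200/month tier

years_low = card_cost_low / subscription_per_year
years_high = card_cost_high / subscription_per_year
print(years_low, years_high)  # 3.75 5.0
```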
