zlacker

[parent] [thread] 4 comments
1. embedd+(OP)[view] [source] 2026-02-03 18:01:00
Yeah, Apple hardware doesn't seem ideal for large LLMs. Give it a go with a dedicated GPU if you're inclined and you'll see a big difference :)
replies(2): >>polite+1C >>marci+Cf2
2. polite+1C[view] [source] 2026-02-03 20:37:14
>>embedd+(OP)
What are some good GPUs to look for if you're getting started?
replies(1): >>wincy+0U1
3. wincy+0U1[view] [source] [discussion] 2026-02-04 05:36:43
>>polite+1C
If you want to actually run models on a computer at home? The RTX 6000 Blackwell Pro Workstation, hands down. 96GB of VRAM, and it fits into a standard case (I mean, it's big, but it's essentially the same form factor as an RTX 5090, just with much denser VRAM).

My RTX 5090 can fit OSS-20B, but it's a bit underwhelming, and at $3000, if I didn't also use it for gaming, I'd have been pretty disappointed.
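
For anyone wondering why a 20B model is roughly the ceiling on a 5090 but easy on a 96GB card, here's a back-of-the-envelope sketch. The overhead factor and quantization bit-widths are my assumptions, not measured numbers:

```python
# Rough VRAM sizing: weights = params * bits/8, plus ~20% assumed
# headroom for KV cache and activations. All figures are estimates.

def vram_needed_gb(params_billion: float, bits_per_weight: float,
                   overhead: float = 1.2) -> float:
    """Estimate GB of VRAM needed to hold a model's weights plus overhead."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# 20B model at 4-bit quantization: ~12 GB, fits a 32 GB RTX 5090 with room.
print(round(vram_needed_gb(20, 4), 1))   # -> 12.0
# Same model at 16-bit: ~48 GB, where a 96 GB card starts to earn its keep.
print(round(vram_needed_gb(20, 16), 1))  # -> 48.0
```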

replies(1): >>gigate+g74
4. marci+Cf2[view] [source] 2026-02-04 08:45:34
>>embedd+(OP)
Their issue with the Mac was the sound of the fans spinning. I doubt a dedicated GPU will resolve that.
5. gigate+g74[view] [source] [discussion] 2026-02-04 19:23:08
>>wincy+0U1
At anywhere from 9-12k euros [1], I'd be better off paying 200 a month for the super duper lots-of-tokens tier (2400 a year), getting model improvements and token improvements etc. for "free", than buying such a card and having it be obsolete on purchase, since newer, better cards are always coming out.

[1] https://www.idealo.de/preisvergleich/OffersOfProduct/2063285...
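
To make the break-even explicit (prices are the ones quoted above; everything else, like ignoring electricity and resale value, is a simplifying assumption):

```python
# Card-vs-subscription break-even, using the figures from the comment:
# a 9-12k euro card vs. a 200 euro/month (2400/year) subscription.

subscription_per_year = 200 * 12  # euros

for card_cost in (9_000, 12_000):
    years = card_cost / subscription_per_year
    print(f"{card_cost} euro card = {years:.2f} years of subscription")
# -> 9000 euro card = 3.75 years of subscription
# -> 12000 euro card = 5.00 years of subscription
```

So the card only pays for itself if it stays useful (and sufficient) for roughly four to five years, which is the commenter's obsolescence worry.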
