zlacker

[return to "My AI skeptic friends are all nuts"]
1. cesarb+Zl 2025-06-02 23:25:17
>>tablet+(OP)
This article does not touch on the thing that worries me most about LLMs: the dependence.

Unless you can run the LLM locally, on a computer you own, you are completely dependent on a remote centralized system to do your work. Whoever controls that system can arbitrarily raise prices, subtly manipulate the outputs, store and do anything they want with your inputs, or simply cease to operate. And since, according to this article, only the latest and greatest LLM is acceptable (I saw the exact same argument six months ago), running locally is not viable. In a recent discussion, someone mentioned needing a home server with something like 384 GB of RAM just to run one LLM locally.
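
As a rough back-of-envelope of where numbers like that come from (the parameter counts, quantization widths, and the 1.2x overhead factor below are illustrative assumptions, not measurements):

    # Approximate RAM needed to hold a dense LLM for inference:
    # weights are params * bytes-per-param, plus overhead for the
    # KV cache, activations, and the runtime (assumed ~20% here).

    def ram_needed_gb(params_billion: float, bytes_per_param: float,
                      overhead: float = 1.2) -> float:
        """Approximate resident memory for inference, in GB."""
        return params_billion * bytes_per_param * overhead

    for params, quant, bpp in [(70, "8-bit", 1.0),
                               (405, "8-bit", 1.0),
                               (405, "4-bit", 0.5)]:
        print(f"{params}B @ {quant}: ~{ram_needed_gb(params, bpp):.0f} GB")
    # 70B  @ 8-bit: ~84 GB
    # 405B @ 8-bit: ~486 GB  -- hence home servers in the hundreds-of-GB class
    # 405B @ 4-bit: ~243 GB

So any frontier-scale model pushes you straight into server-class memory territory.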

To those of us who like Free Software because of the freedom it gives us, this is a severe regression.

2. rvnx+Th1 2025-06-03 09:24:06
>>cesarb+Zl
With the Mac Studio you get 512 GB of unified memory (shared between CPU and GPU), which is enough to run some exciting models.

In 20 years, memory capacity has grown about 32x (five doublings, roughly one every four years).

At that rate, we could have 16 TB memory computers by 2045.

That could unlock a lot of possibilities, even if 1 TB alone isn't enough by then (better architectures, more compact representations of data, etc. will also help).
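
A quick sanity check of that projection, using the comment's own growth figures (32x over 20 years; these are assumptions, not a forecast):

    # Extrapolating top-end consumer memory from the figures above.
    # 32x over 20 years = 2**5, i.e. one doubling every ~4 years.
    top_end_gb = 512                 # Mac Studio, 2025
    doubling_period_years = 20 / 5   # ~4 years per doubling

    years_ahead = 2045 - 2025
    projected_gb = top_end_gb * 2 ** (years_ahead / doubling_period_years)
    print(f"~{projected_gb / 1024:.0f} TB by 2045")   # ~16 TB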

3. fennec+Dj1 2025-06-03 09:41:15
>>rvnx+Th1
Yeah, for £10,000. And you get 512 GB of bandwidth-starved memory.

Still, I suppose that's better than what Nvidia has on offer atm (even if a rack of GPUs gives you much, much higher memory throughput).

4. theshr+do1 2025-06-03 10:33:00
>>fennec+Dj1
AKCSHUALLY the M-series memory upgrades are expensive because the memory is on-package, and the bandwidth is much higher than on comparable PC hardware.

In some cases it's more cost-effective to get M-series Mac Minis than Nvidia GPUs.

5. lolind+Mx1 2025-06-03 11:56:12
>>theshr+do1
They know that, but every account I've read acknowledges that unified memory is worse than dedicated VRAM. It's just much better than running LLMs on a CPU, and the only way for a regular consumer to get 64 GB+ of graphics memory.
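
To put numbers on "bandwidth-starved": in single-stream decoding, each generated token has to stream roughly every weight through the memory bus once, so throughput is capped at bandwidth divided by model size. A minimal sketch (the bandwidth figures are rough public specs used for illustration, not benchmarks):

    # Memory-bandwidth ceiling on single-stream decode throughput:
    # tokens/sec <= memory_bandwidth / bytes_of_weights_read_per_token.
    def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
        return bandwidth_gb_s / model_gb

    model_gb = 70  # e.g. a 70B-parameter model at 8-bit
    for name, bw_gb_s in [("Apple M-series unified memory", 800),
                          ("Single high-end GPU (GDDR6X)", 1000),
                          ("Datacenter GPU (HBM3)", 3350)]:
        print(f"{name}: <= {max_tokens_per_sec(bw_gb_s, model_gb):.0f} tok/s")

Which is why a rack of GPUs with HBM pulls away from unified memory even when the total capacity is similar.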