[return to "My AI skeptic friends are all nuts"]
1. cesarb+Zl 2025-06-02 23:25:17
>>tablet+(OP)
This article does not touch on the thing which worries me the most with respect to LLMs: the dependence.

Unless you can run the LLM locally, on a computer you own, you are now completely dependent on a remote centralized system to do your work. Whoever controls that system can arbitrarily raise the prices, subtly manipulate the outputs, store and do anything they want with the inputs, or even suddenly cease to operate. And since, according to this article, only the latest and greatest LLM is acceptable (and I've seen that exact same argument six months ago), running locally is not viable (I've seen, in a recent discussion, someone mention a home server with something like 384G of RAM just to run one LLM locally).
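
For a rough sense of where numbers like 384G come from, here's a back-of-envelope sketch (the 4-bit quantization and the overhead factor are my assumptions, not anything from the article):

    # Rough RAM to hold model weights: params * bits-per-param / 8,
    # plus some headroom for KV cache and activations (the factor is a guess).
    def weights_gb(params_billion, bits_per_param=4, overhead=1.2):
        bytes_total = params_billion * 1e9 * bits_per_param / 8
        return bytes_total * overhead / 1e9

    for name, p in [("70B", 70), ("405B", 405)]:
        print(f"{name} @ 4-bit: ~{weights_gb(p):.0f} GB")
    # 70B @ 4-bit: ~42 GB; 405B @ 4-bit: ~243 GB. The largest
    # open-weight models are why home servers end up needing
    # hundreds of gigabytes of RAM.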

To those of us who like Free Software because of the freedom it gives us, this is a severe regression.

2. Flemlo+WP1 2025-06-03 13:48:53
>>cesarb+Zl
I think 384 GB of RAM is surprisingly reasonable, tbh.

$200-300/month is already $7,200-10,800 over 3 years.

And I do expect dedicated AI hardware to show up in a few years, something like a GPU.

An "AIPU", where you can swap out the AI chip the way you'd swap a graphics card.

3. Boiled+kk2 2025-06-03 16:49:59
>>Flemlo+WP1
> I think 384 GB of RAM is surprisingly reasonable, tbh.

> $200-300/month is already $7,200-10,800 over 3 years.

Except at the current crazy rate of improvement, cloud-based models will likely be ~50x better by then, and you'll still have the same system.

4. simonw+At2 2025-06-03 17:43:04
>>Boiled+kk2
I've had the same system (M2 64GB MacBook Pro) for three years.

2.5 years ago it could just about run LLaMA 1, and that model sucked.

Today it can run Mistral Small 3.1, Gemma 3 27B, and Llama 3.3 70B on the exact same hardware, and those models are competitive with the best available cloud-hosted model from two years ago (GPT-4).

The best hosted models (o3, Claude 4, Gemini 2.5, etc.) are still way better than the best models I can run on my 3-year-old laptop, but the rate of improvement for those local models (on the same hardware) has been truly incredible.
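
If you want to poke at local models yourself, here's a minimal sketch using the ollama Python client (the model tag is an assumption; substitute whatever you've pulled locally):

    # pip install ollama -- talks to a locally running Ollama server,
    # so nothing leaves your machine.
    import ollama

    # Assumed tag; a 4-bit quant of a 27B model wants roughly 20 GB,
    # which is comfortable on a 64 GB machine.
    response = ollama.chat(
        model="gemma3:27b",
        messages=[{"role": "user", "content": "Say hello in five words."}],
    )
    print(response["message"]["content"])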
