zlacker

[return to "My AI skeptic friends are all nuts"]
1. baobun+44[view] [source] 2025-06-02 21:34:31
>>tablet+(OP)
What about the privacy aspect and other security risks, though? So far, all the praise I hear on productivity is from people using cloud-hosted models.

Claude, Gemini, Copilot, and ChatGPT are non-starters for privacy-minded folks.

So far, local experiments with agents have left me underwhelmed. I've tried everything on ollama that can run on my dedicated Ryzen 8700G with 96GB DDR5. I'm ready to blow ~10-15k USD on a better rig if I see value in it, but extrapolating from current results, I believe it'll be another CPU generation before I can expect a net productivity gain from properly, securely run local models once I factor in the setup and the meta-work.
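To be concrete, this is roughly the shape of the "local only" setup I mean - a minimal sketch that keeps prompts on the box by talking to an ollama server on its default localhost port (the model name and helper function are just illustrative):

    # Minimal sketch: query a locally hosted model via ollama's HTTP API,
    # so prompts and completions never leave the machine.
    # Assumes `ollama serve` is running on its default port (11434) and a
    # model has already been pulled (e.g. `ollama pull llama3`).
    import requests

    def ask_local_model(prompt: str, model: str = "llama3") -> str:
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=300,
        )
        resp.raise_for_status()
        # Non-streamed responses return the full completion in "response".
        return resp.json()["response"]

    if __name__ == "__main__":
        print(ask_local_model("Summarize the tradeoffs of local vs cloud LLMs."))

Everything stays on the local network, which is the whole point - but the quality and speed I get out of setups like this is where the underwhelm comes from.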

◧◩
2. simonw+Ob[view] [source] 2025-06-02 22:21:12
>>baobun+44
Almost all of the cloud vendors have policies saying that they will not train on your input if you are a paying customer.

The single biggest productivity boost you can get in LLM world is believing them when they make those promises to you!

◧◩◪
3. JohnKe+fn[view] [source] 2025-06-02 23:33:37
>>simonw+Ob
> The single biggest productivity boost you can get in LLM world is believing them when they make those promises to you!

I'm having a hard time interpreting what you mean here. It sounds like something straight out of a cult.

◧◩◪◨
4. simonw+1K[view] [source] 2025-06-03 03:15:36
>>JohnKe+fn
An LLM vendor says to you "we promise not to train on your input". You have two options:

1. Believe them. Use their products and benefit from them.

2. Disbelieve them. Refuse to use their products. Miss out on benefiting from them.

I pick option 1. I think that's the better option to pick if you want to be able to benefit from what this technology can do for you.

Personally I think "these people are lying about everything" is a stronger indication of a cult mindset. Not everyone is your enemy.

◧◩◪◨⬒
5. baobun+hO[view] [source] 2025-06-03 04:10:45
>>simonw+1K
Well, I've personally been lied to about privacy claims by at least Google, Meta, Amazon, and Microsoft; some of those lies have been documented in court. OpenAI's communication has obviously been dishonest and shady at times if you keep track. All of the above have fallen in line with the current administration and whatever future demands it may make to pin down or cut off anyone opposing certain military acts against civilians, or anyone otherwise deemed politically problematic. DeepSeek's public security snafu does not instil confidence that they could keep their platform secure even if they tried. And so on.

Fool me twice, you can't get fooled again.

◧◩◪◨⬒⬓
6. taurat+UX[view] [source] 2025-06-03 05:49:13
>>baobun+hO
The worst part to me is how little anyone seems to care about privacy - it's just how the world is. The US economy (or at least almost all e-marketing) seems to run on the idea that there's no such thing as privacy by default. It's not a subject that gets talked about nearly enough. Everything is already known by Uncle Sam regardless. It's really strange, or maybe fortunate, that we're basically at the place we always worried about, yet things haven't gone totally wrong yet. Corporate governance has been not that terrible (they know the data is a golden goose they can't un-kill). We'll see what happens in the next decade though - a company like Google, with so much data but losing market share, might be tempted to be evil, or in today's parlance, to have a fiduciary responsibility to juice people's data.