zlacker

[parent] [thread] 5 comments
1. sdelli+(OP)[view] [source] 2026-02-04 20:44:16
The key hurdle for AI to leap is establishing trust with users. No one trusts the big players (for good reason), and it is causing serious anxiety among investors. It seems Claude acknowledges this and is looking to make trust a critical part of their marketing messaging by promising no ads or product placement. The problem is that serving ads is only one facet of trust. There are trust issues around privacy, intellectual property, transparency, training data, security, accuracy, and simply "being evil" that Claude's marketing doesn't acknowledge or address. Trust, on the scale they need, is going to be very hard for any of them to establish, if not impossible.
replies(2): >>popalc+13 >>jstumm+Q4
2. popalc+13[view] [source] 2026-02-04 20:55:51
>>sdelli+(OP)
Impossible. The only way to know what is happening is to have the code run on your own infra.
replies(1): >>sfink+It
3. jstumm+Q4[view] [source] 2026-02-04 21:06:06
>>sdelli+(OP)
What do you mean? Google is arguably the most trusted organization in the world by revealed preference. And with 800(?) million ChatGPT users, I have a hard time reading that as a trust problem.
replies(1): >>olivie+gj
4. olivie+gj[view] [source] [discussion] 2026-02-04 22:18:16
>>jstumm+Q4
Usage metrics don't reveal preference in all cases. The fact that these companies are sketchy/untrustworthy is practically a meme, including among non-tech people. Their services are widely relied upon, but they enjoy very little subjective goodwill.
5. sfink+It[view] [source] [discussion] 2026-02-04 23:15:51
>>popalc+13
That still doesn't mean much unless you're doing your own training or getting the weights from a trusted source, and neither of those mean much without knowing something about the data being trained on.

If someone is trying to influence your results, running the inference on your own infrastructure prevents some attack vectors, but not the more plausible and worrying ones.

replies(1): >>popalc+bI
6. popalc+bI[view] [source] [discussion] 2026-02-05 00:54:35
>>sfink+It
I don't think people are concerned about the models' math being biased/tainted (people know of it, but that largely doesn't factor into the "security concerns" people cite). Typically, it's about how we know our data is not going to be seen by a third party. That's what I'm speaking to. Running on your own infra, you can guarantee there are no phone-homes.