zlacker

[return to "X offices raided in France as UK opens fresh investigation into Grok"]
1. miki12+dl3[view] [source] 2026-02-04 05:31:41
>>vikave+(OP)
This vindicates the pro-AI censorship crowd I guess.

It definitely makes it clear what is expected of AI companies. Your users aren't responsible for what they use your model for, you are, so you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identities... well, that's good for your profits I guess.

2. popalc+5u3[view] [source] 2026-02-04 06:53:17
>>miki12+dl3
It's a bit of a leap to say that the model must be censored. SD and all the open image gen models are capable of all kinds of things, but nobody has gone after the open model trainers. They have gone after the companies making profits from providing services.
3. vinter+pD3[view] [source] 2026-02-04 08:14:27
>>popalc+5u3
So far, yes, but as far as I can tell their case against the AI giants isn't based on their being for-profit services in any way.
4. popalc+2y6[view] [source] 2026-02-05 00:51:35
>>vinter+pD3
The for-profit part may or may not be a qualifier, but the architecture of a centralized service means it automatically becomes the scene of the crime -- either the dissemination or the storage of illegal material. Whereas if Stability creates a model, and others use that model locally, Stability's relationship to the crime is ad hoc. They aren't an accessory.