zlacker

[return to "X offices raided in France as UK opens fresh investigation into Grok"]
1. miki12+dl3 2026-02-04 05:31:41
>>vikave+(OP)
This vindicates the pro-AI censorship crowd I guess.

It definitely makes clear what is expected of AI companies. Your users aren't responsible for what they use your model for; you are. So you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identity... well, that's good for your profits, I guess.

2. popalc+5u3 2026-02-04 06:53:17
>>miki12+dl3
It's a bit of a leap to say that the model must be censored. SD and all the open image gen models are capable of all kinds of things, but nobody has gone after the open model trainers. They have gone after the companies making profits from providing services.
3. Kaiser+NP3 2026-02-04 09:49:18
>>popalc+5u3
Again, it's all about what's reasonable.

Firstly, does the open model explicitly or tacitly allow CSAM generation?

Secondly, when the trainers are made aware of the problem, do they ignore it or attempt to put protections in place?

Thirdly, do they pull in data that is likely to allow that kind of content to be generated?

Fourthly, when they are told that this is happening, do they pull the model?

Fifthly, do they charge for access or host the service and allow users to generate said content on their own servers?
