It definitely makes it clear what is expected of AI companies. Your users aren't responsible for what they use your model for, you are, so you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identities... well, that's good for your profits I guess.
Yes, they could have an uncensored model, but then they would need proper moderation to delete this kind of content instantly or ban users who produce it. Or not allow it in the first place.
It doesn’t matter how CSAM is produced; the only thing that matters is that it is on the platform.
I am flabbergasted that people even defend this.
Did X do enough to prevent its website being used to distribute illegal content, i.e. non-consensual sexual material of both adults and children?
Now reintroduce AI generation, where X plays a more active role in facilitating the creation of that illegal content.
Firstly does the open model explicitly/tacitly allow CSAM generation?
Secondly, when the trainers are made aware of the problem, do they ignore it or attempt to put in place protections?
Thirdly, do they pull in data that is likely to allow that kind of content to be generated?
Fourthly, when they are told that this is happening, do they pull the model?
Fifthly, do they charge for access/host the service and allow users to generate said content on their own servers?
If it was about blocking the social media they'd just block it, like they did with Russia Today, CUII-Liste Lina, or Pavel Durov.
It's the same playbook, used again and again: for war, civil liberties crackdowns, lockdowns, COVID, and so on.
0) I want (1); start the playbook.
A) Something bad is here.
B) You need to feel X + panic about it.
C) We are solving it via (1).
Because you reacted at B, you will support C. Problem, reaction, solution. It gives the playmakers the (1) they wanted.
We all know this is going on. But I guess we like knowing someone is pulling the strings. We like being led, and maybe even manipulated, because perhaps in the familiar system (which yields the undeniable goods of our current way of life) there is safety and stability? How else to explain it?
Maybe the need to be entertained with drama is a hackable side effect of stable societies populated by people who evolved as warriors, hunters and survivors.
I think the HN crowd is more nuanced than you're giving them credit for: https://hn.algolia.com/?q=chat+control
Not that this would _ever_ happen on Hacker News. :|
Twitter publicly advertised it can create CSAM?
I have been off Twitter for several years, and I am open to being wrong here, but that sounds unlikely.