It definitely makes clear what is expected of AI companies: your users aren't responsible for what they use your model for, you are, so you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identity... well, that's good for your profits, I guess.
Twitter publicly advertised it can create CSAM?
I have been off Twitter for several years and I am open to being wrong here, but that sounds unlikely.