It definitely makes it clear what is expected of AI companies. Your users aren't responsible for what they use your model for; you are, so you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identities... well, that's good for your profits, I guess.
Firstly, does the open model explicitly or tacitly allow CSAM generation?
Secondly, when the trainers are made aware of the problem, do they ignore it or attempt to put in place protections?
Thirdly, do they pull in data that is likely to allow that kind of content to be generated?
Fourthly, when they are told that this is happening, do they pull the model?
Fifthly, do they charge for access, host the service themselves, and allow users to generate said content on their servers?