It definitely makes clear what is expected of AI companies. Your users aren't responsible for what they use your model for; you are. So you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identities... well, that's good for your profits, I guess.
It's the same playbook, used again and again: for war, civil liberties crackdowns, lockdowns, COVID, and so on. Step 0: I want (1). Then run the playbook: A) something bad is here; B) you need to feel X plus panic about it; C) we are solving it via (1). Because you reacted at B, you will support C. Problem, reaction, solution. The playmakers get the (1) they wanted.
We all know this is going on. But I guess we like knowing someone is pulling the strings. We like being led, and maybe even manipulated, because perhaps the familiar system (which yields the undeniable goods of our current way of life) offers safety and stability? How else to explain it?
Maybe the need to be entertained with drama is a hackable side effect of stable societies populated by people who evolved as warriors, hunters, and survivors.