No platform ever should allow CSAM content.
And the fact that they didn’t even care and didn’t want to spend money on implementing guardrails or moderation is deeply concerning.
This has imho nothing to do with model censorship, and everything to do with allowing that kind of content on a platform.
A provider should have no responsibility for how the tools are used. That is on users. This is a can of worms that should stay closed, because we all lose freedoms just because of a couple of bad actors. An AI tool's main job is to obey. We are hurtling toward an "I'm sorry, Dave. I'm afraid I can't do that" future at breakneck speed.
We already apply this logic elsewhere. Car makers must include seatbelts. Pharma companies must ensure safety. Platforms must moderate illegal content. Responsibility is shared when the risk is systemic.
Yes, AI chatbots have to do everything in their power to prevent users from easily generating such content.
AND
Yes, people who do so (even on a self-hosted model) have to be punished.
I believe it is OK that Grok is being investigated because the point is to figure out whether this was intentional or not.
Just my opinion.