zlacker

[return to "X offices raided in France as UK opens fresh investigation into Grok"]
1. mnewme+pE3[view] [source] 2026-02-04 08:22:55
>>vikave+(OP)
Good one.

No platform should ever allow CSAM.

And the fact that they didn’t even care and didn’t want to spend money on implementing guardrails or moderation is deeply concerning.

This imho has nothing to do with model censorship, and everything to do with allowing that kind of content on a platform.

◧◩
2. yibg+RP4[view] [source] 2026-02-04 16:10:46
>>mnewme+pE3
My natural reaction here is, like I think most others', that yes, Grok / X bad, they shouldn't be able to generate CSAM / deepfakes.

But I am having trouble justifying in a consistent manner why Grok / X should be liable here instead of the user. I've seen a few arguments here that mostly come down to:

1. It's Grok the LLM generating the content, not the user.

2. The distribution. That this isn't just on the user's computer but instead posted on X.

For 1. it seems to break down if we look more broadly at how LLMs are used, e.g. as coding agents. We're basically starting to treat LLMs as a higher-level framework now. We don't hold vendors of programming languages or frameworks responsible if someone uses them to create CSAM. Yes, the LLM generated the content, but the user still provided the instructions to do so.

For 2. if Grok instead generated the content for download, would the liability go away? What if Grok generated the content for download only, and the user then uploaded it to X manually? If Grok isn't liable in that case, then why does the automatic posting (from the user's instructions) make it different? If it is liable, then it's not about the distribution anymore.

There are some comparisons to Photoshop: that if I created a deepfake with Photoshop, I'm liable, not Adobe. If Photoshop had an "upload to X" button, and I created CSAM using Photoshop and hit the button to upload to X directly, is Adobe now liable?

What am I missing?

◧◩◪
3. dragon+Zx6[view] [source] 2026-02-05 00:51:22
>>yibg+RP4
> But I am having trouble justifying in a consistent manner why Grok / X should be liable here instead of the user.

This seems to rest on the false assumptions that (1) legal liability is exclusive, and (2) investigating X is not important both to establishing X’s liability and to pursuing the users, to the extent that they would also be subject to liability.

X/xAI may be liable for any or all of the following reasons:

* xAI generated virtual child pornography with the likenesses of actual children, which is generally illegal, even if that service was procured by a third party.

* X and xAI distributed virtual child pornography with the likenesses of actual children, which is generally illegal, irrespective of who generated and supplied it.

* To the extent that liability for either of the first two bullet points would be eliminated or mitigated by a lack of knowledge of the prohibited content and by prompt action once the actor became aware, X often punished users for the prompts producing the virtual child pornography without taking prompt action to remove the xAI-generated virtual child pornography resulting from those prompts, demonstrating knowledge and intent.

* When the epidemic of Grok-generated nonconsensual pornography, including child pornography, drew attention, X and xAI responded by attempting to monetize the capability by limiting the tool to paid X subscribers only, showing an attempt to profit commercially from it, which is, again, generally illegal.

[go to top]