
[return to "X offices raided in France as UK opens fresh investigation into Grok"]
1. Altern+ut 2026-02-03 13:39:21
>>vikave+(OP)
> Prosecutors say they are now investigating whether X has broken the law across multiple areas.

This step could have come before a police raid.

This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

2. 317070+qy3 2026-02-04 07:35:27
>>Altern+ut
Well, there is evidence that this company made and distributed CSAM and pornographic deepfakes for profit. The investigators are not lacking evidence on that point.

So the question becomes whether it was done knowingly or recklessly, hence a police raid to gather evidence.

See also [0] for a legal discussion in the German context.

[0] https://arxiv.org/html/2601.03788v1

3. skissa+6I3 2026-02-04 08:50:44
>>317070+qy3
> Well, there is evidence that this company made and distributed CSAM

I think one big issue with this statement is that "CSAM" lacks a precise legal definition; the relevant legal term(s) vary from country to country, with differing definitions. While sexual imagery of real minors is highly illegal everywhere, there is a whole range of other material – textual stories, drawings, animation, AI-generated images of nonexistent minors – which can be seriously criminal on one side of an international border and de facto legal on the other.

And I'm not actually sure what the legal definition is in France; the relevant provision, Article 227-23 of the French Penal Code [0], seems superficially similar to the legal definition of "child pornography" in the United States (post-Ashcroft v. Free Speech Coalition), so some – but maybe not all – of the "CSAM" Grok is accused of generating wouldn't actually fall under it. (Of course, I don't know how French courts interpret it, so in practice it may mean something broader than my reading of the text suggests.)

And I think this is part of the issue – xAI's executives are likely focused on compliance with US law on these topics and less concerned with complying with non-US law, despite the fact that CSAM laws in much of the rest of the world are much broader than in the US. That's less of an issue for Anthropic/Google/OpenAI, since their executives don't have the same "anything that's legal" attitude that xAI often has. And, as I said, while that's undoubtedly true in general, I'm unsure to what extent it actually holds for France in particular.

[0] https://www.legifrance.gouv.fr/codes/section_lc/LEGITEXT0000...

4. graeme+cp5 2026-02-04 18:38:41
>>skissa+6I3
> And I think this is part of the issue – xAI's executives are likely focused on compliance with US law on these topics, less concerned with complying with non-US law

True, but outright child porn is illegal everywhere (as you said), and the borderline-legal stuff is something most of your audience is quite happy to have removed. I can't imagine you'd get many complaints for removing AI-generated sexual images of minors, for example, so it seems reasonable to play it safe.

> That's less of an issue for Anthropic/Google/OpenAI, since their executives don't have the same "anything that's legal" attitude which xAI often has.

This is also common, but it is irritating too, as it means the rest of the world is stuck with silly American attitudes about things like nudity and alcohol – for example, YouTube videos blurring out bits of Greek statues because the creators are scared of being demonetised. These are things people take kids to see in museums!
