This step could come before a police raid.
This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.
The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.
Seizing records is usually a major step in an investigation. It's how you get evidence.
Sure, it could just be harassment, but this is also how normal police work looks. France has a reasonable judicial system, so absent other evidence I'm inclined to believe this was legit.
On the one hand, it seems "obvious" that Grok should somehow be legally required to have guardrails stopping it from producing kiddie porn.
On the other hand, it also seems "obvious" that laws forcing 3D printers to detect and block attempts to print firearms are patently bullshit.
The thing is, I'm not sure how I can reconcile those two seemingly-obvious statements in a principled manner.
If you use a service like Grok, then you use somebody else's computer and equipment. X is the owner of the computer that produced CP. So of course X is at least partly liable for producing CP.
Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok changing photos of adults into them wearing bikinis, which is far less bad.
I wouldn't even consider this a reason if it weren't for the fact that OpenAI and Google, and hell, literally every image model out there, all have the same "this guy edited this underage girl's face into a bikini" problem (this was the most public example I've heard, so I'm going with that as my example). People still jailbreak ChatGPT, and they've poured how much money into that?
For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that’s what you’re asking for.
https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab
The claim that they released a tool with "seemingly no guardrails" is therefore clearly false. I think what has instead happened here is that some people found a way to circumvent some of those guardrails via something like a jailbreak.
https://www.bbc.co.uk/news/articles/cvg1mzlryxeo
Also, X seem to disagree with you and admit that CSAM was being generated:
https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...
Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
This is because of government pressure (see Ofcom link).
I’d say you’re making yourself look foolish but you seem happy to defend nonces so I’ll not waste my time.
Without such clear legal definitions, going after Grok while not going after Photoshop is just an act of political pressure.
So the question becomes whether it was done knowingly or recklessly, hence a police raid for evidence.
See also [0] for a legal discussion in the German context.
What you’re implying here is that Musk should be immune from any prosecution simply because he is right wing, which…
If you’re hosting content, why shouldn’t you be responsible? Because your business model is impossible if you’re held to account for what’s happening on your premises?
Without safe harbor, people might have to jump through the hoops of buying their own domain name and hosting content themselves. Would that be so bad?
This isn’t about AI or CSAM (Have we seen any other AI companies raided by governments for enabling creation of deepfakes, dangerous misinformation, illegal images, or for flagrant industrial-scale copyright infringement?)
I think one big issue with this statement – "CSAM" lacks a precise legal definition; the precise legal term(s) vary from country to country, with differing definitions. While sexual imagery of real minors is highly illegal everywhere, there's a whole lot of other material – textual stories, drawings, animation, AI-generated images of nonexistent minors – which can be extremely criminal on one side of an international border, de facto legal on the other.
And I'm not actually sure what the legal definition is in France; the relevant article of the French Penal Code, 227-23 [0], seems superficially similar to the legal definition of "child pornography" in the United States (post-Ashcroft v. Free Speech Coalition), and so some, but maybe not all, of the "CSAM" Grok is accused of generating wouldn't actually fall under it. (But of course, I don't know how French courts interpret it, so maybe what it means in practice is something broader than my reading of the text suggests.)
And I think this is part of the issue – xAI's executives are likely focused on compliance with US law on these topics, less concerned with complying with non-US law, in spite of the fact that CSAM laws in much of the rest of the world are much broader than in the US. That's less of an issue for Anthropic/Google/OpenAI, since their executives don't have the same "anything that's legal" attitude which xAI often has. And, as I said – while that's undoubtedly true in general, I'm unsure to what extent it is actually true for France in particular.
[0] https://www.legifrance.gouv.fr/codes/section_lc/LEGITEXT0000...
I'd guess Elon is responsible for that product decision.
There is no functionality for the users to review and approve "Grok" responses to their tweets.
This is how it works, at least in civil law countries. If the prosecutor has reasonable suspicion that a crime is taking place, they send the so-called "judiciary police" to gather evidence. If they find none (or the findings are inconclusive, etc.), the charges are dropped; otherwise they ask the court to go to trial.
On some occasions I take on judiciary police duties for animal welfare. Just last week I participated in a raid. We were not there to arrest anyone, just to gather evidence so the prosecutor could decide whether to press charges and go to trial.
Grok makes it trivial to create fake CSAM and other explicit images. Before, if someone spent a week in Photoshop to do the same, it wouldn't be Adobe that got the blame.
Same for 3D printers. Before, anyone could make a gun provided they had the right tools (which are very expensive); now it's being argued that 3D printers are making this more accessible. Although I would argue it's always been easy to make a gun: all you need is a piece of pipe. So I don't entirely buy the moral panic against 3D printers.
Where that threshold lies I don't know. But I think that's the crux of it. Technology is making previously difficult things easier, to the benefit of all humanity. It's just unfortunate that some less-nice things have been included as well.
That post doesn't contain such an admission, it instead talks about forbidden prompting.
> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
That article links to this article: https://x.com/Safety/status/2011573102485127562 - which contradicts your claim that there were no guardrails before. And as I said, I already tried it a while ago, and Grok also refused to create images of naked adults then.
I would prefer 10,000 service providers to one big one that gets to read all the plaintext communication of the entire planet.
Also, safe harbor doesn't apply because this is published under the @grok handle! It's being published by X under one of their brand names, it's absurd to argue that they're unaware or not consenting to its publication.
In response to what? If CSAM is not being generated, why aren't X just saying that? Instead they're saying "please don't do it."
> which contradicts your claim that there were no guardrails before.
From the linked post:
> However content is created or whether users are free or paid subscribers, our Safety team are working around the clock to add additional safeguards
Which was posted a full week after the initial story broke and after Ofcom started investigative action. So no, it does not contradict my point, which was:
> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
As you quoted.
I really can't decide if you're stupid, think I and other readers are stupid, or so dedicated to defending paedophilia that you'll just tell flat lies to everyone reading your comment.
You have to understand that Europe doesn't give a shit about techbro libertarians and their desire for a new Lamborghini.
They obviously have a court order to collect evidence.
You have offered zero evidence to indicate there is 'political pressure', and that statement by prosecutors doesn't hint at it.
'No crime was prevented by harassing workers' is essentially a non sequitur in this context.
It could be that this is political nonsense, but there would have to be more details.
These issues are really hard but we have to confront them. X can alter electoral outcomes. That's where we are at.
As it stands, I have a bunch of photos on my phone that would almost certainly get flagged by over-eager/overly sensitive child porn detection — close friends and family sending me photos of their kids at the beach. I've helped bathe and dress some of those kids. There's nothing nefarious about any of it, but it's close enough that services wouldn't take the risk, and that would be a loss to us all.
I honestly don't follow it. People creating nudes of others and using the Internet to distribute it can be sued for defamation, sure. I don't think the people hosting the service should be liable themselves, just like people hosting Tor nodes shouldn't be liable by what users of the Tor Network do.
You cannot deliberately use a piece of software to produce an effect that is patently illegal and specific to your usage, and then pretend the software is to blame.
Biased against the man asking Epstein which day would be best for the "wildest" party.
* Internet Watch Foundation
* The BBC
* The Guardian
* X themselves
* Ofcom
And believe the word of an anonymous internet account who claims to have tried to undress women using Grok for "research."
Which is good, that is the sane position to take these days.
The most common French word is "pédopornographie". But my impression is that the definition of that word under French law is possibly narrower than some definitions of the English acronym “CSAM”. Canadian law is much broader, and so what’s legally pédopornographie (English “child pornography”) in Canada may be much closer to broad “CSAM” definitions.
> The point is, X did things that are illegal in France, no matter what you call them.
Which French law are you alleging they violated? Article 227-23 du Code pénal, or something else? And how exactly are you claiming they violated it?
Note the French authorities at this time are not accusing them of violating the law. An investigation is simply a concern or suspicion of a legal violation, not a formal accusation; one possible outcome of an investigation is a formal accusation, another is the conclusion that they (at least technically) didn’t violate the law after all. I don’t think the French legal process has reached a conclusion either way yet.
One relevant case is the unpublished Court of Cassation decision 06-86.763 dated 12 September 2007 [0], which upheld a conviction for child pornography for importing and distributing the anime film “Twin Angels - le retour des bêtes célestes - Vol. 3”. However, the somewhat odd situation is that it appears that film is catalogued by the French national library, [1] although I don’t know if a catalogue entry definitively proves they possess the item. Also, art. 227-23 distinguishes between material depicting under-15s (illegal to even possess) and material depicting under-18s (only illegal to possess if one has intent to distribute); this prosecution appears to have been brought under the latter category only, even though the individual was depicted as being under 15, suggesting this anime might not be illegal to possess in France if one has no intent to distribute it.
But this is the point: one needs to look at the details of exactly what the law says and how exactly the authorities apply it, rather than vague assertions of criminality which might not actually be true.
[0] https://www.legifrance.gouv.fr/juri/id/JURITEXT000007640077/
(Snark aside, in your opinion are there comments on HN that dang would be criminally liable for if it weren't for safe harbor?)
True, but outright child porn is illegal everywhere (as you said), and the borderline-legal stuff is something most of your audience is quite happy to have removed. I cannot imagine you are going to get a lot of complaints if you remove AI-generated sexual images of minors, for example, so it seems reasonable to play it safe.
> That's less of an issue for Anthropic/Google/OpenAI, since their executives don't have the same "anything that's legal" attitude which xAI often has.
This is also common, but it is irritating too, as it means the rest of the world is stuck with silly American attitudes about things like nudity and alcohol; for example, YouTube videos blurring out bits of Greek statues because the creators are scared of being demonetised. These are things people take kids to see in museums!
The difference is that the entire political Left hate and fear Elon and are desperately trying to destroy him.