When notified, he immediately:
* "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing" - https://www.bbc.co.uk/news/articles/ce8gz8g2qnlo
* locked image generation down to paid accounts only (i.e. those individuals that can be identified via their payment details).
Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies... https://www.bbc.com/news/articles/c98p1r4e6m8o
> Have the other AI companies followed suit? They were also allowing users to undress real people
No they weren’t? There were numerous examples of people feeding the same prompts to different AIs and having their requests refused. Not to mention, X was also publicly distributing that material, something other AI companies were not doing. Which is an entirely different legal liability.
You would be _amazed_ at the things that people commit to email and similar.
Here's a Facebook one (leaked, not extracted by authorities): https://www.reuters.com/investigates/special-report/meta-ai-...
https://www.the-independent.com/news/world/americas/crime/us...
Quite.
> That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
Really? By what US definition of CSAM?
https://rainn.org/get-the-facts-about-csam-child-sexual-abus...
"Child sexual abuse material (CSAM) is not “child pornography.” It’s evidence of child sexual abuse—and it’s a crime to create, distribute, or possess. "
lol, they summoned Elon for a hearing on 420
"Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,
I assume the raid is hoping to find communications to establish that timeline, maybe internal concerns that were ignored? Also internal metrics that might show they were aware of the problem. External analysts said Grok was generating a CSAM image every minute!!
https://www.washingtonpost.com/technology/2026/02/02/elon-mu...
Covered here: https://www.theguardian.com/news/2022/jul/10/uber-bosses-tol...
The second Donald Trump threatened to invade a nation allied with France was the second anyone who works with Trump became a legitimate military target.
Like a cruel child dismembering a spider one limb at a time, France and other nations around the world will meticulously destroy whatever resources people like Musk have and the influence those resources give him over their countries.
If Musk displays a sufficient level of resistance to these actions the French will simply assassinate him.
“Google intended to subvert the discovery process, and that Chat evidence was ‘lost with the intent to prevent its use in litigation’ and ‘with the intent to deprive another party of the information’s use in the litigation.’”
https://storage.courtlistener.com/recap/gov.uscourts.cand.37...
VW is another case where similar things happened:
https://www.bloomberg.com/news/articles/2017-01-12/vw-offici...
The thing is: companies don't go to jail, employees do.
Claim that you suspect there may be abuse, and it will trigger a case for a "worrying situation".
Then it's a procedural lottery:
-> If you get lucky, they will investigate, meet the people, and dismiss the case.
-> If you get unlucky, they will take the baby, and it's only then, after a long investigation and a "family assistant" (who will check on you every day), that you can recover your baby.
Typically it's an ex-wife who doesn't like the ex-husband, but it can be a neighbor, etc.
One worker explains that they don't really have time to investigate when processing reports, that they have to act very fast, and that by default it is safer to remove the child from the family: https://www.youtube.com/watch?v=VG9y_-4kGQA
The boss of such an agency doesn't even take the time to answer the journalists there...
-> Example of such case (this man is innocent): https://www.lefigaro.fr/faits-divers/var-un-homme-se-mobilis...
but I can't blame them either, it's not easy to make the right calls.
"today it's my husband to take care of him because sometimes my baby makes me angry that I want to kill him"
but she was saying it normally, like any normal person does when they are angry.-> Whoops, someone talked with 119 to refer a "worrying" situation, baby removed. It's already two years.
There are some non-profits fighting against this: https://lenfanceaucoeur.org/quest-ce-que-le-placement-abusif...
That being said, it's obviously a very small percentage, let's not exaggerate, but it's quite sneaky.
[0] https://www.cbc.ca/news/canada/manitoba/winnipeg-mom-cfs-bac...
[1] https://indianexpress.com/article/india/ariha-family-visit-t...
https://www.cbsnews.com/miami/news/venezuela-survey-trump-ma...
https://www.tampafp.com/rand-paul-and-marco-rubio-clash-over...
> https://www.washingtonpost.com/technology/2026/02/02/elon-mu...
That article has no mention of CSAM. As expected, since you can bet the Post has lawyers checking.
Seriously, every powerful state engages in state terrorism from time to time because they can, and the embarrassment of discovery is weighed against the benefit of eliminating a problem. France is no exception: https://en.wikipedia.org/wiki/Sinking_of_the_Rainbow_Warrior
https://www.theguardian.com/technology/2018/jul/15/elon-musk...
Anyway, to cut to the chase: I just checked out Matthew Green's post on the subject, he is on my list of default "trust what he says about cryptography" along with some others like djb, Nadia Heninger, etc.
Embarrassed to say I did not realise, I should have known! 10+ years ago I used to lurk the IRC dev chans of every relevant cypherpunk project, including TextSecure and OTR chat when I saw Signal being made, and before that was witnessing chats with devs and Ian Goldberg and stuff. I just assumed Telegram was multiparty OTR,
OOPS!
Long-winded post because that is embarrassing (as someone who studied cryptography as a mathematics undergrad in 2009, did a postgrad wargames and computer security course in 2010, and worse - whose word around 2012-2013 was taken on these matters by activists, journalists, and researchers with pretty gnarly threat models, like for instance some Guardian stories and a former researcher into torture - I'm also the person that wrote the bits of 'how to hold a crypto party' that made it a protocol without an organisation and made clear the threat model was that anyone could be there), oops oops oops
Yes, thanks for letting me know. I hang my head in shame for missing that one, or somehow believing it without much investigation; thankfully it was just my own personal use, to contact a friend in the States who isn't already on Signal etc.
EVERYONE: DON'T TRUST TELEGRAM AS END TO END ENCRYPTED CHAT https://blog.cryptographyengineering.com/2024/08/25/telegram...
Anyway, as they say, "use it or lose it": yeah, my assumptions here are no longer valid, and I can't be considered to have an educated opinion if I got something that basic wrong.
https://www.faa.gov/air_traffic/publications/atpubs/aim_html...
> complicité de détention d’images de mineurs présentant un caractère pédopornographique [complicity in the possession of images of minors of a child-pornographic nature]
> complicité de diffusion, offre ou mise à disposition en bande organisée d'image de mineurs présentant un caractère pédopornographique [complicity in the distribution, offer or making available, as part of an organised gang, of images of minors of a child-pornographic nature]
[1]: https://www.tribunal-de-paris.justice.fr/sites/default/files...
For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that's what you're asking for.
[1]: https://www.lemonde.fr/pixels/article/2022/07/10/uber-files-...
[2]: https://www.radiofrance.fr/franceinter/le-rapport-d-enquete-...
https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab
The claim that they released a tool with "seemingly no guardrails" is therefore clearly false. I think what has instead happened here is that some people found a hack to circumvent some of those guardrails via something like a jailbreak.
https://www.bbc.co.uk/news/articles/cvg1mzlryxeo
Also, X seem to disagree with you and admit that CSAM was being generated:
https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...
Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
This is because of government pressure (see Ofcom link).
I’d say you’re making yourself look foolish but you seem happy to defend nonces so I’ll not waste my time.
So the question becomes if it was done knowingly or recklessly, hence a police raid for evidence.
See also [0] for a legal discussion in the German context.
I think one big issue with this statement – "CSAM" lacks a precise legal definition; the precise legal term(s) vary from country to country, with differing definitions. While sexual imagery of real minors is highly illegal everywhere, there's a whole lot of other material – textual stories, drawings, animation, AI-generated images of nonexistent minors – which can be extremely criminal on one side of an international border, de facto legal on the other.
And I'm not actually sure what the legal definition is in France; the relevant article of the French Penal Code 227-23 [0] seems superficially similar to the legal definition of "child pornography" in the United States (post-Ashcroft vs Free Speech Coalition), and so some–but (maybe) not all–of the "CSAM" Grok is accused of generating wouldn't actually fall under it. (But of course, I don't know how French courts interpret it, so maybe what it means in practice is something broader than my reading of the text suggests.)
And I think this is part of the issue – xAI's executives are likely focused on compliance with US law on these topics, less concerned with complying with non-US law, in spite of the fact that CSAM laws in much of the rest of the world are much broader than in the US. That's less of an issue for Anthropic/Google/OpenAI, since their executives don't have the same "anything that's legal" attitude which xAI often has. And, as I said – while that's undoubtedly true in general, I'm unsure to what extent it is actually true for France in particular.
[0] https://www.legifrance.gouv.fr/codes/section_lc/LEGITEXT0000...
> The term “child pornography” is currently used in federal statutes and is defined as any visual depiction of sexually explicit conduct involving a person less than 18 years old. While this phrase still appears in federal law, “child sexual abuse material” is preferred, as it better reflects the abuse that is depicted in the images and videos and the resulting trauma to the child. In fact, in 2016, an international working group, comprising a collection of countries and international organizations working to combat child exploitation, formally recognized “child sexual abuse material” as the preferred term.
Child porn is CSAM.
[1]: https://www.justice.gov/d9/2023-06/child_sexual_abuse_materi...
That post doesn't contain such an admission, it instead talks about forbidden prompting.
> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
That article links to this article: https://x.com/Safety/status/2011573102485127562 - which contradicts your claim that there were no guardrails before. And as I said, I already tried it a while ago, and Grok also refused to create images of naked adults then.
When Bernhard Hugo Goetz shot four teenagers on an NYC subway in the 80s, his PCP-laced marijuana use and stash back at his apartment came up in both sets of trials in the 80s and later in the 90s.
It was ignored (although not the alleged drug use of the teenagers) as Goetz was dubbed The Subway Vigilante and became a hero to the right.
~ https://en.wikipedia.org/wiki/1984_New_York_City_Subway_shoo...
His victims were upscaled to "super predators".
It all depends on the severity of the offence, which itself depends on the category of the material, including whether or not it is CSAM.
The Supreme Court has today delivered its judgment in the case where the court of appeal and district court had sentenced a person to 80 day-fines for child pornography offences on the grounds that he had downloaded Japanese manga drawings onto his computer. The Supreme Court dismisses the indictment.
The judgment concluded that the cartoons in and of themselves may be considered pornographic, and that they depict children. But these are fantasy figures that cannot be mistaken for real children.
https://bleedingcool.com/comics/swedish-supreme-court-exoner...
You have to understand that Europe doesn't give a shit about techbro libertarians and their desire for a new Lamborghini.
[0] https://nypost.com/2025/12/15/business/facebook-most-cited-i... [1] https://en.wikipedia.org/wiki/Suchir_Balaji
A smoking gun would be, for instance, Facebook observing that most of their ads are scams and that the cost of fixing this far exceeds "the cost of any regulatory settlement involving scam ads", and concluding that the company's leadership decided to act only in response to impending regulatory action.
https://www.reuters.com/investigations/meta-is-earning-fortu...
https://www.bbc.com/news/articles/cze3p1j710ko
Reports on sextortion, self-generated indecent images, and grooming via social media/messaging apps:
Snapchat 54%
Instagram 11%
Facebook 7%
WhatsApp 6-9%
X 1-2%
More normally it looks like e.g. this in the UK: https://news.sky.com/video/police-raid-hundreds-of-businesse...
CyberGEND more often seems to do small-time copyright infringement enforcement, but there are a number of authorities with the right to conduct raids.
Here's the mentioned thread: https://x.com/elonmusk/status/2011527119097249996
Mmkay.
https://en.wikipedia.org/wiki/Twitter_under_Elon_Musk#Child_...
"As of June 2023, an investigation by the Stanford Internet Observatory at Stanford University reported "a lapse in basic enforcement" against child porn by Twitter within "recent months". The number of staff on Twitter's trust and safety teams were reduced, for example, leaving one full-time staffer to handle all child sexual abuse material in the Asia-Pacific region in November 2022."
"In 2024, the company unsuccessfully attempted to avoid the imposition of fines in Australia regarding the government's inquiries about child safety enforcement; X Corp reportedly said they had no obligation to respond to the inquiries since they were addressed to "Twitter Inc", which X Corp argued had "ceased to exist"."
Iffy on that front, actually. https://en.wikipedia.org/wiki/Arrest_and_indictment_of_Pavel...
The most common French word is pédopornographie. But my impression is that the definition of that word under French law is possibly narrower than some definitions of the English acronym "CSAM". Canadian law is much broader, and so what's legally pédopornographie (English "child pornography") in Canada may be much closer to broad "CSAM" definitions.
> The point is, X did things that are illegal in France, no matter what you call them.
Which French law are you alleging they violated? Article 227-23 du Code pénal, or something else? And how exactly are you claiming they violated it?
Note the French authorities at this time are not accusing them of violating the law. An investigation is simply a concern or suspicion of a legal violation, not a formal accusation; one possible outcome of an investigation is a formal accusation, another is the conclusion that they (at least technically) didn’t violate the law after all. I don’t think the French legal process has reached a conclusion either way yet.
One relevant case is the unpublished Court of Cassation decision 06-86.763 dated 12 September 2007 [0], which upheld a conviction for child pornography for importing and distributing the anime film “Twin Angels - le retour des bêtes célestes - Vol. 3". However, the somewhat odd situation is that it appears that film is catalogued by the French national library, [1] although I don't know if a catalogue entry definitively proves they possess the item. Also, art. 227-23 distinguishes between material depicting under-15s (illegal to even possess) and material depicting under-18s (only illegal to possess if one has intent to distribute); this prosecution appears to have been brought under the latter category only, even though the individual was depicted as being under 15, suggesting this anime might not be illegal to possess in France if one has no intent to distribute it.
But this is the point - one needs to look at the details of exactly what the law says and how exactly the authorities apply it, rather than vague assertions of criminality which might not actually be true.
[0] https://www.legifrance.gouv.fr/juri/id/JURITEXT000007640077/
I think the HN crowd is more nuanced than you're giving them credit for: https://hn.algolia.com/?q=chat+control
Section 230. https://en.wikipedia.org/wiki/Section_230
As always, Washington doing the hard work of making sure corpos never need to fix anything, ever.
For everyone to make up their own opinion about this poster's honesty, here's where his quote is from [1]. Chosen quotes:
> CSAM includes both real and synthetic content, such as images created with artificial intelligence tools.
> It doesn’t matter if the child agreed to it. It doesn’t matter if they sent the image themselves. If a minor is involved, it’s CSAM—and it’s illegal.
[1]: https://rainn.org/get-the-facts-about-csam-child-sexual-abus...
I used this when an employer was forcing me to use Windows and I needed Linux tools to work efficiently, so I connected home. It goes through firewalls, proxies, etc.
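(The comment doesn't name the tool, but the usual trick is a tunnel that dials out from the locked-down machine to a box at home over a port the corporate firewall or proxy already permits, typically 443. A minimal sketch, assuming OpenSSH on the work machine and an sshd at home listening on 443; the hostname and port numbers are placeholders, not anything from the comment:)

    # Minimal sketch: open an outbound SSH connection to a home box on port 443
    # (outbound 443 usually passes firewalls/proxies) and expose a local SOCKS
    # proxy so other tools on the work machine can ride the tunnel.
    # "home.example.net", 443 and 1080 are placeholder values.
    import subprocess

    subprocess.run([
        "ssh",
        "-N",                     # no remote shell, just keep the tunnel open
        "-p", "443",              # home sshd listening on 443 to look like HTTPS traffic
        "-D", "1080",             # local SOCKS5 proxy for other tools to use
        "user@home.example.net",  # placeholder home host
    ])

(If there's an explicit HTTP proxy in the way, OpenSSH's ProxyCommand option, e.g. with a CONNECT-capable netcat, can push the connection through that too.)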
Anyway, if you want to host this not at home but at a cloud provider, there was HavenCo: https://en.wikipedia.org/wiki/HavenCo - don't ask me how I know about it, just curiosity.