zlacker

X offices raided in France as UK opens fresh investigation into Grok

submitted by vikave+(OP) on 2026-02-03 10:08:52 | 586 points 627 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only [show all posts]
◧◩
9. cbeach+Vh[view] [source] [discussion] 2026-02-03 12:22:54
>>afavou+kh
> when notified, doing nothing about it

When notified, he immediately:

  * "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing" - https://www.bbc.co.uk/news/articles/ce8gz8g2qnlo 

  * locked image generation down to paid accounts only (i.e. those individuals that can be identified via their payment details).
Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies...
◧◩◪
11. afavou+kj[view] [source] [discussion] 2026-02-03 12:29:34
>>cbeach+Vh
You and I must have different definitions of the word “immediately”. The article you posted is from January 15th. Here is a story from January 2nd:

https://www.bbc.com/news/articles/c98p1r4e6m8o

> Have the other AI companies followed suit? They were also allowing users to undress real people

No they weren’t? There were numerous examples of people feeding the same prompts to different AIs and having their requests refused. Not to mention, X was also publicly distributing that material, something other AI companies were not doing. Which is an entirely different legal liability.

◧◩
24. rsynno+4p[view] [source] [discussion] 2026-02-03 13:12:43
>>techbl+8k
> what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?

You would be _amazed_ at the things that people commit to email and similar.

Here's a Facebook one (leaked, not extracted by authorities): https://www.reuters.com/investigates/special-report/meta-ai-...

◧◩◪
34. fanati+Fz[view] [source] [discussion] 2026-02-03 14:12:49
>>omnimu+0k
Even if it is, being affiliated with the US military doesn't make you immune to local laws.

https://www.the-independent.com/news/world/americas/crime/us...

◧◩◪◨
47. chrisj+rT[view] [source] [discussion] 2026-02-03 15:45:13
>>derrid+ej
Er...

"Study uncovers presence of CSAM in popular AI training dataset"

https://www.theregister.com/2023/12/20/csam_laion_dataset/.

◧◩◪◨
48. chrisj+401[view] [source] [discussion] 2026-02-03 16:10:56
>>logicc+km
> The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration.

Quite.

> That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.

Really? By what US definition of CSAM?

https://rainn.org/get-the-facts-about-csam-child-sexual-abus...

"Child sexual abuse material (CSAM) is not “child pornography.” It’s evidence of child sexual abuse—and it’s a crime to create, distribute, or possess. "

57. verdve+Kv1[view] [source] 2026-02-03 18:15:48
>>vikave+(OP)
France24 article on this: https://www.france24.com/en/france/20260203-paris-prosecutor...

lol, they summoned Elon for a hearing on 420

"Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,

◧◩◪◨⬒⬓
63. chrisj+5A1[view] [source] [discussion] 2026-02-03 18:31:43
>>lokar+as1
It was... until it diverted. >>46870196
76. r721+zL1[view] [source] 2026-02-03 19:15:59
>>vikave+(OP)
Another discussion: >>46872894
◧◩
87. arppac+vX1[view] [source] [discussion] 2026-02-03 20:07:00
>>techbl+8k
There was a WaPo article yesterday that talked about how xAI deliberately loosened Grok’s safety guardrails and relaxed restrictions on sexual content in an effort to make the chatbot more engaging and “sticky” for users. xAI employees had to sign new waivers in the summer and start working with harmful content in order to train and enable those features.

I assume the raid is hoping to find communications to establish that timeline, maybe internal concerns that were ignored? Also internal metrics that might show they were aware of the problem. External analysts said Grok was generating a CSAM image every minute!!

https://www.washingtonpost.com/technology/2026/02/02/elon-mu...

◧◩◪
111. wasabi+C72[view] [source] [discussion] 2026-02-03 20:54:57
>>ronsor+KB1
It wasn't erasing as far as I know, but locking all computers.

Covered here: https://www.theguardian.com/news/2022/jul/10/uber-bosses-tol...

◧◩◪◨
116. strong+i92[view] [source] [discussion] 2026-02-03 21:03:45
>>mr_mit+c62
From HN, of course! >>32057651
◧◩◪◨⬒⬓⬔⧯
119. Teever+P92[view] [source] [discussion] 2026-02-03 21:05:56
>>ronsor+T62
Become? https://en.wikipedia.org/wiki/Sinking_of_the_Rainbow_Warrior

The moment Donald Trump threatened to invade a nation allied with France was the moment anyone who works with Trump became a legitimate military target.

Like a cruel child dismembering a spider one limb at a time, France and other nations around the world will meticulously destroy whatever resources people like Musk have and the influence those resources give him over their countries.

If Musk displays a sufficient level of resistance to these actions the French will simply assassinate him.

◧◩◪◨⬒
123. nieman+Da2[view] [source] [discussion] 2026-02-03 21:11:05
>>hn_go_+J42
That can start with self-deleting messages if you are under a court order, and it has happened before:

“Google intended to subvert the discovery process, and that Chat evidence was ‘lost with the intent to prevent its use in litigation’ and ‘with the intent to deprive another party of the information’s use in the litigation.’”

https://storage.courtlistener.com/recap/gov.uscourts.cand.37...

VW is another case where similar things happened:

https://www.bloomberg.com/news/articles/2017-01-12/vw-offici...

The thing is: companies don’t go to jail, employees do.

◧◩◪◨
167. rvnx+lp2[view] [source] [discussion] 2026-02-03 22:28:24
>>hiprob+Ao2
In France it's possible, without legal consequences (though immoral): if you call 119, you can push to have a baby taken from a family for no reason other than that you do not like someone.

Claim that you suspect there may be abuse, and it will trigger a case for a "worrying situation".

Then it's a procedural lottery:

-> If you get lucky, they will investigate, meet the people, and dismiss the case.

-> If you get unlucky, they will take the baby, and it's only then, after a long investigation and a "family assistant" (who will check on you every day), that you can recover your baby.

Typically it's an ex-wife who doesn't like the ex-husband, but it can be a neighbor, etc.

One worker explains that they don't really have time to investigate when processing reports (https://www.youtube.com/watch?v=VG9y_-4kGQA): they have to act very fast, and by default it is considered safer to remove the child from the family.

The head of the agency doesn't even take the time to answer the journalists there...

-> Example of such case (this man is innocent): https://www.lefigaro.fr/faits-divers/var-un-homme-se-mobilis...

but I can't blame them either, it's not easy to make the right calls.

◧◩◪◨⬒⬓
190. rvnx+ys2[view] [source] [discussion] 2026-02-03 22:44:31
>>gf000+ar2
I've seen that happen during harassment; in one YouTube livestream the woman said:

    "today it's my husband to take care of him because sometimes my baby makes me angry that I want to kill him"
but she was saying it normally, like any normal person does when they are angry.

-> Whoops, someone called 119 to report a "worrying" situation, baby removed. It's already been two years.

There are some non-profits fighting against this: https://lenfanceaucoeur.org/quest-ce-que-le-placement-abusif...

That being said, it's obviously a very small percentage - let's not exaggerate - but it's quite sneaky.

◧◩◪◨⬒⬓
208. Sanjay+Oz2[view] [source] [discussion] 2026-02-03 23:27:21
>>agoodu+1s2
Canada and Germany are no different.

[0] https://www.cbc.ca/news/canada/manitoba/winnipeg-mom-cfs-bac...

[1] https://indianexpress.com/article/india/ariha-family-visit-t...

◧◩◪◨⬒⬓
215. almost+MD2[view] [source] [discussion] 2026-02-03 23:47:23
>>Sanjay+kA2
Arrested, and the vast majority of Venezuelans love that it happened.

https://www.cbsnews.com/miami/news/venezuela-survey-trump-ma...

◧◩◪◨⬒⬓⬔
222. Sanjay+qG2[view] [source] [discussion] 2026-02-04 00:02:00
>>almost+MD2
Rand Paul asked Rubio what would happen if the shoe was on the other foot. Every US President from Truman onwards is a war criminal.

https://www.tampafp.com/rand-paul-and-marco-rubio-clash-over...

◧◩◪
226. chrisj+oJ2[view] [source] [discussion] 2026-02-04 00:20:31
>>arppac+vX1
> External analysts said Grok was generating a CSAM image every minute!!

> https://www.washingtonpost.com/technology/2026/02/02/elon-mu...

That article has no mention of CSAM. As expected, since you can bet the Post has lawyers checking.

◧◩◪◨⬒⬓⬔⧯
238. anigbr+ZM2[view] [source] [discussion] 2026-02-04 00:42:31
>>ronsor+T62
People were surprised when the US started just droning boats in the Caribbean and wiping out survivors, but then the government explained that it was law enforcement and not terrorism or piracy, so everyone stopped worrying about it.

Seriously, every powerful state engages in state terrorism from time to time because they can, and the embarrassment of discovery is weighed against the benefit of eliminating a problem. France is no exception: https://en.wikipedia.org/wiki/Sinking_of_the_Rainbow_Warrior

◧◩◪◨⬒⬓⬔
245. zzrrt+FP2[view] [source] [discussion] 2026-02-04 00:58:15
>>anigbr+TK2
Guards can plausibly arrest you without seriously injuring you. But according to https://aviation.stackexchange.com/a/68361 there are no safe options if the pilot really doesn’t want to comply, so there is no “forcing” a plane to land somewhere, just making it very clear that powerful people really want you to stop and might be able to impose more consequences on the ground if you don’t.
◧◩◪◨⬒
249. anigbr+jT2[view] [source] [discussion] 2026-02-04 01:23:20
>>father+BB2
It's odd to be so prim about someone who is notorious for irrational trolling for the sake of mob entertainment.

https://www.theguardian.com/technology/2018/jul/15/elon-musk...

◧◩◪◨
265. derrid+E73[view] [source] [discussion] 2026-02-04 03:14:07
>>direwo+iE2
Ok, thank you! I did not know that, I'm ashamed to admit! Sort of like studying physics at university and a decade later forgetting V=IR when I actually needed it for some solar install. I took a "technical hiatus" of about 5 years and am only recently coming back.

Anyway, cut to the chase: I just checked out Matthew Green's post on the subject. He is on my list of default "trust what he says about cryptography" people, along with some others like djb, Nadia Heninger, etc.

Embarrassed to say I did not realise; I should have known! 10+ years ago I used to lurk the IRC dev channels of every relevant cypherpunk project, including TextSecure and otr-chat when I saw Signal being made, and before that I was witnessing chats with the devs and Ian Goldberg and such. I just assumed Telegram was multiparty OTR.

OOPS!

Long-winded post because that is embarrassing (as someone who studied cryptography in a 2009 mathematics undergrad, did a postgrad wargames and computer security course in 2010, and worse - whose word around 2012-2013 was taken on these matters by activists, journalists, and researchers with pretty gnarly threat models, for instance some Guardian stories and a former researcher into torture; I'm also the person who wrote the bits of 'how to hold a crypto party' that made it a protocol without an organisation and made clear the threat model was that anyone could be there). Oops, oops, oops.

Yes, thanks for letting me know. I hang my head in shame for missing that one, or somehow believing it without much investigation; thankfully it was just my own personal use, to contact a friend in the States who isn't already on Signal, etc.

EVERYONE: DON'T TRUST TELEGRAM AS END TO END ENCRYPTED CHAT https://blog.cryptographyengineering.com/2024/08/25/telegram...

Anyway, as they say, "use it or lose it": my assumptions here are no longer valid, and I can't be considered to have an educated opinion if I got something that basic wrong.

◧◩◪◨⬒⬓
293. nieman+ko3[view] [source] [discussion] 2026-02-04 06:05:08
>>gruez+Ar2
In the USA they would be allowed to down any aircraft not complying with national air interception rules; that would not be murder. It would be equivalent to not dropping a gun when prompted by an officer and being shot as a result.

https://www.faa.gov/air_traffic/publications/atpubs/aim_html...

◧◩◪◨⬒⬓
297. pyrale+bq3[view] [source] [discussion] 2026-02-04 06:18:08
>>chrisj+8O2
The first two points of the official document, which I re-quote below, are about CSAM.

> complicité de détention d’images de mineurs présentant un caractère pédopornographique
(complicity in the possession of images of minors of a child-pornographic nature)

> complicité de diffusion, offre ou mise à disposition en bande organisée d'image de mineurs présentant un caractère pédopornographique
(complicity in the distribution, offering, or making available, as part of an organised group, of images of minors of a child-pornographic nature)

[1]: https://www.tribunal-de-paris.justice.fr/sites/default/files...

◧◩◪◨
298. scott_+cq3[view] [source] [discussion] 2026-02-04 06:18:10
>>cubefo+3o3
Did you miss the numerous news reports? Example: https://www.theguardian.com/technology/2026/jan/08/ai-chatbo...

For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that’s what you’re asking for.

◧◩◪◨⬒⬓
299. pyrale+vq3[view] [source] [discussion] 2026-02-04 06:20:54
>>camina+0E2
Macron's involvement with Uber is public information at this point.

[1]: https://www.lemonde.fr/pixels/article/2022/07/10/uber-files-...

[2]: https://www.radiofrance.fr/franceinter/le-rapport-d-enquete-...

◧◩◪◨⬒
300. cubefo+hr3[view] [source] [discussion] 2026-02-04 06:29:04
>>scott_+cq3
First of all, the Guardian is known to be heavily biased against Musk. They always try hard to make everything about him sound as negative as possible. Second, last time I tried, Grok even refused to create pictures of naked adults. I just tried again and this is still the case:

https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab

The claim that they released a tool with "seemingly no guardrails" is therefore clearly false. I think what has instead happened here is that some people found a hack to circumvent some of those guardrails via something like a jailbreak.

◧◩◪◨⬒⬓
305. scott_+Qv3[view] [source] [discussion] 2026-02-04 07:10:54
>>cubefo+hr3
For more evidence:

https://www.bbc.co.uk/news/articles/cvg1mzlryxeo

Also, X seem to disagree with you and admit that CSAM was being generated:

https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

Also, the reason you can’t make it generate those images now is that they implemented safeguards after that article was written:

https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...

This is because of government pressure (see Ofcom link).

I’d say you’re making yourself look foolish but you seem happy to defend nonces so I’ll not waste my time.

◧◩
314. 317070+qy3[view] [source] [discussion] 2026-02-04 07:35:27
>>Altern+ut
Well, there is evidence that this company made and distributed CSAM and pornographic deepfakes to make a profit. There is no lack of evidence there for the investigators.

So the question becomes whether it was done knowingly or recklessly, hence a police raid for evidence.

See also [0] for a legal discussion in the German context.

[0] https://arxiv.org/html/2601.03788v1

◧◩◪
333. skissa+6I3[view] [source] [discussion] 2026-02-04 08:50:44
>>317070+qy3
> Well, there is evidence that this company made and distributed CSAM

I think one big issue with this statement – "CSAM" lacks a precise legal definition; the precise legal term(s) vary from country to country, with differing definitions. While sexual imagery of real minors is highly illegal everywhere, there's a whole lot of other material – textual stories, drawings, animation, AI-generated images of nonexistent minors – which can be extremely criminal on one side of an international border, de facto legal on the other.

And I'm not actually sure what the legal definition is in France; the relevant article of the French Penal Code 227-23 [0] seems superficially similar to the legal definition of "child pornography" in the United States (post-Ashcroft vs Free Speech Coalition), and so some–but (maybe) not all–of the "CSAM" Grok is accused of generating wouldn't actually fall under it. (But of course, I don't know how French courts interpret it, so maybe what it means in practice is something broader than my reading of the text suggests.)

And I think this is part of the issue – xAI's executives are likely focused on compliance with US law on these topics, less concerned with complying with non-US law, in spite of the fact that CSAM laws in much of the rest of the world are much broader than in the US. That's less of an issue for Anthropic/Google/OpenAI, since their executives don't have the same "anything that's legal" attitude which xAI often has. And, as I said – while that's undoubtedly true in general, I'm unsure to what extent it is actually true for France in particular.

[0] https://www.legifrance.gouv.fr/codes/section_lc/LEGITEXT0000...

◧◩◪◨⬒⬓⬔⧯
348. pyrale+rO3[view] [source] [discussion] 2026-02-04 09:39:36
>>chrisj+LJ3
Quote from US doj [1]:

> The term “child pornography” is currently used in federal statutes and is defined as any visual depiction of sexually explicit conduct involving a person less than 18 years old. While this phrase still appears in federal law, “child sexual abuse material” is preferred, as it better reflects the abuse that is depicted in the images and videos and the resulting trauma to the child. In fact, in 2016, an international working group, comprising a collection of countries and international organizations working to combat child exploitation, formally recognized “child sexual abuse material” as the preferred term.

Child porn is CSAM.

[1]: https://www.justice.gov/d9/2023-06/child_sexual_abuse_materi...

◧◩◪◨⬒⬓⬔
351. cubefo+qP3[view] [source] [discussion] 2026-02-04 09:46:15
>>scott_+Qv3
> Also, X seem to disagree with you and admit that CSAM was being generated

That post doesn't contain such an admission; it instead talks about forbidden prompting.

> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:

That article links to this article: https://x.com/Safety/status/2011573102485127562 - which contradicts your claim that there were no guardrails before. And as I said, I already tried it a while ago, and Grok also refused to create images of naked adults then.

◧◩◪◨⬒
394. defros+E04[view] [source] [discussion] 2026-02-04 11:13:53
>>termin+oZ3
Whether the right gave a shit about weed in the '80s or the '90s depended entirely upon who had it.

When Bernhard Hugo Goetz shot four teenagers on an NYC subway in the 80s, his PCP-laced marijuana use and stash back at his apartment came up in both sets of trials in the 80s and later in the 90s.

It was ignored (although not the alleged drug use of the teenagers) as Goetz was dubbed The Subway Vigilante and became a hero to the right.

~ https://en.wikipedia.org/wiki/1984_New_York_City_Subway_shoo...

His victims were upscaled to "super predators".

◧◩◪◨⬒⬓⬔⧯▣
417. chrisj+h44[view] [source] [discussion] 2026-02-04 11:42:16
>>mortar+dU3
On the contrary, in Europe there is a huge difference. Child porn might get you mere community service, a fine - or even less, as per the landmark court ruling below.

It all depends on the severity of the offence, which itself depends on the category of the material, including whether or not it is CSAM.

The Supreme Court has today delivered its judgment in the case where the court of appeals and the district court sentenced a person for child pornography offences to 80 day-fines on the grounds that he had downloaded Japanese manga drawings onto his computer. The Supreme Court dismissed the indictment.

The judgment concluded that the cartoons in and of themselves may be considered pornographic, and that they represent children. But these are fantasy figures that cannot be mistaken for real children.

https://bleedingcool.com/comics/swedish-supreme-court-exoner...

◧◩◪◨⬒⬓⬔⧯
418. expedi+m44[view] [source] [discussion] 2026-02-04 11:43:04
>>direwo+iX3
https://en.wikipedia.org/wiki/EncroChat

You have to understand that Europe doesn't give a shit about techbro libertarians and their desire for a new Lamborghini.

◧◩◪◨⬒
433. UncleS+h94[view] [source] [discussion] 2026-02-04 12:17:52
>>speed_+Lj2
France has a little more than that...

https://en.wikipedia.org/wiki/Force_de_dissuasion

435. krautb+W94[view] [source] 2026-02-04 12:22:52
>>vikave+(OP)
Raid all of them. Raid Google. Raid Facebook. Raid Apple. Raid Microsoft. Big tech has gotten away with everything from fraud[0] to murder[1] for decades. Black outfits. Rappel lines. Automatics. Touch that server Prakesh, and you won't live to touch another.

[0] https://nypost.com/2025/12/15/business/facebook-most-cited-i...

[1] https://en.wikipedia.org/wiki/Suchir_Balaji

◧◩◪◨⬒
443. _ph_+oc4[view] [source] [discussion] 2026-02-04 12:40:56
>>beAbU+WN3
I think a company which runs a printing business would have some obligation to make sure they are not fulfilling print orders for guns. Another interesting example is printers and copiers, which refuse to copy cash. This is partly facilitated by the EURion constellation (https://en.wikipedia.org/wiki/EURion_constellation) and other means.
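
To illustrate the idea (this is only a toy sketch, not how real copier firmware works, and the reference coordinates below are made-up placeholders rather than the actual EURion geometry), detection amounts to finding small circles in a scan and checking whether any five of them match a known layout regardless of rotation or scale:

    # Toy sketch only: detect small circular blobs and check whether any five of
    # them form a fixed constellation, up to rotation and scale.
    # REFERENCE is a made-up placeholder layout, NOT the real EURion pattern.
    import itertools
    import numpy as np
    import cv2

    REFERENCE = np.array([[0.0, 0.0], [1.0, 0.3], [0.6, 1.1], [1.5, 1.0], [0.9, 1.8]])

    def signature(points):
        """Rotation/scale-invariant signature: sorted pairwise distances, normalised."""
        d = np.array([np.linalg.norm(a - b) for a, b in itertools.combinations(points, 2)])
        return np.sort(d) / d.max()

    REF_SIG = signature(REFERENCE)

    def circle_centres(path):
        """Find small circular blobs that could be constellation dots."""
        gray = cv2.medianBlur(cv2.imread(path, cv2.IMREAD_GRAYSCALE), 5)
        found = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                                 param1=100, param2=20, minRadius=2, maxRadius=15)
        return found[0][:, :2] if found is not None else np.empty((0, 2))

    def looks_like_constellation(centres, tol=0.05):
        """Compare every 5-dot subset against the reference signature."""
        return any(np.max(np.abs(signature(centres[list(c)]) - REF_SIG)) < tol
                   for c in itertools.combinations(range(len(centres)), 5))

    if __name__ == "__main__":
        pts = circle_centres("scan.png")  # hypothetical input image
        print("possible banknote pattern" if looks_like_constellation(pts) else "no match")

As the Wikipedia article notes, the constellation is only one of several means; real detectors are believed to also rely on other, undisclosed watermarks.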
◧◩◪
476. plopil+8v4[view] [source] [discussion] 2026-02-04 14:35:53
>>rsynno+4p
I mean, the example you link is probably an engineer doing their job of signalling to the hierarchy that something went deeply wrong. Of course, Facebook's lack of action afterwards is proof that they did not care, but it's not quite a smoking gun.

A smoking gun would be, for instance, Facebook observing that a large share of their ads are scams, estimating that the cost of fixing this far exceeds "the cost of any regulatory settlement involving scam ads", and the company’s leadership deciding to act only in response to impending regulatory action.

https://www.reuters.com/investigations/meta-is-earning-fortu...

◧◩
482. bright+9D4[view] [source] [discussion] 2026-02-04 15:12:48
>>mnewme+pE3
Agreed. For anyone curious, here's the UK report from the National Society for the Prevention of Cruelty to Children (NSPCC) from 2023-2024.

https://www.bbc.com/news/articles/cze3p1j710ko

Reports on sextortion, self-generated indecent images, and grooming via social media/messaging apps:

    Snapchat   54%
    Instagram  11%
    Facebook    7%
    WhatsApp   6-9%
    X          1-2%

◧◩◪
483. bright+KD4[view] [source] [discussion] 2026-02-04 15:15:50
>>tw85+Ux4
I meant to reply to you with this: >>46886801
◧◩◪◨
485. pjc50+aE4[view] [source] [discussion] 2026-02-04 15:17:44
>>NooneA+4g4
Well, yes, it is actually pretty normal for suspected criminal businesses. What's unusual is that this one has their own publicity engine. Americans are just having trouble coping with the idea of a corporation being held liable for crimes.

More normally it looks like e.g. this in the UK: https://news.sky.com/video/police-raid-hundreds-of-businesse...

CyberGEND more often seem to do small-time copyright infringement enforcement, but there are a number of authorities with the right to conduct raids.

◧◩◪◨⬒
526. apinks+8Y4[view] [source] [discussion] 2026-02-04 16:45:22
>>mooreb+B74
Nah, Musk put out a public challenge in January asking anyone able to generate illegal / porno images to reply and tell him how they were able to bypass the safeguards. Thousands of people tried and failed. I think the most people were able to get was stuff you'd see in an R-rated movie, and even then only for fictional requests, as the latest versions of Grok refuse to undress or redress any real person into anything inappropriate.

Here's the mentioned thread: https://x.com/elonmusk/status/2011527119097249996

◧◩
535. coffee+q05[view] [source] [discussion] 2026-02-04 16:54:52
>>patric+sh4
https://ourworldindata.org/grapher/freedom-of-expression-ind...
◧◩◪
536. ceejay+O05[view] [source] [discussion] 2026-02-04 16:56:55
>>tw85+Ux4
> But Musk actually did take tangible steps to clean it up and many accounts were banned.

Mmkay.

https://en.wikipedia.org/wiki/Twitter_under_Elon_Musk#Child_...

"As of June 2023, an investigation by the Stanford Internet Observatory at Stanford University reported "a lapse in basic enforcement" against child porn by Twitter within "recent months". The number of staff on Twitter's trust and safety teams were reduced, for example, leaving one full-time staffer to handle all child sexual abuse material in the Asia-Pacific region in November 2022."

"In 2024, the company unsuccessfully attempted to avoid the imposition of fines in Australia regarding the government's inquiries about child safety enforcement; X Corp reportedly said they had no obligation to respond to the inquiries since they were addressed to "Twitter Inc", which X Corp argued had "ceased to exist"."

◧◩◪◨
542. Levitz+k25[view] [source] [discussion] 2026-02-04 17:03:06
>>fyredg+yK4
>Unlike the current American administration who condones raids on homes without warrants and justifies violence with lies, this France raid follows something called rule of law.

Iffy on that front, actually. https://en.wikipedia.org/wiki/Arrest_and_indictment_of_Pavel...

◧◩◪◨⬒
548. skissa+I45[view] [source] [discussion] 2026-02-04 17:14:58
>>direwo+YW3
> It wouldn't be called CSAM in France because it would be called a French word. Arguing definitions is arguing semantics.

The most common French word is pédopornographie. But my impression is that the definition of that word under French law is possibly narrower than some definitions of the English acronym “CSAM”. Canadian law is much broader, so what’s legally pédopornographie (English “child pornography”) in Canada may be much closer to broad “CSAM” definitions.

> The point is, X did things that are illegal in France, no matter what you call them.

Which French law are you alleging they violated? Article 227-23 du Code pénal, or something else? And how exactly are you claiming they violated it?

Note the French authorities at this time are not accusing them of violating the law. An investigation is simply a concern or suspicion of a legal violation, not a formal accusation; one possible outcome of an investigation is a formal accusation, another is the conclusion that they (at least technically) didn’t violate the law after all. I don’t think the French legal process has reached a conclusion either way yet.

One relevant case is the unpublished Court of Cassation decision 06-86.763 dated 12 September 2007 [0], which upheld a conviction for child pornography for importing and distributing the anime film “Twin Angels - le retour des bêtes célestes - Vol. 3”. However, the somewhat odd situation is that it appears that film is catalogued by the French national library [1], although I don’t know if a catalogue entry definitively proves they possess the item. Also, art. 227-23 distinguishes between material depicting under-15s (illegal to even possess) and material depicting under-18s (only illegal to possess if one has intent to distribute); this prosecution appears to have been brought under the latter category only - even though the individual was depicted as being under 15 - suggesting this anime might not be illegal to possess in France if one has no intent to distribute it.

But this is the point - one needs to look at the details of exactly what the law says and how exactly the authorities apply it, rather than vague assertions of criminality which might not actually be true.

[0] https://www.legifrance.gouv.fr/juri/id/JURITEXT000007640077/

[1] https://catalogue.bnf.fr/ark:/12148/cb38377329p

◧◩◪
555. troyvi+b65[view] [source] [discussion] 2026-02-04 17:21:56
>>comman+845
There's pro-AI censorship and then there's pro-social media censorship. It was the X offices that were raided. X is a social media company. They would have been raided whether it was AI that created the CSAM or a bunch of X contractors generating it mechanical-turk style.

I think the HN crowd is more nuanced than you're giving them credit for: https://hn.algolia.com/?q=chat+control

◧◩◪◨⬒
572. Toucan+Kq5[view] [source] [discussion] 2026-02-04 18:44:59
>>mekdoo+hP4
> When did we accept, "Users are doing the scamming, not the company" as an excuse?

Section 230. https://en.wikipedia.org/wiki/Section_230

As always, Washington doing the hard work of making sure corpos never need to fix anything, ever.

◧◩◪◨⬒⬓⬔⧯▣▦
573. pyrale+Sq5[view] [source] [discussion] 2026-02-04 18:45:19
>>chrisj+s24
> That's from RAINN, the US's largest anti-sexual violence organisation.

For everyone to make up their own opinion about this poster's honesty, here's where his quote is from [1]. Chosen quotes:

> CSAM includes both real and synthetic content, such as images created with artificial intelligence tools.

> It doesn’t matter if the child agreed to it. It doesn’t matter if they sent the image themselves. If a minor is involved, it’s CSAM—and it’s illegal.

[1]: https://rainn.org/get-the-facts-about-csam-child-sexual-abus...

◧◩◪
618. utopia+Sr7[view] [source] [discussion] 2026-02-05 09:26:02
>>almost+fQ4
Takes literally minutes to set up with Webtop (assuming you are familiar with Docker/Podman: https://docs.linuxserver.io/images/docker-webtop/), nothing to install on the thin client, the stock browser is enough.
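
As a rough sketch of what that setup looks like (here via the Docker SDK for Python; the image name comes from the linuxserver docs linked above, while the port, environment values and host config path are placeholders you should check against those docs):

    # pip install docker -- rough equivalent of the docker run example in the
    # linuxserver docs; values below are placeholders, adjust per the docs.
    import docker

    client = docker.from_env()
    container = client.containers.run(
        "lscr.io/linuxserver/webtop:latest",          # image from docs.linuxserver.io
        name="webtop",
        detach=True,
        environment={"PUID": "1000", "PGID": "1000", "TZ": "Etc/UTC"},
        ports={"3000/tcp": 3000},                     # web UI, reachable from a stock browser
        volumes={"/path/to/config": {"bind": "/config", "mode": "rw"}},
        shm_size="1g",                                # desktop sessions want a larger /dev/shm
        restart_policy={"Name": "unless-stopped"},
    )
    print(container.name, container.status)

After that the thin client just points its browser at the host on that port; nothing else to install.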

I used this when an employer was forcing me to use Windows and I needed Linux tools to work efficiently, so I connected home. It goes through firewalls, proxies, etc.

Anyway, if you want to host this not at home but at a cloud provider, there was HavenCo (https://en.wikipedia.org/wiki/HavenCo); don't ask me how I know about it, just curiosity.

[go to top]