zlacker

[return to "X offices raided in France as UK opens fresh investigation into Grok"]
1. Altern+ut[view] [source] 2026-02-03 13:39:21
>>vikave+(OP)
> Prosecutors say they are now investigating whether X has broken the law across multiple areas.

This step could have come before a police raid.

This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

◧◩
2. moolco+Du[view] [source] 2026-02-03 13:45:38
>>Altern+ut
> This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

◧◩◪
3. cubefo+3o3[view] [source] 2026-02-04 06:02:54
>>moolco+Du
> The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok editing photos of adults to show them wearing bikinis, which is far less serious.

◧◩◪◨
4. scott_+cq3[view] [source] 2026-02-04 06:18:10
>>cubefo+3o3
Did you miss the numerous news reports? Example: https://www.theguardian.com/technology/2026/jan/08/ai-chatbo...

For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that’s what you’re asking for.

◧◩◪◨⬒
5. cubefo+hr3[view] [source] 2026-02-04 06:29:04
>>scott_+cq3
First of all, the Guardian is known to be heavily biased against Musk. They always try hard to make everything about him sound as negative as possible. Second, last time I tried, Grok even refused to create pictures of naked adults. I just tried again, and this is still the case:

https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab

The claim that they released a tool with "seemingly no guard-rails" is therefore clearly false. I think what has instead happened here is that some people found a way to circumvent some of those guardrails via something like a jailbreak.

◧◩◪◨⬒⬓
6. scott_+Qv3[view] [source] 2026-02-04 07:10:54
>>cubefo+hr3
For more evidence:

https://www.bbc.co.uk/news/articles/cvg1mzlryxeo

Also, X seem to disagree with you and admit that CSAM was being generated:

https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

Also, the reason you can’t make it generate those images is that they implemented safeguards since that article was written:

https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...

This is because of government pressure (see Ofcom link).

I’d say you’re making yourself look foolish, but you seem happy to defend nonces, so I’ll not waste my time.

◧◩◪◨⬒⬓⬔
7. cubefo+qP3[view] [source] 2026-02-04 09:46:15
>>scott_+Qv3
> Also, X seem to disagree with you and admit that CSAM was being generated

That post doesn't contain such an admission; it instead talks about forbidden prompting.

> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:

That article links to this post: https://x.com/Safety/status/2011573102485127562 - which contradicts your claim that there were no guardrails before. And as I said, I already tried it a while ago, and Grok also refused to create images of naked adults then.

◧◩◪◨⬒⬓⬔⧯
8. scott_+K34[view] [source] 2026-02-04 11:37:37
>>cubefo+qP3
> That post doesn't contain such an admission, it instead talks about forbidden prompting.

In response to what? If CSAM is not being generated, why aren't X just saying that? Instead they're saying "please don't do it."

> which contradicts your claim that there were no guardrails before.

From the linked post:

> However content is created or whether users are free or paid subscribers, our Safety team are working around the clock to add additional safeguards

That was posted a full week after the initial story broke and after Ofcom started investigative action. So no, it does not contradict my point, which was:

> Also, the reason you can’t make it generate those images is that they implemented safeguards since that article was written:

As you quoted.

I really can't decide if you're stupid, think I and other readers are stupid, or are so dedicated to defending paedophilia that you'll just tell flat lies to everyone reading your comment.

◧◩◪◨⬒⬓⬔⧯▣
9. cubefo+ns4[view] [source] 2026-02-04 14:21:44
>>scott_+K34
Keep your accusations to yourself. Grok already refused to generate naked pictures of adults months ago, when I first tested it. Clearly the "additional safeguards" are meant to protect the system against any jailbreaks.
◧◩◪◨⬒⬓⬔⧯▣▦
10. scott_+1y4[view] [source] 2026-02-04 14:49:48
>>cubefo+ns4
Just to be clear, I'm to ignore:

* Internet Watch Foundation

* The BBC

* The Guardian

* X themselves

* Ofcom

And believe the word of an anonymous internet account who claims to have tried to undress women using Grok for "research."

[go to top]