zlacker

[return to "X offices raided in France as UK opens fresh investigation into Grok"]
1. Altern+ut 2026-02-03 13:39:21
>>vikave+(OP)
> Prosecutors say they are now investigating whether X has broken the law across multiple areas.

You would expect this step to come before a police raid, not after it.

This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

2. moolco+Du 2026-02-03 13:45:38
>>Altern+ut
> This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

3. cubefo+3o3 2026-02-04 06:02:54
>>moolco+Du
> The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

Do you have any evidence for that? As far as I can tell, it's false. The only thing I saw was Grok editing photos of adults so that they appeared to be wearing bikinis, which is far less serious.

4. scott_+cq3 2026-02-04 06:18:10
>>cubefo+3o3
Did you miss the numerous news reports? Example: https://www.theguardian.com/technology/2026/jan/08/ai-chatbo...

For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that’s what you’re asking for.

5. cubefo+hr3 2026-02-04 06:29:04
>>scott_+cq3
First of all, the Guardian is known to be heavily biased against Musk. They always try hard to make everything about him sound as negative as possible. Second, last time I tried, Grok even refused to create pictures of naked adults. I just tried again, and this is still the case:

https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab

The claim that they released a tool with "seemingly no guard-rails" is therefore clearly false. I think what has instead happened here is that some people found a way to circumvent some of those guardrails with something like a jailbreak.

6. scott_+Qv3 2026-02-04 07:10:54
>>cubefo+hr3
For more evidence:

https://www.bbc.co.uk/news/articles/cvg1mzlryxeo

Also, X seem to disagree with you and admit that CSAM was being generated:

https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

Also, the reason you can’t make it generate those images now is that safeguards have been added since that article was written, under government pressure:

https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...

I’d say you’re making yourself look foolish, but you seem happy to defend nonces, so I won’t waste my time.
