zlacker

[parent] [thread] 14 comments
1. cubefo+(OP)[view] [source] 2026-02-04 06:02:54
> The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

Do you have any evidence for that? As far as I can tell, it's false. The only thing I saw was Grok editing photos of adults so they appeared to be wearing bikinis, which is far less serious.

replies(3): >>scott_+92 >>klez+Ho >>numpad+0x
2. scott_+92[view] [source] 2026-02-04 06:18:10
>>cubefo+(OP)
Did you miss the numerous news reports? Example: https://www.theguardian.com/technology/2026/jan/08/ai-chatbo...

For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that’s what you’re asking for.

replies(1): >>cubefo+e3
3. cubefo+e3[view] [source] [discussion] 2026-02-04 06:29:04
>>scott_+92
First of all, the Guardian is known to be heavily biased against Musk. They always try hard to make everything about him sound as negative as possible. Second, the last time I tried, Grok even refused to create pictures of naked adults. I just tried again, and this is still the case:

https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab

The claim that they released a tool with "seemingly no guard-rails" is therefore clearly false. I think what has actually happened here is that some people found a hack to circumvent some of those guardrails via something like a jailbreak.

replies(5): >>scott_+N7 >>emsign+p9 >>jibal+t9 >>Hikiko+DW >>neorom+Cx1
4. scott_+N7[view] [source] [discussion] 2026-02-04 07:10:54
>>cubefo+e3
For more evidence:

https://www.bbc.co.uk/news/articles/cvg1mzlryxeo

Also, X seem to disagree with you and admit that CSAM was being generated:

https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

Also, the reason you can’t make it generate those images is that they implemented safeguards after that article was written:

https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...

This is because of government pressure (see Ofcom link).

I’d say you’re making yourself look foolish, but you seem happy to defend nonces, so I’ll not waste my time.

replies(1): >>cubefo+nr
5. emsign+p9[view] [source] [discussion] 2026-02-04 07:26:46
>>cubefo+e3
> First of all, the Guardian is known to be heavily biased against Musk.

Says who? Musk?

6. jibal+t9[view] [source] [discussion] 2026-02-04 07:27:10
>>cubefo+e3
That is only "known" to intellectually dishonest ideologues.
7. klez+Ho[view] [source] 2026-02-04 09:27:43
>>cubefo+(OP)
That's why this is an investigation looking for evidence and not a conviction.

This is how it works, at least in civil-law countries. If the prosecutor has reasonable suspicion that a crime is taking place, they send the so-called "judiciary police" to gather evidence. If they find none (or the evidence is inconclusive, etc.), the charges are dropped; otherwise they ask the court to go to trial.

On some occasions I take on judiciary police duties for animal welfare. Just last week I participated in a raid. We were not there to arrest anyone, just to gather evidence so the prosecutor could decide whether to press charges and go to trial.

replies(1): >>direwo+Iz
8. cubefo+nr[view] [source] [discussion] 2026-02-04 09:46:15
>>scott_+N7
> Also, X seem to disagree with you and admit that CSAM was being generated

That post doesn't contain such an admission, it instead talks about forbidden prompting.

> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:

That article links to this post: https://x.com/Safety/status/2011573102485127562, which contradicts your claim that there were no guardrails before. And as I said, I already tried it a while ago, and Grok refused to create images of naked adults then too.

replies(1): >>scott_+HF
9. numpad+0x[view] [source] 2026-02-04 10:32:05
>>cubefo+(OP)
Grok does seem to have tons of useless guardrails. Reportedly you can't prompt it for such content directly, but also, reportedly, it tends to go for almost nonsensical interpretations of prompts that land outside the guardrails.
10. direwo+Iz[view] [source] [discussion] 2026-02-04 10:53:03
>>klez+Ho
Note that the raid itself is a punishment. It's normal for them to seize all electronic devices. How is X France supposed to do any business without any electronic devices? And even when charges are dropped, the devices are never returned.
11. scott_+HF[view] [source] [discussion] 2026-02-04 11:37:37
>>cubefo+nr
> That post doesn't contain such an admission, it instead talks about forbidden prompting.

In response to what? If CSAM is not being generated, why aren't X just saying that? Instead they're saying "please don't do it."

> which contradicts your claim that there were no guardrails before.

From the linked post:

> However content is created or whether users are free or paid subscribers, our Safety team are working around the clock to add additional safeguards

Which was posted a full week after the initial story broke and after Ofcom started investigative action. So no, it does not contradict my point, which was:

> Also, the reason you can’t make it generate those images is that they implemented safeguards after that article was written:

As you quoted.

I really can't decide if you're stupid, think I and other readers are stupid, or so dedicated to defending paedophilia that you'll just tell flat lies to everyone reading your comment.

replies(1): >>cubefo+k41
12. Hikiko+DW[view] [source] [discussion] 2026-02-04 13:36:46
>>cubefo+e3
>First of all, the Guardian is known to be heavily biased against Musk.

Biased against the man who asked Epstein which day would be best for the "wildest" party.

13. cubefo+k41[view] [source] [discussion] 2026-02-04 14:21:44
>>scott_+HF
Keep your accusations to yourself. Grok already refused to generate naked pictures of adults months ago, when I tested it for the first time. Clearly the "additional safeguards" are meant to protect the system against jailbreaks.
replies(1): >>scott_+Y91
14. scott_+Y91[view] [source] [discussion] 2026-02-04 14:49:48
>>cubefo+k41
Just to be clear, I'm to ignore:

* Internet Watch Foundation

* The BBC

* The Guardian

* X themselves

* Ofcom

And believe the word of an anonymous internet account who claims to have tried to undress women using Grok for "research."

15. neorom+Cx1[view] [source] [discussion] 2026-02-04 16:35:16
>>cubefo+e3
>First of all, the Guardian is known to be heavily biased against Musk.

Which is good; that is the sane position to take these days.
