zlacker

[return to "X offices raided in France as UK opens fresh investigation into Grok"]
1. mnewme+pE3[view] [source] 2026-02-04 08:22:55
>>vikave+(OP)
Good one.

No platform ever should allow CSAM content.

And the fact that they didn’t even care and didn’t want to spend money on implementing guardrails or moderation is deeply concerning.

This has, imho, nothing to do with model censorship and everything to do with allowing that kind of content on a platform.

◧◩
2. Reptil+bY3[view] [source] 2026-02-04 10:55:45
>>mnewme+pE3
I disagree. If AI-generated content breaks the law, prosecute the people who use the tools, not the tool makers.

A provider should have no responsibility for how its tools are used. That is on the users. This is a can of worms that should stay closed, because we all lose freedoms just because of a couple of bad actors. An AI tool's main job is to obey. We are hurtling toward an "I'm sorry, Dave. I'm afraid I can't do that" future at breakneck speed.

◧◩◪
3. mnewme+104[view] [source] 2026-02-04 11:09:20
>>Reptil+bY3
I agree that users who break the law must be prosecuted. But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.

We already apply this logic elsewhere. Car makers must include seatbelts. Pharma companies must ensure safety. Platforms must moderate illegal content. Responsibility is shared when the risk is systemic.

◧◩◪◨
4. Reptil+424[view] [source] 2026-02-04 11:25:08
>>mnewme+104
>But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.

Platforms moderating illegal content is exactly what we are arguing about, so you can't use it as an argument.

The other cases you list are harms to the people using the tools/products, not harms that users of the tools inflict on third parties.

We are literally arguing about 3D printer control two topics down. 3D printers can in theory be used for CSAM too. So we should totally ban them, right? Same goes for pencils, paper, lasers, and drawing tablets.

◧◩◪◨⬒
5. mnewme+h34[view] [source] 2026-02-04 11:34:04
>>Reptil+424
That is not the argument. No one is arguing about banning open source LLMs that could potentially create problematic content on huggingface. But X provides not only an AI model, it provides a platform and distribution as well, so that is inherently different.
◧◩◪◨⬒⬓
6. graeme+q15[view] [source] 2026-02-04 16:59:51
>>mnewme+h34
> No one is arguing about banning open source LLMs that could potentially create problematic content on huggingface,

If LLMs should have guardrails, why should open source ones be exempt? What about people hosting models on hugging face? What if you use a model both distributed and hosted by hugging face?
