zlacker

[parent] [thread] 23 comments
1. miki12+(OP)[view] [source] 2026-02-04 05:31:41
This vindicates the pro-AI censorship crowd I guess.

It definitely makes it clear what is expected of AI companies. Your users aren't responsible for what they use your model for, you are, so you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identities... well, that's good for your profits I guess.

replies(11): >>culi+L2 >>themaf+n4 >>Jordan+O7 >>popalc+S8 >>mnewme+sj >>madeof+wj >>gordia+ky >>direwo+6D >>code_f+661 >>keepam+Qb1 >>comman+VI1
2. culi+L2[view] [source] 2026-02-04 06:02:12
>>miki12+(OP)
It's not really different from how we treat any other platform that can host CSAM. I guess the main difference is that it's being "made" instead of simply "distributed" here.
3. themaf+n4[view] [source] 2026-02-04 06:13:39
>>miki12+(OP)
Holding corporations accountable for their profit streams is "censorship"? I wish they'd stop passing off models trained on internet conversations and hoarded data as fit for any purpose. The world does not need to boil oceans for hallucinating chatbots at this particular point in history.
replies(1): >>themaf+MN2
4. Jordan+O7[view] [source] 2026-02-04 06:43:16
>>miki12+(OP)
I could maybe see this argument if we were talking about raiding Stable Diffusion or Facebook or some other provider of local models. But the content at issue was generated not just by Twitter's AI model, but on their servers, integrated directly into their UI and hosted publicly on their platform. That makes them much more clearly culpable -- they're not just enabling this shit, they're creating it themselves on demand (and posting it directly to victims' public profiles).
replies(1): >>disgru+Uw
5. popalc+S8[view] [source] 2026-02-04 06:53:17
>>miki12+(OP)
It's a bit of a leap to say that the model must be censored. SD and all the open image gen models are capable of all kinds of things, but nobody has gone after the open model trainers. They have gone after the companies making profits from providing services.
replies(2): >>vinter+ci >>Kaiser+Au
6. vinter+ci[view] [source] [discussion] 2026-02-04 08:14:27
>>popalc+S8
So far, yes, but as far as I can tell their case against the AI giants isn't based on them being for-profit services in any way.
replies(1): >>popalc+Pc3
7. mnewme+sj[view] [source] 2026-02-04 08:24:51
>>miki12+(OP)
This is the wrong take.

Yes, they could have an uncensored model, but then they would need proper moderation to delete this kind of content instantly, or ban users who produce it. Or not allow it in the first place.

It doesn't matter how CSAM is produced; the only thing that matters is that it is on the platform.

I am flabbergasted that people even defend this.

replies(1): >>direwo+RD
8. madeof+wj[view] [source] 2026-02-04 08:25:09
>>miki12+(OP)
Let’s take a step back and remove AI generation from the conversation for a moment.

Did X do enough to prevent its website being used to distribute illegal content - non-consensual sexual material of both adults and children?

Now reintroduce AI generation, where X plays a more active role in facilitating the creation of that illegal content.

9. Kaiser+Au[view] [source] [discussion] 2026-02-04 09:49:18
>>popalc+S8
Again, it's all about what's reasonable.

Firstly does the open model explicitly/tacitly allow CSAM generation?

Secondly, when the trainers are made aware of the problem, do they ignore it or attempt to put in place protections?

Thirdly, do they pull in data that is likely to allow that kind of content to be generated?

Fourthly, when they are told that this is happening, do they pull the model?

Fifthly, do they charge for access, host the service, and allow users to generate said content on their own servers?

10. disgru+Uw[view] [source] [discussion] 2026-02-04 10:08:15
>>Jordan+O7
And importantly, this is clearly published by Grok rather than the user. Obviously this isn't the US, but if it were, I'm not sure Section 230 would apply.
11. gordia+ky[view] [source] 2026-02-04 10:19:01
>>miki12+(OP)
This is not about AI but about censorship of a nonaligned social network. It's been a developing current in the EU. They have basically been foaming at the mouth at the platform since it got bought.
replies(1): >>direwo+tD
12. direwo+6D[view] [source] 2026-02-04 10:56:49
>>miki12+(OP)
It's not because it could generate CSAM. It's because when they found out it could generate CSAM, they didn't try to prevent that; they advertised it. Actual knowledge is a required component of many crimes.
replies(1): >>bhelke+KZ2
13. direwo+tD[view] [source] [discussion] 2026-02-04 10:59:31
>>gordia+ky
It's about a guy who thinks posting child porn on twitter is hilarious and that guy happens to own twitter.

If it were about blocking the social network they'd just block it, like they did with Russia Today, CUII-Liste Lina, or Pavel Durov.

replies(2): >>mordni+jM >>gordia+3g3
14. direwo+RD[view] [source] [discussion] 2026-02-04 11:02:16
>>mnewme+sj
It matters whether they attempt to block it or encourage it. Musk encouraged it until legal pressure hit, then moved it behind a paywall so the evidence is harder to see.
replies(1): >>mnewme+6E
15. mnewme+6E[view] [source] [discussion] 2026-02-04 11:03:53
>>direwo+RD
Exactly!
16. mordni+jM[view] [source] [discussion] 2026-02-04 12:03:31
>>direwo+tD
He said that child pornography is funny? Do you have a link by any chance?
17. code_f+661[view] [source] 2026-02-04 14:17:21
>>miki12+(OP)
I think having guardrails on your AI to not be able to produce this stuff is good actually. Also, Elon encourages this behavior socially through his posts so yeah he should face consequences.
18. keepam+Qb1[view] [source] 2026-02-04 14:45:24
>>miki12+(OP)
There are no crowds or sides. It's all manufactured division, because some of those who can't or don't want to create the technology are determined to control it. So they'll get you mad about whatever they need to, to justify actions that increase their control.

It's the same playbook used again and again for war, civil liberties crackdowns, lockdowns, COVID, etc.: 0) I want (1); start playbook: A) Something bad is here, B) You need to feel X and panic about it, C) We are solving it via (1). Because you reacted at B, you will support C. Problem, reaction, solution. That gives the playmakers the (1) they want.

We all know this is going on. But I guess we like knowing someone is pulling the strings. We like being led and maybe even manipulated, because perhaps in the familiar system (which yields the undeniable goods of our current way of life) there is safety and stability? How else to explain it?

Maybe the need to be entertained with drama is a hackable side effect of stable societies populated by people who evolved as warriors, hunters and survivors.

19. comman+VI1[view] [source] 2026-02-04 17:12:10
>>miki12+(OP)
Pretty disturbing to me how many people _on here_ are cheering for this. I thought that at least here of all places, there might be some nuanced discussion on "ok, I see why people are emotional about this topic in particular, but it's worth stepping back and putting emotions aside for a minute to see if this is actually reasonable overall..." but besides your comment, I'm not seeing much of that.
replies(1): >>troyvi+YK1
20. troyvi+YK1[view] [source] [discussion] 2026-02-04 17:21:56
>>comman+VI1
There's pro-AI censorship and then there's pro-social media censorship. It was the X offices that were raided. X is a social media company. They would have been raided whether it was AI that created the CSAM or a bunch of X contractors generating it mechanical-turk style.

I think the HN crowd is more nuanced than you're giving them credit for: https://hn.algolia.com/?q=chat+control

21. themaf+MN2[view] [source] [discussion] 2026-02-04 22:15:09
>>themaf+n4
What would be censorship is if those same companies then brigaded forums and interfered with conversations and votes in an effort to try to hide their greed and criminality.

Not that this would _ever_ happen on Hacker News. :|

22. bhelke+KZ2[view] [source] [discussion] 2026-02-04 23:21:38
>>direwo+6D
> when they found out it could generate CSAM, they didn't try to prevent that, they advertised it.

Twitter publicly advertised it can create CSAM?

I have been off twitter for several years and I am open to being wrong here but that sounds unlikely.

23. popalc+Pc3[view] [source] [discussion] 2026-02-05 00:51:35
>>vinter+ci
The for-profit part may or may not be a qualifier, but the architecture of a centralized service means they automatically become the scene of the crime -- either dissemination or storing of illegal material. Whereas if Stability creates a model, and others use their model locally, the relationship of Stability to the crime is ad-hoc. They aren't an accessory.
24. gordia+3g3[view] [source] [discussion] 2026-02-05 01:19:50
>>direwo+tD
Although I despise it, I respect your right to lie through your teeth.