zlacker

[parent] [thread] 55 comments
1. mnewme+(OP)[view] [source] 2026-02-04 08:22:55
Good one.

No platform ever should allow CSAM content.

And the fact that they didn't even care and weren't willing to spend money on implementing guardrails or moderation is deeply concerning.

This has, imho, nothing to do with model censorship and everything to do with allowing that kind of content on a platform.

replies(5): >>Reptil+Mj >>tw85+vT >>bright+KY >>yibg+sb1 >>lux-lu+eg1
2. Reptil+Mj[view] [source] 2026-02-04 10:55:45
>>mnewme+(OP)
I disagree. Prosecute the people who use the tools, not the tool makers, if AI-generated content is breaking the law.

A provider should have no responsibility for how its tools are used; that is on the users. This is a can of worms that should stay closed, because we would all lose freedoms just because of a couple of bad actors. An AI tool's main job is to obey. We are hurtling toward an "I'm sorry, Dave. I'm afraid I can't do that" future at breakneck speed.

replies(5): >>mnewme+Cl >>thranc+6r >>mooreb+ns >>kakaci+cw >>intend+P81
◧◩
3. mnewme+Cl[view] [source] [discussion] 2026-02-04 11:09:20
>>Reptil+Mj
I agree that users who break the law must be prosecuted. But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.

We already apply this logic elsewhere. Car makers must include seatbelts. Pharma companies must ensure safety. Platforms must moderate illegal content. Responsibility is shared when the risk is systemic.

replies(2): >>Reptil+Fn >>JustRa+bz
◧◩◪
4. Reptil+Fn[view] [source] [discussion] 2026-02-04 11:25:08
>>mnewme+Cl
>But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.

Platforms moderating illegal content is exactly what we are arguing about, so you can't use it as an argument.

The rest of the cases you list are harms to the people using the tools/products, not harms that tool users inflict on third parties.

We are literally arguing about 3D printer control two topics downstream. 3D printers in theory can be used for CSAM too. So we should totally ban them, right? And so on for pencils, paper, lasers, and drawing tablets.

replies(3): >>mnewme+So >>szmarc+ts >>ytpete+2p2
◧◩◪◨
5. mnewme+So[view] [source] [discussion] 2026-02-04 11:34:04
>>Reptil+Fn
That is not the argument. No one is arguing about banning open-source LLMs on Hugging Face that could potentially create problematic content. But X provides not only an AI model but also a platform and distribution, so that is inherently different.
replies(2): >>Reptil+rp >>graeme+1n1
◧◩◪◨⬒
6. Reptil+rp[view] [source] [discussion] 2026-02-04 11:38:04
>>mnewme+So
No, it is not. X is a dumb pipe. You have humans on both ends. Arrest them, summarily execute them, whatever. You go after X because it is a choke point and an easy target.
replies(4): >>mnewme+Hq >>kllrno+751 >>dragon+161 >>thatfr+se1
◧◩◪◨⬒⬓
7. mnewme+Hq[view] [source] [discussion] 2026-02-04 11:47:27
>>Reptil+rp
First you argue about the model, now the platform. Two different things.

If a platform encourages and doesn’t moderate at all, yes we should go after the platform.

Imagine a newspaper publishing content like that and then saying it is not responsible for its journalists.

◧◩
8. thranc+6r[view] [source] [discussion] 2026-02-04 11:49:17
>>Reptil+Mj
How? X is hostile to any party attempting to bring justice to its users who are breaking the law. This is a last recourse, after X and its owner stated plainly that they don't see anything wrong with generating CSAM or pornographic images of non-consenting people, and that they won't do anything about it.
replies(1): >>Reptil+Wr
◧◩◪
9. Reptil+Wr[view] [source] [discussion] 2026-02-04 11:54:52
>>thranc+6r
Get a court order, get the IPs of the users, sue the users. It is not X's job to bring justice.
replies(1): >>thranc+Gs
◧◩
10. mooreb+ns[view] [source] [discussion] 2026-02-04 11:57:25
>>Reptil+Mj
But how would we bring down our political boogieman Elon Musk if we take that approach?

Everything I read from X's competitors in the media tells me to hate X, and hate Elon.

If we prosecute people not tools, how are we going to stop X from hurting the commercial interests of our favourite establishment politicians and legacy media?

replies(2): >>mnewme+uv >>pjc50+o01
◧◩◪◨
11. szmarc+ts[view] [source] [discussion] 2026-02-04 11:58:26
>>Reptil+Fn
You are literally trolling. No one is banning AI entirely. However, AI shouldn't spit out adult content. Let's not enable people to harm others with little to no effort.
◧◩◪◨
12. thranc+Gs[view] [source] [discussion] 2026-02-04 11:59:58
>>Reptil+Wr
X will not provide this information to the French justice system. What then? It's also insane that you believe the company that built a "commit crime" button bears no responsibility whatsoever in this debacle.
replies(1): >>Reptil+Ge1
◧◩◪
13. mnewme+uv[view] [source] [discussion] 2026-02-04 12:22:26
>>mooreb+ns
People defending allowing CSAM content was definitely not on my bingo card for 2026.
replies(3): >>chrome+qI >>ddtayl+ch1 >>thranc+EY2
◧◩
14. kakaci+cw[view] [source] [discussion] 2026-02-04 12:26:54
>>Reptil+Mj
You won't find much agreement with your opinion amongst most people. No matter how many "this should and this shouldn't" rules a single individual writes down, that's not how morals work.
◧◩◪
15. JustRa+bz[view] [source] [discussion] 2026-02-04 12:48:26
>>mnewme+Cl
Agreed. Let's try to be less divisive. Everyone has got a fair point.

Yes, AI chatbots have to do everything in their power to avoid users easily generating such content.

AND

Yes, people that do so (even if done so on your self-hosted model) have to be punished.

I believe it is OK that Grok is being investigated because the point is to figure out whether this was intentional or not.

Just my opinion.

◧◩◪◨
16. chrome+qI[view] [source] [discussion] 2026-02-04 13:51:27
>>mnewme+uv
Fucked up times we live in
17. tw85+vT[view] [source] 2026-02-04 14:49:25
>>mnewme+(OP)
It seems people have a rather short memory when it comes to Twitter. When it was still run by Jack Dorsey, CP was abundant on Twitter and there was little effort to crack down on it. After Musk bought the platform, he and Dorsey had a public argument in which Dorsey denied the scale of the problem and denied that old Twitter had been aware of it and shown indifference. But Musk actually did take tangible steps to clean it up, and many accounts were banned. It's curious that there wasn't nearly the same level of outrage from the morally righteous HN crowd towards Mr. Dorsey back then as there is in this thread.
replies(6): >>misnom+OU >>bright+lZ >>ryandr+L41 >>AaronF+Yf1 >>ceejay+pm1 >>jmcgou+Rp1
◧◩
18. misnom+OU[view] [source] [discussion] 2026-02-04 14:56:55
>>tw85+vT
When did Jack Dorsey unban personal friends of his that had gotten banned for posting CSAM?
19. bright+KY[view] [source] 2026-02-04 15:12:48
>>mnewme+(OP)
Agreed. For anyone curious, here's the UK report from the National Society for the Prevention of Cruelty to Children (NSPCC) from 2023-2024.

https://www.bbc.com/news/articles/cze3p1j710ko

Reports on sextortion, self-generated indecent images, and grooming via social media/messaging apps:

Snapchat 54%
Instagram 11%
Facebook 7%
WhatsApp 6-9%
X 1-2%

replies(2): >>rcpt+wZ >>jbenne+yO2
◧◩
20. bright+lZ[view] [source] [discussion] 2026-02-04 15:15:50
>>tw85+vT
I meant to reply to you with this: >>46886801
◧◩
21. rcpt+wZ[view] [source] [discussion] 2026-02-04 15:16:28
>>bright+KY
What are the percentages?
replies(1): >>bright+f21
◧◩◪
22. pjc50+o01[view] [source] [discussion] 2026-02-04 15:20:36
>>mooreb+ns
Corporations are also people.

(note that this isn't a raid on Musk personally! It's a raid on X corp for the actions of X corp and posts made under the @grok account by X corp)

◧◩◪
23. bright+f21[view] [source] [discussion] 2026-02-04 15:29:38
>>rcpt+wZ
Edited to add clarification.
replies(1): >>spacem+ME1
◧◩
24. ryandr+L41[view] [source] [discussion] 2026-02-04 15:41:08
>>tw85+vT
Didn't Reddit have the same problem until they got negative publicity and were basically forced to clean it up? What is with these big tech companies and CP?
replies(2): >>imperi+Vd1 >>Manuel+3k1
◧◩◪◨⬒⬓
25. kllrno+751[view] [source] [discussion] 2026-02-04 15:42:48
>>Reptil+rp
> X is a dumb pipe.

X also actively distributes and profits off of CSAM. Why shouldn't the law apply to distribution centers?

replies(1): >>hnfong+n71
◧◩◪◨⬒⬓
26. dragon+161[view] [source] [discussion] 2026-02-04 15:46:33
>>Reptil+rp
X is most definitely not a dumb pipe: besides the sender and receiver, you also have humans choosing what content (whether directly or indirectly) is promoted for wide dissemination, relatively suppressed, or outright blocked.
◧◩◪◨⬒⬓⬔
27. hnfong+n71[view] [source] [discussion] 2026-02-04 15:51:48
>>kllrno+751
There's a slippery slope version of your argument where your ISP is responsible for censoring content that your government does not like.

I mean, I thought that was basically already the law in the UK.

I can see practical differences between X/Twitter doing moderation and full ISP censorship, but I cannot see any difference in principle...

replies(1): >>kllrno+0o1
◧◩
28. intend+P81[view] [source] [discussion] 2026-02-04 15:59:20
>>Reptil+Mj
If you had argued that it’s impossible to track what is made on local models, and we can no longer maintain hashes of known CP, it would have been a fair statement of current reality.
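
To make that concrete: a minimal sketch of the hash-list technique, assuming SHA-256 exact matching as a stand-in for the perceptual hashing real clearinghouse systems use (all names here are hypothetical):

    import hashlib

    # Hypothetical hash list; a real system would load these from a
    # clearinghouse database and use perceptual hashes so that
    # re-encoded copies of known images still match.
    KNOWN_HASHES: set[str] = set()

    def is_known_material(image_bytes: bytes) -> bool:
        # Exact-hash lookup: flags only files already catalogued.
        return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

A freshly generated image hashes to a value no list has ever seen, so this check returns False for all novel model output.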

---

You've said that whatever is behind door number 1, holding platforms responsible, is unacceptable.

Behind door number 2, "holding tool users responsible", is tracking every item generated via AI so that those users can actually be held responsible.

If you don't like door number 2, we have door number 3, which is letting things be.

For any member of society, door number 3 is straight out, because that status quo is worse than the reality before AI.

If you reject door number 1, though, you are left with tech monitoring, which will be challenged because of its invasive nature.

Holding platforms responsible is about the only option that works, at least until platforms tell people they can't do it.

replies(1): >>Reptil+be1
29. yibg+sb1[view] [source] 2026-02-04 16:10:46
>>mnewme+(OP)
My natural reaction here is, like I think most others', that yes, Grok / X are bad and shouldn't be able to generate CSAM content / deepfakes.

But I am having trouble justifying in a consistent manner why Grok / X should be liable here instead of the user. I've seen a few arguments here that mostly come down to:

1. It's Grok the LLM generating the content, not the user.

2. The distribution. That this isn't just on the user's computer but instead posted on X.

For 1, it seems to break down if we look more broadly at how LLMs are used, e.g. as a coding agent. We're basically starting to treat LLMs as a higher-level framework now. We don't hold vendors of programming languages or frameworks responsible if someone uses them to create CSAM. Yes, the LLM generated the content, but the user still provided the instructions to do so.

For 2, if Grok instead generated the content for download, would the liability go away? What if Grok generated the content to be downloaded only, and the user then uploaded it manually to X? If in that case Grok isn't liable, then why does automatic posting (from the user's instructions) make it different? If it is liable, then it's not about the distribution anymore.

There are some comparisons to Photoshop: that if I created a deepfake with Photoshop, I'm liable, not Adobe. If Photoshop had an "upload to X" button, and I created CSAM using Photoshop and hit the button to upload directly to X, is Adobe now liable?

What am I missing?

replies(3): >>realus+Hh1 >>flumpc+Dn1 >>dragon+AT2
◧◩◪
30. imperi+Vd1[view] [source] [discussion] 2026-02-04 16:20:56
>>ryandr+L41
Reddit was forced to clean it up when they started eyeballing an IPO.
◧◩◪
31. Reptil+be1[view] [source] [discussion] 2026-02-04 16:22:01
>>intend+P81
Behind door number 4: whenever you find a crime, start an investigation and get a warrant. You would only need a couple of cases to produce a chilling enough effect.
◧◩◪◨⬒⬓
32. thatfr+se1[view] [source] [discussion] 2026-02-04 16:23:03
>>Reptil+rp
If you have a recommendation algorithm, you are not a dumb pipe.
◧◩◪◨⬒
33. Reptil+Ge1[view] [source] [discussion] 2026-02-04 16:23:46
>>thranc+Gs
It is illegal in the USA too, so the French authorities would have absolutely no problem getting assistance from the American ones.
replies(2): >>ddtayl+Ch1 >>thranc+al1
◧◩
34. AaronF+Yf1[view] [source] [discussion] 2026-02-04 16:28:39
>>tw85+vT
Didn't X unban users like Dom Lucre who posted CSAM because of their political affiliation?
replies(1): >>jrflow+o83
35. lux-lu+eg1[view] [source] 2026-02-04 16:30:10
>>mnewme+(OP)
The lack of guardrails wasn't a carelessness issue (Grok has many restrictions, and Elon regularly manipulates the answers it gives to suit his political preferences) but rather a deliberate decision, one of several, to offer largely unrestricted AI adult content generation as a unique selling point. See also, e.g., the lack of real age verification on Ani's NSFW capabilities.
◧◩◪◨
36. ddtayl+ch1[view] [source] [discussion] 2026-02-04 16:34:40
>>mnewme+uv
All free speech discussions lead here, sadly.
◧◩◪◨⬒⬓
37. ddtayl+Ch1[view] [source] [discussion] 2026-02-04 16:36:25
>>Reptil+Ge1
Elon Musk spent a lot of money getting his pony elected; do you think he isn't going to ride it?
◧◩
38. realus+Hh1[view] [source] [discussion] 2026-02-04 16:36:39
>>yibg+sb1
> But I am having trouble justifying in a consistent manner why Grok / X should be liable here instead of the user.

Because Grok and X aren't even doing the most basic filtering they could do to pretend to filter out CSAM.

replies(1): >>yibg+O12
◧◩◪
39. Manuel+3k1[view] [source] [discussion] 2026-02-04 16:46:40
>>ryandr+L41
Not exactly. Reddit always took down CSAM (how effectively I don't know, but I've been using the site consistently since 2011 and I've never come across it).

What Reddit did get a lot of negative publicity for were subreddits focused on sharing non-explicit photos of minors, but with loads of sexually charged comments. Nobody would really object to the images themselves in isolation, but the discussions surrounding them were all lewd. So not CSAM, but still creepy, and something Reddit rightly decided it didn't want on the site.

◧◩◪◨⬒⬓
40. thranc+al1[view] [source] [discussion] 2026-02-04 16:50:29
>>Reptil+Ge1
You really believe that? You think the Trump administration will force Musk's X to give the French State data about its users so CSAM abusers can be prosecuted there? This is delusional, to say the least. And let's not even touch on the subject of Trump and Musk both being actual pedophiles themselves.
◧◩
41. ceejay+pm1[view] [source] [discussion] 2026-02-04 16:56:55
>>tw85+vT
> But Musk actually did take tangible steps to clean it up, and many accounts were banned.

Mmkay.

https://en.wikipedia.org/wiki/Twitter_under_Elon_Musk#Child_...

"As of June 2023, an investigation by the Stanford Internet Observatory at Stanford University reported "a lapse in basic enforcement" against child porn by Twitter within "recent months". The number of staff on Twitter's trust and safety teams were reduced, for example, leaving one full-time staffer to handle all child sexual abuse material in the Asia-Pacific region in November 2022."

"In 2024, the company unsuccessfully attempted to avoid the imposition of fines in Australia regarding the government's inquiries about child safety enforcement; X Corp reportedly said they had no obligation to respond to the inquiries since they were addressed to "Twitter Inc", which X Corp argued had "ceased to exist"."

◧◩◪◨⬒
42. graeme+1n1[view] [source] [discussion] 2026-02-04 16:59:51
>>mnewme+So
> No one is arguing about banning open-source LLMs on Hugging Face that could potentially create problematic content.

If LLMs should have guardrails, why should open-source ones be exempt? What about people hosting models on Hugging Face? What if you use a model both distributed and hosted by Hugging Face?

◧◩
43. flumpc+Dn1[view] [source] [discussion] 2026-02-04 17:01:59
>>yibg+sb1
> For 1, it seems to break down if we look more broadly at how LLMs are used, e.g. as a coding agent. We're basically starting to treat LLMs as a higher-level framework now. We don't hold vendors of programming languages or frameworks responsible if someone uses them to create CSAM. Yes, the LLM generated the content, but the user still provided the instructions to do so.

LLMs are completely different from programming languages or even Photoshop.

You can't type a sentence and get CSAM images within 10 seconds with Photoshop. LLMs are also built on training material, unlike the traditional tools in Photoshop. Plenty of CSAM has been found in the training data sets, but, shock horror, apparently there isn't enough information to know "where it came from". There's a non-zero chance that the CSAM Grok is vomiting out is based on "real" CSAM of people being abused.

◧◩◪◨⬒⬓⬔⧯
44. kllrno+0o1[view] [source] [discussion] 2026-02-04 17:03:49
>>hnfong+n71
We don't consider warehouses and stores to be a "slippery slope" away from toll roads, so, no, I really don't see any good-faith slippery-slope argument that makes enforcing the law against X the same as government censorship of ISPs.

I mean even just calling it censorship is already trying to shove a particular bias into the picture. Is it government censorship that you aren't allowed to shout "fire!" in a crowded theater? Yes. Is that also a useful feature of a functional society? Also yes. Was that a "slippery slope"? Nope. Turns out people can handle that nuance just fine.

◧◩
45. jmcgou+Rp1[view] [source] [discussion] 2026-02-04 17:12:53
>>tw85+vT
Having an issue with users uploading CSAM (a problem for every platform) is very different from giving them a tool to quickly and easily generate CSAM, with apparently little-to-no effort to prevent this from happening.
replies(1): >>timmg+2r1
◧◩◪
46. timmg+2r1[view] [source] [discussion] 2026-02-04 17:17:55
>>jmcgou+Rp1
If the tool generates it automatically or unprompted, then yes. But if it is users asking it to, then I'm not sure there is a big difference.
replies(1): >>dragon+9S2
◧◩◪◨
47. spacem+ME1[view] [source] [discussion] 2026-02-04 18:14:28
>>bright+f21
The meaning of the percentages is still unclear.
replies(1): >>ishoul+Md5
◧◩◪
48. yibg+O12[view] [source] [discussion] 2026-02-04 20:01:16
>>realus+Hh1
Filtering on the platform, or filtering Grok's output, though? If the filtering / flagging on X is insufficient, then that is a separate issue independent of Grok. As for filtering the output of Grok: while not doing so is irresponsible in my view, I don't see why that's different from, say, Photoshop not filtering its output.
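
To make the distinction concrete, here is a minimal sketch of the two checkpoints; every name below is a hypothetical stand-in, not any real API:

    def generate(prompt: str) -> bytes:
        return b""  # stub standing in for the image generator

    def publish(image: bytes) -> None:
        pass  # stub standing in for posting to the feed

    def model_output_filter(image: bytes) -> bool:
        # Model-side gate: sees only what this one generator produces.
        return False  # stub: a real system might run a safety classifier

    def platform_upload_filter(image: bytes) -> bool:
        # Platform-side gate: sees everything posted, however it was made.
        return False  # stub: hash lists, classifiers, user reports, etc.

    def generate_and_post(prompt: str) -> None:
        image = generate(prompt)
        if model_output_filter(image) or platform_upload_filter(image):
            return  # refused at one of the two independent gates
        publish(image)

The point of the sketch is just that the gates are independent: a failure at the platform gate is the same issue whether the image came from Grok or from a manual upload.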
replies(1): >>realus+UF3
◧◩◪◨
49. ytpete+2p2[view] [source] [discussion] 2026-02-04 21:47:34
>>Reptil+Fn
3D printers don't synthesize content for you though. If they could generate 3D models of CSAM from thin air and then print them, I'm sure they'd be investigated too if they were sold with no guardrails in place.
◧◩
50. jbenne+yO2[view] [source] [discussion] 2026-02-05 00:14:06
>>bright+KY
Are those numbers in the article somewhere? From what I read it says that out of 7,062 cases, the platform was known for only 1,824. Then it says Snapchat accounts for 48% (not 54%). I don't see any other percentages.
◧◩◪◨
51. dragon+9S2[view] [source] [discussion] 2026-02-05 00:41:24
>>timmg+2r1
Well, it's worth noting that, with the nonconsensual porn (child and otherwise) it was generating, X would often rapidly punish the user who posted the prompt but leave the Grok-generated content up. It wasn't an issue of not having control; it was an issue of how the control was used.
◧◩
52. dragon+AT2[view] [source] [discussion] 2026-02-05 00:51:22
>>yibg+sb1
> But I am having trouble justifying in a consistent manner why Grok / X should be liable here instead of the user.

This seems to rest on the false assumptions that (1) legal liability is exclusive, and (2) an investigation of X is not important both to X's liability and to pursuing the users, to the extent that they would also be subject to liability.

X/xAI may be liable for any or all of the following reasons:

* xAI generated virtual child pornography with the likenesses of actual children, which is generally illegal, even if that service was procured by a third party.

* X and xAI distributed virtual child pornography with the likenesses of actual children, which is generally illegal, irrespective of who generated and supplied them.

* To the extent that liability for either of the first two bullet points would be eliminated or mitigated by a lack of knowledge of the prohibited content and by prompt action once the actor became aware, X often punished users for the prompts producing the virtual child pornography without taking prompt action to remove the xAI-generated virtual child pornography resulting from those prompts, demonstrating knowledge and intent.

* When the epidemic of grok-generated nonconsensual, including child, pornography drew attention, X and xAI responded by attempting to monetize the capacity by limiting the tool to only paid X subscribers, showing an attempt to commercially profit from it, which is, again, generally illegal.

◧◩◪◨
53. thranc+EY2[view] [source] [discussion] 2026-02-05 01:35:14
>>mnewme+uv
There is no mediocrity Republicans won't embrace. They have absolutely zero values, and can be made to accept and defend literally anything with sufficient propaganda.
◧◩◪
54. jrflow+o83[view] [source] [discussion] 2026-02-05 02:58:31
>>AaronF+Yf1
Not only that, but IIRC he was one of the early "creators program" folks. Musk unbanned the guy who posted CSAM and started paying him to post.
◧◩◪◨
55. realus+UF3[view] [source] [discussion] 2026-02-05 08:29:42
>>yibg+O12
Photoshop doesn't have a "transform to nude" button, and if it did, Adobe would be in the exact same kind of legal trouble as Grok.

That's the difference between a tool being used to commit crimes and a tool specifically designed to commit crimes.

◧◩◪◨⬒
56. ishoul+Md5[view] [source] [discussion] 2026-02-05 18:36:22
>>spacem+ME1
Read the article they linked to.