
[return to "A Developer Accidentally Found CSAM in AI Data. Google Banned Him for It"]
1. amarch+T9 2025-12-11 16:44:03
>>markat+(OP)
Just a few days ago I was doing a low-paid (well, not that low) AI classification task, akin to Mechanical Turk work, for a very big company, and the platform showed me an AI image depicting a naked man and a naked kid. It was involuntary on my part; I guess they don't review the images before showing them. The image was more Barbie-like than anything else, but I didn't enjoy the view, tbh. I contacted them but got no answer back.
2. ipytho+ui 2025-12-11 17:22:12
>>amarch+T9
If the picture truly was of a child, the company is _required_ to report CSAM to NCMEC. It's taken very seriously. If they're not being responsive, escalate and report it yourself so you don't have legal problems.

See https://report.cybertip.org/.

3. amarch+8l 2025-12-11 17:33:18
>>ipytho+ui
Even if it's an AI image? I'll follow up by contacting them directly rather than through the platform's messaging system, and then I'll see what to do if they don't answer.

Edit: I read the information given in the briefing before the task, and it says there might be offensive content displayed. It says to tell them if that happens, but I did and got no answer, so I'm not inclined to believe they care about it.

4. ipytho+jL1 2025-12-12 01:33:14
>>amarch+8l
The company may not care, but the government definitely does. And if you don't report it, you could be in serious legal jeopardy: if any fragments of that image are still present on your machine, whether it came from the company or not, you could be held accountable for possessing CSAM.

So screw the company; report it yourself, and make sure to cite the company and their lack of a response. There's a Grand Canyon-sized chasm between "offensive content" and CSAM.
