zlacker

[return to "A Developer Accidentally Found CSAM in AI Data. Google Banned Him for It"]
1. giantg+19[view] [source] 2025-12-11 16:40:23
>>markat+(OP)
This raises an interesting point. Do you need to train a model on CSAM so that it can self-enforce restrictions on CSAM? If so, I wonder what moral and ethical questions that brings up.
2. jshear+1a[view] [source] 2025-12-11 16:44:54
>>giantg+19
It's a delicate subject but not an unprecedented one. Automatic detection of already known CSAM images (as opposed to heuristic detection of unknown images) has been around for much longer than AI, and for that service to exist someone has to handle the actual CSAM before it's reduced to a perceptual hash in a database.

Maybe AI-based heuristic detection is more ethically and legally fraught, since you'd have to stockpile CSAM to train on rather than hashing it and then destroying your copy immediately after obtaining it.
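
To make the known-image path concrete, here's a rough sketch of hash-matching against a database of previously identified images. The `imagehash`/Pillow usage, the example hash, and the distance threshold are illustrative stand-ins only; real deployments use proprietary perceptual hashes such as PhotoDNA and vetted hash lists from clearinghouses, not anything you'd roll yourself.

```python
# Rough sketch of known-image detection via perceptual hashing.
# The library choice (imagehash + Pillow), the example hash value,
# and the threshold are illustrative assumptions, not a real system.
from PIL import Image
import imagehash

# Hypothetical database holding only perceptual hashes of already-known
# images; the images themselves are not retained after hashing.
KNOWN_HASHES = {
    imagehash.hex_to_hash("e1c7c3c381818181"),
}

MAX_HAMMING_DISTANCE = 4  # tolerance for re-encoding, resizing, etc.

def is_known_image(path: str) -> bool:
    """Return True if the image's hash is close to any known-bad hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_HAMMING_DISTANCE
               for known in KNOWN_HASHES)
```

The point is that the matching side only ever needs the hash database; the classifier approach is different precisely because training data can't be reduced to hashes up front.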

3. tcfhgj+Ag[view] [source] 2025-12-11 17:14:16
>>jshear+1a
> Maybe AI-based heuristic detection is more ethically and legally fraught, since you'd have to stockpile CSAM to train on rather than hashing it and then destroying your copy immediately after obtaining it

Why?

The damage is already done.

4. tremon+Dm[view] [source] 2025-12-11 17:39:23
>>tcfhgj+Ag
Why would you think that? Every distribution and every view adds damage, even if the original victim doesn't know about it (or would rather not know).
5. tcfhgj+Kq[view] [source] 2025-12-11 18:00:07
>>tremon+Dm
I don't think that's how it works.