zlacker

[return to "A Developer Accidentally Found CSAM in AI Data. Google Banned Him for It"]
1. giantg+19[view] [source] 2025-12-11 16:40:23
>>markat+(OP)
This raises an interesting point. Do you need to train a model on CSAM so that it can self-enforce restrictions on CSAM? If so, I wonder what moral/ethical questions that brings up.
2. boothb+Yl[view] [source] 2025-12-11 17:36:59
>>giantg+19
I know what porn looks like. I know what children look like. I do not need to be shown child porn in order to recognize it if I saw it. I don't think there's an ethical dilemma here; there is no need for that training data if LLMs have the capabilities we're told to expect.
3. Neverm+kw[view] [source] 2025-12-11 18:29:14
>>boothb+Yl
That is a good point. Is the image highly sexual? Are there children in the image?

Not a perfect CP detection system (it might flag kids playing in a room with a rated R movie on a TV in the background), but it would be a good first-pass filter.
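
A minimal sketch of that two-question, first-pass idea in Python, assuming two hypothetical classifiers passed in as callables (a sexual-content scorer and a minor-presence scorer). The names, thresholds, and result type are illustrative only, not any real Google or vendor API, and anything flagged would go to a human reviewer rather than an automated ban:

    from dataclasses import dataclass
    from typing import Callable

    # Hypothetical classifier interface: image bytes -> confidence in [0.0, 1.0]
    Classifier = Callable[[bytes], float]

    @dataclass
    class FilterResult:
        flagged: bool        # True means "route to a human reviewer", not "auto-ban"
        sexual_score: float
        minor_score: float

    def first_pass_filter(
        image_bytes: bytes,
        sexual_classifier: Classifier,
        minor_classifier: Classifier,
        sexual_threshold: float = 0.8,   # illustrative, untuned thresholds
        minor_threshold: float = 0.8,
    ) -> FilterResult:
        """Flag an image only when BOTH generic classifiers score high."""
        sexual_score = sexual_classifier(image_bytes)
        minor_score = minor_classifier(image_bytes)
        return FilterResult(
            flagged=sexual_score >= sexual_threshold and minor_score >= minor_threshold,
            sexual_score=sexual_score,
            minor_score=minor_score,
        )

    # Usage with dummy classifiers, just to show the shape of the call:
    if __name__ == "__main__":
        print(first_pass_filter(b"raw image bytes", lambda _: 0.1, lambda _: 0.9))

The point of composing two generic classifiers is exactly the one above: neither model needs to be trained on CSAM itself.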

Of course, by the time you have uploaded a lot of files to Google Drive and a sanity check like this runs on them, it is already too late to save you from Google.

Avoiding putting anything even remotely risky on Google Drive seems like an important precaution against the growing tyranny of automated, irreversible judge-and-jury systems.
