zlacker

[return to "A Developer Accidentally Found CSAM in AI Data. Google Banned Him for It"]
1. giantg+19[view] [source] 2025-12-11 16:40:23
>>markat+(OP)
This raises an interesting point. Do you need to train models using CSAM so that the model can self-enforce restrictions on CSAM? If so, I wonder what moral/ethical questions this brings up.
2. boothb+Yl[view] [source] 2025-12-11 17:36:59
>>giantg+19
I know what porn looks like. I know what children look like. I do not need to be shown child porn in order to recognize it if I saw it. I don't think there's an ethical dilemma here; there is no need for such training data if LLMs have the capabilities we're told to expect.
3. giantg+wP2[view] [source] 2025-12-12 13:31:05
>>boothb+Yl
"I know what porn looks like. I know what children look like."

Do you though?

Some children look like adults (17 vs 18, etc.). Some adults look younger than they actually are. How do we tell the difference between porn and art, such as nude scenes in movies, or even ancient sculptures? It doesn't seem like an agent would be able to make these determinations without a significant amount of training, and likely added context about any images it processes.
