zlacker

[return to "A Developer Accidentally Found CSAM in AI Data. Google Banned Him for It"]
1. giantg+19[view] [source] 2025-12-11 16:40:23
>>markat+(OP)
This raises an interesting point. Do you need to train models using CSAM so that the model can self-enforce restrictions on CSAM? If so, I wonder what moral/ethical questions this brings up.
2. boothb+Yl[view] [source] 2025-12-11 17:36:59
>>giantg+19
I know what porn looks like. I know what children look like. I do not need to be shown child porn in order to recognize it if I saw it. I don't think there's an ethical dilemma here; there is no need if LLMs have the capabilities we're told to expect.
3. cs02rm+Im[view] [source] 2025-12-11 17:39:45
>>boothb+Yl
They don't have your capabilities.
4. mossTe+1q[view] [source] 2025-12-11 17:56:38
>>cs02rm+Im
I've seen AI image generation models described as being able to combine multiple subjects into a novel (or novel enough) output, e.g. "pineapple" and "skateboarding" become an image of a skateboarding pineapple. It doesn't seem like a reach to assume they can do what GP suggests.