zlacker

[return to "A Developer Accidentally Found CSAM in AI Data. Google Banned Him for It"]
1. giantg+19 2025-12-11 16:40:23
>>markat+(OP)
This raises an interesting point. Do you need to train models using CSAM so that the model can self-enforce restrictions on CSAM? If so, I wonder what moral/ethical questions this brings up.
2. boothb+Yl 2025-12-11 17:36:59
>>giantg+19
I know what porn looks like. I know what children look like. I do not need to be shown child porn in order to recognize it. I don't think there's an ethical dilemma here; there is no need for such training data if LLMs have the capabilities we're told to expect.
3. cs02rm+Im 2025-12-11 17:39:45
>>boothb+Yl
They don't have your capabilities.