https://techcrunch.com/2024/02/06/eu-csa-deepfakes/
Seems to be a working link; it makes reference to the title submission.
Of further note:
"The possession and exchange of “pedophile manuals” would also be criminalized under the plan — which is part of a wider package of measures the EU says is intended to boost prevention of CSA"
https://techcrunch.com/2024/02/06/eu-csa-deepfakes/
This could be chilling, as outreach to children at risk could be misconstrued as, or repurposed for, grooming; materials concerning adult interaction with children would thus become treacherous to possess.
> The current law criminalizes possession of purely fictional material and has been applied in the absence of any images of real children, including to possession of fictional stories with no pictures at all, or vice versa, cartoon pictures without any stories.[4]
* https://en.wikipedia.org/wiki/Child_pornography_laws_in_Cana...
In the US, there is a legal distinction between child pornography and child obscenity. Both are criminal, and exceptions to the 1st Amendment – but the first is much easier to prove in court. In the 2002 case of Ashcroft v. Free Speech Coalition [0], SCOTUS ruled that (under the 1st Amendment) child pornography only includes material made with real children – so written text, drawings, or CGI/AI-generated children are not legally child pornography in the US. In the US, child pornography is only images/video [1] of real children. If someone uses editing software or AI to transform an innocent image/video of a real child into a pornographic one, that is also child pornography. But an image/video of a completely virtual child, one that doesn't (non-coincidentally) look like any identifiable real child, is not child pornography in the US.
What most people don't seem to know is that while a virtual child can't be criminal child pornography in the US, it can still be criminal child obscenity – which is rarely prosecuted, and much harder to prove in court, but which, if the prosecution succeeds, can still result in a lengthy prison term. In 2021, a Texas man was sentenced to 40 years in prison over a bunch of stories and drawings of child sexual abuse. [2] (Given the man is in his 60s, that's effectively a life sentence; he's probably going to die in prison.) If someone can get 40 years in prison for stories and drawings, there is no reason in principle why someone could not end up in federal prison for AI-generated images too, under 18 USC 1466A. [3]
[0] https://en.wikipedia.org/wiki/Ashcroft_v._Free_Speech_Coalit...
[1] maybe audio too? I'm not sure about that
[2] https://www.justice.gov/opa/pr/texas-man-sentenced-40-years-...
First: rightly said, it's an assumption. Second, I'd like to highlight that resources are limited. This means that if you want to spend resources on X, you have to take them from Y; is that trade-off really acceptable for this case in particular?
What outcomes can we really expect from a law like this? How do we know? What's the best and worst scenario? How will it be enforced?
I'd bet nobody can answer these questions with data supporting them. Including policymakers.
> And I don't think social safety nets can prevent children from being abused
Just today on the front page: >>39374152
Anyway, all of this is just speculation, because research on this topic is effectively banned.
[0] https://www.sciencedirect.com/science/article/abs/pii/S00057... See graphs on pages 687 and 689