https://techcrunch.com/2024/02/06/eu-csa-deepfakes/
Seems to be a working link. It makes reference to the title submission.
Of further note:
"The possession and exchange of “pedophile manuals” would also be criminalized under the plan — which is part of a wider package of measures the EU says is intended to boost prevention of CSA"
https://techcrunch.com/2024/02/06/eu-csa-deepfakes/
This could be chilling: outreach to children at risk could be misconstrued, or repurposed for grooming, so materials concerning adult interaction with children would become treacherous.
When it's faked, the fabricated details can be orders of magnitude more damning, versus spraying JPEGs of old Polaroids of a baby having its first bath.
Someone abuses a child, films it, and feeds the footage into an AI. They now have that child's model.
Throw away the child and they're currently free of any charges. Of course that won't be enough, so they repeat the process.
It's not as though someone is creating a model in Blender and then running that through an AI. Not that that doesn't happen anyway.
> The current law criminalizes possession of purely fictional material and has been applied in the absence of any images of real children, including to possession of fictional stories with no pictures at all, or vice versa, cartoon pictures without any stories.[4]
* https://en.wikipedia.org/wiki/Child_pornography_laws_in_Cana...
Also, the only way to find out whether this has any effect at all (positive or negative) would disgust and outrage many, as that test would require a region where it's forbidden and a control region where it's allowed, then seeing which fares worse.
I'm not sure how many people would try to lynch mob (let alone vote out) whoever tries to do that, but I'm sure it's enough that exact numbers don't matter.
Yes, but given that CSAM data already exists, and we can't go back in time to prevent it, there's no further cost to obtaining that dataset. Unlike all future real CSAM, which will be produced by abusing children IRL.
I see parallels with Unit 731 here. Horrible things have already happened, so what do we do with the information?
The government is greedy in its lust for control and order in a chaotic world. It has a tendency to overreach, then overreach again (as we see in the overlap of privacy and counterterrorism).
However, I also think thoughtcrime is a very dangerous and slippery slope. It's not an easy question with an easy answer.
- "It's a safer outlet and prevents actual child abuse, so it's a good thing."
- "It will encourage and reinforce paedophilic tendencies and (indirectly) encourage actual child abuse, so it's a bad thing."
The last time I looked, the evidence was inconclusive. It's a difficult topic to research well, so I'm not expecting anything conclusive any time soon.
My own view is that, most likely, there are different kinds of paedophiles, and that different things will be true for different groups, because these types of things aren't that simple. That kind of nuance is even harder to research, especially on such a controversial topic fraught with ethics issues.
There's also the issue of training material, which is unique to AI. Is it possible to have AI-generated child abuse material without training material of that kind? I don't know enough about AI to comment on that.
One can watch people getting tortured to death in "Hostel" or "Saw" in graphic detail on Amazon Prime.
Is it the innocence and defenselessness of children that gets society so riled up?
A big unanswered question in the age of AI: how does a system of law work when breaking one law is bad, but the product of breaking many laws is totally exempt?
We're starting to see the milder form of this in debates around authorship and copyright. But when your AI model requires a shockingly large quantity of clearly verboten material as input, what is one to make of the output?
Yes.
Those who seek sexual gratification from the abuse of a minor. The real deal.
And those who are aroused by the body of the minor, or by watching the abuse of a minor.
If the model is "good enough", then you could potentially say that those who are interested in pedophilia probably won't seek further extremes to fulfil their pleasure.
However, in the long run they are still pedophilic, and for some the real thing will always hold more appeal.
For the moment, GenAI isn't.
That's too much in my opinion. What if there's a novel about teens experiencing it for the first time at 16 or 17? Or a novel about a rapist?
People cannot be honest about how they feel, sexually, because it's thought crime.
People cannot get help because others must report it to the police.
Researchers cannot study it (or release results contrary to the status quo) because it would be career ending.
Reporters cannot tackle the subject because it would make them unemployable or ruin their publication.
Politicians must bow to it, and leverage it, because of the populist view.
Inmates cannot get rehabilitation because they're societally hated, and other inmates will kill them or commit deplorable acts against them. (This is socially acceptable behavior, even encouraged.)
This argument falls flat regarding synthetically-produced CSAM and CSAM-adjacent material (no human beings exploited in its creation; in fact, one could argue that creation of synthetic CSAM depresses the market for child-exploiting CSAM), so I wouldn't be surprised if the US can't find a way in their legal structure to ban such material; their protections against obscenity in general tend to be weaker than other nations, and if the content is merely obscene and not generated by harming a minor, it's harder to argue it should be banned (as opposed to shunned for being odious).
In the US, there is a legal distinction between child pornography and child obscenity. Both are criminal, and exceptions to the 1st Amendment – but, the first is much easier to prove in court. In the 2002 case of Ashcroft v Free Speech Coalition [0] SCOTUS ruled that (under the 1st Amendment), child pornography only included material made with real children – so written text, drawings, or CGI/AI-generated children are not legally child pornography in the US. In the US, child pornography is only images/video [1] of real children. If someone uses editing software or AI to transform an innocent image/video of a real child into a pornographic one, that is also child pornography. But an image/video of a completely virtual child, that doesn't (non-coincidentally) look like any identifiable real child, is not child pornography in the US.
What most people don't seem to know, is that while a virtual child can't be criminal child pornography in the US, it still can be criminal child obscenity – which is rarely prosecuted, and much harder to prove in court, but if they succeed, can still result in a lengthy prison term. In 2021, a Texas man was sentenced to 40 years in prison over a bunch of stories and drawings of child sexual abuse. [2] (Given the man is in his 60s, that's effectively a life sentence, he's probably going to die in prison.) If someone can get 40 years in prison for stories and drawings, there is no reason in principle why someone could not end up in federal prison for AI-generated images too, under 18 USC 1466A. [3]
[0] https://en.wikipedia.org/wiki/Ashcroft_v._Free_Speech_Coalit...
[1] maybe audio too? I'm not sure about that
[2] https://www.justice.gov/opa/pr/texas-man-sentenced-40-years-...
If tuning your AI on real photography, rather than on self-generated output, gave you higher quality and fewer defects on generation, why would you not go for that?
That isn't how anything works.
Listen to the podcast "Hunting Warhead" before you make another comment so wildly uninformed on the topic anywhere.
Which is actually a perfectly valid defense imo, as it’s horribly dumb to incriminate real people because of fictional characters. Should everyone who has a copy of IT go to jail because of child pornography? It makes no sense.
Because of new content. If AI is being trained on real data and new content, then the datasets don't end up stale.
An AI can generate an image of a wizard with a frog on their head and that doesn't imply that the training set had such an image
The other side of this is, "What's an acceptable number of crimes against children?" Implicitly, there's a choice, "This remedy is projected to reduce crimes against children by X%, but we're not going to do it because Y." The projected marginal reduction is justified using the consequences of Y.
I'm perfectly ok with this calculus. If someone wants to say that right to privacy is more important than a projected reduction in crimes against children, more power to them.
What I'm asking for is honesty regarding this calculus. For some reason it short-circuits people's brains, and the consequences of either choice leave them unable to say, "Yes, the [projected] marginal reduction is what I'm willing to spend by not doing Y." Just say it.
For example, there was a time when, to get a flood effect, filmmakers flooded a set; three extras died. Later they were told they can't do that, but they can simulate it. Tons of movies show people getting overcome by floods, but no one dies in real life anymore.
Stupid question, but why take kids then and not adult women? Why take the risk of buying CP if you don't specifically want them young?
Same with CP.
But real movies still use real effects. A lot more of it is just done on a green screen now, as a cost-saving exercise and to meet the demand for the movie to be out now, now, now.
If the quality that went into making films in the past still went into them, the movie industry wouldn't be such a shovelful of shite. Those were real, with real actors and real acting. Now you've got CGI; however, some scenes are still produced for real.
I mean, we all repress certain behaviors to some extent. Some level of repression is healthy. I might have the urge to eat 5 kg of ice cream every day, but I keep it under control.
> I don't think CP will turn anyone into a pedophile, might as well let people satisfy their urges in a non-harmful way.
If you need CP to get off, you are a pedophile by definition, because you are sexually attracted to children. I guess you meant that watching CP will not automatically turn him into a child molester.
It is not that I agree with this new law. I do not see how it is enforceable. But I do see why people have a very negative view of it.
In your text, however, there is something I find far more interesting for discussion:
> enforce a certain level of morality in their societies
I, personally, like this. I think a society should have a level of enforced moral behavior. But by going down this road, you can reach some undesirable conclusions super fast.
In this case, shouldn't we solve child abuse (by, for example, promoting social safety nets) instead of spending already limited resources in victimless crimes?
Edit: clarification
No, I meant a pedophile. It's not like people aren't attracted to children just because they haven't seen a CP video, therefore allowing fictional CP should be OK (it won't convert anyone into a pedophile).
I guess there's an argument to be made that a pedophile watching CP might be tempted to become a child molester, but then we should debate that, not issue blanket bans on fictional CP.
I, as a person with superior moral principles, completely agree. Following an old tradition, we should also put some psychopaths in charge of enforcement.
The US, on the other hand, is a different story. In 2005 one man was sentenced to 20 years' imprisonment for hentai. In 2016 a man was sentenced to 10 years for owning a coloring book.
Wikipedia has a few more cases in the article "Legal status of fictional child pornography".
Thanks for clarifying, I misunderstood your earlier point then.
First: rightly said, it's an assumption. Then I'd like to highlight that resources are limited. This means that if you want to spend resources on X, you have to take them from Y; is that trade-off really acceptable in this particular case?
What outcomes can we really expect from a law like this? How do we know? What's the best and worst scenario? How will it be enforced?
I'd bet nobody can answer these questions with data supporting them. Including policymakers.
> And I don't think social safety nets can prevent children from being abused
Just today on the front page: >>39374152
Anyway, all of this is just speculation because research on this topic is banned in practical terms.
Any thoughts on the use of prediction markets, especially ones where predictor performance is tracked, in order to make better predictions on the results of legislative action?
[0] https://www.sciencedirect.com/science/article/abs/pii/S00057... See graphs on pages 687 and 689
I was only assuming that higher happiness was correlated with children not being abused. It was just speculation on my part. Sorry, I should've clarified that.