That’s going to be hard to argue. Where are the copies?
“Having copied the five billion images—without the consent of the original artists—Stable Diffusion relies on a mathematical process called diffusion to store compressed copies of these training images, which in turn are recombined to derive other images. It is, in short, a 21st-century collage tool.”
“Diffusion is a way for an AI program to figure out how to reconstruct a copy of the training data through denoising. Because this is so, in copyright terms it’s no different from an MP3 or JPEG—a way of storing a compressed copy of certain digital data.”
The examples of how diffusion models are trained (e.g., reconstructing a picture out of noise) will be core to their argument in court. Certainly, during training the goal is to reconstruct the original images out of noise. But do they exist in SD as copies? I don't know.
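For concreteness, the training objective being described looks roughly like the sketch below: a minimal, hypothetical PyTorch-style denoising step, not Stable Diffusion's actual training code (`model`, `alpha_bar`, and `optimizer` are placeholders for whatever network, noise schedule, and optimizer are used).

```python
# Minimal sketch of a DDPM-style denoising training step (illustrative only,
# not Stable Diffusion's real code). The network is trained to predict the
# noise that was mixed into a clean training image at a random timestep.
import torch
import torch.nn.functional as F

def training_step(model, images, alpha_bar, optimizer):
    b = images.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (b,), device=images.device)  # random timestep per image
    noise = torch.randn_like(images)                                      # the noise to be predicted
    a = alpha_bar[t].view(b, 1, 1, 1)
    noisy = a.sqrt() * images + (1 - a).sqrt() * noise                    # corrupt the clean image
    loss = F.mse_loss(model(noisy, t), noise)                             # "reconstruct out of noise"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Whether the weights produced by repeating this across billions of images amount to stored "compressed copies" is exactly the question in dispute.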
In fairness, diffusion is arguably a very complex form of entropy coding, similar to arithmetic or Huffman coding.
Given that copyright still applies to compressed or encrypted files, it seems fair to say that the "container of compressed bytes" (in this case the diffusion model) "contains" the original images no differently than a compressed folder of images contains the original images.
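For readers who haven't run into entropy coding, here is a toy Huffman coder in plain Python (unrelated to any diffusion codebase). The defining property is that the original bytes can be reconstructed exactly from the compressed output, which is the property the analogy leans on.

```python
# Toy Huffman coder: builds a prefix code from symbol frequencies.
# Illustrative only -- a minimal example of classic entropy coding.
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    # heap entries: (frequency, unique tiebreaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

data = b"abracadabra"
codes = huffman_codes(data)
encoded = "".join(codes[s] for s in data)
print(codes, len(encoded), "bits vs", 8 * len(data), "bits uncompressed")
```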
A lawyer or researcher would likely win this case if they could re-create roughly 90% of a single input image from the diffusion model using only a text prompt.
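Measuring that, at least at the pixel level, is straightforward. A rough sketch, assuming you already have the generated output and the candidate training image saved locally (`generated.png` and `original.png` are hypothetical file names), using mean absolute pixel difference as a crude stand-in for "90% of the image":

```python
# Crude pixel-level similarity between a generated image and a training image.
# Raw pixel difference is only a rough proxy; perceptual or feature-based
# metrics would be closer to what "substantially similar" means legally.
import numpy as np
from PIL import Image

def pixel_similarity(path_a: str, path_b: str) -> float:
    a_img = Image.open(path_a).convert("RGB")
    b_img = Image.open(path_b).convert("RGB").resize(a_img.size)  # align dimensions
    a = np.asarray(a_img, dtype=np.float32)
    b = np.asarray(b_img, dtype=np.float32)
    return 1.0 - np.abs(a - b).mean() / 255.0   # 1.0 = identical, 0.0 = maximally different

print(pixel_similarity("generated.png", "original.png"))  # hypothetical file names
```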
Oh, so one image is enough to apply copyright as if it were a patent, and to ban a process that produces original works most of the time?
The article's authors say it works as a "collage tool", dismissing the composition and layout of the image as unimportant elements, while forgetting that SD changes the textures as well. So it's a collage minus textures and composition?
Is there anything left to complain about? Unless, by luck of the draw, both layout and textures end up very similar to a training image. But ensuring that no close duplicates are allowed should suffice.
Copyright should apply one by one, not in bulk. Each work they complain about should be judged on its own merits.
The fact that the derivation involves millions of works as opposed to a single one is immaterial to the copyright issue.
You can draw Biden yourself if you're talented, and it's not considered a derivative of anything.
If a person creates a perfect copy of something, it shows they have put thousands of hours of practice into training their skills, and maybe dozens or even hundreds of hours into the replica.
When a computer generates a replica of something, it's doing what it was designed to do. AI art is trying to replicate the human process, but it will always carry the stink of "the computer could do this perfectly, but we are telling it not to right now."
Take Chess as an example. We have Chess engines that can beat even the best human Chess players very consistently.
But we also have Chess engines designed to play against beginners, or at all levels of Chess play really.
We still have Human-only tournaments. Why? Why not allow a Chess Engine set to perform like a Grandmaster to compete in tournaments?
Because there would always be the suspicion that if it wins, it's because it cheated and played above its level when it needed to. Because that's always an option for a computer: to behave like a computer does.
There are no models I know of that can generate an exact copy of an image from their training set, unless the model was trained solely on that image to the point that it could. In that case I would argue the model's purpose was to copy that image, rather than to learn concepts from a broad variety of images to the point where generating an exact copy becomes almost impossible.
I think a lot of the arguments revolving around AI image generators could benefit from the parties involved reading up on how transformers work. It would at least make the criticisms more pointed and relevant, unlike those drawn in the linked article.
Is it "the model cannot possibly recreate an image from its training set perfectly" or is it "the model is extremely unlikely to recreate an image from its training set perfectly, but it could in theory"?
Because I am willing to bet it's the latter.
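To make the "unlikely but possible in theory" point concrete: deliberately overfitting a model to a single image, as described above, really does produce something close to a copy. A toy sketch is enough to show the effect; this is not a diffusion model, just a small coordinate-to-color network with random Fourier features, trained on one hypothetical file, and it will only fit comfortably in memory for a small image.

```python
# Toy memorization demo: a small MLP overfit to ONE image reproduces it closely.
# Not a diffusion model; it only illustrates that a network trained on a single
# image ends up functioning as a (lossy) copy of that image.
import numpy as np
import torch
import torch.nn as nn
from PIL import Image

img = np.asarray(Image.open("single_training_image.png").convert("RGB"),  # hypothetical file
                 dtype=np.float32) / 255.0
h, w, _ = img.shape
ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)           # (x, y) for every pixel
targets = torch.from_numpy(img).reshape(-1, 3)                  # RGB for every pixel

B = torch.randn(2, 128) * 10.0                                  # random Fourier features help
def encode(xy):                                                 # the net fit fine detail
    return torch.cat([torch.sin(xy @ B), torch.cos(xy @ B)], dim=-1)

net = nn.Sequential(nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 3), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):                                           # enough to memorize a small image
    loss = nn.functional.mse_loss(net(encode(coords)), targets)
    opt.zero_grad(); loss.backward(); opt.step()

recon = (net(encode(coords)).detach().reshape(h, w, 3).numpy() * 255).astype(np.uint8)
Image.fromarray(recon).save("memorized_copy.png")               # near-copy of the one training image
```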
> You’re acting like the “computer” has a will of its own. Generating a perfect copy of an image would be a completely separate task from training a model for image generation.
Not my intent; of course I don't think computers have a will of their own. What I meant, obviously, is that it's always possible for a human bad actor to make the computer behave in a way that is detrimental to other humans, and then justify it by saying "the computer did it, all I did was train the model".
- Open Microsoft Paint
- Make a blank 400 x 400 image
- Select a pixel and input an R,G,B value
- Repeat the last two steps
To reproduce a copyrighted work. I'm sure people have done this with, e.g., pixel-art images of copyrighted IP such as Mario or Link. At 400x400, that's 160,000 pixels. At 1 second per pixel, a human being could do this in roughly 44 hours, or about a week of full-time work.
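A quick sanity check on that arithmetic ("1 second per pixel" is the assumption above):

```python
# Back-of-the-envelope check of the pixel-by-pixel timing above.
pixels = 400 * 400            # 160,000 pixels
seconds = pixels * 1          # at the assumed 1 second per pixel
hours = seconds / 3600        # ~44.4 hours
workdays = hours / 8          # ~5.6 eight-hour days, i.e. roughly a week of work
print(pixels, round(hours, 1), round(workdays, 1))
```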
Because people have the capability of doing this, and in fact we have proof that people have done so using tools such as MS Paint, AND because it is unlikely but possible that someone could reproduce protected IP using such a method, should we ban Microsoft Paint, or the paint tool, or the ability to input raw RGB values?