That’s going to be hard to argue. Where are the copies?
“Having copied the five billion images—without the consent of the original artists—Stable Diffusion relies on a mathematical process called diffusion to store compressed copies of these training images, which in turn are recombined to derive other images. It is, in short, a 21st-century collage tool.”
“Diffusion is a way for an AI program to figure out how to reconstruct a copy of the training data through denoising. Because this is so, in copyright terms it’s no different from an MP3 or JPEG—a way of storing a compressed copy of certain digital data.”
The examples of how diffusion models are trained (e.g., reconstructing a picture out of noise) will be core to their argument in court. Certainly during training the goal is to reconstruct the original images from noise. But do those images exist in SD as copies afterwards? I don't know.
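For what it's worth, the training objective they're describing looks roughly like this: a minimal sketch of a generic DDPM-style denoising step, not Stability's actual code. `model` is a placeholder for any network taking a noisy image and a timestep, and the noise schedule is a toy stand-in.

```python
import torch

# Minimal sketch of a generic DDPM-style denoising training step, the
# process the complaint describes. `model` is any network that takes
# (noisy_image, t); the cosine schedule below is a toy stand-in.
def training_step(model, image, num_steps=1000):
    t = torch.randint(0, num_steps, (1,))                     # random noise level
    alpha_bar = torch.cos(t / num_steps * torch.pi / 2) ** 2  # toy noise schedule
    noise = torch.randn_like(image)
    noisy = alpha_bar.sqrt() * image + (1 - alpha_bar).sqrt() * noise
    predicted_noise = model(noisy, t)         # model tries to recover the noise
    return torch.nn.functional.mse_loss(predicted_noise, noise)
    # Minimising this teaches the network to denoise; whether the weights
    # thereby "store copies" of the training images is the open question.
```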
It seems to me that they're claiming here that Stability has somehow managed to store copies of these images in about 1 byte of space each (the model checkpoint is roughly 4 GB; divide that by five billion images). That's an incredible compression ratio!
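A quick back-of-the-envelope check, assuming a ~4 GB SD v1 checkpoint (which is in the right ballpark):

```python
# The complaint's "compressed copies" claim implies fitting 5 billion
# images into one model file. Assuming a ~4 GB checkpoint:
checkpoint_bytes = 4e9   # ~4 GB SD v1 checkpoint (assumption)
training_images = 5e9    # figure cited in the complaint

print(checkpoint_bytes / training_images)  # 0.8 bytes per image

# A small JPEG runs tens of kilobytes, so this would be a roughly
# 10,000x improvement over JPEG, far beyond any plausible compression.
```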
In GPT the patterns are words and phrases: "Frodo Baggins" has high affinity, while "Frodo Superman" will be negligible. Now consider all the words that may link to those words: potentially billions of words (or phrases), but (probably/hopefully) none replicated verbatim. The phrases are divorced from any specific context because they cover _all contexts_ in the training data. When you speak to GPT it samples from these words in response to you, typically choosing the words/phrases with the highest affinity to the words you prompted. This almost gives it the appearance of emergent AI, because it is crossing different concepts (texts) in its answers.
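To make the "affinity" idea concrete, here's a toy bigram counter in Python. Real LLMs learn dense vector representations rather than literal count tables, but the high/low-affinity intuition carries over; the corpus here is made up.

```python
import random
from collections import defaultdict

# Toy bigram "affinity" table: count which word follows which, then sample
# in proportion to those counts.
corpus = ("frodo baggins carried the ring . frodo baggins walked . "
          "superman flew over the city .").split()

affinity = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    affinity[prev][nxt] += 1          # "frodo" -> "baggins" piles up counts

def next_word(word):
    options = affinity[word]
    return random.choices(list(options), weights=list(options.values()))[0]

print(next_word("frodo"))                    # "baggins": high-affinity follower
print(affinity["frodo"].get("superman", 0))  # 0: negligible affinity
```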
Stable Diffusion works similarly, but with colours (its "words") and patterns/styles (its "phrases"). If you ask for a green field in the style of Van Gogh, it could blend Van Gogh's work with, say, the default backdrop from Windows XP. You could argue that, depending on how much of those things it gives you, you are violating copyright. But that narrow view misses that although you've specifically asked for Van Gogh, and that's where the model concentrates, it's also pulling in work from potentially hundreds of other lower-affinity sources. It's this dilution that means you'll never see an untainted original source image.
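A toy illustration of that dilution argument, with made-up vectors standing in for learned patterns (nothing here reflects SD's actual internals):

```python
import numpy as np

# Made-up "pattern vectors" standing in for learned styles; not SD internals.
rng = np.random.default_rng(0)
sources = {f"artist_{i}": rng.normal(size=8) for i in range(200)}
sources["van_gogh"] = rng.normal(size=8)

# The prompt concentrates weight on "van_gogh", but every low-affinity
# source still contributes a little.
weights = {name: 0.01 for name in sources}
weights["van_gogh"] = 1.0
total = sum(weights.values())

blend = sum((w / total) * sources[name] for name, w in weights.items())

# Dominated by Van Gogh, yet not a copy of any single stored source:
print(np.allclose(blend, sources["van_gogh"]))  # False
```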
So in essence it's the user who is breaching the copyright, by concentrating the prompt on specific terms, not the model. The model is simply a set of patterns, and the user is making those patterns breach copyright, which IMHO is no different from the user copying a painting with a brush.
The brush isn't the thing you sue.