That’s going to be hard to argue. Where are the copies?
“Having copied the five billion images—without the consent of the original artists—Stable Diffusion relies on a mathematical process called diffusion to store compressed copies of these training images, which in turn are recombined to derive other images. It is, in short, a 21st-century collage tool.”
“Diffusion is a way for an AI program to figure out how to reconstruct a copy of the training data through denoising. Because this is so, in copyright terms it’s no different from an MP3 or JPEG—a way of storing a compressed copy of certain digital data.”
The examples of diffusion training (e.g., reconstructing a picture out of noise) will be core to their argument in court. Certainly during training the goal is to reconstruct the original images from noise. But do those images exist in SD as copies? I don't know.
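For what it's worth, the training step people are describing looks roughly like the sketch below: noise is added to an image and a network learns to predict that noise. This is a minimal DDPM-style sketch assuming a hypothetical `model(noisy, t)` denoiser, not Stable Diffusion's actual code; the thing that persists after training is the network's weights, not the training images themselves.

```python
# Minimal sketch of one DDPM-style training step (hypothetical `model` that
# predicts the noise added to an image; not Stable Diffusion's actual code).
import torch

def diffusion_training_step(model, images, num_timesteps=1000):
    # Linear beta schedule, as in the original DDPM paper.
    betas = torch.linspace(1e-4, 0.02, num_timesteps)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

    # Pick a random noise level for each training image.
    t = torch.randint(0, num_timesteps, (images.shape[0],))
    a = alphas_cumprod[t].view(-1, 1, 1, 1)

    # Corrupt the images with Gaussian noise...
    noise = torch.randn_like(images)
    noisy = a.sqrt() * images + (1 - a).sqrt() * noise

    # ...and train the network to predict that noise (denoising objective).
    predicted = model(noisy, t)
    loss = torch.nn.functional.mse_loss(predicted, noise)
    return loss
```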
If you take that tack, I'll go one step further back in time and ask "Where is your agreement from the original author who owns the copyright that you could use this image in the way you did?"
The fact that there is suddenly a new way to "use an image" (input to a computer algorithm) doesn't mean that copyright magically doesn't also apply to that usage.
A canonical example is the fact that television programs like "WKRP in Cincinnati" can't use the music licenses from the television broadcast if they want to distribute a DVD or streaming version--the music has to be re-licensed.
AFAIK, downloading and learning from images, even copyrighted ones, falls under fair use; this is how practically every artist today learns how to draw.
Stable Diffusion does not create 1:1 copies of the artwork it has been trained on, and its purpose is quite the opposite. There may be cases where a generated image is arguably not transformative enough, but so far I've only seen one such reproducible image, the 'Bloodborne box art' prompt, which was also mentioned in this discussion.