You can't bring back the training images no matter how hard you try.
An MPEG codec doesn't contain every movie in the world just because it could represent them if given the right file.
The white light coming off a blank canvas also doesn't contain a copy of the Mona Lisa which will be revealed once someone obscures some of the light.
The most common example of this (Greg Rutkowski) is not in Stable Diffusion's training set.
The only things discovered so far are either a) older public-domain works nearly fully reproduced, b) small fragments of newer works, or c) "likenesses".
Which runs into some very interesting historical precedents.
((I wonder if there's a split between people who think AI emancipation might happen this century and people who think that such a thing is silly to contemplate))
The answer is of course not, and the same principle applies if someone uses Stable Diffusion to find a latent-space encoding for a copyrighted image (the 231-byte number; I had to go double-check what the grid size actually is).
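For what it's worth, here is the back-of-the-envelope arithmetic for how big that latent actually is, assuming the standard SD 1.x configuration (512×512 input, 8× VAE downsampling, 4 latent channels) — those numbers are my assumption, not something stated above:

```python
# Rough size of a Stable Diffusion 1.x latent, under the assumed standard config.
image_side = 512   # input image is 512x512
downsample = 8     # the VAE shrinks each spatial dimension by 8x
channels = 4       # number of latent channels

latent_side = image_side // downsample             # 64
num_values = latent_side * latent_side * channels  # 16,384 values
bytes_fp16 = num_values * 2                        # ~32 KB at half precision

print(latent_side, num_values, bytes_fp16)         # 64 16384 32768
```

However you count it, the latent is a small, lossy stand-in for the pixels, which is rather the point of the codec analogy.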
A recreation of a piece of art does not mean a copy; I've personally seen hundreds of recreations of Edvard Munch's 'The Scream', all of them perfectly legal.
Even in a massively overtrained model, it is practically impossible to create a 1:1 copy of a piece of art the model was trained upon.
And of course that would be a pointless exercise to begin with: why would anyone want to generate 1:1 copies (or anything near that) of existing images?
The whole 'magic' of Stable Diffusion is that you can create new works of art in the combined styles of the art, photography, etc. that it has been trained on.
What does this mean? It doesn't mean you can't recreate the original, because that's been done. Nor does the fact that the image's literal bits aren't present in the encoded data mean much, because that's equally true of any compression algorithm.
As an example of a plausible scenario where copyright might actually be violated, consider this: an NGO wants images for their website. They type in something like 'afghan girl' or 'struggling child' and unknowingly use the recreations of famous photographs they get back.
The “color of your bits” only applies to the process of creating a work. Training the Stable Diffusion model could be seen as violating copyright, but that doesn't spread to the works generated by it.
In the same vein, one can claim copyright on an image generated by Stable Diffusion even if the creation of the algorithm itself is safe from copyright violation.
“some representation of the originals exists inside the model+prompt” is also not sufficient for the model to be in violation of the copyright of any one art piece. Some latent representation of the concept of an art piece or style isn’t enough.
It’s also important to note the distinction that no training data is stored in its original form as part of the model; during training it’s simply used to tweak a function whose purpose is translating text to images. Some could say that’s like using the color from a picture of a car found on the internet. Some might say it’s worse, but it’s all subjective unless the opposition can draw new ties between the actual technical process and existing legal precedent.
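To make the "tweak a function" point concrete, here is a deliberately toy sketch of a denoising-style training step. It is nowhere near SD's real training code (the real model is a large text-conditioned U-Net and these numbers are made up), but it shows the mechanism: the training image only ever contributes a gradient to the weights, and is never stored anywhere.

```python
import torch
import torch.nn as nn

# Stand-in "denoiser": a tiny network, purely illustrative.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# A fake "training image" latent. In real training this comes from the dataset;
# random data is enough to show what the optimizer actually touches.
latent = torch.randn(1, 16)

for step in range(100):
    noise = torch.randn_like(latent)
    noisy = latent + noise                             # simplified forward-diffusion step
    pred_noise = model(noisy)                          # model tries to predict the added noise
    loss = nn.functional.mse_loss(pred_noise, noise)
    opt.zero_grad()
    loss.backward()
    opt.step()                                         # only the weights change

# Nothing of `latent` survives except its indirect influence on the weights.
print(sum(p.numel() for p in model.parameters()), "parameters, zero images stored")
```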
The way SD model weights work, if you managed to prompt-engineer a recreation of one specific work, it would only have been generated as a product of all the information in the entire training set + the noise seed + the prompt. And the prompt wouldn't look anything like a reasonable description of any specific work.
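That "weights + seed + prompt" framing is easy to see in code. A minimal sketch using the diffusers library (the model ID, prompt, and seed are just examples, and this needs a GPU and the weights downloaded): the output image is a pure function of those three inputs, and changing any one of them gives you a different picture.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the weights -- this is where "all the information in the entire training set" lives.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a castle on a hill at sunset, oil painting"   # example prompt
seed = 1234                                              # the noise seed

generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe(prompt, generator=generator).images[0]

# Re-running with the same weights, prompt, and seed reproduces the same image;
# changing any one of the three gives a different one.
image.save("castle_1234.png")
```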
Which is to say, it means nothing, because you can equally generate a likeness of works that are known not to be included in the training set (easy: you ask for a latent encoding of the image and it gives you one): equivalent to a JPEG codec.
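The "ask for a latent encoding of any image" part really is just a call into the model's VAE. A rough sketch with the diffusers library (model ID and file path are placeholders; the image can be anything, including one that was never in any training set):

```python
import numpy as np
import torch
from PIL import Image
from diffusers import AutoencoderKL

# Load only the VAE (the "codec" half of Stable Diffusion).
vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

# Any image at all -- e.g. one you took yourself five minutes ago.
img = Image.open("some_image_not_in_any_training_set.png").convert("RGB").resize((512, 512))
x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0   # scale pixels to [-1, 1]
x = x.permute(2, 0, 1).unsqueeze(0)                          # shape (1, 3, 512, 512)

with torch.no_grad():
    z = vae.encode(x).latent_dist.mean    # the latent "encoding": shape (1, 4, 64, 64)
    recon = vae.decode(z).sample          # and back again, like decoding a JPEG

print(z.shape, recon.shape)
```

The round trip is lossy in roughly the way a JPEG is lossy, which is why the codec analogy keeps coming up in this thread.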
I think this is the most relevant line of your argument. Because if you could just ask it something like "show me the latest picture of [artist]", then you'll have a hard time convincing me that this is fundamentally different from a database with a fancy query language and lots of copyrighted work in it.