zlacker

[return to "We’ve filed a law­suit chal­leng­ing Sta­ble Dif­fu­sion"]
1. Ephil0+T6[view] [source] 2023-01-14 08:16:18
>>zacwes+(OP)
“Hav­ing copied the five bil­lion images—with­out the con­sent of the orig­i­nal artists—Sta­ble Dif­fu­sion relies on a math­e­mat­i­cal process called dif­fu­sion to store com­pressed copies of these train­ing images, which in turn are recom­bined to derive other images.”

This doesn’t seem like an accurate description of what diffusion is doing. A diffusion model is not a compression scheme. They’re implying that Stable Diffusion takes the entire dataset, makes it smaller, and then stores it. Instead, it’s learning patterns from the art and replicating those patterns.

The “compression” they’re referring to is the latent-space representation, which is how Stable Diffusion avoids having to manipulate full-size images during computation. You could call that a form of compression, but afaik the actual training images aren’t stored in that latent space in the final model. So it’s not compressing every single image and storing it in the model.

This page says there were 5 billion images in the Stable Diffusion training dataset (though that may not be accurate; from what I see online it’s closer to the 2 billion mark). A Stable Diffusion model is about 5 GB. 5 GB / 5 billion is roughly 1 byte per image, and you can’t fit an image in 1 byte. So the claim about it storing compressed copies of the training data can’t be true. The size of the file comes from the model’s weights, not from “compressed copies” of the images. In general, it seems this lawsuit is misrepresenting how Stable Diffusion works on a technical level.
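
Quick back-of-the-envelope in Python, taking the complaint’s 5 billion figure at face value:

    # How much room per image would a ~5 GB model leave if it really held
    # "compressed copies" of the whole training set?
    training_images = 5_000_000_000           # the complaint's figure
    model_size_bytes = 5 * 1024**3            # a ~5 GB checkpoint file

    bytes_per_image = model_size_bytes / training_images
    print(f"{bytes_per_image:.2f} bytes per image")   # ~1.07 bytes per image

    # Even a heavily compressed 512x512 JPEG is tens of kilobytes, so no
    # encoding gets a recoverable copy of an image down to ~1 byte.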

2. Ephil0+kw1[view] [source] 2023-01-14 20:17:19
>>Ephil0+T6
I should clarify a bit about how the latent space works, since I didn't in the original comment.

Stable Diffusion has something called an encoder and a decoder. The encoder takes an image, extracts its fundamental characteristics, and converts it into a data point (for the sake of simplicity we'll call it a vector, even though it doesn't have to be). Let's say the vector <0.2, 0.5, 0.6> represents a black dog. If you took a similar vector, you would get another picture of a dog (say, a white dog). These vectors live in what's called a latent space, which is just a space where similar concepts end up close together.
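
To make "similar concepts are close together" concrete, here's a toy sketch in Python. The vectors and labels are made up for illustration; real Stable Diffusion latents are much bigger tensors, not three-number vectors:

    import numpy as np

    # Made-up toy vectors; real latents are tensors on the order of 4x64x64.
    latents = {
        "black dog": np.array([0.20, 0.50, 0.60]),
        "white dog": np.array([0.25, 0.48, 0.58]),   # near "black dog"
        "red car":   np.array([0.90, 0.10, 0.30]),   # far away
    }

    def cosine_similarity(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    query = np.array([0.21, 0.50, 0.59])             # a point in the "dog" region
    for label, vec in latents.items():
        print(label, round(cosine_similarity(query, vec), 3))
    # Nearby points decode to similar-looking images; distant points don't.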

Stable Diffusion uses this latent space because it's more computationally efficient. What it does is start with random noise in the latent space, then slowly remove that noise step by step. It does this entire process on the latent-space representation rather than on the actual pixel image, which is what makes it efficient: it never has to run the denoising over a full pixel image. Once the noise is gone, it uses the decoder to convert the result back into a pixel image. Notice that at no point in this process is it retrieving a compressed image from its training set and reusing it. Instead, it generates the image through de-noising, and that de-noising is guided by its understanding of the different concepts that can be represented in the latent space.
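
Rough structural sketch of that loop in Python. The model calls are stand-in functions so the snippet runs on its own; this is the shape of the process, not real network code:

    import numpy as np

    rng = np.random.default_rng(0)

    # Start from pure noise in the small latent space (e.g. 4x64x64 instead of
    # a 512x512x3 pixel image) -- nothing is looked up from the training set.
    latent = rng.normal(size=(4, 64, 64))

    def predict_noise(latent, step, prompt):
        # Stand-in for the trained denoising network. The real one predicts,
        # at each step, which part of the current latent is noise, guided by
        # the text prompt.
        return 0.1 * latent

    for step in range(50):                    # typical runs use a few dozen steps
        latent = latent - predict_noise(latent, step, "a black dog")

    def decode(latent):
        # Stand-in for the decoder, which maps a latent back to pixels.
        return np.tanh(latent[:3])

    image = decode(latent)                    # generated, not retrieved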

I think where this lawsuit goes wrong is that it implies the latent space is literally storing a copy of every image in the dataset. As far as I'm aware, that's not true. Even though the latent-space representations of images are dramatically smaller than the originals, they're nowhere near small enough to fit the entire dataset in a 5 GB file. What Stable Diffusion actually ships is its learned weights, including the encoder/decoder for converting to and from latent space, and that latent space exists for computational efficiency as mentioned above. I've heard that Stable Diffusion might store some key concepts from the latent space, but I don't know whether that's true. Either way, it seems very unlikely that the entire dataset is being stored in Stable Diffusion. To me, saying Stable Diffusion is storing the images themselves is like saying GZIP's algorithm is storing the compressed version of every file in existence.
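
For a rough sense of where the ~5 GB actually goes, here's a ballpark accounting in Python. The parameter counts are approximate figures for the v1 models, so treat them as illustrative rather than exact:

    # Rough accounting of a Stable Diffusion v1 checkpoint -- the point is
    # that network weights alone explain the file size.
    params = {
        "unet (denoiser)":       860_000_000,
        "text encoder (CLIP)":   123_000_000,
        "vae (encoder/decoder)":  84_000_000,
    }
    bytes_per_param = 4                       # 32-bit floats

    total_bytes = sum(params.values()) * bytes_per_param
    print(f"~{total_bytes / 1024**3:.1f} GB of weights")   # roughly 4 GB

    # That accounts for the checkpoint size; there's no room left over for
    # billions of stored images, compressed or otherwise.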

Disclaimer: Not an ML expert and this is just based on my own understanding of how it works. So I could be wrong

3. Ephil0+T32[view] [source] 2023-01-15 01:24:22
>>Ephil0+kw1
Update: The more I’ve looked into this topic, the less sure I am that I’m correct. I still think there’s very little chance the whole dataset is shipped with Stable Diffusion. However, I’m now wondering whether partial examples might ship with it (e.g. a dictionary of certain concepts), or whether there are any other caveats where Stable Diffusion might contain traces of the original data (to be clear, I still don’t think it contains the whole dataset). I’m not an expert, so there’s a chance I could be wrong about all of this; take my words with a grain of salt. Regardless, I still don’t believe the characterization of Stable Diffusion as just copying and pasting images is correct, and I believe the lawsuit is still making several factual errors, as others online have pointed out.
4. Ephil0+gv7[view] [source] 2023-01-16 23:07:37
>>Ephil0+T32
Never mind, I asked someone on a Discord who is more familiar with ML than I am, and checked a few resources online. As far as I can tell, there aren't any traces of the original dataset in Stable Diffusion; according to the person I talked to, there aren't even partial examples of the dataset in there. Maybe they're wrong, but I suspect they're right. I did read that there is a dictionary for CLIP, but that's a vocabulary of text tokens that Stable Diffusion can recognize, not saved artwork.
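
If you want to check that dictionary yourself, you can peek at the CLIP tokenizer's vocabulary. This assumes the Hugging Face transformers library and the CLIP text-encoder checkpoint commonly paired with the v1 models; adjust the name if yours differs:

    # Peek at what the CLIP "dictionary" actually contains: text tokens only.
    from transformers import CLIPTokenizer

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    vocab = tokenizer.get_vocab()             # maps token string -> integer id

    print(len(vocab))                         # on the order of 49k entries
    print(sorted(vocab)[:10])                 # a few of the token strings
    # Every entry is a short text fragment (word pieces, punctuation, etc.);
    # there are no images or image fragments stored in it.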

Disclaimer: Not an ML expert
