zlacker

[return to "We've filed a lawsuit challenging Stable Diffusion"]
1. dr_dsh+12 2023-01-14 07:17:25
>>zacwes+(OP)
“Stable Diffusion contains unauthorized copies of millions—and possibly billions—of copyrighted images.”

That’s going to be hard to argue. Where are the copies?

“Having copied the five billion images—without the consent of the original artists—Stable Diffusion relies on a mathematical process called diffusion to store compressed copies of these training images, which in turn are recombined to derive other images. It is, in short, a 21st-century collage tool.”

“Diffusion is a way for an AI program to figure out how to reconstruct a copy of the training data through denoising. Because this is so, in copyright terms it’s no different from an MP3 or JPEG—a way of storing a compressed copy of certain digital data.”

The examples of training diffusion (e.g., reconstructing a picture out of noise) will be core to their argument in court. Certainly during training the goal is to reconstruct the original images from noise. But do they exist in SD as copies? Idk
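
To make the mechanics concrete, the training step they describe looks roughly like this. A minimal PyTorch sketch of a DDPM-style denoising objective, not SD's actual code (model and alphas_cumprod are stand-ins):

    import torch
    import torch.nn.functional as F

    def ddpm_training_step(model, x0, alphas_cumprod):
        # x0: a batch of training images (the "originals" at issue)
        t = torch.randint(0, len(alphas_cumprod), (x0.shape[0],), device=x0.device)
        noise = torch.randn_like(x0)
        a = alphas_cumprod[t].view(-1, 1, 1, 1)
        # Forward process: blend each image with Gaussian noise
        x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise
        # The network learns to predict the noise (i.e. to denoise);
        # x0 itself is never stored, only gradient updates derived from it
        return F.mse_loss(model(x_t, t), noise)

Whether the weights that accumulate from billions of such updates amount to "compressed copies" is exactly the open question.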

2. baxtr+TV 2023-01-14 16:30:30
>>dr_dsh+12
One idea I had was to try to recreate the original using a prompt. If you succeed, wouldn't it be obvious that the original was in the training set?
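
If anyone wants to try that, a rough sketch with the Hugging Face diffusers library (the model id and prompt here are just examples):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Prompt with a caption close to the original's, then compare by eye
    prompt = "Mona Lisa by Leonardo da Vinci"
    pipe(prompt, num_inference_steps=50).images[0].save("attempt.png")

Though note the asymmetry: a near-duplicate output suggests the image recurs heavily in the training set, while a failure proves little.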
3. zowie_+g21 2023-01-14 17:15:47
>>baxtr+TV
The LAION-5B dataset is public, so you can check directly whether a picture is in there or not. StabilityAI only takes a very limited amount of information from each individual picture, so for Stable Diffusion to closely reproduce a picture it would need to appear quite frequently in the dataset. There are examples of this, such as old famous paintings and the "Bloodborne" box art, and probably many others, though I haven't looked deeply into it.
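
For the membership check, the public LAION index can be queried with the clip-retrieval client, something like this (endpoint and index name as deployed around early 2023, they may have moved since):

    from clip_retrieval.clip_client import ClipClient

    client = ClipClient(
        url="https://knn.laion.ai/knn-service",
        indice_name="laion5B-L-14",
        num_images=10,
    )
    # Nearest LAION-5B entries to a local image; exact or near-exact hits
    # mean the picture (or copies of it) is in the dataset
    for hit in client.query(image="my_artwork.jpg"):
        print(hit["similarity"], hit["url"], hit["caption"])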