zlacker

[return to "We've filed a lawsuit challenging Stable Diffusion"]
1. dr_dsh+12[view] [source] 2023-01-14 07:17:25
>>zacwes+(OP)
“Stable Diffusion contains unauthorized copies of millions—and possibly billions—of copyrighted images.”

That’s going to be hard to argue. Where are the copies?

“Having copied the five billion images—without the consent of the original artists—Stable Diffusion relies on a mathematical process called diffusion to store compressed copies of these training images, which in turn are recombined to derive other images. It is, in short, a 21st-century collage tool.”

“Diffusion is a way for an AI program to figure out how to reconstruct a copy of the training data through denoising. Because this is so, in copyright terms it’s no different from an MP3 or JPEG—a way of storing a compressed copy of certain digital data.”

These descriptions of how diffusion is trained (e.g., reconstructing a picture out of noise) will be core to their argument in court. During training, the goal certainly is to reconstruct the original images from noise. But do those images then exist inside SD as copies? I don't know.
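For anyone unsure what "reconstruct the training data through denoising" refers to, here is a minimal numpy sketch of the DDPM-style forward process and training target; the variable names, toy image size, and fixed noise level are mine, not from any SD codebase:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 8x8 grayscale "training image", flattened to a vector.
image = rng.random(64)
alpha_bar = 0.5  # cumulative noise-schedule value at some timestep

# Forward process (DDPM-style): blend the image with Gaussian noise.
noise = rng.standard_normal(64)
xt = np.sqrt(alpha_bar) * image + np.sqrt(1 - alpha_bar) * noise

# Training objective: a network sees xt and must predict `noise`;
# the loss is the MSE between its prediction and the true noise.
perfect_prediction = noise  # what an ideal denoiser would output here
loss = np.mean((perfect_prediction - noise) ** 2)

# With the *exact* noise sample in hand, the original is recoverable:
x0_recovered = (xt - np.sqrt(1 - alpha_bar) * noise) / np.sqrt(alpha_bar)
```

Note the catch: at generation time the sampler starts from fresh noise and never has the original `noise` sample for any training image, which is why "reconstructs during training" and "stores copies" are not obviously the same claim.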

◧◩
2. yazadd+X3[view] [source] 2023-01-14 07:43:18
>>dr_dsh+12
> That’s going to be hard to argue. Where are the copies?

In fairness, diffusion is arguably a very complex entropy coding, similar to arithmetic or Huffman coding.

Given that copyright is protectable even on compressed/encrypted files, it seems fair that the “container of compressed bytes” (in this case the diffusion model) does “contain” the original images, no differently than a compressed folder of images contains the original images.

A lawyer/researcher would likely win this case if they could re-create ~90% of a single input image from the diffusion model using only a text prompt.
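The "compressed container" analogy can be made concrete with a lossless codec. A toy Huffman round-trip (sketched from scratch below, not taken from any real codec library) restores its input bit-for-bit; that losslessness is the property the analogy leans on, and whether a diffusion model's weights have anything like it is exactly what's in dispute:

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Build a Huffman table mapping each byte value to a bit string."""
    counts = Counter(data)
    if len(counts) == 1:  # degenerate single-symbol input
        return {next(iter(counts)): "0"}
    # Heap entries carry a unique index so ties never compare the dicts.
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(counts.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, nxt, merged))
        nxt += 1
    return heap[0][2]

def compress(data: bytes, codes: dict) -> str:
    return "".join(codes[b] for b in data)

def decompress(bits: str, codes: dict) -> bytes:
    inverse = {c: s for s, c in codes.items()}
    out, buf = [], ""
    for bit in bits:  # codes are prefix-free, so greedy matching is safe
        buf += bit
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return bytes(out)

original = b"the original image bytes"
codes = huffman_codes(original)
bits = compress(original, codes)
restored = decompress(bits, codes)  # bit-for-bit identical to the input
```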

◧◩◪
3. visarg+D4[view] [source] 2023-01-14 07:50:34
>>yazadd+X3
> 90%ish of a single input image

Oh, so one image is enough to apply copyright as if it were a patent, and to ban a process that produces original works most of the time?

The complaint’s authors say it works as a “collage tool”, dismissing the composition and layout of the image as unimportant elements, while forgetting that SD changes the textures as well. So it’s a collage minus textures and composition?

Is there anything left to complain about? Unless, by a stroke of luck, both layout and textures come out very similar to a training image. But ensuring that no close duplicates are allowed should suffice.

Copyright should apply one by one, not in bulk. Each work they complain about should be judged on its own merits.
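The "no close duplicates" filter suggested above can be sketched in a few lines. This is a deliberately crude version, assuming square grayscale arrays and comparing thumbnail RMSE; real pipelines would use perceptual hashes or embedding similarity, and every name and threshold here is mine:

```python
import numpy as np

def thumbnail(img: np.ndarray, size: int = 8) -> np.ndarray:
    """Average-pool a square grayscale image down to a size x size thumbnail."""
    h, w = img.shape
    return img[: h - h % size, : w - w % size].reshape(
        size, h // size, size, w // size
    ).mean(axis=(1, 3))

def is_near_duplicate(generated, training_set, threshold=0.02) -> bool:
    """Flag outputs whose thumbnail is within `threshold` RMSE of any training thumbnail."""
    g = thumbnail(generated)
    return any(
        np.sqrt(np.mean((g - thumbnail(t)) ** 2)) < threshold
        for t in training_set
    )

rng = np.random.default_rng(1)
training_set = [rng.random((64, 64)) for _ in range(5)]
copy_like = training_set[0] + rng.normal(0.0, 0.01, (64, 64))  # near-copy of one input
novel = rng.random((64, 64))                                   # unrelated output
```

A filter like this runs at generation time, so it would address output-side duplication without touching the question of what the weights themselves contain.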

◧◩◪◨
4. manhol+a6[view] [source] 2023-01-14 08:08:51
>>visarg+D4
But they are not original works; they are wholly derived from the training data set. Take that data set away and the algorithm is unable to produce a single original pixel.

The fact that the derivation involves millions of works as opposed to a single one is immaterial for the copyright issue.

◧◩◪◨⬒
5. realus+N7[view] [source] 2023-01-14 08:24:42
>>manhol+a6
The training data set is indeed mandatory, but that doesn't make the resulting model a derivative work in itself. In fact, the training process is specifically designed to generalize rather than reproduce its inputs.
◧◩◪◨⬒⬓
6. IncRnd+V8[view] [source] 2023-01-14 08:36:12
>>realus+N7
Go to stablediffusionweb.com and enter "a person like biden" into the box. You will get a picture that looks exactly like President Biden. That picture will have been derived from the training images of Joe Biden. That cannot be in dispute.
◧◩◪◨⬒⬓⬔
7. realus+G9[view] [source] 2023-01-14 08:44:00
>>IncRnd+V8
Just because it generates an image that looks like Biden does not make that image a derivative work.

You can draw Biden yourself, if you're talented, and your drawing is not considered a derivative of anything.

◧◩◪◨⬒⬓⬔⧯
8. IncRnd+ia[view] [source] 2023-01-14 08:51:13
>>realus+G9
There is no need for rhetorical games. The actual issue is that Stable Diffusion does create derivatives of copyrighted works. In some cases the produced images contain pixel-level details from the originals. [1]

[1] https://arxiv.org/pdf/2212.03860.pdf

◧◩◪◨⬒⬓⬔⧯▣
9. realus+Ha[view] [source] 2023-01-14 08:55:14
>>IncRnd+ia
> The actual issue is that Stable Diffusion does create derivatives of copyrighted works.

Nothing points to that. In fact, even on that website they had to misrepresent how Stable Diffusion actually works, which may be a sign that their argument isn't really solid.

> [1] https://arxiv.org/pdf/2212.03860.pdf

You realize those are considered defects of the model, right? Sure, this model isn't perfect, and it will be improved.

[go to top]