zlacker

[return to "We’ve filed a lawsuit challenging Stable Diffusion"]
1. dr_dsh+12[view] [source] 2023-01-14 07:17:25
>>zacwes+(OP)
“Stable Diffusion contains unauthorized copies of millions—and possibly billions—of copyrighted images.”

That’s going to be hard to argue. Where are the copies?

“Having copied the five billion images—without the consent of the original artists—Stable Diffusion relies on a mathematical process called diffusion to store compressed copies of these training images, which in turn are recombined to derive other images. It is, in short, a 21st-century collage tool.”

“Diffusion is a way for an AI program to figure out how to reconstruct a copy of the training data through denoising. Because this is so, in copyright terms it’s no different from an MP3 or JPEG—a way of storing a compressed copy of certain digital data.”

The examples of training diffusion (e.g., reconstructing a picture out of noise) will be core to their argument in court. Certainly during training the goal is to reconstruct the original images out of noise. But do they exist in SD as copies? Idk
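
For reference, that training objective is roughly this (a minimal PyTorch sketch; the model, noise schedule, and shapes are stand-ins, not SD's actual code):

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(model, images, alphas_cumprod):
    """One DDPM-style step: corrupt a training image with noise, then
    train the model to predict that noise, i.e. to denoise back toward
    the original. `model` and `alphas_cumprod` are placeholders."""
    batch = images.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (batch,), device=images.device)
    noise = torch.randn_like(images)
    a_bar = alphas_cumprod[t].view(batch, 1, 1, 1)
    # Noisy version of the training image at timestep t.
    noisy = a_bar.sqrt() * images + (1 - a_bar).sqrt() * noise
    # The loss rewards reconstructing the original from the noisy copy.
    predicted_noise = model(noisy, t)
    return F.mse_loss(predicted_noise, noise)
```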

◧◩
2. yazadd+X3[view] [source] 2023-01-14 07:43:18
>>dr_dsh+12
> That’s going to be hard to argue. Where are the copies?

In fairness, Diffusion is arguably a very complex entropy coding scheme, similar to Arithmetic/Huffman coding.

Given that copyright is protectable even on compressed/encrypted files, it seems fair that the “container of compressed bytes” (in this case the Diffusion model) does “contain” the original images no differently than a compressed folder of images contains the original images.
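
To be concrete about the compressed-folder comparison, this is the lossless round trip an ordinary container gives you (artwork.png is a made-up path):

```python
import zlib

with open("artwork.png", "rb") as f:  # hypothetical image file
    original = f.read()

compressed = zlib.compress(original)
# A lossless container gives the original back bit-for-bit.
assert zlib.decompress(compressed) == original
```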

A lawyer/researcher would likely win this case if they re-create 90%ish of a single input image from the diffusion model with text input.
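
Something like this is what that test could look like (diffusers plus SSIM as one possible similarity measure; the model id, prompt, and file path are placeholders):

```python
import numpy as np
import torch
from diffusers import StableDiffusionPipeline
from PIL import Image
from skimage.metrics import structural_similarity

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Prompt chosen to target one specific training image (placeholder text).
prompt = "title / caption of the allegedly copied training image"
generated = pipe(prompt).images[0].resize((512, 512)).convert("L")
original = Image.open("training_image.png").resize((512, 512)).convert("L")

# One crude way to quantify how close the reconstruction is.
score = structural_similarity(
    np.array(original), np.array(generated), data_range=255
)
print(f"SSIM vs. original: {score:.2f}")
```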

◧◩◪
3. anothe+96[view] [source] 2023-01-14 08:08:50
>>yazadd+X3
Great. Now the defence shows an artist who can recreate an image. Cool, now people who look at images get copyright suits filed against them for encoding those images in their heads.
◧◩◪◨
4. smusam+R11[view] [source] 2023-01-14 17:12:42
>>anothe+96
Don't think Stable Diffusion can reproduce any single image it's trained on, no matter what prompts you use.

It does have the Mona Lisa because of overfitting. But that's because there is too much Mona Lisa on the internet.

These artists taking part in the suit won't be able to recreate any of their work.

◧◩◪◨⬒
5. Aerroo+ur1[view] [source] 2023-01-14 19:47:33
>>smusam+R11
I think there's a chance they might be able to recreate some simpler work if they make the prompts specific enough. When you set up a prompt you're essentially telling the system what you want it to generate - if you prompt it with enough specificity you might be able to just recreate the image you had.

Kind of like recreating your image one object at a time. It might not be exact, but close enough.

◧◩◪◨⬒⬓
6. smusam+RJ1[view] [source] 2023-01-14 22:05:25
>>Aerroo+ur1
People have tried; unless the thing you want to recreate has been seen by it a lot (overtrained on) you won't get the same image. You don't have that much fine-grained control via text only.

Best you can do is to mask and keep inpainting the area that looks different until it doesn't.
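
Roughly a loop like this (diffusers inpainting pipeline; the model id, file paths, and number of passes are made up):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("best_attempt.png").resize((512, 512))
prompt = "description of what the masked region should look like"

for step in range(3):  # repeat until the patched area looks right
    # Hand-drawn mask: white where the image still differs from the target.
    mask = Image.open(f"mask_{step}.png").resize((512, 512))
    image = pipe(prompt=prompt, image=image, mask_image=mask).images[0]

image.save("inpainted_attempt.png")
```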

[go to top]