zlacker

[parent] [thread] 251 comments
1. dr_dsh+(OP)[view] [source] 2023-01-14 07:17:25
“Stable Diffusion contains unauthorized copies of millions—and possibly billions—of copyrighted images.”

That’s going to be hard to argue. Where are the copies?

“Having copied the five billion images—without the consent of the original artists—Stable Diffusion relies on a mathematical process called diffusion to store compressed copies of these training images, which in turn are recombined to derive other images. It is, in short, a 21st-century collage tool.”

“Diffusion is a way for an AI program to figure out how to reconstruct a copy of the training data through denoising. Because this is so, in copyright terms it’s no different from an MP3 or JPEG—a way of storing a compressed copy of certain digital data.”

The examples of training diffusion (e.g., reconstructing a picture out of noise) will be core to their argument in court. Certainly during training the goal is to reconstruct original images out of noise. But do they exist in SD as copies? Idk

replies(14): >>akjetm+C1 >>TheDon+E1 >>yazadd+W1 >>synu+G2 >>whatev+w3 >>anothe+Z3 >>groest+I5 >>codefl+57 >>bsder+d7 >>rysert+x9 >>locuto+jG >>jrm4+gN >>baxtr+ST >>Aerroo+kj1
2. akjetm+C1[view] [source] 2023-01-14 07:36:22
>>dr_dsh+(OP)
I don't think you have to reproduce an entire original work to demonstrate copyright violation. Think about sampling in hip-hop, for example. A 2-second sample, distorted, re-pitched, etc., can be grounds for a copyright violation.
replies(3): >>Salgat+Q1 >>limite+e2 >>mirekr+s3
3. TheDon+E1[view] [source] 2023-01-14 07:36:40
>>dr_dsh+(OP)
It doesn't matter if they exist as exact copies in my opinion.

The law doesn't recognize a mathematical computer transformation as creating a new work with original copyright.

If you give me an image, and I encrypt it with a randomly generated password, and then don't write down the password anywhere, the resulting file will be indistinguishable from random noise. No one can possibly derive the original image from it. But it's still copyrighted by the original artist as long as they can show "This started as my image, and a machine made a rote mathematical transformation to it", because machines making rote mathematical transformations cannot create new copyright.

The argument for stable diffusion would be that even if you cannot point to any image, since only algorithmic changes happened to the inputs, without any human creativity, the output is a derived work which does not have its own unique copyright.

replies(7): >>dymk+s2 >>limite+y2 >>mbgerr+54 >>hgomer+66 >>Last5D+of >>street+kA >>michae+HT1
◧◩
4. Salgat+Q1[view] [source] [discussion] 2023-01-14 07:41:38
>>akjetm+C1
The difference here is that the images aren't stored, but rather an extremely abstract description of the image was used to very slightly adjust a network of millions of nodes in a tiny direction. No semblance of the original image even remotely exists in the model.
replies(4): >>akjetm+g2 >>visarg+L3 >>AlotOf+C5 >>Xelyne+jV
5. yazadd+W1[view] [source] 2023-01-14 07:43:18
>>dr_dsh+(OP)
> That’s going to be hard to argue. Where are the copies?

In fairness, Diffusion is arguably a very complex entropy coding similar to Arithmetic/Huffman coding.

Given that copyright is protectable even on compressed/encrypted files, it seems fair that the “container of compressed bytes” (in this case the Diffusion model) does “contain” the original images no differently than a compressed folder of images contains the original images.

A lawyer/researcher would likely win this case if they could re-create ~90% of a single input image from the diffusion model with text input.
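
For contrast, the "compressed folder" baseline above is a strict byte-for-byte container; a toy round-trip with the stdlib, where the bytes just stand in for an image file:

    import io
    import zipfile

    # A zip archive "contains" its inputs in the plainest sense: the
    # exact bytes come back out. Whether a diffusion model is more
    # like this container is the open question. Toy bytes below.
    original = b"\x89PNG not a real image, just stand-in bytes" * 100

    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("image.png", original)

    buf.seek(0)
    with zipfile.ZipFile(buf) as z:
        assert z.read("image.png") == original  # bit-exact copy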

replies(10): >>yazadd+t2 >>visarg+C2 >>magnat+g3 >>anothe+84 >>chii+r4 >>vnoril+r5 >>madaxe+4k >>willia+UZ >>angust+xu2 >>vinter+BF2
◧◩
6. limite+e2[view] [source] [discussion] 2023-01-14 07:46:57
>>akjetm+C1
Perhaps different media have different rules? You can’t necessarily apply music sampling rules to text. For example, I don’t think incorporating a phrase from someone else’s poem into my poem would be grounds for a copyright violation.
replies(1): >>IncRnd+m7
◧◩◪
7. akjetm+g2[view] [source] [discussion] 2023-01-14 07:47:55
>>Salgat+Q1
There are some artists with very strong, recognizable styles. If you provide one of these artists' names in your prompt and get a result back that employs their strong, recognizable style, I think that demonstrates that the network has a latent representation of the artist's work stored inside of it.
replies(5): >>andyba+H2 >>eega+l3 >>realus+r3 >>astran+D8 >>djbebs+4q1
◧◩
8. dymk+s2[view] [source] [discussion] 2023-01-14 07:48:55
>>TheDon+E1
> But, it's still copyrighted by the original artist as long as they can show "This started as my image, and a machine made a rote mathematical transformation to it" because machine's making rote mathematical transformations cannot create new copyright.

Do you have evidence that this is actually what the courts have decided with respect to NNs?

◧◩
9. yazadd+t2[view] [source] [discussion] 2023-01-14 07:49:10
>>yazadd+W1
lol thinking about this more:

I understand people’s livelihoods are potentially at stake, but what a shame it would be if we find AGI, even consciousness, but have to shut it down because of a copyright dispute.

replies(4): >>readth+K2 >>jimnot+53 >>visarg+u3 >>Xelyne+jS
◧◩
10. limite+y2[view] [source] [discussion] 2023-01-14 07:49:38
>>TheDon+E1
So what happens if you put a painting into a mechanical grinder? Is the shapeless pile of dust still copyrighted work? I don’t think so.
replies(2): >>jimnot+c3 >>TheDon+E7
◧◩
11. visarg+C2[view] [source] [discussion] 2023-01-14 07:50:34
>>yazadd+W1
> 90%ish of a single input image

Oh, one image is enough to apply copyright as if it were a patent, to ban a process that makes original works most of the time?

The article's authors say it works as a "collage tool", dismissing the composition and layout of the image as unimportant elements. At the same time they forget that SD is changing textures as well, so it's a collage minus textures and composition?

Is there anything left to complain about? Unless, by luck of the draw, both layout and textures are very similar to a training image. But ensuring no close duplications are allowed should suffice.

Copyright should apply one by one, not in bulk. Each work they complain about should be judged on its own merits.

replies(3): >>manhol+94 >>yazadd+ja >>chongl+KS
12. synu+G2[view] [source] 2023-01-14 07:51:37
>>dr_dsh+(OP)
You could make the same argument that as long as you are using lossy compression you are unable to infringe on copyright.
replies(2): >>visarg+g4 >>8n4vid+15
◧◩◪◨
13. andyba+H2[view] [source] [discussion] 2023-01-14 07:51:58
>>akjetm+g2
I was with you right up until the final sentence.

How did "style" become "work"?

replies(2): >>limite+p3 >>WA+V3
◧◩◪
14. readth+K2[view] [source] [discussion] 2023-01-14 07:52:03
>>yazadd+t2
what if they shut us down because of a copyright dispute? :-)
replies(1): >>yazadd+I6
◧◩◪
15. jimnot+53[view] [source] [discussion] 2023-01-14 07:57:08
>>yazadd+t2
I think the result will be image sharing websites where you have to agree to have your image read into the model.

I think it is likely GitHub will do the same with Copilot.

replies(2): >>8n4vid+Q4 >>voakba+NY
◧◩◪
16. jimnot+c3[view] [source] [discussion] 2023-01-14 07:59:05
>>limite+y2
The owner of that Banksy painting certainly thinks so.
replies(1): >>limite+E3
◧◩
17. magnat+g3[view] [source] [discussion] 2023-01-14 07:59:23
>>yazadd+W1
And how is that different from gzip or base64, which can re-create the original image when given the appropriate input?
replies(2): >>bryanr+G4 >>yazadd+C7
◧◩◪◨
18. eega+l3[view] [source] [discussion] 2023-01-14 08:00:09
>>akjetm+g2
So, what you are saying is that it is illegal to paint in the style of another artist? I’m no lawyer, but I’m pretty sure that is completely legal as long as you don’t claim your paintings ARE from the other artist.
◧◩◪◨⬒
19. limite+p3[view] [source] [discussion] 2023-01-14 08:00:55
>>andyba+H2
That’s the key question of the lawsuit IMO!
◧◩◪◨
20. realus+r3[view] [source] [discussion] 2023-01-14 08:01:37
>>akjetm+g2
No it doesn't; it demonstrates that the model has abstract features and characteristics of this artist's work stored in it, not the work itself.

You can't bring back the training images no matter how hard you try.

◧◩
21. mirekr+s3[view] [source] [discussion] 2023-01-14 08:02:10
>>akjetm+C1
Can it? I thought you could copy up to 15 seconds verbatim without violation.
◧◩◪
22. visarg+u3[view] [source] [discussion] 2023-01-14 08:02:31
>>yazadd+t2
Didn't someone proclaim yesterday that generative models can't destroy anything worth protecting? It was about ChatGPT but the principle is the same.
23. whatev+w3[view] [source] 2023-01-14 08:02:48
>>dr_dsh+(OP)
This feels like the argument of a money launderer.
◧◩◪◨
24. limite+E3[view] [source] [discussion] 2023-01-14 08:03:45
>>jimnot+c3
The painting that has several cuts in about 25% of the surface area? I don’t think that constitutes a shapeless pile of dust.
replies(1): >>jimnot+S5
◧◩◪
25. visarg+L3[view] [source] [discussion] 2023-01-14 08:04:53
>>Salgat+Q1
Not to mention that it works by inverting noise. Different noise, different result. Let's recognise the important contribution of noise here.
◧◩◪◨⬒
26. WA+V3[view] [source] [discussion] 2023-01-14 08:07:05
>>andyba+H2
Because in some cases, adding a style prompt gives almost the original image: https://www.reddit.com/r/StableDiffusion/comments/wby0ob/it_...
replies(1): >>andyba+A9
27. anothe+Z3[view] [source] 2023-01-14 08:07:50
>>dr_dsh+(OP)
It's going to be very hard for them to argue against Stable Diffusion and not reach the conclusion that people looking at art are doing exactly what training the AI did.

You looked at my art; now I can use copyright against the copies in your brain.

replies(1): >>visarg+y4
◧◩
28. mbgerr+54[view] [source] [discussion] 2023-01-14 08:08:18
>>TheDon+E1
No. Humans decided to include artwork that they did not have any right to use as part of a training data set. This is about holding humans accountable for their actions.
replies(2): >>dymk+95 >>andyba+S9
◧◩
29. anothe+84[view] [source] [discussion] 2023-01-14 08:08:50
>>yazadd+W1
Great. Now the defence shows an artist that can recreate an image. Cool, now people who look at images get copyright suits filed against them for encoding those images in their heads.
replies(2): >>dylan6+Z4 >>smusam+QZ
◧◩◪
30. manhol+94[view] [source] [discussion] 2023-01-14 08:08:51
>>visarg+C2
But they are not original works, they are wholly derived works of the training data set. Take that data set away and the algorithm is unable to produce a single original pixel.

The fact that the derivation involves millions of works as opposed to a single one is immaterial for the copyright issue.

replies(8): >>realus+M5 >>forgot+a6 >>basch+nI >>willia+h11 >>smegge+dd1 >>rule72+SR1 >>bobbru+cZ1 >>rsuelz+b52
◧◩
31. visarg+g4[view] [source] [discussion] 2023-01-14 08:09:50
>>synu+G2
That's a huge understatement. 5 billion images to a model of 5GB: that's 1 byte per image. Let's see if one byte per image would constitute a copyright violation in fields other than neural networks.
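
As a back-of-the-envelope check (figures as quoted above, rounded; real sizes vary by model version):

    # ~5 billion training images vs ~5 GB of weights, per the figures
    # cited in this thread (rounded; actual sizes vary by SD version).
    images = 5_000_000_000
    model_bytes = 5 * 10**9

    print(model_bytes / images)  # 1.0 byte of model per training image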
replies(4): >>synu+w4 >>forgot+17 >>cf141q+Jb >>Xelyne+AV
◧◩
32. chii+r4[view] [source] [discussion] 2023-01-14 08:11:12
>>yazadd+W1
> In fairness, Diffusion is arguably a very complex entropy coding similar to Arithmetic/Huffman coding.

so, digits of pi anyone?

◧◩◪
33. synu+w4[view] [source] [discussion] 2023-01-14 08:11:54
>>visarg+g4
It will be interesting to see how they legally define the moment where compression stops being compression and starts being an original work.

If I train on one image I can get it right back out. Even two, maybe even a thousand? Not sure where the line would be between OK and not, but there will have to be some answer.
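
As a toy illustration of the one-image end of that spectrum (a "model" with one parameter per pixel, nothing like SD's actual architecture), gradient descent on reconstruction error just copies the image into the weights:

    import numpy as np

    # Degenerate overfitting: fit a parameter grid to a single 8x8
    # "image" by gradient descent on squared error. With one training
    # example, minimizing the loss simply memorizes the pixels.
    rng = np.random.default_rng(0)
    image = rng.random((8, 8))        # the lone training image
    weights = np.zeros_like(image)    # the entire "model"

    for _ in range(2000):
        weights -= 0.1 * (weights - image)  # step along -gradient

    print(np.abs(weights - image).max())  # effectively zero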

replies(1): >>Xelyne+MV
◧◩
34. visarg+y4[view] [source] [discussion] 2023-01-14 08:12:09
>>anothe+Z3
By forcing the AI community to develop technology to avoid replication of training examples, they might expose every bit of human copying as well. Whatever can detect copyright violations in AI can be applied to human works too.
replies(1): >>Fillig+bF
◧◩◪
35. bryanr+G4[view] [source] [discussion] 2023-01-14 08:13:29
>>magnat+g3
Well, I guess it wouldn't be different; only there aren't any companies zipping up millions of images and then offering people the chance to get those images by putting in the text prompt that recreates them, all without paying any fees to the artists whose images were used.
replies(1): >>rysert+X9
◧◩◪◨
36. 8n4vid+Q4[view] [source] [discussion] 2023-01-14 08:15:47
>>jimnot+53
maybe a fair price to pay for free repo hosting. wouldn't want my private repos being used for training though
◧◩◪
37. dylan6+Z4[view] [source] [discussion] 2023-01-14 08:17:09
>>anothe+84
Just because I look at an image does not mean that I can recreate it. Storing it in the training data means the AI can recreate it.

There's a world of difference that you are just writing off.

replies(3): >>realus+q5 >>XorNot+87 >>turtle+lU
◧◩
38. 8n4vid+15[view] [source] [discussion] 2023-01-14 08:17:17
>>synu+G2
if it's sufficiently lossy, yeah. don't know where you draw the line tho. maybe similar to fair use video clips.
replies(1): >>Xelyne+qW
◧◩◪
39. dymk+95[view] [source] [discussion] 2023-01-14 08:18:55
>>mbgerr+54
“Did they have a right to use publicly posted images” is up for the courts to decide
replies(1): >>cudgy+zh
◧◩◪◨
40. realus+q5[view] [source] [discussion] 2023-01-14 08:20:48
>>dylan6+Z4
> storing it in the training data means the AI can recreate it.

No it doesn't; it means that abstract facts related to this image might be stored.

replies(2): >>dylan6+98 >>bluefi+RT
◧◩
41. vnoril+r5[view] [source] [discussion] 2023-01-14 08:20:50
>>yazadd+W1
Storing copies of training data is pretty much the definition of overfitting, right?

The data must be encoded with various levels of feature abstraction for this stuff to work at all. Much like humans learning art, though devoid of the input that makes human art interesting (life experience).

I think a more promising avenue for litigating AI plagiarism is to identify that the model understands some narrow slice of the solution space that contains copyrighted works, but is much weaker when you try to deviate from it. Then you could argue that the model has probably used that distinct work rather than learned a style or a category.

replies(1): >>lolind+t51
◧◩◪
42. AlotOf+C5[view] [source] [discussion] 2023-01-14 08:22:47
>>Salgat+Q1
This is very much a 'color of your bits' topic, but I'm not sure why the internal representation matters. It's pretty trivial to recreate famous works like the Mona Lisa or Starry Night or Monet's Water Lily Pond. Obviously some representation of the originals exist inside the model+prompt. Why wouldn't that apply to other images in the training sets?
replies(5): >>XorNot+w7 >>Kim_Br+lp >>Fillig+Fr >>huggin+jM >>derang+8h1
43. groest+I5[view] [source] 2023-01-14 08:23:52
>>dr_dsh+(OP)
> It is, in short, a 21st-cen­tury col­lage tool.

Interesting that they mention collages. IANAL, but it was my impression that collages count as transformative works when they incorporate many different pieces and only small parts of each original, which would undercut that framing. Their compression argument seems more convincing.

replies(1): >>Fillig+KE
◧◩◪◨
44. realus+M5[view] [source] [discussion] 2023-01-14 08:24:42
>>manhol+94
The training data set is indeed mandatory, but that doesn't make the resulting model a derivative in itself. In fact the training is specifically designed to avoid reproducing its inputs.
replies(1): >>IncRnd+U6
◧◩◪◨⬒
45. jimnot+S5[view] [source] [discussion] 2023-01-14 08:25:25
>>limite+E3
So what % does?
replies(1): >>limite+tF1
◧◩
46. hgomer+66[view] [source] [discussion] 2023-01-14 08:27:45
>>TheDon+E1
Some years ago I had an idea to have a method of file sharing with strong plausible deniability from the sharer.

The idea, in stage one, was to split a file into chunks and XOR those with freshly generated random chunks (equivalent to a one-time pad); those chunks, as well as the created random chunks, then got shared around the network, with nobody hosting both parts of a pair.

The next stage is that future files inserted into the network would not create new random chunks but randomly use existing chunks already in the network. The result is a distributed store of chunks each of which is provably capable of generating any other chunk given the right pair. The correlations are then stored in a separate manifest.

It feels like such a system is some kind of entropy coding system. In the limit the manifest becomes the same size as the original data. At the same time though, you can prove that any given chunk contains no information. I love thinking about how the philosophy of information theory interacts with the law.
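
A minimal sketch of stage one, assuming fixed-size chunks (function names here are made up):

    import os

    # Each data chunk is XORed with a fresh random pad. Either half
    # alone is indistinguishable from random noise, yet XORing a pair
    # back together restores the original chunk exactly.
    CHUNK = 4096

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def insert_file(data: bytes):
        pairs = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            pad = os.urandom(len(chunk))          # one-time pad
            pairs.append((xor(chunk, pad), pad))  # host halves separately
        return pairs

    def fetch_file(pairs) -> bytes:
        return b"".join(xor(masked, pad) for masked, pad in pairs)

    payload = os.urandom(10_000)
    assert fetch_file(insert_file(payload)) == payload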

replies(1): >>TheDon+L6
◧◩◪◨
47. forgot+a6[view] [source] [discussion] 2023-01-14 08:28:07
>>manhol+94
If I were to take the first word from a thousand books and use it to write my own, would I be guilty of copyright violations?
replies(1): >>yazadd+C8
◧◩◪◨
48. yazadd+I6[view] [source] [discussion] 2023-01-14 08:34:49
>>readth+K2
Seriously!!!

I didn’t say it cuz I didn’t think it would resonate, but it’s a whole new world we are quickly entering.

◧◩◪
49. TheDon+L6[view] [source] [discussion] 2023-01-14 08:35:09
>>hgomer+66
I think this touches on the core mismatch between the legal perspective and technical perspective.

Yes, on a technical level, those chunks are random data. On the legal side, however, those chunks are illegal copyright infringement because that is their intent, and there is a process that allows the intent to happen.

I can't really say it better than this post does, so I highly recommend reading it: https://ansuz.sooke.bc.ca/entry/23

replies(2): >>XorNot+Y7 >>hgomer+nY2
◧◩◪◨⬒
50. IncRnd+U6[view] [source] [discussion] 2023-01-14 08:36:12
>>realus+M5
Go to stablediffusionweb.com and enter "a person like biden" into the box. You will see a picture exactly like President Biden. That picture will have been derived from the trained images of Joe Biden. That cannot be in dispute.
replies(3): >>realus+F7 >>willia+J51 >>rsuelz+q52
◧◩◪
51. forgot+17[view] [source] [discussion] 2023-01-14 08:36:49
>>visarg+g4
The distribution of the bytes matters a bit here. In theory the model could be overtrained on one copyrighted work such that it is almost perfectly preserved within the model.
replies(1): >>synu+78
52. codefl+57[view] [source] 2023-01-14 08:37:43
>>dr_dsh+(OP)
You seem to be under the impression that SD can only generate original art. However, it will literally recreate existing paintings for you if you just prompt it with the title. Identical composition and everything.
replies(3): >>Fillig+5F >>Aerroo+Ru1 >>gdubs+xL4
◧◩◪◨
53. XorNot+87[view] [source] [discussion] 2023-01-14 08:38:28
>>dylan6+Z4
No, it means there is a 512-bit number you can combine with the training data to reproduce a reasonable though not exact likeness (attempts to use SD and others as compression algorithms show they're pretty bad at it, because while they can get "similar" they'll outright confabulate details in a plausible-looking way - e.g. redrawing the streets of San Francisco in images of the Golden Gate Bridge).

Which of course then arrives at the problem: the original data plainly isn't stored in a byte-exact form, and you can only recover it by providing an astoundingly specific input string (the 512-bit latent space vector). But that's not data which is contained within Stable Diffusion. It's equivalent to trying to sue a compression codec because a specific archive contains a copyrighted image.

replies(2): >>yazadd+48 >>danari+RI
54. bsder+d7[view] [source] 2023-01-14 08:39:09
>>dr_dsh+(OP)
> That’s going to be hard to argue. Where are the copies?

If you take that tack, I'll go one step further back in time and ask "Where is your agreement from the original author who owns the copyright that you could use this image in the way you did?"

The fact that there is suddenly a new way to "use an image" (input to a computer algorithm) doesn't mean that copyright magically doesn't also apply to that usage.

A canonical example is the fact that television programs like "WKRP in Cincinnati" can't use the music licenses from the television broadcast if they want to distribute a DVD or streaming version--the music has to be re-licensed.

replies(1): >>huggin+cY
◧◩◪
55. IncRnd+m7[view] [source] [discussion] 2023-01-14 08:40:49
>>limite+e2
"Copyright currently protects poetry just like it protects any other kind of writing or work of authorship. Poetry, therefore, is subject to the same minimal standards for originality that are used for other written works, and the same tests determine whether copyright infringement has occurred." [1]

[1] https://scholarship.law.vanderbilt.edu/vlr/vol58/iss3/13/

◧◩◪◨
56. XorNot+w7[view] [source] [discussion] 2023-01-14 08:42:38
>>AlotOf+C5
Because you're silently invoking additional data (the prompt + noise seed), which is not present in the training weights. You have to supply the prompt + noise seed for any given output.

An MPEG codec doesn't contain every movie in the world just because it could represent them if given the right file.

The white light coming off a blank canvas also doesn't contain a copy of the Mona Lisa which will be revealed once someone obscures some of the light.

replies(1): >>ifdefd+Zi
◧◩◪
57. yazadd+C7[view] [source] [discussion] 2023-01-14 08:43:38
>>magnat+g3
That’s my point, Diffusion[1] does seem to be “just like” gzip or base64.

And it would be illegal for me to sell or distribute zipped copies of images without the copyright holder’s consent. Similarly there might be an argument for why Diffusion[1] specifically can’t be built with copyrighted images.

[1] which is just one part of something like Stable Diffusion

replies(1): >>astran+j8
◧◩◪
58. TheDon+E7[view] [source] [discussion] 2023-01-14 08:43:56
>>limite+y2
Maybe?

If you take a bad paper shredder that, say, shreds a photo into large re-usable chunks, run the photo through that, and tape the large re-usable chunks back together, you have a photo with the same copyright as before.

If you tape them together in a new creative arrangement, you might apply enough human creativity to create a new copyrighted work.

If you grind the original to dust, and then have a mechanical process somehow mechanically re-arrange the pieces back into an image without applying creativity, then the new mechanically created arrangement would, I suspect, be a derived work.

Of course, such a process doesn't really exist, so for the "shapeless dust" question it's pretty pointless to think about. However, Stable Diffusion is grinding images down into neural networks, and then, without a significant amount of human creativity involved, creating images reconstituted from that dust.

Perhaps the prompt counts as human creativity, but that seems fairly unlikely. After all, you can give it a prompt of 'dog' and get reconstituted dust; that hardly seems like it clears the bar.

Perhaps the training process somehow injected human creativity, but that also seems difficult to argue; it's an algorithm.

◧◩◪◨⬒⬓
59. realus+F7[view] [source] [discussion] 2023-01-14 08:44:00
>>IncRnd+U6
Just because it generates an image like Biden does not make it a derivative.

You can draw Biden yourself if you're talented, and it's not considered a derivative of anything.

replies(3): >>IncRnd+h8 >>yazadd+u9 >>bluefi+wT
◧◩◪◨
60. XorNot+Y7[view] [source] [discussion] 2023-01-14 08:47:25
>>TheDon+L6
Except you've a heckin' problem with Stable Diffusion because you have to argue that the intent is to steal the copyright by copying already existing artworks.

But that's not what people use Stable Diffusion for: people use Stable Diffusion to create new works which don't previously exist as that combination of colors/bytes/etc.

Artists don't have copyright on their artistic style, process, technique or subject matter - only on the actual artwork they output or reasonable similarities. But "reasonable similarity" covers exactly that intent - an intent to simply recreate the original.

People keep talking about copyright, but no one's trying to rip off actual existing work. They're doing things like "Pixar style, ultra detailed gundam in a flower garden". So you're rocking up in court saying "the intent is to steal my client's work" - but where is the client's line of gundam horticultural representations? It doesn't exist.

You can't copyright artistic style, only actual output. Artists are fearful that the ability to emulate style means commissions will dry up (this is true), but you've never had copyright protection over style, and it's not even remotely clear how that would work (and, IMO, it would be catastrophic if it did: there's exactly one group of megacorps who would then be in a position to sue everyone, because try defining "style" in a legal sense).

replies(1): >>TheDon+om
◧◩◪◨⬒
61. yazadd+48[view] [source] [discussion] 2023-01-14 08:48:57
>>XorNot+87
> It's equivalent to trying to sue a compression codec because a specific archive contains a copyrighted image.

This is the most salient point in this whole HN thread!

You can’t sue Stable Diffusion or the creators of it! That just seems silly.

But (I don’t know, I’m not a lawyer) there might be an argument for suing an instance of Stable Diffusion and the creators of that instance.

I haven’t picked a side of this debate yet, but it has already become a fun one to watch.

replies(2): >>astran+u8 >>techdr+qv
◧◩◪◨
62. synu+78[view] [source] [discussion] 2023-01-14 08:49:47
>>forgot+17
You can see this with the Mona Lisa. You can get pretty close reproductions back by asking for it (or at least you could in one of the iterations). Likely it overfit due to it being such a ubiquitous image.
◧◩◪◨⬒
63. dylan6+98[view] [source] [discussion] 2023-01-14 08:49:50
>>realus+q5
The pedantry gets tiring. If the AI can't recreate it exactly, it can recreate a likeness that is compelling enough that the average person would think it was the same. If it can't now, it will as it gets better. That's the point of using the training data.
replies(3): >>astran+r8 >>realus+O8 >>hhjink+9i
◧◩◪◨⬒⬓⬔
64. IncRnd+h8[view] [source] [discussion] 2023-01-14 08:51:13
>>realus+F7
There is no need for rhetorical games. The actual issue is that Stable Diffusion does create derivatives of copyrighted works. In some cases the produced images contain pixel level details from the originals. [1]

[1] https://arxiv.org/pdf/2212.03860.pdf

replies(1): >>realus+G8
◧◩◪◨
65. astran+j8[view] [source] [discussion] 2023-01-14 08:51:21
>>yazadd+C7
A lossy compressor isn't just like a lossless compressor. Especially not one that has ~2 bytes for each input image.
replies(3): >>synu+69 >>yazadd+a9 >>Xelyne+IS
◧◩◪◨⬒⬓
66. astran+r8[view] [source] [discussion] 2023-01-14 08:52:36
>>dylan6+98
That is not the point of using the training data. It's specifically trained to not do that.

See https://openai.com/blog/dall-e-2-pre-training-mitigations/ "Preventing Image Regurgitation".

replies(1): >>ghaff+XZ
◧◩◪◨⬒⬓
67. astran+u8[view] [source] [discussion] 2023-01-14 08:53:33
>>yazadd+48
You can't (successfully) sue the creators of Stable Diffusion because they're an academic group in Germany, a country that has an explicit allowance in copyright law for training non-commercial models.
◧◩◪◨⬒
68. yazadd+C8[view] [source] [discussion] 2023-01-14 08:54:29
>>forgot+a6
Words have a special carve-out in copyright law / precedent. So much so that a whole other category of intellectual property, called trademarks, exists to protect special words.

But back to your point, “if you were to take the first sentence from a thousand books and use it in your own book”: then yes, based on my understanding (I am not a lawyer) of copyright, you would be in violation of IP laws.

replies(1): >>basch+YI
◧◩◪◨
69. astran+D8[view] [source] [discussion] 2023-01-14 08:54:49
>>akjetm+g2
Or it means their style is so easy to recognize that you can see it even when it doesn't exist.

The most common example of this (Greg Rutkowski) is not in StableDiffusion's training set.

◧◩◪◨⬒⬓⬔⧯
70. realus+G8[view] [source] [discussion] 2023-01-14 08:55:14
>>IncRnd+h8
> The actual issue is that Stable Diffusion does create derivatives of copyrighted works.

Nothing points to that. In fact, even on this website they had to lie about how Stable Diffusion actually works, which is maybe a sign that their argument isn't really solid.

> [1] https://arxiv.org/pdf/2212.03860.pdf

You realize those are considered defects of the model, right? Sure, this model isn't perfect and will be improved.

replies(1): >>IncRnd+i9
◧◩◪◨⬒⬓
71. realus+O8[view] [source] [discussion] 2023-01-14 08:56:40
>>dylan6+98
> If the AI can't recreate it exactly, it can recreate a likeness that is compelling enough that the average person would think it was the same

That's the opposite of this image model's goal. Sure, you might find other types of research models that are meant to do that, but that's not Stable Diffusion and the like.

◧◩◪◨⬒
72. synu+69[view] [source] [discussion] 2023-01-14 08:59:58
>>astran+j8
How many bytes make it an original work vs a compressed copy?
replies(2): >>astran+Rb >>bluebo+mt1
◧◩◪◨⬒
73. yazadd+a9[view] [source] [discussion] 2023-01-14 09:00:34
>>astran+j8
I agree with you. My intuition is also that SD itself is not a violation of copyright.

That said it can sometimes be in violation of copyright if it creates a specific image that is “too close to another original” (just like a human would be in violation even if they never previously saw that image).

But the above is just my intuition (and possibly yours); that doesn’t mean a lawyer couldn’t make the argument that it’s a “good enough lossy compression - just like jpeg but smaller” and therefore “contains the images in just 2 bytes”.

That lawyer may fail to win the argument, but there is a chance that they do win it! Especially as researchers keep making Diffusion and SD models better and better at being compression algos (which is a topic people are actively working on).

◧◩◪◨⬒⬓⬔⧯▣
74. IncRnd+i9[view] [source] [discussion] 2023-01-14 09:01:10
>>realus+G8
> You realize those are considered defects of the model right? Sure, this model isn't perfect.

You can call copying of input a defect, but why are you simultaneously arguing that it doesn't occur?

replies(1): >>realus+2a
◧◩◪◨⬒⬓⬔
75. yazadd+u9[view] [source] [discussion] 2023-01-14 09:03:30
>>realus+F7
Correction: if you draw a copy of Biden and it happens to overlap enough with someone’s copyrighted drawing or image of Biden, you did create a derivative (whether you knew it or not).
replies(1): >>realus+bc
76. rysert+x9[view] [source] 2023-01-14 09:03:58
>>dr_dsh+(OP)
I would agree that we're acting like hypocrites here. But unlike Stable Diffusion, GitHub didn't release their model, so it's extremely hard to know what's going on inside Copilot. On the other hand, we have the model of Stable Diffusion, and we can see whether or not it has memorized copyrighted images.
◧◩◪◨⬒⬓
77. andyba+A9[view] [source] [discussion] 2023-01-14 09:04:18
>>WA+V3
And yet nobody has managed to demonstrate reconstruction of a large enough section of a work that is still under copyright to prove the point.

The only things so far discovered are either a) older public domain works nearly fully reproduced, b) small fragments of newer works, or c) "likenesses".

◧◩◪
78. andyba+S9[view] [source] [discussion] 2023-01-14 09:06:44
>>mbgerr+54
"right" in the informal sense or in some legal sense?
replies(1): >>mbgerr+1m
◧◩◪◨
79. rysert+X9[view] [source] [discussion] 2023-01-14 09:08:05
>>bryanr+G4
Search engines do that.
replies(1): >>bryanr+ug
◧◩◪◨⬒⬓⬔⧯▣▦
80. realus+2a[view] [source] [discussion] 2023-01-14 09:08:35
>>IncRnd+i9
I don't call these defects copying either, but overfitting characteristics. Usually they are there because there's a massive number of near-identical images in the training data.

It's both undesirable and not relevant to this kind of lawsuit.

◧◩◪
81. yazadd+ja[view] [source] [discussion] 2023-01-14 09:12:29
>>visarg+C2
> Oh, one image is enough to apply copyright as if it were a patent, to ban a process that makes original works most of the time?

The law can do whatever its writers want. The law is mutable, so the answer to your question is “maybe”.

Maybe SD will get outlawed for copyright reasons on a single image. The law and the courts have done sillier things.

replies(1): >>ghaff+ZS
◧◩◪
82. cf141q+Jb[view] [source] [discussion] 2023-01-14 09:31:00
>>visarg+g4
Another thing worth referencing in this context might be hashing. If a few bytes per image are copyright infringement, then likely so is publishing checksums.
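
For scale, this is what "a few bytes per image" looks like when it really is a published fingerprint (file paths here are placeholders):

    import hashlib

    # A truncated hash also reduces each work to a handful of bytes,
    # and publishing it is clearly not distributing the image.
    def fingerprint(path: str, n: int = 4) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).digest()[:n].hex()  # n bytes per image

    for path in ["a.png", "b.png"]:  # placeholder files
        print(path, fingerprint(path))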
replies(2): >>synu+Ff >>Xelyne+0W
◧◩◪◨⬒⬓
83. astran+Rb[view] [source] [discussion] 2023-01-14 09:33:32
>>synu+69
Usually judges would care more about where the bytes came from than how many of them there are.

Since SD is trained by gradient updates against several different images at the same time, it of course never copies any image bits straight into itself. Since it's a latent-diffusion model, actual "image"-ness is limited to the image encoder (VAE), so any fractional bits would be in there if you want to look.

The text encoder (LAION OpenCLIP) does have bits from elsewhere copied straight into it to build the tokens list.

https://huggingface.co/stabilityai/stable-diffusion-2-1/raw/...

replies(2): >>synu+vf >>derang+841
◧◩◪◨⬒⬓⬔⧯
84. realus+bc[view] [source] [discussion] 2023-01-14 09:36:41
>>yazadd+u9
Is that really how copyright law works? Drawing something similar independently is considered a derivative even if there's no link to it?

It's bad news for art websites themselves if that's the case...

replies(1): >>techdr+Yt
◧◩
85. Last5D+of[view] [source] [discussion] 2023-01-14 10:12:48
>>TheDon+E1
This surely can't be the case, right? If it was, then what's stopping me from taking any possible byte sequence and applying my copyright to it?

I could always show that there exists some function f that produces said byte sequence when applied to my copyrighted material.

Can I sue Microsoft because the entire Windows 11 codebase is just one "rote mathematical transformation" away from the essay I wrote in elementary school?
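
Concretely, that "some function f" is the one-time-pad trick that comes up elsewhere in this thread: for any two byte strings there is a pad mapping one onto the other (strings below are toy data):

    # pad = source XOR target defines a rote function f with
    # f(source) == target, for ANY pair of byte strings.
    source = b"my elementary school essay"
    target = b"first bytes of windows.iso"[: len(source)]

    pad = bytes(a ^ b for a, b in zip(source, target))
    f = lambda data: bytes(a ^ b for a, b in zip(data, pad))

    assert f(source) == target  # "derived" by pure math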

replies(1): >>TheDon+ok
◧◩◪◨⬒⬓⬔
86. synu+vf[view] [source] [discussion] 2023-01-14 10:14:21
>>astran+Rb
The important distinction, then, is using another program or device to analyze the bits without copying them, something that takes its own new impression? Like using a camera?
replies(1): >>astran+Ug
◧◩◪◨
87. synu+Ff[view] [source] [discussion] 2023-01-14 10:15:50
>>cf141q+Jb
What is a 1080p MP4 video of a film if not simply a highly detailed, irreversible but guaranteed unique checksum of that original content?
replies(1): >>cf141q+4y
◧◩◪◨⬒
88. bryanr+ug[view] [source] [discussion] 2023-01-14 10:23:57
>>rysert+X9
good point, but didn't Google Image search lose some case and have to change their behavior?
replies(1): >>rule72+6T1
◧◩◪◨⬒⬓⬔⧯
89. astran+Ug[view] [source] [discussion] 2023-01-14 10:27:31
>>synu+vf
Well, theoretically more like a vague memory of it or taking notes on it.
◧◩◪◨
90. cudgy+zh[view] [source] [discussion] 2023-01-14 10:36:12
>>dymk+95
Pretty sure that’s already decided. Publicly played movies and music are not available to be used. Why would the same not apply to posted images?
replies(2): >>dymk+WH >>UncleE+1P
◧◩◪◨⬒⬓
91. hhjink+9i[view] [source] [discussion] 2023-01-14 10:45:31
>>dylan6+98
Why does this argument apply to an Artificial Intelligence, but not a human one? A human is not breaking copyright just by being able recreate a copyrighted work they've studied.
replies(1): >>ghaff+gW
◧◩◪◨⬒
92. ifdefd+Zi[view] [source] [discussion] 2023-01-14 10:57:19
>>XorNot+w7
OK so let me encrypt a movie and distribute that. Then you tell people they need to invoke additional data to watch the movie. Also give some hints (try the movie title lol).
replies(1): >>XorNot+Jz
◧◩
93. madaxe+4k[view] [source] [discussion] 2023-01-14 11:08:30
>>yazadd+W1
In that vein, surely MD5 hashes should also be copyrighted, as they are derived from a work.
replies(1): >>Xelyne+uS
◧◩◪
94. TheDon+ok[view] [source] [discussion] 2023-01-14 11:11:27
>>Last5D+of
The law doesn't care about technical tricks. It cares about how you got the bytes and what humans think of them.

Sure, the Windows 11 codebase is in pi somewhere if you go far enough. Sure, pi is a non-copyrightable fact of nature. That doesn't mean the Windows codebase is _actually_ in pi legally, just that it technically is.

The law does not care about weird gotchas like you describe.

I recommended reading this to a sibling comment, and I'll recommend it to you too: https://ansuz.sooke.bc.ca/entry/23

Yes, copyright law has obviously irrational results if you start trying to look at it only from a technical "but information is just 1s and 0s, you can't copyright 1s and 0s" perspective. The law does not care.

Which is why we have to think about the high level legal process that stable diffusion does, not so much the actual small technical details like "can you recover images from the neural net" or such.

replies(2): >>derang+jd1 >>93po+AF1
◧◩◪◨
95. mbgerr+1m[view] [source] [discussion] 2023-01-14 11:27:28
>>andyba+S9
Legal
replies(1): >>andyba+Jy
◧◩◪◨⬒
96. TheDon+om[view] [source] [discussion] 2023-01-14 11:30:08
>>XorNot+Y7
> because you have to argue that the intent is to steal the copyright by copying already existing artworks

Copyright infringement can happen without intending to infringe copyright.

Various music copyright cases start with "Artist X sampled some music from artist Y, thinking it was transformative and fair use". The courts, in some of these cases, have found something the artist _intended_ to be transformative to in fact be copyright infringement.

> You can't copyright artistic style, only actual output

You copyright outputs, and then works derived from those outputs are potentially covered by that copyright. Stable Diffusion's outputs are clearly derived from the training set, basically by definition of what neural networks are.

It's less clear they're definitely copyright-infringing derivative works, but it's far less clearcut than how you're phrasing it.

◧◩◪◨
97. Kim_Br+lp[view] [source] [discussion] 2023-01-14 12:04:14
>>AlotOf+C5
Longer term, by analogy, it will then of course turn into a "what color is your neural net" topic.

Which runs into some very interesting historical precedents.

((I wonder if there's a split between people who think AI emancipation might happen this century versus people who think that such a thing is silly to contemplate))

◧◩◪◨
98. Fillig+Fr[view] [source] [discussion] 2023-01-14 12:28:11
>>AlotOf+C5
It applies to these specific images because there were thousands and thousands of copies in the training set. That’s not true for newer works.
replies(1): >>zowie_+4P
◧◩◪◨⬒⬓⬔⧯▣
99. techdr+Yt[view] [source] [discussion] 2023-01-14 12:48:43
>>realus+bc
No, that’s not how it works, at least in many countries. Unlike patents, “parallel creation” is allowed. This was fought out in case law over photography decades ago: photographers would take images of the same subject, then someone else would, and they might incidentally capture a similar image for lots of reasons. Before ubiquitous photography in our pockets, when you had to have expensive equipment or carefully control the lighting in a portraiture studio to get great results, it happened, and people sued, as those with money to spare for lawyers are wont to do. Precedent has thus been established for much of this. You don’t see it a lot outside photography, but it’s not a new thing for art copyright law, and I think the need for the user to provide their own input and get different outcomes, outside of extremely sophisticated prompt editing, will be a significant fact in their favour.
◧◩◪◨⬒⬓
100. techdr+qv[view] [source] [discussion] 2023-01-14 13:00:44
>>yazadd+48
Exactly. The quarrel here is between the users of Stable Diffusion and the artists whose works are potentially being infringed. Some of those users are deliberately, legally speaking with intent (prompt crafting to get a specific output demonstrates clear intent), trying to use Stable Diffusion to produce images that are highly derivative of another artist’s work, and which may or may not be declared legally infringing.

You can’t sue Canon for helping a user take better infringing copies of a painting, nor can you sue Apple or Nikon or Sony or Samsung. You can sue the user making an infringing image, not the tools they used to make it; the tools have no mens rea.

◧◩◪◨⬒
101. cf141q+4y[view] [source] [discussion] 2023-01-14 13:24:16
>>synu+Ff
I think this is overstretching it. That would be a checksum that can be parsed by humans and contains artistic value that serves as the basis for claims to copyright. An actual checksum has no artistic value in itself and can't reproduce the original work.

Which is why this is framed as compression: it implies that fundamentally SD makes copies instead of (re)creating art. Leaving aside the issue of recreating forgeries of existing works, using the training data for the creation of new pieces should be well covered inside the bounds of appropriation. Demanding anything more than filtering the output of SD for 1:1 reproductions of the training data is really pushing it.

edit: Checksums aren't necessarily unique, btw. See "hash collisions".

replies(1): >>synu+0F
◧◩◪◨⬒
102. andyba+Jy[view] [source] [discussion] 2023-01-14 13:31:01
>>mbgerr+1m
Can you clarify? My understanding is that it's very unclear whether there are any legal issues (in most jurisdictions) in scraping for training.

Obviously some fairly reputable organisations and individuals are moderately confident that there aren't; otherwise they wouldn't have done it.

replies(1): >>Xelyne+IU
◧◩◪◨⬒⬓
103. XorNot+Jz[view] [source] [discussion] 2023-01-14 13:40:21
>>ifdefd+Zi
If you distribute a random byte stream, and someone uses that as a one time pad to encrypt a movie, then are you distributing the movie?

The answer is of course not, and the same principle applies if someone uses Stable Diffusion to find a latent space encoding for a copyrighted image (the 231 byte number - had to go double-check what the grid size actually is).

replies(1): >>ifdefd+641
◧◩
104. street+kA[view] [source] [discussion] 2023-01-14 13:47:21
>>TheDon+E1
The only problem is that computers (i.e., most computers) cannot really generate random numbers.
◧◩
105. Fillig+KE[view] [source] [discussion] 2023-01-14 14:30:12
>>groest+I5
Compression down to two bytes per image?

You run into the pigeonhole argument. That level of compression can only work if there are fewer than about seventy thousand different images in existence, total, since two bytes can distinguish at most 2^16 = 65,536 cases.

Certainly there’s a deep theoretical equivalence between intelligence and compression, but this scenario isn’t what anyone means by “compression” normally.

replies(1): >>Xelyne+EX
◧◩◪◨⬒⬓
106. synu+0F[view] [source] [discussion] 2023-01-14 14:32:54
>>cf141q+4y
Overfitting seems like a fuzzy area here. I could train a model on one image that could consistently produce an output no human could tell apart from the original. And of course, shades of gray from there.

Regarding your edit, what are the chances of a "hash collision" where the hash is two MP4 files for two different movies? Seems wildly astronomical... impossible, even? That's why this hash method is so special, plus the built-in preview feature you can use to validate your hash against the source material, even without access to the original.

replies(1): >>cf141q+OT
◧◩
107. Fillig+5F[view] [source] [discussion] 2023-01-14 14:33:34
>>codefl+57
I’m curious. Can you give an example of that happening for a painting that’s still in copyright?
◧◩◪
108. Fillig+bF[view] [source] [discussion] 2023-01-14 14:34:21
>>visarg+y4
I’m afraid we won’t like the outcome.
109. locuto+jG[view] [source] 2023-01-14 14:44:17
>>dr_dsh+(OP)
> That’s going to be hard to argue. Where are the copies?

Discovery will show exactly what the base images for training are. You can take the view that the outputs are derivative works.

I don't think the mechanism is going to shield the violation, and frankly it shouldn't.

License your source material for the purpose and do it right. Doesn't everyone know it's wrong to steal?

replies(1): >>miohta+RG
◧◩
110. miohta+RG[view] [source] [discussion] 2023-01-14 14:48:33
>>locuto+jG
People in art school also practice by studying existing art and images.
replies(1): >>thethi+kH
◧◩◪
111. thethi+kH[view] [source] [discussion] 2023-01-14 14:53:11
>>miohta+RG
I think what’s clear is that this is an unprecedented type of use. I’m really interested in seeing how the courts rule on this one as it has wide implications for the AI era.
replies(2): >>joshsp+NH >>speled+tI
◧◩◪◨
112. joshsp+NH[view] [source] [discussion] 2023-01-14 14:56:54
>>thethi+kH
If not done extremely carefully, it also has wide implications for human artists
◧◩◪◨⬒
113. dymk+WH[view] [source] [discussion] 2023-01-14 14:58:21
>>cudgy+zh
What court case set the precedent that you can’t train a neural network on publicly posted movies and audio?
replies(1): >>Xelyne+KT
◧◩◪◨
114. basch+nI[view] [source] [discussion] 2023-01-14 15:01:35
>>manhol+94
If I take a million copyrighted images from magazines, cut them with scissors, and make a single collage, I would expect the resulting image to be fair use. Fair use is an affirmative defense, like self-defense, where you justify your infringement.

People are treating this like it's a binary technical decision. Either it is or isn't a violation. Reality is that things are spectrums and judges judge. SD will likely be treated like a remix that sampled copyrighted work, but just a tiny bit of each work, and sufficiently transformed it to create a new work.

replies(1): >>chongl+vT
◧◩◪◨
115. speled+tI[view] [source] [discussion] 2023-01-14 15:02:19
>>thethi+kH
Because this use is unprecedented, as you say, it's clear that the law wasn't written with this use case in mind. The more interesting question in my mind is what we think the new law should be, rather than what the courts happen to make of the existing law. I.e., I think the answer should come from politicians not from judges.
◧◩◪◨⬒
116. danari+RI[view] [source] [discussion] 2023-01-14 15:04:51
>>XorNot+87
> It's equivalent to trying to sue a compression codec because a specific archive contains a copyrighted image.

That's plainly untrue, as Stable Diffusion is not just the algorithm, but the trained model—trained on millions of copyrighted images.

replies(2): >>yazadd+Z01 >>anothe+GQ4
◧◩◪◨⬒⬓
117. basch+YI[view] [source] [discussion] 2023-01-14 15:06:02
>>yazadd+C8
I doubt it would be a violation.

Specifically fair use #3 "the amount and substantiality of the portion used in relation to the copyrighted work as a whole."

A sentence being a copyright violation would make every book review in the world illegal.

◧◩◪◨
118. huggin+jM[view] [source] [discussion] 2023-01-14 15:31:42
>>AlotOf+C5
>It's pretty trivial to recreate famous works like the Mona Lisa or Starry Night or Monet's Water Lily Pond.

A recreation of a piece of art does not mean a copy, I've personally seen hundreds of recreations of Edvard Munch's 'The Scream', all of them perfectly legal.

Even in a massively overtrained model, it is practically impossible to create a 1:1 copy of a piece of art the model was trained upon.

And of course that would be a pointless exercise to begin with; why would anyone want to generate 1:1 copies (or anything near that) of existing images?

The whole 'magic' of Stable Diffusion is that you can create new works of art in the combined styles of art, photography etc that it has been trained on.

replies(1): >>AlotOf+2b1
119. jrm4+gN[view] [source] 2023-01-14 15:38:06
>>dr_dsh+(OP)
This is a gross misunderstanding of "copyright"? The model doesn't need to "contain" the images any more than a Harry Potter knockoff needs to "contain" huge chunks of text from the originals.
replies(1): >>ghaff+ER
◧◩◪◨⬒
120. UncleE+1P[view] [source] [discussion] 2023-01-14 15:51:34
>>cudgy+zh
If you post a song on your website and I listen to it am I violating your copyright?

If my parrot recites your song after hearing my alleged infringement, I record its performance and post it on YouTube is that infringement?

Last one: if I use the song from your website to train a song recognition AI, is that infringement?

replies(1): >>Xelyne+3U
◧◩◪◨⬒
121. zowie_+4P[view] [source] [discussion] 2023-01-14 15:51:42
>>Fillig+Fr
That's not true. As an example of a more recent copyright-protected work that Stability AI consistently reproduces fairly faithfully, I invite you to try out the prompt "bloodborne box art".
◧◩
122. ghaff+ER[view] [source] [discussion] 2023-01-14 16:12:56
>>jrm4+gN
It depends. If names are different, character and plot details differ, etc., a book about students at a school for wizards battling great evil may be a not particularly imaginative rip-off and may even invite litigation if it's too close, but I'm guessing it wouldn't win in court. See also The Sword of Shannara and Tolkien. https://en.wikipedia.org/wiki/The_Sword_of_Shannara

Creators mimic styles and elements of others' works all the time. Unless an ML algorithm crosses some literal copying threshold, I fail to see it as doing anything substantially different from what people routinely do.

◧◩◪
123. Xelyne+jS[view] [source] [discussion] 2023-01-14 16:17:44
>>yazadd+t2
The real tragedy is being marketed to so heavily that we conflate enforcing copyright on LLM/diffusion companies with shutting down an AGI. I blame companies like OpenAI purposefully marketing themselves poorly, since nobody is going to enforce false advertising laws on something they don't understand.
replies(1): >>menset+B41
◧◩◪
124. Xelyne+uS[view] [source] [discussion] 2023-01-14 16:19:12
>>madaxe+4k
Not really, since one of the major characteristics is being able to recover the copyrighted work from the encoded version.

Since md5 hashes don't share this property, they're not "in that vein".

replies(1): >>madaxe+Ne5
◧◩◪◨⬒
125. Xelyne+IS[view] [source] [discussion] 2023-01-14 16:20:26
>>astran+j8
So it's fine to distribute copyrighted works, as long as they're JPEG (lossy) encoded? I don't think the law would agree with you.
replies(1): >>Athero+D41
◧◩◪
126. chongl+KS[view] [source] [discussion] 2023-01-14 16:21:15
>>visarg+C2
> Oh, one image is enough to apply copyright as if it were a patent, to ban a process that makes original works most of the time?

The software itself is not at issue here. If they had trained the network on public domain images then there’d be no lawsuit. The legal question to settle is whether it’s allowable to train (and use) a model on copyrighted images without permission from the artists.

They may actually be successful at arguing that the outputs are either copies or derived works, which would require paying the original artists for licenses.

replies(1): >>jdiron+R61
◧◩◪◨
127. ghaff+ZS[view] [source] [discussion] 2023-01-14 16:23:06
>>yazadd+ja
All the handwringing about generative AI brings to mind the aphorism about genies returning to bottles. There can be lawsuits and laws--and there may even be cases where an output by chance or by tickling the input sufficiently looks very close to something in the training set. But anyone who thinks this technology will be banned in some manner is... mistaken.
replies(1): >>b3mora+T11
◧◩◪◨⬒
128. chongl+vT[view] [source] [discussion] 2023-01-14 16:26:25
>>basch+nI
> If I take a million copyrighted images from magazines, cut them with scissors, and make a single collage, I would expect the resulting image to be fair use.

That’s not how it works. Your collage would be fine if it was the only one, since you used magazines you bought. Where you’d get into trouble is if you started printing copies of your collage and distributing them. In that case you’d be producing derived works and be on the hook for paying for licenses from the original authors.

replies(1): >>basch+vZ2
◧◩◪◨⬒⬓⬔
129. bluefi+wT[view] [source] [discussion] 2023-01-14 16:26:27
>>realus+F7
The difference is that computers create perfect copies of images by default, people don't.

If a person creates a perfect copy of something it shows they have put thousands of hours of practice into training their skills and maybe dozens or even hundreds of hours into the replica.

When a computer generates a replica of something, it's doing what it was designed to do. AI art is trying to replicate the human process, but it will always have the stink of "the computer could do this perfectly, but we are telling it not to right now".

Take Chess as an example. We have Chess engines that can beat even the best human Chess players very consistently.

But we also have Chess engines designed to play against beginners, or at all levels of Chess play really.

We still have Human-only tournaments. Why? Why not allow a Chess Engine set to perform like a Grandmaster to compete in tournaments?

Because there would always be the suspicion that if it wins, it's because it cheated by playing above its level when it needed to. Because that's always an option for a computer: to behave like a computer does.

replies(2): >>derang+H11 >>smegge+Vb1
◧◩◪◨⬒⬓
130. Xelyne+KT[view] [source] [discussion] 2023-01-14 16:28:52
>>dymk+WH
I'd assume the precedent would be about sharing encoder data, which would be covered in BitTorrent cases.

"Training a neural network" is an implementation detail. These companies accessed millions of copyrighted works, encoded them such that the copyright was unenforceable, and then sell the output of that transformation.

replies(1): >>dymk+PU
◧◩◪◨⬒⬓⬔
131. cf141q+OT[view] [source] [discussion] 2023-01-14 16:29:52
>>synu+0F
Once you are down to one picture, collisions become feasible given the right environment and resolution of the image.

Pretty sure this is nitpicking about an overused analogy though.

◧◩◪◨⬒
132. bluefi+RT[view] [source] [discussion] 2023-01-14 16:30:07
>>realus+q5
This just sounds like really fancy, really lossy compression to me.

Compression that returns something different from the original most of the time, but still could return the original.

replies(1): >>anothe+xQ4
133. baxtr+ST[view] [source] 2023-01-14 16:30:30
>>dr_dsh+(OP)
One idea I had was to try to recreate the original using a prompt. If you succeed, it should be obvious that the original was in the training set?
replies(3): >>zowie_+f01 >>gedy+c11 >>nhtsam+R21
◧◩◪◨⬒⬓
134. Xelyne+3U[view] [source] [discussion] 2023-01-14 16:31:55
>>UncleE+1P
If I host a song I don't have a license to on my website, I'm violating copyright by distributing it to you when you listen on my site.

If my parrot recites your song after hearing it, and I record that and upload it to YouTube, I've violated your copyright.

If a big company does the same (runs the song through a non-human process, then sells the output), I believe they're blatantly infringing copyright.

replies(2): >>dymk+9V >>UncleE+752
◧◩◪◨
135. turtle+lU[view] [source] [discussion] 2023-01-14 16:33:57
>>dylan6+Z4
If you spent a decade trying to draw it, wouldn't your brain have the right "weights" to execute it pretty exactly going forward?

Except with computers, they don't need to eat or sleep, converse or attend stand-ups.

And once you're able to draw that one picture, you could probably draw similar ones. Your own style may emerge too.

Just thinking. Copyists, students, and scribes used to copy stuff verbatim, sometimes just to "learn" it.

The product of that study could be published works, a synthesis of ideas from elsewhere, and so on. We would say it belonged to the executor, though.

So the AI learned, and what it has created belongs to it. Maybe.

Or, once we acknowledge AI can "see" images, precedent opens the way to citizenship (humanship?)

◧◩◪◨⬒⬓
136. Xelyne+IU[view] [source] [discussion] 2023-01-14 16:36:04
>>andyba+Jy
"It's very unclear" in legal cases is synonymous with "it hasn't been challenged in court yet". You say they're moderately confident because they're fairly reputable, but remember that Madoff was a "reputable business man" for the 20 years he ran a ponzi scheme. They don't have to be confident in the legality to do it, they just had to be confident in the potential profit. With openai being values at $10B by Microsoft, I'd say they've successfully muddied the legal waters long enough to cash out.
replies(1): >>andyba+Kd1
◧◩◪◨⬒⬓⬔
137. dymk+PU[view] [source] [discussion] 2023-01-14 16:37:28
>>Xelyne+KT
Not being able to reproduce the inputs (each image is contributing single bytes to the neural network) is relevant. Torrent files are a means to exactly reproduce their inputs. Diffusion models are trained to not reproduce their inputs, nor do they have the means to.
◧◩◪◨⬒⬓⬔
138. dymk+9V[view] [source] [discussion] 2023-01-14 16:40:00
>>Xelyne+3U
Big Company is not distributing the input images by distributing the neural network. There is no way to extract even a single input image out of a diffusion model.
◧◩◪
139. Xelyne+jV[view] [source] [discussion] 2023-01-14 16:41:23
>>Salgat+Q1
> No semblance of the original image even remotely exists in the model

What does this mean? It doesn't mean you can't recreate the original, because that's been done. It doesn't mean that literally the bits for the image aren't present in the encoded data, because that's true for any compression algorithm.

replies(1): >>smusam+401
◧◩◪
140. Xelyne+AV[view] [source] [discussion] 2023-01-14 16:44:02
>>visarg+g4
You took the images, encoded them in a computer process, and the result is able to reproduce some of those images. I fail to see why the size of the training set in bytes and the size of the model in bytes matters. Especially if, as other commenters have noted, much of the training data is repeated (thousands of mentions of the Mona Lisa), so a straight division (training size / parameter size) says nothing about the bytes per copyrighted work.
replies(1): >>max47+OY1
◧◩◪◨
141. Xelyne+MV[view] [source] [discussion] 2023-01-14 16:46:08
>>synu+w4
There only needs to be an answer if it's determined that some number isn't copyright infringement. The easy answer would be to say that the process is what prevents the works from being transformative (and thus copyrightable) and not the size of the training set.
◧◩◪◨
142. Xelyne+0W[view] [source] [discussion] 2023-01-14 16:47:42
>>cf141q+Jb
Once you start recreating copyrighted works from hashes, this analogy becomes relevant. Until then, how can you compare the two, when the distinguishing feature is the model's ability to reproduce the training data?
◧◩◪◨⬒⬓⬔
143. ghaff+gW[view] [source] [discussion] 2023-01-14 16:49:00
>>hhjink+9i
It depends to what degree it's literal copying. See e.g. the Obama "Hope" poster. [1] Though that case is muddied by the fact that the artist lied about the source of his inspiration. Had it in fact been an older photo of JFK in a similar pose, there probably wouldn't have been a controversy.

[1] https://en.wikipedia.org/wiki/Barack_Obama_%22Hope%22_poster

◧◩◪
144. Xelyne+qW[view] [source] [discussion] 2023-01-14 16:50:03
>>8n4vid+15
Citing fair use is putting the cart before the horse here. The debate is around whether or not the stable diffusion training and generation processes can be considered transforming the works to create a new one in the same way we do for humans, which allows for the fair use of video clips. To say that it would be similar to fair use is assuming the outcome as evidence, aka begging the question.
◧◩◪
145. Xelyne+EX[view] [source] [discussion] 2023-01-14 16:58:18
>>Fillig+KE
When gzip turns my 10k character ASCII text file into a 2kb archive, has it "compressed each character down to a fifth of a byte"? No, that's a misunderstanding of compression.

Just like gzip, training stable diffusion certainly removes a lot of data, but without understanding the effect of that transformation on the entropy of the data, it's meaningless to say things like "two bytes per image", because (like gzip) you need the whole encoded dataset to recover the image.

It's compressing many images into 10GB of data, not a single image into two bytes. This is directly analogous to what people usually mean by "compression".
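
(A toy sketch of that point in Python, with a made-up input: a repetitive 10k-character string compresses far below one byte per character, but no individual character is "stored in a fifth of a byte"; the savings come from redundancy across the whole stream.)

    import gzip

    # Made-up input: highly repetitive ASCII text, 10,000 characters.
    text = ("the quick brown fox jumps over the lazy dog. " * 250)[:10_000]
    compressed = gzip.compress(text.encode("ascii"))

    print(len(text))                    # 10000 characters
    print(len(compressed))              # a few hundred bytes for this input
    print(len(compressed) / len(text))  # "bytes per character" is a misleading ratio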

◧◩
146. huggin+cY[view] [source] [discussion] 2023-01-14 17:02:54
>>bsder+d7
My assumption would be 'fair use'. Artists themselves make use of this extremely often, like when doing paintovers on copyrighted images (VERY common), or fan art where they paint trademarked characters (also VERY common). These are often done for commission as well.

AFAIK, downloading and learning from images, even copyrighted images, falls under fair use; this is how practically every artist today learns how to draw.

Stable Diffusion does not create 1:1 copies of artwork it has been trained on, and its purpose is quite the opposite. There may be cases where the transformative aspect of a generated image may be argued as not being transformative enough, but so far I've only seen one such reproducible image, which would be the 'bloodborne box art' prompt, which was also mentioned in this discussion.

replies(2): >>zowie_+J31 >>bsder+DP1
◧◩◪◨
147. voakba+NY[view] [source] [discussion] 2023-01-14 17:06:39
>>jimnot+53
Image sharing sites routinely steal artwork from the web. My business has a unique logo with the business name in it. It has repeatedly shown up on such sites, despite repeated DMCA takedown requests.

Simply appearing on a shared hosting site should not be enough.

◧◩◪
148. smusam+QZ[view] [source] [discussion] 2023-01-14 17:12:42
>>anothe+84
Don't think Stable Diffusion can reproduce any single image it's trained on, no matter what prompts you use.

It does have the Mona Lisa because of overfitting. But that's because there is too much Mona Lisa on the internet.

The artists taking part in the suit won't be able to recreate any of their work.

replies(2): >>Aerroo+tp1 >>neuah+H43
◧◩
149. willia+UZ[view] [source] [discussion] 2023-01-14 17:13:06
>>yazadd+W1
There's a key difference. A compression algorithm is made to be reversible. The point of compressing an MP3 is to be able to decompress as much of the original audio signal as possible.

Stable Diffusion is not made to decompress the original and actually has no direct mechanism for decompressing any originals. The originals are not present. The only thing present is an embedding of key components of the original in a multi-dimensional latent space that also includes text.

This doesn't mean that the outputs of Stable Diffusion cannot be in violation of a copyright, it just means that the operator is going to have to direct the model towards a part of that text/image latent space that violates copyright in some manner... and that the operator of the model, when given an output that is in violation of copyright, is liable for publishing the image. Remember, it is not a violation of copyright to photocopy an image in your house... it's a violation when you publish that image!
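
(As a concrete illustration of a shared text/image latent space: CLIP-style encoders, which Stable Diffusion's text conditioning builds on, embed captions and images into vectors of the same dimensionality. A minimal sketch using OpenAI's CLIP via the Hugging Face transformers library; the model choice, library, and file path are my assumptions, not something named in this thread.)

    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Embed a caption and an image into the same latent space.
    # "some_image.png" is a placeholder path.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    inputs = processor(
        text=["a painting of a starry night"],
        images=Image.open("some_image.png"),
        return_tensors="pt",
        padding=True,
    )
    outputs = model(**inputs)
    # Both embeddings land in the same 512-dimensional space; no pixels of
    # the original image survive, only its coordinates in that space.
    print(outputs.text_embeds.shape, outputs.image_embeds.shape)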

replies(1): >>codeon+dx1
◧◩◪◨⬒⬓⬔
150. ghaff+XZ[view] [source] [discussion] 2023-01-14 17:13:37
>>astran+r8
That's probably a very relevant point. (I'm guessing.) If I ask for an image of a red dragon in the style of $ARTIST, and the algorithm goes off and says "Oh, I've got the perfect one already in my data"--or even "I've got a few like that, I'll just paste them together"--that's a problem.
replies(1): >>astran+L42
◧◩◪◨
151. smusam+401[view] [source] [discussion] 2023-01-14 17:14:35
>>Xelyne+jV
Do you have any examples of recreating an image with these models? Something other than the Mona Lisa or other famous artworks, because those have caused overfitting.
◧◩
152. zowie_+f01[view] [source] [discussion] 2023-01-14 17:15:47
>>baxtr+ST
The LAION-5B dataset is public, so you can check directly whether a picture is in there or not. StabilityAI only takes a very limited amount of information from each individual picture, so for Stable Diffusion to closely reproduce a picture it would need to appear quite frequently in the dataset. There are examples of this, such as old famous paintings, "bloodborne box art" and probably many others, though I haven't looked deeply into it.
◧◩◪◨⬒⬓
153. yazadd+Z01[view] [source] [discussion] 2023-01-14 17:20:38
>>danari+RI
But in fairness, even a human could know how to violate copyright but cannot be sued until they do violate it.

SD might know how to violate copyright but is that enough to sue it? Or can you only sue violations it helps create?

replies(1): >>danari+261
◧◩
154. gedy+c11[view] [source] [discussion] 2023-01-14 17:22:37
>>baxtr+ST
No, the "original" is in the (likely detailed) prompt you give it.
◧◩◪◨
155. willia+h11[view] [source] [discussion] 2023-01-14 17:23:08
>>manhol+94
If I make software that randomly draws pixels on the screen then we can say for a fact that no copyrighted images were used.

If that software happens to output an image that is in violation of copyright then it is not the fault of the model. Also, if you ran this software in your home and did nothing with the image, then there's no violation of copyright either. It only becomes an issue when you choose to publish the image.

The key part of copyright is when someone publishes an image as their own. That they copy an image doesn't matter at all. It's what they DO with the image that matters!

The courts will most likely make a similar distinction between the model, the outputs of the model, and when an individual publishes the outputs of the model. This would be that the copyright violation occurs when an individual publishes an image.

Now, if tools like Stable Diffusion are constantly putting users at risk of unknowingly violating copyrights then this tool becomes less appealing. In this case it would make commercial sense to help users know when they are in violation of copyright. It would also make sense to update our copyright catalogues to facilitate these kinds of fingerprints.

◧◩◪◨⬒⬓⬔⧯
156. derang+H11[view] [source] [discussion] 2023-01-14 17:26:45
>>bluefi+wT
You’re acting like the “computer” has a will of its own. Generating a perfect copy of an image would be a completely separate task from training a model for image generation.

There are no models I know of with the ability to generate an exact copy of an image from its training set unless it was solely trained on that image to the point it could. In that case I could argue the model’s purpose was to copy that image rather than learn concepts from a broad variety of images to the point it would be almost impossible to generate an exact copy.

I think a lot of the arguments revolving around AI image generators could benefit from the constituent parties reading up on how transformers work. It would at least make the criticisms more pointed and relevant, unlike the criticisms drawn in the linked article.

replies(1): >>bluefi+O71
◧◩◪◨⬒
157. b3mora+T11[view] [source] [discussion] 2023-01-14 17:28:52
>>ghaff+ZS
So as a code author I am pretty upset about Copilot specifically, and it seems like SD is similar (hadn't heard before about DeviantArt doing the same as what GitHub did). But I agree with this take: the tech is here, it's going to be used, and it's not going to be shut down by a lawsuit. Nor should it, frankly.

What I object to is not the AI itself, or even that my code has been used to train it. It's the copyright for me but not for thee way that it's been deployed. Does GitHub/Microsoft's assertion that training sidesteps licensing apply to GitHub/Microsoft's own code? Do they want to allow (a hypothetical) FSFPilot to be trained on their proprietary source? Have they actually trained Copilot on their own source? If not, why not?

I published my source subject to a license, and the force of that license is provided by my copyright. I'm happy to find other ways of doing things, but it has to be equitable. I'm not simply ceding my authorship to the latest commercial content grab.

replies(4): >>ghaff+351 >>skissa+A71 >>rlt+Vv2 >>woah+bn3
◧◩
158. nhtsam+R21[view] [source] [discussion] 2023-01-14 17:35:26
>>baxtr+ST
This is quite easy to do, but the results can be off in funny ways. For example, try putting this into SD with Euler a sampling and a cfg scale of 10:

"The Night Watch, a painting made by Rembrandt in 1642"

It generates a convincing low-res imitation about half the time, but it also has a tendency to make the triband flag into an American flag, or put an old ship in the background, or replace the dark city arch with a sunset...

If you keep refining the prompt, you can get closer, but at that point you're just describing what the painting should look like, rather than asking the model to recall an original work.
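
(If you want to try this yourself, here's a minimal sketch of that setup, assuming the Hugging Face diffusers library and the runwayml/stable-diffusion-v1-5 checkpoint; both are my assumptions, any SD 1.x checkpoint would do. "Euler a" corresponds to EulerAncestralDiscreteScheduler, and the cfg scale to guidance_scale.)

    import torch
    from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

    # Load SD and swap in "Euler a" (Euler ancestral) sampling.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

    image = pipe(
        "The Night Watch, a painting made by Rembrandt in 1642",
        guidance_scale=10.0,  # the cfg scale of 10 mentioned above
    ).images[0]
    image.save("night_watch_imitation.png")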

◧◩◪
159. zowie_+J31[view] [source] [discussion] 2023-01-14 17:43:03
>>huggin+cY
> when doing paintovers on copyrighted images (VERY common)

What are you talking about? I've been doing drawing and digital painting as a hobby for a long time and tracing is absolutely not "VERY common". I don't know anybody who has ever done this.

> fan art where they paint trademarked characters (also VERY common)

This is true in the sense that many artists do it (besides confusing trademark law and copyright law: the character designs are copyright-protected, trademarks protect brand names and logos). However, it is not fair use (as far as I'm aware at least, I'm not a lawyer). A rightholder can request for fanart to be removed and the artist would have to remove it. Rightsholders almost never do, because fanart doesn't hurt them.

There's also more examples of it reproducing copyright-protected images, I pulled the "bloodborne box art" prompt from this article: https://arxiv.org/pdf/2212.03860.pdf But I agree with you that reproducing images is very much not the intention of Stable Diffusion, and it's already very rare. The way I see it, the cases of Stable Diffusion reproducing images too closely is just a gotcha for establishing a court case.

replies(1): >>huggin+8k1
◧◩◪◨⬒⬓⬔
160. ifdefd+641[view] [source] [discussion] 2023-01-14 17:45:36
>>XorNot+Jz
I think it boils down to one question: can you prompt the model to show mostly unchanged pictures from artists? If so, it's definitely problematic. If not, then I don't have enough knowledge of the topic to give a strong opinion. (My previous answer was just a use case that fits your argument.)
replies(1): >>XorNot+pC1
◧◩◪◨⬒⬓⬔
161. derang+841[view] [source] [discussion] 2023-01-14 17:45:48
>>astran+Rb
“any fractional bits would be in there if you want to look.”

What do you mean by this in the context of generating images via prompt? “Fractional bits” don’t make sense and it’s more misleading if anything. Regardless, a model violating criteria for being within fair use will always be judged by the outputs it generates rather than its composing bytes (which can be independent).

replies(1): >>astran+S42
◧◩◪◨
162. menset+B41[view] [source] [discussion] 2023-01-14 17:47:31
>>Xelyne+jS
That could be a funny movie.

Special agents from the MPAA sent to assassinate an android who can spew out high-quality art.

◧◩◪◨⬒⬓
163. Athero+D41[view] [source] [discussion] 2023-01-14 17:47:40
>>Xelyne+IS
If I compress a copyrighted work down to two bytes and publish that, I think that judges would declare it legal. If it can't be uncompressed to resemble the copyrighted work in any sense, no judge is going to declare it illegal.
◧◩◪◨⬒⬓
164. ghaff+351[view] [source] [discussion] 2023-01-14 17:51:17
>>b3mora+T11
I doubt Microsoft sees fragments of Windows source code as a particular crown jewel these days. That said, some of it is decades old code that was intended for the public to see (unlike, presumably, anything in a public GitHub repository). And some of it is presumably third-party code licensed to Microsoft that was likewise never intended for public viewing. So, while it would be a good gesture on the part of Microsoft to scan their own code--if they haven't done so--I could see why it might be problematic. (Just as training on private GitHub repos would be.)

tl;dr I think there's a distinction between training on copyrighted but public content and private content.

replies(1): >>b3mora+a71
◧◩◪
165. lolind+t51[view] [source] [discussion] 2023-01-14 17:53:36
>>vnoril+r5
Even that approach seems highly vulnerable to fair use. If the model does not recreate a copyrighted work with enough fidelity to be recognized as such, then how can it be said to be in violation of copyright?
◧◩◪◨⬒⬓
166. willia+J51[view] [source] [discussion] 2023-01-14 17:55:33
>>IncRnd+U6
You've made some errors in reasoning.

First, there is a legal definition of a "derivative work" and there is an artistic notion of a "derivative work". If the two of us both draw a picture of the Statue of Liberty, artistically we have both derived the drawing based on the original statue. However, neither drawing is legally considered a derivative work, either of the original sculpture or of the other drawing.

Let's think about a cartoonish caricature of Joe Biden. What "makes up" Joe Biden?

https://www.youtube.com/watch?v=QRu0lUxxVF4

To what extent are these "constituent parts" present in every image of Joe Biden? All of them? Is the latent space not something that is instead hidden in all images of Joe Biden? Can an image of Joe Biden be made by anyone that is not derived from these "high order" characteristics of what is recognizable as Joe Biden across a number of different renderings from disparate individuals?

replies(1): >>IncRnd+BH1
◧◩◪◨⬒⬓⬔
167. danari+261[view] [source] [discussion] 2023-01-14 17:57:20
>>yazadd+Z01
I would assert (with no legal backing, since this is the first suit that actually attempts to address the issue either way) that the trained model is a copyright infringement in itself. It is a novel kind of copyright infringement, to be sure, but I believe that use of copyrighted material in a neural net's training set without the creator's permission should be considered copyright infringement without any further act required to make it so.
replies(1): >>yazadd+nL1
◧◩◪◨
168. jdiron+R61[view] [source] [discussion] 2023-01-14 18:01:50
>>chongl+KS
Then I think any work of art or media inspired by past sources would fall into this category. It's a very grey line, and I haven't seen anyone or any case law put it into proper terms as of yet.
replies(2): >>deely3+YA1 >>Spivak+t62
◧◩◪◨⬒⬓⬔
169. b3mora+a71[view] [source] [discussion] 2023-01-14 18:03:12
>>ghaff+351
Private third-party GitHub repos is another good example. If licenses don't apply to training data, as GitHub has asserted, why not use those too? Do they think they'll get in trouble over it? Why doesn't the same trouble apply to my publicly-readable GPL-licensed code?
replies(2): >>ghaff+ak1 >>skissa+jr1
◧◩◪◨⬒⬓
170. skissa+A71[view] [source] [discussion] 2023-01-14 18:05:23
>>b3mora+T11
> Have they actually trained Copilot on their own source? If not, why not?

People have posted illegal Windows source code leaks to GitHub. Microsoft doesn’t seem to care that much, because these repos stay up for months or even years at a time without Microsoft DMCAing them; if you go looking you’ll find some right now. I think it is entirely possible, even likely, that some of those repos were included in Copilot’s training data set. So Copilot actually was trained on (some of) Microsoft’s proprietary source code, and Microsoft doesn’t seem to care.

replies(1): >>b3mora+Zb1
◧◩◪◨⬒⬓⬔⧯▣
171. bluefi+O71[view] [source] [discussion] 2023-01-14 18:06:36
>>derang+H11
> There are no models I know of with the ability to generate an exact copy of an image from its training set

Is it "the model cannot possibly recreate an image from its training set perfectly" or is it "the model is extremely unlikely to recreate an image from its training set perfectly, but it could in theory"?

Because I am willing to bet it's the latter.

> You’re acting like the “computer” has a will of it’s own. Generating a perfect copy of an image would be a completely separate task from training a model for image generation.

Not my intent, of course I don't think computers have a will of their own. What I meant, obviously, is that it's always possible for a human bad actor to make the computer behave in a way that is detrimental to other humans and then justify it by saying "the computer did it, all I did is train the model".

replies(1): >>mlsu+Af2
◧◩◪◨⬒
172. AlotOf+2b1[view] [source] [discussion] 2023-01-14 18:25:42
>>huggin+jM
A work doesn't have to be identical to be considered a derivative work, which is why we also don't consider every JPEG a newly copyrighted image distinct from the source material.

As an example of a plausible scenario where copyright might actually be violated, consider this: an NGO wants images on their website. They type in something like 'afghan girl' or 'struggling child' and unknowingly use the recreations of the famous photographs they get.

◧◩◪◨⬒⬓⬔⧯
173. smegge+Vb1[view] [source] [discussion] 2023-01-14 18:29:38
>>bluefi+wT
>The difference is that computers create perfect copies of images by default

Are we looking at the output of the same program? Because all of the output images I look at have eyes looking in different directions and things of horror in place of hands or ears, and they feature glasses melting into people's faces. And that's the good ones; the bad ones have multiple arms contorting out of odd places while bent at unnatural angles.

replies(1): >>bluefi+Lf1
◧◩◪◨⬒⬓⬔
174. b3mora+Zb1[view] [source] [discussion] 2023-01-14 18:29:45
>>skissa+A71
The question is not whether there's some of their code that they don't mind being incorporated, but whether there's any at all that they wouldn't allow to be. And more importantly, not used for their own bot, but for someone else's.

If licenses don't apply to training, then they don't apply for anyone, anywhere. If they do apply, then Copilot is violating my license.

replies(1): >>skissa+Sr1
◧◩◪◨
175. smegge+dd1[view] [source] [discussion] 2023-01-14 18:37:21
>>manhol+94
How is that any different from a new human artist who studies other artists' work to learn a style or technique? In fact, it used to be that the preferred way for painters to learn was to repeatedly copy the paintings of masters.
replies(1): >>manhol+cA2
◧◩◪◨
176. derang+jd1[view] [source] [discussion] 2023-01-14 18:37:46
>>TheDon+ok
>But, it's still copyrighted by the original artist as long as they can show "This started as my image, and a machine made a rote mathematical transformation to it"

I think the post you’re replying to was confused about the quote above. The person claiming copyright has to show that the claimed file actually started from their own image, and not just that the file could have been derived from the image. Copyright cares about both the works and the provenance of works.

Stable Diffusion couldn’t be flagged under this pretense if a person used a prompt that was their own, nor could they even be sued if they ran an image through it, as long as there is no plausibility that it was made from a copyrighted work. The only thing I can imagine a case working on is the actual training process of the algorithm, rather than the algorithm itself, for that exact reason.

◧◩◪◨⬒⬓⬔
177. andyba+Kd1[view] [source] [discussion] 2023-01-14 18:40:21
>>Xelyne+IU
That's one company. There's dozens if not hundreds of companies, research groups and individuals working under the same assumption.

Maybe it's a mass delusion but that feels like a stretch.

Also your wording makes this sound entirely like a sinister conspiracy or cash grab. Many people think this is simply a worthy pursuit and the right direction to be looking at the moment.

replies(1): >>Xelyne+myl
◧◩◪◨⬒⬓⬔⧯▣
178. bluefi+Lf1[view] [source] [discussion] 2023-01-14 18:53:59
>>smegge+Vb1
Storing and retrieving photos, files, music, exactly identical to how they were before, is what computers do.

Save a photo on your computer, open it in a browser or photo viewer, you will get that photo. That is the default behavior of computers. That is not in dispute, is it?

All of this machine learning stuff is trying to get them to not do that. To actually create something new that no one actually stored on them.

Hope that clears up the misunderstanding.

◧◩◪◨
179. derang+8h1[view] [source] [discussion] 2023-01-14 19:01:10
>>AlotOf+C5
It’s not quite a one to one. Copyright law isn’t as arbitrary as it would seem in my experience. Also there’s the conflation of two things here: whether the model is within copyright violation and whether the works generated by it are.

The “color of your bits” only applies to the process of creating a work. Stable Diffusion’s training of the algorithm could be seen as violating copyright but that doesn’t spread to the works generated by it.

In the same vein, one can claim copyright on an image generated by stable diffusion even if the creation of the algorithm is safe from copyright violation.

“some representation of the originals exist inside the model+prompt” is also not sufficient for the model to be in violation of copyright of any one art piece. Some latent representation of the concept of an art piece or style isn’t enough.

It’s also important to note the distinction that there is no training data stored in its original form as part of the model during training; it’s simply used to tweak a function with the purpose of translating text to images. Some could say that’s like using the color from a picture of a car on the internet. Some might say it’s worse, but it’s all subjective unless the opposition can draw new ties from the actual technical process to existing precedent.

180. Aerroo+kj1[view] [source] 2023-01-14 19:12:35
>>dr_dsh+(OP)
Models for Stable Diffusion are about 2-8GB in size. 5 billion images means that every image gets about 1 byte.

It seems to me that they're claiming here that Stability has somehow managed to store copies of these images in about 1 byte of space each. That's an incredible compression ratio!
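
(The back-of-the-envelope arithmetic, assuming a ~4GB checkpoint from that 2-8GB range:)

    # ~4GB model divided across the ~5 billion LAION-5B training images.
    model_bytes = 4 * 1024**3        # 4,294,967,296 bytes
    num_images = 5_000_000_000
    print(model_bytes / num_images)  # ~0.86 bytes per image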

replies(1): >>SillyU+E42
◧◩◪◨
181. huggin+8k1[view] [source] [discussion] 2023-01-14 19:16:13
>>zowie_+J31
>and tracing is absolutely not "VERY common"

Paintover does not have to mean actual 'tracing', a LOT of artists use photos as direct references and paint over them in a separate layer, keeping the composition, poses, and colors very close to the original while still changing details and style enough to be considered transformative, a 'new work'.

Here are two examples of artist Sam Yang using two still frames from the tv show Squid Game and painting over those, the results which he then sells as prints:

https://www.inprnt.com/gallery/samdoesarts/the-alleyway/ https://www.inprnt.com/gallery/samdoesarts/067/

That said, you could even get away with less transformation and still have it be considered original work, take Andy Warhol's 'Orange Marilyn' and 'Portrait of Mao', those are inked and flat color changes over photographs.

replies(1): >>zowie_+et1
◧◩◪◨⬒⬓⬔⧯
182. ghaff+ak1[view] [source] [discussion] 2023-01-14 19:16:18
>>b3mora+a71
I assume there's something in their terms of service about not poking around in private repos and using the code even for internal purposes except for necessary maintenance like backups, court orders, etc.

I am not a lawyer but I also assume Microsoft's position, at least in part, is that they can download and use code in GitHub public repos just like anyone else can and developing a public service based on training with that (and a lot of other) code isn't redistributing that code.

◧◩◪◨
183. Aerroo+tp1[view] [source] [discussion] 2023-01-14 19:47:33
>>smusam+QZ
I think there's a chance they might be able to recreate some simpler work if they make the prompts specific enough. When you set up a prompt you're essentially telling the system what you want it to generate - if you prompt it with enough specificity you might be able to just recreate the image you had.

Kind of like recreating your image one object at a time. It might not be exact, but close enough.

replies(2): >>smusam+QH1 >>rlt+ly2
◧◩◪◨
184. djbebs+4q1[view] [source] [discussion] 2023-01-14 19:50:57
>>akjetm+g2
That seems to indicate to me that the original work is actually not under copyright, since if it is the only method of achieving such an image in such a style, then there is no originality to be copyrighted.
◧◩◪◨⬒⬓⬔⧯
185. skissa+jr1[view] [source] [discussion] 2023-01-14 19:58:22
>>b3mora+a71
Copyright is not the only law. Something might be permitted by copyright law (as fair use, an implied license, etc.), yet simultaneously violate other laws: breach of contract, misappropriation of trade secrets, etc.
◧◩◪◨⬒⬓⬔⧯
186. skissa+Sr1[view] [source] [discussion] 2023-01-14 20:01:23
>>b3mora+Zb1
IANAL, but they likely believe their unpublished source code contains trade secrets. They may believe that training a public model is okay on published source code (irrespective of its copyright license), but that doing so on unpublished source code containing trade secrets might legally count as a voluntary relinquishment of their trade secrets (if we are talking about their own code) or illegal misappropriation of the trade secrets of others (if they trained it on third-party private repos).
◧◩◪◨⬒
187. zowie_+et1[view] [source] [discussion] 2023-01-14 20:09:50
>>huggin+8k1
First of all, those are only two works in a very large body of works of an artist that seems to work almost entirely from imagination, which already counters the claim that this is a very common way of working, since even this artist would almost never work like that. Secondly, putting strangely much effort into a comment on Hacker News, I actually looked up the source frame of one of these: https://youtu.be/K6hOvyz65jM?t=236 It's definitely based on the frame but it's not a paint-over as you claim. I know this because there are too many mistakes with regards to proportion:

- Extending the slant roof in the background, it intersects with the left figure at around the height of the nose, but in the painting it intersects with the middle of her neck.

- Similarly the line of the fence on the left is at the height of her hairline, but in the painting it is at the height of the middle of the head, and also more slanted than in the frame.

- On the right side, the white part of the pillar is similarly too low compared to the figure.

- The pole in the background has a lot of things off with regards to size, thickness, or location too.

Essentially, everything is a bit off with regards to location, size and distance. It doesn't really make sense to paint over something and then still do everything differently from the base layer, so it was probably just drawn from reference the normal way -- probably having the picture on another screen and drawing it again from scratch, rather than directly painting over the frame.

I agree with regards to Warhol but that doesn't really establish it as very common amongst painters.

replies(1): >>huggin+4E1
◧◩◪◨⬒⬓
188. bluebo+mt1[view] [source] [discussion] 2023-01-14 20:10:54
>>synu+69
Only if your compressor is specialised enough… so you can see how slippery this argument can be.
◧◩
189. Aerroo+Ru1[view] [source] [discussion] 2023-01-14 20:21:39
>>codefl+57
Photoshop can also recreate existing paintings if you just "prompt it with the correct input", no?
replies(1): >>codefl+zJ1
◧◩◪
190. codeon+dx1[view] [source] [discussion] 2023-01-14 20:37:41
>>willia+UZ
Lossy compression isn't reversible, but presumably the content when compressed this way is still covered by copyright.
replies(2): >>willia+kH1 >>soerxp+JM3
◧◩◪◨⬒
191. deely3+YA1[view] [source] [discussion] 2023-01-14 21:08:09
>>jdiron+R61
Does "inspired" equal to "learned by software neural network"?
◧◩◪◨⬒⬓⬔⧯
192. XorNot+pC1[view] [source] [discussion] 2023-01-14 21:19:40
>>ifdefd+641
I mean no, it doesn't. It's like drawing something in Photoshop which is a copyrighted work: the act of creating it is the violation; it doesn't prove that Photoshop contains the content directly.

The way SD model weights work, if you managed to prompt engineer a recreation of one specific work, it would only have been generated as a product of all the information in the entire training set + noise seed + the prompt. And the prompt wouldn't look anything like a reasonable description of any specific work.

Which is to say, it means nothing because you can equally generate a likeness of works which are known not to be included in the training set (easy, you ask for a latent encoding of the image and it gives you one): equivalent to a JPEG codec.

replies(1): >>ifdefd+Z02
◧◩◪◨⬒⬓
193. huggin+4E1[view] [source] [discussion] 2023-01-14 21:32:28
>>zowie_+et1
>that seems to work almost entirely from imagination

I very much doubt that.

>Secondly, putting strangely much effort into a comment on Hacker News

Not sure what you are implying here, could you elaborate? The reason I know about these images is because they've been posted, alongside many other similar examples, in discussions regarding AI art.

>I know this because there are too many mistakes with regards to proportion:

Have you ever used programs like Photoshop, Krita et al.? You can start painting directly over a photo, and then easily transform the proportions of all components in the image, and since you draw them in layers, they can be done without affecting each other.

Here they are, side by side:

https://imgur.com/a/tIbBkk2 https://imgur.com/a/K1fEPtu

I have no doubt that he started painting these over the reference photos, and then used the 'warp tool' in his painting program of choice to alter the proportions, a very common technique.

And this is PERFECTLY FINE, the resulting artwork is transformative enough to be considered a new work of art, which is true for practically every piece of art I've seen generated by Stable Diffusion. The only one I've seen that I'm doubtful about is the 'bloodborne box art' one, which is THE example that is always brought up, as it is such an outlier.

replies(1): >>zowie_+dI1
◧◩◪◨⬒⬓
194. limite+tF1[view] [source] [discussion] 2023-01-14 21:43:25
>>jimnot+S5
Not a lawyer, but practically speaking copyright is lost when an item ceases to “exist” itself and can’t be restored. If you cut a painting in half, it’s absolutely still a copyrighted item. If you atomize it and don’t have technology to restore it, then copyright is meaningless. What item exactly is copyrighted?
◧◩◪◨
195. 93po+AF1[view] [source] [discussion] 2023-01-14 21:43:55
>>TheDon+ok
I find this hard to believe. If I took the famous pointillism painting "A Sunday Afternoon on the Island of La Grande Jatte" (the one-color-per-dot painting of a park) and I rearranged every color point based on an algorithm to create something that looks nothing like the original (and likely just looks like a jumbled mess), surely the copyright on the existing painting (which I doubt exists anymore) wouldn't prevent me from copyrighting my "new" work.
◧◩◪◨
196. willia+kH1[view] [source] [discussion] 2023-01-14 22:00:04
>>codeon+dx1
Pedantically, yes, lossy compression is not 100 percent reversible. Practically, the usefulness of compression is that it does return the original content with as little loss as possible… so lossy compression is mostly reversible.

All of my other points remain unchanged by this pedantry.

replies(1): >>ouid+q92
◧◩◪◨⬒⬓⬔
197. IncRnd+BH1[view] [source] [discussion] 2023-01-14 22:03:09
>>willia+J51
I can draw Biden, yes, but SD can only draw Biden by deriving its output from the images on which it was trained. This is a simple tautology, because SD cannot draw Biden without having been trained on that data.

SD both creates derivative works and also sometimes creates pixel level copies from portions of the trained data.

replies(2): >>willia+QJ1 >>bobbru+ZX1
◧◩◪◨⬒
198. smusam+QH1[view] [source] [discussion] 2023-01-14 22:05:25
>>Aerroo+tp1
People have tried; unless the thing you want to recreate has been seen by it a lot (overtrained), you won't get the same image. You don't have that much fine-grained control via text only.

Best you can do is to mask and keep inpainting the area that looks different until it doesn't.

◧◩◪◨⬒⬓⬔
199. zowie_+dI1[view] [source] [discussion] 2023-01-14 22:08:25
>>huggin+4E1
> I very much doubt that.

You can see his actual workflow on his YouTube channel. He shows his painting process there but doesn't show his sketching process, but I hope that you believe that people are able to draw from imagination at least.

https://www.youtube.com/watch?v=7_ZLBKj_UlY

> Note sure what you are implying here, could you elaborate?

I just meant I was probably putting too much effort into an online discussion.

> I have no doubt that he started painting these over the reference photos, and then used the 'warp tool' in his painting program of choice to alter the proportions, a very common technique.

It's simply not a common technique at all. I'm not sure why you're making these statements because it feels like your knowledge of how illustrators work is extremely limited. I've heard of people photobashing -- which is when artists combine photo manipulation and digital painting to more easily produce realistic artworks. It's got mixed opinions about it and many consider it cheating but within the field of concept art it's common because it's quick and easy. However, there's huge amounts of people who can just draw and paint from sight or imagination. There's the hyperrealists who often act as a human photocopier, but artists who do stylized art of any kind are just people who can draw from imagination. I'm not sure why that's something you "very much doubt" to be quite honest. Just looking on YouTube for things like art timelapses, you can find huge amounts of people who draw entirely from imagination. Take Kim Jung Gi as a somewhat well known example. That guy was famous amongst illustrators for drawing complicated scenes directly in pen without any sketches. But there's really plenty of people that can do these things.

You seem to be under the impression that the average artist uses every shortcut available to get a good result, but that is simply not true. Most artists I know refuse to do anything like photobashing because they consider it cheating and because it isn't how they want to work, nevermind directly drawing on top of things. Drawing from sight isn't uncommon as a way to study art, so in case you're wondering why Sam Yang would be able to reproduce the frame so closely, it's because that's how artists study painting.

> Have you ever used programs like Photoshop, Krita et al

Yes, very often. The thing is: Just because it's possible does not mean it actually happens.

◧◩◪
200. codefl+zJ1[view] [source] [discussion] 2023-01-14 22:20:58
>>Aerroo+Ru1
So you're saying "if I put the made-up straw man argument that I intend to knock down in quotation marks, it's less obvious that it's not actually what the other person wrote"?
replies(1): >>Aerroo+Lz2
◧◩◪◨⬒⬓⬔⧯
201. willia+QJ1[view] [source] [discussion] 2023-01-14 22:22:47
>>IncRnd+BH1
Yes, and we are now using the artistic definition of “derived” and not the legal definition.

You cannot copyright “any image that resembles Joe Biden”.

replies(1): >>IncRnd+fR3
◧◩◪◨⬒⬓⬔⧯
202. yazadd+nL1[view] [source] [discussion] 2023-01-14 22:36:31
>>danari+261
I think that is a very fair argument. It may win in court, it may lose. I’m excited for the precedent either way.

That said, it does raise the question, “should this precedent be extended to humans?”

i.e. Can humans be taught something based on copyrighted materials in the training set/curriculum?

replies(1): >>danari+RU1
◧◩◪
203. bsder+DP1[view] [source] [discussion] 2023-01-14 23:14:11
>>huggin+cY
> My assumption would be 'fair use'.

Why? That's not obvious to me at all.

These algorithms take the entire image and feed it into their maw to generate their neural network. That doesn't really sound like "fair use".

If these GPT systems were only doing scholarly work, there might be an argument. However, the moment the outputs are destined somewhere other than scholarly publications that "fair use" also goes right out the window.

If these algorithms took a 1% chunk of the image, like a collage would, and fed it into their algorithm, they'd have a better argument for "fair use". But, then, you don't have crowdsourced labelling that you can harvest for your training set as the cut down image probably doesn't correspond to all the prompts that the large image does.

> Stable Diffusion does not create 1:1 copies of artwork it has been trained on

What people aren't getting is that what the output looks like doesn't matter. This is a "color of your bits" problem--intent matters.

This was covered when colorizing old black and white films: https://chart.copyrightdata.com/Colorization.html "The Office will register as derivative works those color versions that reveal a certain minimum amount of individual creative human authorship." (Edit: And note that they were colorizing public domain films to dodge the question of original copyright.)

The current algorithms ingest entire images with the intent to generate new images from them. There is no "extra thing" being injected by a human--there is a direct correspondence and the same inputs always produce the same outputs. The output is deterministically derived from the input (input images/text prompt/any internal random number generators).

You don't get to claim a new copyright or fair use just because you bumped a red channel 1%. GPT is a bit more complicated than that, but not very different in spirit.

replies(1): >>EMIREL+FR1
◧◩◪◨
204. EMIREL+FR1[view] [source] [discussion] 2023-01-14 23:36:34
>>bsder+DP1
The amount of the work taken is just one of the fair use factors. Courts often perform holistic analysis on all of them to decide if fair use applies.
replies(1): >>bsder+QW1
◧◩◪◨
205. rule72+SR1[view] [source] [discussion] 2023-01-14 23:37:44
>>manhol+94
This argument's pedantic and problematic for artists; take away a human's "dataset" and processes and they are also unable to produce a single original "pixel".
◧◩◪◨⬒⬓
206. rule72+6T1[view] [source] [discussion] 2023-01-14 23:49:22
>>bryanr+ug
If it's what I'm thinking about, I think they were forced to have decentralized image caching (i.e. the "user" is the one downloading images, Google just indexes).

LAION-5b is also just an indexer (in terms of images).

◧◩
207. michae+HT1[view] [source] [discussion] 2023-01-14 23:56:16
>>TheDon+E1
I understood it less as transforming the images and more as deriving math formulas from the patterns in the image, closer to creating a bar graph to understand data than making a copy.
◧◩◪◨⬒⬓⬔⧯▣
208. danari+RU1[view] [source] [discussion] 2023-01-15 00:08:52
>>yazadd+nL1
I think this is a reasonable question for the uninitiated—those for whom "training a neural network" seems like it would be a lot like "teaching a human"—but for those with deeper understanding (tbh, I would only describe my knowledge in both these areas as that of an interested amateur), it is a) a poor analogy, and b) already a settled question in law.

To address (b) first: Fair Use has long held that educational purposes are a valid reason for using copyrighted materials without express permission—for instance, showing a whole class a VHS or DVD, which would technically require a separate release otherwise.

For (a): I don't know anything about your background in ML, so pardon if this is all obvious, but at least current neural nets and other ML programs are not "AI" in anything like the kind of sense where "teaching" is an apt word to describe the process of creating the model. Certainly the reasoning behind the Fair Use exception for educating humans does not apply—there is no mind there to better; no person to improve the life, understanding, or skills of.

◧◩◪◨⬒
209. bsder+QW1[view] [source] [discussion] 2023-01-15 00:28:48
>>EMIREL+FR1
That is why I pointed out both the scholarly exemption as well as the collage exception.

There are arguments to be made for fair use--I'm just not sure the current crop of GPT falls under any of them.

replies(1): >>EMIREL+kZ1
◧◩◪◨⬒⬓⬔⧯
210. bobbru+ZX1[view] [source] [discussion] 2023-01-15 00:42:38
>>IncRnd+BH1
Can you draw Biden without ever having seen him or a picture of him? So, why is it that you are not deriving but SD is?
◧◩◪◨
211. max47+OY1[view] [source] [discussion] 2023-01-15 00:52:23
>>Xelyne+AV
Except that you can't recreate them, at least not without a process that would be similar to asking an artist to create a replica of a painting. Just because Photoshop has the right color palette available to recreate art, it doesn't mean the software itself is one big massive copyright infringement against every art piece that exists.
replies(1): >>synu+uB2
◧◩◪◨
212. bobbru+cZ1[view] [source] [discussion] 2023-01-15 00:55:31
>>manhol+94
That is not true. The dataset is needed, the same way that examples are used by a person learning to draw. But the dataset alone can’t account for images that aren’t derived from any part of it (and there are many examples of SD results that seem so far to be wholly original), so you can’t reduce stable diffusion to being only derived from the dataset. It may “remember” and generate parts of images in the dataset, but that is a bug, not a feature. With enough prompt tweaking, it may even generate a fairly good copy of pre-existing work, which was what the prompt requested, so responsibility should lie with the prompt writer, not with SD.

But the fact that it often generates new content, that didn’t exist before, or at least doesn’t breach the limits of fair use, goes against the argument made in the lawsuit.

replies(1): >>manhol+Xz2
◧◩◪◨⬒⬓
213. EMIREL+kZ1[view] [source] [discussion] 2023-01-15 00:56:51
>>bsder+QW1
But the point is that fair use is almost completely principles-based rather than rules-based. Besides the four factors in the statute and some judicial precedent it's pretty much at the discretion of the court.
replies(1): >>bsder+O62
◧◩◪◨⬒⬓⬔⧯▣
214. ifdefd+Z02[view] [source] [discussion] 2023-01-15 01:15:41
>>XorNot+pC1
> And the prompt wouldn't look anything like a reasonable description of any specific work.

I think this is the most relevant line of your argument. Because if you could just ask it like "show me the latest picture of [artist]" then you'll have a hard time convincing me that this is fundamentally different from a database with a fancy query language and lots of copyrighted work in it.

◧◩
215. SillyU+E42[view] [source] [discussion] 2023-01-15 01:55:57
>>Aerroo+kj1
It is a form of compression that loses so much of the uniqueness, which is what gives it the high ratio. If the concept is a little hard to grasp, consider an AI model like a finite state machine, but one that also stores affinities and weights of the data's relationships to each other.

In GPT this is words and phrases: e.g. "Frodo Baggins" has high affinity, while "Frodo Superman" will be negligible. Now consider all the words that may link to those words: potentially billions of words (or phrases), but (probably/hopefully) none replicated. The phrases are out of any specific context because they cover _all contexts_ in the training data. When you speak to GPT it randomises these words in response to you, typically choosing the words/phrases with the highest affinity to the words you prompted. This almost gives it the appearance of emergent AI, because it is crossing different concepts (texts) in its answers.

Stable Diffusion works similarly but with colours (words), and patterns/styles (phrases). Now if you ask for a green field in the style of Van Gogh, it could compare Van Gogh's work to a backdrop from Windows XP. You could argue that, depending on the degree of those things it gives you, you are violating copyrights. However, that narrow view doesn't take into account that although you've specifically asked for Van Gogh and that's where it concentrates, it's also pulling in work from potentially hundreds of other lower-affinity sources. It's this dilution which means you'll never see an untainted original source image.

So in essence, it's the user who is breaching the copyright by specifying concentration on specific terms in the prompt, not the model. The model is simply a set of patterns, and the user is making those patterns breach copyright which IMHO is no different to the user copying a painting with a brush.

The brush isn't the thing you sue.
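
(A toy illustration of the "affinity" idea, with entirely made-up numbers; real models compute these weights from context with a neural network rather than a lookup table.)

    import random

    # Made-up affinities for the word following "Frodo".
    affinities = {"Baggins": 0.97, "the": 0.02, "Superman": 0.000001}
    words = list(affinities)
    weights = list(affinities.values())

    # Weighted random choice: almost always "Frodo Baggins".
    next_word = random.choices(words, weights=weights, k=1)[0]
    print("Frodo", next_word)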

replies(1): >>Aerroo+6A2
◧◩◪◨⬒⬓⬔⧯
216. astran+L42[view] [source] [discussion] 2023-01-15 01:57:58
>>ghaff+XZ
That's extremely not how it works. If there's only one training example it's not going to remember anything like actual visual details of it.
replies(2): >>ghaff+z83 >>SillyU+Ee4
◧◩◪◨⬒⬓⬔⧯
217. astran+S42[view] [source] [discussion] 2023-01-15 01:58:29
>>derang+841
Fractional bits makes perfect sense. Do you know how arithmetic coders work?
◧◩◪◨⬒⬓⬔
218. UncleE+752[view] [source] [discussion] 2023-01-15 02:00:38
>>Xelyne+3U
I should have specified the OP has legal rights to the song and the end user listening was under the same granted/implied license as a program doing the web harvesting, my bad.
◧◩◪◨
219. rsuelz+b52[view] [source] [discussion] 2023-01-15 02:01:10
>>manhol+94
So, is any sort of creation that relies upon copyrighted or patented works copyright infringement? Is any academic research or art that references brands or other creations illegal? This is such a clear case of fair use that it could be a textbook example.
◧◩◪◨⬒⬓
220. rsuelz+q52[view] [source] [discussion] 2023-01-15 02:02:53
>>IncRnd+U6
So is your mental image of Joe Biden, unless you know him personally.
◧◩◪◨⬒
221. Spivak+t62[view] [source] [discussion] 2023-01-15 02:15:04
>>jdiron+R61
Olivia Rodrigo is a good case study here. Good For You was so heavily inspired by Paramore that Hayley Williams was given songwriter credit despite having no involvement in its making.

So humans can already run afoul of copyright this way, the bar for NNs might end up lower.

◧◩◪◨⬒⬓⬔
222. bsder+O62[view] [source] [discussion] 2023-01-15 02:19:00
>>EMIREL+kZ1
So? Copyright is a social construct. Fair use is a social construct.

Social constructs are not computer programs. Social constructs concern messy, unpredictable computing units called humans.

Precedent and continuity are something that US courts normally try to value. Yes, the rules can be fuzzy, but the courts generally tried to balance the needs of the competing parties. Unfortunately, there will never be a purely "rules based" decision tree on this kind of "fuzzy" thing.

Of course, recent Republican court appointments have torn up the idea of precedent and minimizing disruption in preference to partisan principles, so your concerns aren't unwarranted.

◧◩◪◨⬒
223. ouid+q92[view] [source] [discussion] 2023-01-15 02:47:46
>>willia+kH1
You can't rip something and compress it badly enough to not violate copyright when you sell it. The point of compression is to throw away information about the original in ascending order of importance.
replies(1): >>eurlei+Ry2
◧◩◪◨⬒⬓⬔⧯▣▦
224. mlsu+Af2[view] [source] [discussion] 2023-01-15 04:21:36
>>bluefi+O71
In theory, you can:

- Open Microsoft Paint

- Make a blank 400 x 400 image

- Select a pixel and input an R,G,B value

- Repeat the last two steps

To reproduce a copyrighted work. I'm sure people have done this with e.g. pixel art images of copyrighted IP of Mario or Link. At 400x400, it would take 160,000 pixels to do this. At 1 second per pixel, a human being could do this in about a week.

Because people have the capability of doing this, and in fact we have proof that people have done so using tools such as MS paint, AND because it is unlikely but possible that someone could reproduce protected IP using such a method, should we ban Microsoft Paint, or the paint tool, or the ability to input raw RGB inputs?
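
(Checking my own arithmetic, as a sketch:)

    # 400 x 400 image, one pixel per second.
    pixels = 400 * 400          # 160,000 pixels
    hours = pixels / 3600       # ~44.4 hours
    workdays = hours / 8        # ~5.6 eight-hour days, i.e. about a week
    print(pixels, round(hours, 1), round(workdays, 1))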

◧◩
225. angust+xu2[view] [source] [discussion] 2023-01-15 07:58:02
>>yazadd+W1
Stable diffusion (or any likelihood-based generative model) is a learned compression algorithm. It is not the "container of compressed bytes". You can use a trained generative model to compress images, by combining it with some kind of entropy coding / arithmetic coding.

In this sense, stable diffusion is more analogous to the JPEG algorithm than it is to a specific collection of JPEG files. As it stands, the original training data is not stored, even in a compressed way.
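
(To make the "learned compression" point concrete: an ideal entropy coder spends about -log2 p(x) bits on data the model assigns probability p(x). The model supplies probabilities, the way the JPEG algorithm supplies a transform; neither stores the files themselves. The probabilities below are made up.)

    import math

    # Cost in bits of encoding a symbol the model assigns probability p.
    for p in (0.5, 0.01, 0.001):
        print(p, round(-math.log2(p), 2))
    # 0.5 -> 1.0 bit, 0.01 -> ~6.64 bits, 0.001 -> ~9.97 bits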

◧◩◪◨⬒⬓
226. rlt+Vv2[view] [source] [discussion] 2023-01-15 08:11:27
>>b3mora+T11
I seriously doubt Microsoft / GitHub would care if Copilot or a similar model were trained on their proprietary source code. An advanced code completion tool doesn't pose any significant risk of someone building a product competitive with GitHub or any other Microsoft product.

This is an intelligence augmentation tool. It’s effectively like I’m really good at reading billions of lines of code and incorporating the learnings into my own code. If you don’t want people learning from your code, don’t publish it.

◧◩◪◨⬒
227. rlt+ly2[view] [source] [discussion] 2023-01-15 08:39:51
>>Aerroo+tp1
> if you prompt it with enough specificity you might be able to just recreate the image you had

At some point the input must be considered part of the work. At the limit you could just describe every pixel, but that certainly wouldn’t mean the model contained the work.

◧◩◪◨⬒⬓
228. eurlei+Ry2[view] [source] [discussion] 2023-01-15 08:46:31
>>ouid+q92
>You can't rip something and compress it badly enough to not violate copyright when you sell it.

While I doubt that specific case has been tested in court, arguably you could. If you created glitch art (https://en.wikipedia.org/wiki/Glitch_art) via compression artifacts, and your work was sufficiently distinct from the original work, I think you would have a reasonable case for transformative use (https://en.wikipedia.org/wiki/Transformative_use).

replies(1): >>willia+pK4
◧◩◪◨
229. Aerroo+Lz2[view] [source] [discussion] 2023-01-15 08:58:25
>>codefl+zJ1
I'm pointing out that the model has a piece missing - the input from the user. You can't just flippantly dismiss it, because input from the user is a crucial part of recreating the works the model is supposedly copying. It's an important factor to consider, because any digital painting program will let you recreate any art work with the correct user input.

It's not sufficient to just consider whether it can reproduce an image, but also how much user input was required to do so.

◧◩◪◨⬒
230. manhol+Xz2[view] [source] [discussion] 2023-01-15 09:01:43
>>bobbru+cZ1
The model can generate original images, yes, and those images might be fair use. But it can also generate near verbatim copies of the source works or substantial parts thereof, so the model itself is not fair use, it's a wholly derivative work.

For example, if I publish a music remix tool with a massive database of existing music, creators might use it to create collages that are original and fall under fair use. But the tool itself is not, and requires permission from the rights owners.

◧◩◪
231. Aerroo+6A2[view] [source] [discussion] 2023-01-15 09:04:16
>>SillyU+E42
I think so too. I also think that this is a dangerous issue, because we don't really know how our brains work. If we set legal restrictions over this and then it turns out our brains work in a similar manner, then what?
◧◩◪◨⬒
232. manhol+cA2[view] [source] [discussion] 2023-01-15 09:05:56
>>smegge+dd1
What you and many others in the thread seem to be oblivious about is that algorithms are not people. Yes, it may come as a shock to autistic engineers, but the fact that a machine can do something similar to what a person does does not warrant it equal protection under the law.

Copyright, and laws in general, exists to protect the human members of society not some abstract representation of them.

replies(1): >>weknow+Vq4
◧◩◪◨⬒
233. synu+uB2[view] [source] [discussion] 2023-01-15 09:25:07
>>max47+OY1
Past a certain level of overfitting you can definitely recreate them just by asking for them by name. And it's possible to unintentionally or even intentionally overfit.

So it would be quite easy to make a trademark laundering operation, in theory.

◧◩
234. vinter+BF2[view] [source] [discussion] 2023-01-15 10:17:33
>>yazadd+W1
> In fairness, Diffusion is arguably a very complex entropy coding similar to Arithmetic/Huffman coding.

Not the way it's used in Stable Diffusion models. Compressed data can be decompressed knowing only the decompression algorithm. To recover data from a stable diffusion model, you need to know the algorithm and the prompt.

A critical part of the information _isn't_ in the data you decompress, it has to come from you. (And this isn't that relevant, but it would be lossy, perceptual compression like jpeg or mp3, not lossless compression like Huffman or Arithmetic coding.)

◧◩◪◨
235. hgomer+nY2[view] [source] [discussion] 2023-01-15 14:05:15
>>TheDon+L6
That's an interesting essay and I agree it goes to the heart of the question. There's clearly an interesting question, even in the colour domain: is someone infringing copyright if the data they themselves are sharing has a perfectly legitimate colour that is the basis of their sharing? That's the plausible deniability bit that's so important: "Yes your honour, I did share that chunk of random data, but I did so because it's part of this totally legitimately coloured file I was wanting to share. I had no idea that someone added a new colour to the block. Obviously, I'm only sharing the original colour block; prove otherwise". At some point, the court has to decide the colour of the block from the perspective of the accused, which allows a basis for deniability.
◧◩◪◨⬒⬓
236. basch+vZ2[view] [source] [discussion] 2023-01-15 14:15:59
>>chongl+vT
That's not how fair use works. It's not a binary switch where commercial derivatives automatically require licensing. Such a collage would be ruled transformative and non-competitive.

My having bought the magazines also has nothing to do with anything. The same would apply if they were gifted, free, or stolen.

◧◩◪◨
237. neuah+H43[view] [source] [discussion] 2023-01-15 14:57:33
>>smusam+QZ
Does SD have to recreate the entire image for it to violate copyright?

As a thought experiment, imagine a variant of something like SD used for music generation rather than images. It's trained on all the music on Spotify and marketed as a paid tool for producers and artists. If the model reproduces specific sounds from certain songs, e.g. a specific beat, hook, or melody, it would seem pretty straightforward that the generated content was derivative, even though only a feature of it was precisely reproduced. I could be wrong, but as far as I am aware you need to get permission to use samples. Even if the content is not published, those sounds are being sold by the company as inspiration, and that should violate copyright. The training data is paramount, because if you trained the model on material you generated yourself, or on material with an appropriate CC license, the resulting work would not violate copyright, or you could at least argue independent creation.

In the feature space of images and art, SD is doing something very similar, so I can see the argument that it violates copyright even without reproducing the whole training data.

Overall, I think we will ultimately need to decide how we want these technologies used, what restrictions should be on the training data, etc., and then create new laws specifically for the new technology, rather than trying to shoehorn it into existing copyright law.

replies(1): >>smusam+A75
◧◩◪◨⬒⬓⬔⧯▣
238. ghaff+z83[view] [source] [discussion] 2023-01-15 15:32:14
>>astran+L42
I realize that's not how it works. My point was that they're apparently taking deliberate steps to make sure the model trains on a large number of images and doesn't overfit on a small sample given a sufficiently specific "in the style of," etc.
◧◩◪◨⬒⬓
239. woah+bn3[view] [source] [discussion] 2023-01-15 17:14:42
>>b3mora+T11
Microsoft is not training Copilot on proprietary code that you keep on your own systems, just as they are not training it on their own proprietary code.
◧◩◪◨
240. soerxp+JM3[view] [source] [discussion] 2023-01-15 19:50:14
>>codeon+dx1
At what point does it become lossy enough that it's not protected, though? You can imagine a lossy compression algorithm that merely stores a 1 for images that are "more red" and a 0 for images that are "more blue." Such a compression algorithm would be storing some information about the thing it's compressing, but the closest reconstruction you could get from the compressed data is a red square or a blue square. Surely that's not copyright infringement? What about an algorithm that counts the fingers portrayed in an image and just reconstructs an image with the same amount of fingers? Where's the line?
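
That thought experiment is small enough to write out. A minimal sketch assuming Pillow and NumPy (the function names are hypothetical):

    import numpy as np
    from PIL import Image

    def compress(path):
        """Store exactly one bit: is the image 'more red' or 'more blue'?"""
        arr = np.asarray(Image.open(path).convert("RGB"), dtype=float)
        return 1 if arr[..., 0].mean() > arr[..., 2].mean() else 0

    def decompress(bit, size=(256, 256)):
        """The best reconstruction one bit allows: a solid red or blue square."""
        return Image.new("RGB", size, (255, 0, 0) if bit else (0, 0, 255))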
◧◩◪◨⬒⬓⬔⧯▣
241. IncRnd+fR3[view] [source] [discussion] 2023-01-15 20:17:16
>>willia+QJ1
This isn't about what can be copyrighted, but about copyrighted images being used without following the legal requirements.
◧◩◪◨⬒⬓⬔⧯▣
242. SillyU+Ee4[view] [source] [discussion] 2023-01-15 22:30:39
>>astran+L42
Actually that's partly how it works.

A trained model holds relationships between patterns/colours in artwork and their affinity to the other images in the model (ignoring the English tagging of image data within the model for a minute). To this degree, it holds relationships between millions of images and the degree of similarity between them (i.e. affinity weightings of the patterns within them) in a big blob (the model).

When you ask for a dragon by $ARTIST, it will find within its model an area of data with high affinity to a dragon and to $ARTIST. What has been glossed over in the discussion here is that there are millions of other bits of related imagery, with lower affinity, from lots of unrelated artwork, which gives the generated image uniqueness. Because of this, you can never recreate the original image 1:1; it's always diluted by the relationships from the huge mass of other training data. E.g. a colour from a dinosaur exhibit in a museum may be incorporated because it looks like a dragon, along with many other minor traits from millions of other images, chosen at random (and by other seed values).

Another interesting point is that a picture of a smiling dark-haired woman would have high affinity with the Mona Lisa, but when you prompt for the Mona Lisa you may get parts of that picture back, and not the patterns from the Mona Lisa*, even though the result looks the same. That output (not actually the Mona Lisa) is arguably no longer the copyrighted data.

* Nb. this is a contrived example, since in SD the real Mona Lisa weightings will outnumber the individual dark-haired woman's many times over; however, the concept might be (more) applicable to minor artists whose work is not popular enough to form a significantly large weighting in the training data.
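
For what it's worth, that notion of "affinity" can be made concrete with a joint text/image embedding model such as CLIP, whose text encoder is also what conditions Stable Diffusion v1. A minimal sketch assuming the transformers library (generated.png is a hypothetical output image):

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    concepts = ["a dragon", "a dinosaur exhibit in a museum", "the Mona Lisa"]
    inputs = proc(text=concepts, images=Image.open("generated.png"),
                  return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_image.softmax(dim=-1)
    print(dict(zip(concepts, sims[0].tolist())))  # the image's relative affinity to each concept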

◧◩◪◨⬒⬓
243. weknow+Vq4[view] [source] [discussion] 2023-01-16 00:04:33
>>manhol+cA2
It seems like you're using "autistic" as an insult here. If that's not your intention, you might want to edit this comment to use different verbiage.
replies(1): >>manhol+dc5
◧◩◪◨⬒⬓⬔
244. willia+pK4[view] [source] [discussion] 2023-01-16 02:55:41
>>eurlei+Ry2
I'm pretty sure you were downvoted by someone who walks into MoMA and whispers to their partner that a three-year-old could have drawn that...
◧◩
245. gdubs+xL4[view] [source] [discussion] 2023-01-16 03:07:00
>>codefl+57
Models like Midjourney and Stable Diffusion can generate very close renditions of some very well-known paintings, and if a model has enough of a conceptual understanding of even a lesser-known painting, it can do a pretty impressive rendition of that as well.

But it's virtually impossible for these models to make an exact replica, a photocopy, of an existing painting, because that would run up against basic limits of information theory. A model is not a lossless compression engine. Paintings like "Girl with a Pearl Earring" appear so frequently in the datasets that the models tend to overfit on them, which is actually not something you want when designing a model; it tends to create issues. That's why a painting like that can be simulated somewhat accurately, but even then it's never going to be 100%.

◧◩◪◨⬒⬓
246. anothe+xQ4[view] [source] [discussion] 2023-01-16 04:06:18
>>bluefi+RT
It's not really. It's more like making an entire compression scheme that is very good at compressing images encountered in real life, rather than, say, noisy images.
◧◩◪◨⬒⬓
247. anothe+GQ4[view] [source] [discussion] 2023-01-16 04:07:40
>>danari+RI
Stable Diffusion is essentially a compression codec, though: one optimised to compress real-world images and art, using statistics gathered from real-world images and art.

It's like the compression that occurs when I say "Mona Lisa" and you read it and recall many aspects of that painting.

replies(1): >>danari+266
◧◩◪◨⬒
248. smusam+A75[view] [source] [discussion] 2023-01-16 07:00:31
>>neuah+H43
Do you know that the final trained model is only 2 GB? There is no way it can reproduce anything verbatim. There is also Riffusion, which generates music after being trained on spectrograms (FFT images) of songs.
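
A rough calculation puts that in perspective, taking the ~5 billion images of a LAION-5B-scale training set at face value:

    model_bytes = 2e9                        # ~2 GB checkpoint, as stated above
    train_images = 5e9                       # LAION-5B-scale training set
    print(model_bytes * 8 / train_images)    # ~3.2 bits of weights per training image

At roughly 3 bits per image, wholesale verbatim storage is ruled out on average, although heavily duplicated images can still end up memorized.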
◧◩◪◨⬒⬓⬔
249. manhol+dc5[view] [source] [discussion] 2023-01-16 07:47:59
>>weknow+Vq4
What do you mean? Autism is well established as a personality trait that diminishes empathy and the ability to understand other people's desires and emotions, while conferring a strong affinity for things, for example machines and algorithms.

Legislation is driven by people who are, on aggregate, not autistic. So it's entirely appropriate to presume that a person who doesn't understand how that process works is indeed autistic, especially if they suggest machines are subjects of law by analogy with human beings.

It's not that autists are bad people; they are just outliers in the political spectrum, as you can see from the complete disconnect between the upvoted AI-related comments on Hacker News, where autistic engineers are clearly over-represented, and just about any venue where other professionals, such as painters or musicians, congregate. Just try suggesting to them that a corporation has the right to use their work for free and profit from it while leaving them unemployed, because the algorithm the corporation uses to exploit them is in some abstract sense similar to how their brain works. That position is so far out on the spectrum that presuming a personality peculiarity on the part of the speaker is the most charitable interpretation.

◧◩◪◨
250. madaxe+Ne5[view] [source] [discussion] 2023-01-16 08:12:27
>>Xelyne+uS
If an encrypted file for which there is no key is treatable as derivative by law, then so should an MD5 hash be. Both require vast brute force to extract/establish the original data, but both could be said to contain a derived representation of the work in question.
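
To make the comparison concrete, an MD5 digest is a fixed 128 bits regardless of the input size (the file name is hypothetical):

    import hashlib

    digest = hashlib.md5(open("artwork.png", "rb").read()).digest()
    print(len(digest))  # always 16 bytes, however large the input file is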
◧◩◪◨⬒⬓⬔
251. danari+266[view] [source] [discussion] 2023-01-16 15:19:12
>>anothe+GQ4
I will admit to knowing the overall underlying technology better than the details of what specific implementations consist of. My understanding is, though, that "Stable Diffusion" is both a specific refinement (or set of refinements) of the same ML techniques that created DALL-E, Midjourney, and other ML art generators, and the trained model that the group working on it created to go with it.

So while it would be possible to create a "Public Diffusion" that took the Stable Diffusion refinements of the ML techniques and created a model built solely out of public-domain art, as it stands, "Stable Diffusion" includes by definition the model that is built from the copyrighted works in question.

◧◩◪◨⬒⬓⬔⧯
252. Xelyne+myl[view] [source] [discussion] 2023-01-20 22:20:56
>>andyba+Kd1
If I make it sound like a sinister conspiracy or a cash grab, that's because that's what it is, as long as it's a private entity and not a public endeavor.

I don't deny that this might be a worthy pursuit or the right direction to be looking in, or that that's the reason some people are in it. I just question the motivations of a private company valued at $10B, which is going to have a lot more control over the direction of the industry than those passionate individuals.

[go to top]