zlacker

[parent] [thread] 29 comments
1. akjetm+(OP)[view] [source] 2023-01-14 07:36:22
I don't think you have to reproduce an entire original work to demonstrate copyright violation. Think about sampling in hip hop, for example. A 2-second sample, distorted, re-pitched, etc., can be grounds for a copyright violation.
replies(3): >>Salgat+e >>limite+C >>mirekr+Q1
2. Salgat+e[view] [source] 2023-01-14 07:41:38
>>akjetm+(OP)
The difference here is that the images aren't stored, but rather an extremely abstract description of the image was used to very slightly adjust a network of millions of nodes in a tiny direction. No semblance of the original image even remotely exists in the model.
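(To make that concrete: each training image contributes roughly one update like the sketch below, an illustrative DDPM-style step in PyTorch rather than Stability AI's actual training code; after the step the image is discarded and only the slightly nudged weights remain.)

    import torch

    def training_step(model, optimizer, latents, text_emb, alphas_cumprod):
        # Pick a random timestep and random noise for this batch of images.
        t = torch.randint(0, 1000, (latents.shape[0],))
        noise = torch.randn_like(latents)
        a = alphas_cumprod[t].view(-1, 1, 1, 1)
        noisy = a.sqrt() * latents + (1 - a).sqrt() * noise  # forward diffusion
        pred = model(noisy, t, text_emb)                     # model tries to predict the noise
        loss = torch.nn.functional.mse_loss(pred, noise)
        loss.backward()        # gradients spread over ~1e9 weights
        optimizer.step()       # every weight moves a tiny amount (lr ~ 1e-5)
        optimizer.zero_grad()  # the image itself is never written into the model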
replies(4): >>akjetm+E >>visarg+92 >>AlotOf+04 >>Xelyne+HT
3. limite+C[view] [source] 2023-01-14 07:46:57
>>akjetm+(OP)
Perhaps different media have different rules? You can't necessarily apply music sampling rules to text, for example. E.g., I don't think incorporating a phrase from someone else's poem into my poem would be grounds for a copyright violation.
replies(1): >>IncRnd+K5
4. akjetm+E[view] [source] [discussion] 2023-01-14 07:47:55
>>Salgat+e
There are some artists with very strong, recognizable styles. If you provide one of these artists' names in your prompt and get a result back that employs their strong, recognizable style, I think that demonstrates that the network has a latent representation of the artist's work stored inside of it.
replies(5): >>andyba+51 >>eega+J1 >>realus+P1 >>astran+17 >>djbebs+so1
5. andyba+51[view] [source] [discussion] 2023-01-14 07:51:58
>>akjetm+E
I was with you right up until the final sentence.

How did "style" become "work"?

replies(2): >>limite+N1 >>WA+j2
6. eega+J1[view] [source] [discussion] 2023-01-14 08:00:09
>>akjetm+E
So, what you are saying is that it is illegal to paint in the style of another artist? I'm no lawyer, but I'm pretty sure that is completely legal as long as you don't claim your paintings ARE from the other artist.
7. limite+N1[view] [source] [discussion] 2023-01-14 08:00:55
>>andyba+51
That’s the key question of the lawsuit IMO!
8. realus+P1[view] [source] [discussion] 2023-01-14 08:01:37
>>akjetm+E
No, it doesn't. That demonstrates that the model has abstract features and characteristics of this artist stored in it, not their work.

You can't bring back the training images no matter how hard you try.

9. mirekr+Q1[view] [source] 2023-01-14 08:02:10
>>akjetm+(OP)
Can it? I thought you could copy up to 15 seconds verbatim without violation.
10. visarg+92[view] [source] [discussion] 2023-01-14 08:04:53
>>Salgat+e
Not to mention that it works by inverting noise. Different noise, different result. Let's recognise the important contribution of noise here.
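(A small illustration of that, assuming the Hugging Face diffusers library and the public runwayml/stable-diffusion-v1-5 weights; the prompt and file names are made up. Same prompt, two seeds, two different images:)

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    prompt = "a lighthouse at dusk, oil painting"

    for seed in (1, 2):
        g = torch.Generator().manual_seed(seed)      # the "noise" being inverted
        image = pipe(prompt, generator=g).images[0]
        image.save(f"lighthouse_seed{seed}.png")     # the two outputs differ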
11. WA+j2[view] [source] [discussion] 2023-01-14 08:07:05
>>andyba+51
Because in some cases, adding a style prompt gives almost the original image: https://www.reddit.com/r/StableDiffusion/comments/wby0ob/it_...
replies(1): >>andyba+Y7
12. AlotOf+04[view] [source] [discussion] 2023-01-14 08:22:47
>>Salgat+e
This is very much a 'color of your bits' topic, but I'm not sure why the internal representation matters. It's pretty trivial to recreate famous works like the Mona Lisa or Starry Night or Monet's Water Lily Pond. Obviously some representation of the originals exists inside the model+prompt. Why wouldn't that apply to other images in the training sets?
replies(5): >>XorNot+U5 >>Kim_Br+Jn >>Fillig+3q >>huggin+HK >>derang+wf1
13. IncRnd+K5[view] [source] [discussion] 2023-01-14 08:40:49
>>limite+C
"Copyright currently protects poetry just like it protects any other kind of writing or work of authorship. Poetry, therefore, is subject to the same minimal standards for originality that are used for other written works, and the same tests determine whether copyright infringement has occurred." [1]

[1] https://scholarship.law.vanderbilt.edu/vlr/vol58/iss3/13/

14. XorNot+U5[view] [source] [discussion] 2023-01-14 08:42:38
>>AlotOf+04
Because you're silently invoking additional data (the prompt + noise seed) that is not present in the trained weights. You have to supply the prompt + noise seed to get any given output.

An MPEG codec doesn't contain every movie in the world just because it could represent them if given the right file.

The white light coming off a blank canvas also doesn't contain a copy of the Mona Lisa which will be revealed once someone obscures some of the light.

replies(1): >>ifdefd+nh
15. astran+17[view] [source] [discussion] 2023-01-14 08:54:49
>>akjetm+E
Or it means their style is so easy to recognize that you can see it even when it doesn't exist.

The most common example of this (Greg Rutkowski) is not in Stable Diffusion's training set.

16. andyba+Y7[view] [source] [discussion] 2023-01-14 09:04:18
>>WA+j2
And yet nobody has managed to demonstrate reconstruction of a large enough section of a work that is still under copyright to prove the point.

The only things discovered so far are either a) older public-domain works nearly fully reproduced, b) small fragments of newer works, or c) "likenesses".

17. ifdefd+nh[view] [source] [discussion] 2023-01-14 10:57:19
>>XorNot+U5
OK so let me encrypt a movie and distribute that. Then you tell people they need to invoke additional data to watch the movie. Also give some hints (try the movie title lol).
replies(1): >>XorNot+7y
18. Kim_Br+Jn[view] [source] [discussion] 2023-01-14 12:04:14
>>AlotOf+04
Longer term, by analogy, it will then of course turn into a "what color is your neural net" topic.

Which runs into some very interesting historical precedents.

((I wonder if there's a split between people who think AI emancipation might happen this century versus people who think that such a thing is silly to contemplate))

19. Fillig+3q[view] [source] [discussion] 2023-01-14 12:28:11
>>AlotOf+04
It applies to these specific images because there were thousands and thousands of copies in the training set. That’s not true for newer works.
replies(1): >>zowie_+sN
20. XorNot+7y[view] [source] [discussion] 2023-01-14 13:40:21
>>ifdefd+nh
If you distribute a random byte stream, and someone uses that as a one-time pad to encrypt a movie, then are you distributing the movie?

The answer is of course not, and the same principle applies if someone uses Stable Diffusion to find a latent-space encoding for a copyrighted image (the 231 byte number - had to go double check what the grid size actually is).
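
(The pad analogy in code, a minimal sketch: the thing being distributed is statistically independent of the movie, and only the extra data, the ciphertext, lets anyone get the movie back.)

    import secrets

    movie = b"...the copyrighted bytes..."    # stand-in for the real content
    pad = secrets.token_bytes(len(movie))     # what you distribute: pure randomness
    ciphertext = bytes(m ^ p for m, p in zip(movie, pad))  # made by whoever has the movie

    recovered = bytes(c ^ p for c, p in zip(ciphertext, pad))
    assert recovered == movie                 # needs both pieces; the pad alone says nothing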

replies(1): >>ifdefd+u21
21. huggin+HK[view] [source] [discussion] 2023-01-14 15:31:42
>>AlotOf+04
>It's pretty trivial to recreate famous works like the Mona Lisa or Starry Night or Monet's Water Lily Pond.

A recreation of a piece of art is not the same as a copy; I've personally seen hundreds of recreations of Edvard Munch's 'The Scream', all of them perfectly legal.

Even in a massively overtrained model, it is practically impossible to create a 1:1 copy of a piece of art the model was trained upon.

And of course that would be a pointless exercise to begin with: why would anyone want to generate 1:1 copies (or anything near that) of existing images?

The whole 'magic' of Stable Diffusion is that you can create new works of art in the combined styles of art, photography etc that it has been trained on.

replies(1): >>AlotOf+q91
22. zowie_+sN[view] [source] [discussion] 2023-01-14 15:51:42
>>Fillig+3q
That's not true. As an example of a more recent copyright-protected work that Stability AI consistently reproduces fairly faithfully, I invite you to try out the prompt "bloodborne box art".
23. Xelyne+HT[view] [source] [discussion] 2023-01-14 16:41:23
>>Salgat+e
> No semblance of the original image even remotely exists in the model

What does this mean? It doesn't mean you can't recreate the original, because that's been done. It doesn't mean that literally the bits for the image aren't present in the encoded data, because that's true for any compression algorithm.

replies(1): >>smusam+sY
24. smusam+sY[view] [source] [discussion] 2023-01-14 17:14:35
>>Xelyne+HT
Do you have any examples of recreating an image with these models? Something other than the Mona Lisa or other famous artworks, because they have caused overfitting.
25. ifdefd+u21[view] [source] [discussion] 2023-01-14 17:45:36
>>XorNot+7y
I think it boils down to one question: can you prompt the model to show mostly unchanged pictures from artists? If so, then it's definitely problematic. If not, then I don't have enough knowledge of the topic to give a strong opinion. (My previous answer was just a use case that fits your argument.)
replies(1): >>XorNot+NA1
26. AlotOf+q91[view] [source] [discussion] 2023-01-14 18:25:42
>>huggin+HK
A work doesn't have to be identical to be considered a derivative work, which is why we also don't consider every JPEG a newly copyrighted image distinct from the source material.

As an example of a plausible scenario where copyright might actually be violated, consider this: an NGO wants images on their website. They type in something like 'afghan girl' or 'struggling child' and unknowingly use the recreations of the famous photographs they get.

27. derang+wf1[view] [source] [discussion] 2023-01-14 19:01:10
>>AlotOf+04
It's not quite one-to-one. Copyright law isn't as arbitrary as it might seem, in my experience. Also, there's a conflation of two things here: whether the model itself is in violation of copyright and whether the works generated by it are.

The “color of your bits” only applies to the process of creating a work. Stable Diffusion’s training of the algorithm could be seen as violating copyright but that doesn’t spread to the works generated by it.

In the same vein, one can claim copyright on an image generated by stable diffusion even if the creation of the algorithm is safe from copyright violation.

“some representation of the originals exist inside the model+prompt” is also not sufficient for the model to be in violation of copyright of any one art piece. Some latent representation of the concept of an art piece or style isn’t enough.

It's also important to note the distinction that no training data is stored in its original form as part of the model; during training it's simply used to tweak a function whose purpose is translating text to images. Some could say that's like using the color from a picture of a car on the internet. Some might say it's worse, but it's all subjective unless the opposition can draw new ties between the actual technical process and existing precedent.

28. djbebs+so1[view] [source] [discussion] 2023-01-14 19:50:57
>>akjetm+E
That seems to indicate to me that the original work is actually not under copyright, since if it is the only method of achieving such an image in such a style, then there is no originality to be copyrighted.
29. XorNot+NA1[view] [source] [discussion] 2023-01-14 21:19:40
>>ifdefd+u21
I mean no, it doesn't. It's like drawing something in Photoshop that is a copyrighted work: the act of creating it is the violation; it doesn't prove that Photoshop contains the content directly.

The way SD model weights work, if you managed to prompt engineer a recreation of one specific work, it would only have been generated as a product of all the information in the entire training set + noise seed + the prompt. And the prompt wouldn't look anything like a reasonable description of any specific work.

Which is to say, it means nothing, because you can equally generate a likeness of works that are known not to be included in the training set (easily: you ask for a latent encoding of the image and it gives you one), making it equivalent to a JPEG codec.
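
(For example, here's a sketch of that round trip using diffusers' AutoencoderKL with the public SD 1.5 VAE weights; the input file is hypothetical and deliberately an image the model never trained on. The latent is just a compressed representation of data you supplied, much like a JPEG.)

    import torch
    from diffusers import AutoencoderKL
    from diffusers.utils import load_image
    from torchvision import transforms

    vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

    img = load_image("my_unpublished_photo.png").resize((512, 512))  # not in any training set
    x = transforms.ToTensor()(img).unsqueeze(0) * 2 - 1              # scale to [-1, 1]

    with torch.no_grad():
        z = vae.encode(x).latent_dist.sample()   # a 4x64x64 latent "file" for this image
        recon = vae.decode(z).sample             # decoded back to a close likeness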

replies(1): >>ifdefd+nZ1
30. ifdefd+nZ1[view] [source] [discussion] 2023-01-15 01:15:41
>>XorNot+NA1
> And the prompt wouldn't look anything like a reasonable description of any specific work.

I think this is the most relevant line of your argument. Because if you could just ask it like "show me the latest picture of [artist]" then you'll have a hard time convincing me that this is fundamentally different from a database with a fancy query language and lots of copyrighted work in it.
