zlacker

[parent] [thread] 31 comments
1. anothe+(OP)[view] [source] 2023-01-14 08:08:50
Great. Now the defence shows an artist that can recreate an image. Cool, now people who look at images get copyright suits filed against them for encoding those images in their heads.
replies(2): >>dylan6+R >>smusam+IV
2. dylan6+R[view] [source] 2023-01-14 08:17:09
>>anothe+(OP)
Just because I look at an image does not mean that I can recreate it. Storing it in the training data means the AI can recreate it.

There's a world of difference that you are just writing off.

replies(3): >>realus+i1 >>XorNot+03 >>turtle+dQ
◧◩
3. realus+i1[view] [source] [discussion] 2023-01-14 08:20:48
>>dylan6+R
> storing it in the training data means the AI can recreate it.

No it doesn't, it means that abstract facts related to this image might be stored.

replies(2): >>dylan6+14 >>bluefi+JP
◧◩
4. XorNot+03[view] [source] [discussion] 2023-01-14 08:38:28
>>dylan6+R
No, it means there is a 512-bit number you can combine with the training data to reproduce a reasonable though not exact likeness (attempts to use SD and others as compression algorithms show they're pretty bad at it, because while they can get "similar" they'll outright confabulate details in a plausible-looking way - i.e. redrawing the streets of San Francisco in images of the Golden Gate Bridge).

Which of course then arrives at the problem: the original data plainly isn't stored in a byte-exact form, and you can only recover it by providing an astoundingly specific input string (the 512-bit latent space vector). But that's not data which is contained within Stable Diffusion. It's equivalent to trying to sue a compression codec because a specific archive contains a copyrighted image.
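A toy version of that codec point, in Python with zlib and a made-up stand-in for the work (purely illustrative, not how SD stores anything):

    import zlib

    # Hypothetical stand-in for a copyrighted work.
    original = b"pixels of a copyrighted image" * 1000

    # The "archive": a specific input that, fed back to the codec, reproduces the work.
    archive = zlib.compress(original)

    # The codec (the zlib library itself) contains nothing about this work;
    # only codec + this particular archive recovers it.
    assert zlib.decompress(archive) == original

In the analogy, the latent vector plays the role of the archive and the trained model plays the role of the codec, with the added wrinkle that the "decompression" is lossy and confabulates details.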

replies(2): >>yazadd+W3 >>danari+JE
◧◩◪
5. yazadd+W3[view] [source] [discussion] 2023-01-14 08:48:57
>>XorNot+03
> It's equivalent to trying to sue a compression codec because a specific archive contains a copyrighted image.

This is the most salient point in this whole HN thread!

You can’t sue Stable Diffusion or the creators of it! That just seems silly.

But (I don’t know, I’m not a lawyer) there might be an argument to sue an instance of Stable Diffusion and the creators of it.

I haven’t picked a side of this debate yet, but it has already become a fun one to watch.

replies(2): >>astran+m4 >>techdr+ir
◧◩◪
6. dylan6+14[view] [source] [discussion] 2023-01-14 08:49:50
>>realus+i1
The pedantry gets tiring. If the AI can't recreate it exactly, it can recreate a likeness that is compelling enough that the average person would think it was the same. If it can't now, it will as it gets better. That's the point of using the training data.
replies(3): >>astran+j4 >>realus+G4 >>hhjink+1e
◧◩◪◨
7. astran+j4[view] [source] [discussion] 2023-01-14 08:52:36
>>dylan6+14
That is not the point of using the training data. It's specifically trained to not do that.

See https://openai.com/blog/dall-e-2-pre-training-mitigations/ "Preventing Image Regurgitation".
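The gist of that mitigation is deduplicating the training set so near-identical copies of an image don't appear often enough to be memorized. A simplified sketch of the idea (hypothetical embedding matrix; the post describes clustering first so it never does an all-pairs comparison):

    import numpy as np

    def dedupe(embeddings: np.ndarray, threshold: float = 0.95) -> list[int]:
        # Greedy near-duplicate removal: keep an image only if its cosine
        # similarity to everything already kept stays below the threshold.
        normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        kept: list[int] = []
        for i, v in enumerate(normed):
            if not kept or float(np.max(normed[kept] @ v)) < threshold:
                kept.append(i)
        return kept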

replies(1): >>ghaff+PV
◧◩◪◨
8. astran+m4[view] [source] [discussion] 2023-01-14 08:53:33
>>yazadd+W3
You can't (successfully) sue the creators of Stable Diffusion because they're an academic group in Germany, a country that has an explicit allowance in copyright law for training non-commercial models.
◧◩◪◨
9. realus+G4[view] [source] [discussion] 2023-01-14 08:56:40
>>dylan6+14
> If the AI can't recreate it exactly, it can recreate a likeness that is compelling enough that the average person would think it was the same

That's the opposite of this image model's goal. Sure, you might find other types of research models which are meant to do that, but that's not Stable Diffusion and the like.

◧◩◪◨
10. hhjink+1e[view] [source] [discussion] 2023-01-14 10:45:31
>>dylan6+14
Why does this argument apply to an Artificial Intelligence, but not a human one? A human is not breaking copyright just by being able to recreate a copyrighted work they've studied.
replies(1): >>ghaff+8S
◧◩◪◨
11. techdr+ir[view] [source] [discussion] 2023-01-14 13:00:44
>>yazadd+W3
Exactly. The quarrel here is between the users of Stable Diffusion, some of whom are deliberately, legally speaking with intent (prompt crafting to get a specific output demonstrates clear intent), trying to use Stable Diffusion to produce images that are highly derivative of, and may or may not be declared legally infringing works of, another artist, and the artists whose works are potentially being infringed upon.

You can’t sue Canon for helping a user take better infringing copies of a painting, nor can you sue Apple or Nikon or Sony or Samsung… you can sue the user making an infringing image, not the tools they used to make the infringing image… the tools have no mens rea.

◧◩◪
12. danari+JE[view] [source] [discussion] 2023-01-14 15:04:51
>>XorNot+03
> It's equivalent to trying to sue a compression codec because a specific archive contains a copyrighted image.

That's plainly untrue, as Stable Diffusion is not just the algorithm, but the trained model—trained on millions of copyrighted images.

replies(2): >>yazadd+RW >>anothe+yM4
◧◩◪
13. bluefi+JP[view] [source] [discussion] 2023-01-14 16:30:07
>>realus+i1
This just sounds like really fancy, really lossy compression to me.

Compression that returns something different from the original most of the time, but still could return the original.

replies(1): >>anothe+pM4
◧◩
14. turtle+dQ[view] [source] [discussion] 2023-01-14 16:33:57
>>dylan6+R
If you spent a decade trying to draw it, wouldn't your brain have the right "weights" to execute it pretty exactly going forward?

Except with computers, they don't need to eat or sleep, converse or attend stand-ups.

And once you're able to draw that one picture, you could probably draw similar ones. Your own style may emerge too.

Just thinking. Copywriters, students, and scribes used to copy stuff verbatim, sometimes just to "learn" it.

The product of that study could be published works, a synthesis of ideas from elsewhere, and so on. We would say it belonged to the executor, though.

So the AI learned, and what it has created belongs to it. Maybe.

Or, once we acknowledge AI can "see" images, precedent opens the way to citizenship (humanship?)

◧◩◪◨⬒
15. ghaff+8S[view] [source] [discussion] 2023-01-14 16:49:00
>>hhjink+1e
It depends on the degree to which it's literal copying. See e.g. the Obama "Hope" poster. [1] Though that case is muddied by the fact that the artist lied about the source of his inspiration. Had it in fact been an older photo of JFK in a similar pose, there probably wouldn't have been a controversy.

[1] https://en.wikipedia.org/wiki/Barack_Obama_%22Hope%22_poster

16. smusam+IV[view] [source] 2023-01-14 17:12:42
>>anothe+(OP)
Don't think Stable Diffusion can reproduce any single image it's trained on, no matter what prompts you use.

It does have the Mona Lisa because of overfitting. But that's because there is so much Mona Lisa on the internet.

These artists taking part in the suit won't be able to recreate any of their work.

replies(2): >>Aerroo+ll1 >>neuah+z03
◧◩◪◨⬒
17. ghaff+PV[view] [source] [discussion] 2023-01-14 17:13:37
>>astran+j4
That's probably a very relevant point. (I'm guessing.) If I ask for an image of a red dragon in the style of $ARTIST, and the algorithm goes off and says "Oh, I've got the perfect one already in my data"--or even "I've got a few like that, I'll just paste them together"--that's a problem.
replies(1): >>astran+D02
◧◩◪◨
18. yazadd+RW[view] [source] [discussion] 2023-01-14 17:20:38
>>danari+JE
But in fairness, even a human could know how to violate copyright but cannot be sued until they do violate it.

SD might know how to violate copyright but is that enough to sue it? Or can you only sue violations it helps create?

replies(1): >>danari+U11
◧◩◪◨⬒
19. danari+U11[view] [source] [discussion] 2023-01-14 17:57:20
>>yazadd+RW
I would assert (with no legal backing, since this is the first suit that actually attempts to address the issue either way) that the trained model is a copyright infringement in itself. It is a novel kind of copyright infringement, to be sure, but I believe that use of copyrighted material in a neural net's training set without the creator's permission should be considered copyright infringement without any further act required to make it so.
replies(1): >>yazadd+fH1
◧◩
20. Aerroo+ll1[view] [source] [discussion] 2023-01-14 19:47:33
>>smusam+IV
I think there's a chance they might be able to recreate some simpler work if they make the prompts specific enough. When you set up a prompt you're essentially telling the system what you want it to generate - if you prompt it with enough specificity you might be able to just recreate the image you had.

Kind of like recreating your image one object at a time. It might not be exact, but close enough.

replies(2): >>smusam+ID1 >>rlt+du2
◧◩◪
21. smusam+ID1[view] [source] [discussion] 2023-01-14 22:05:25
>>Aerroo+ll1
People have tried; unless the thing you want to recreate has been seen by it a lot (overtrained) you won't get the same image. You don't have that much fine-grained control via text only.

Best you can do is to mask and keep inpainting the area that looks different until it doesn't.
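For reference, that mask-and-inpaint loop looks roughly like this with the diffusers library (model id, file names, and prompt are placeholders; a sketch of the workflow, not a recipe):

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    # Current best attempt plus a mask over the region that still looks wrong
    # (white = repaint, black = keep). Repeat with new masks until satisfied.
    image = Image.open("attempt.png").convert("RGB")
    mask = Image.open("mask.png").convert("L")
    result = pipe(prompt="the detail being chased", image=image, mask_image=mask).images[0]
    result.save("attempt.png")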

◧◩◪◨⬒⬓
22. yazadd+fH1[view] [source] [discussion] 2023-01-14 22:36:31
>>danari+U11
I think that is a very fair argument. It may win in court, it may lose. I’m excited for the precedent either way.

That said, it does raise the question, “should this precedent be extended to humans?”

i.e. Can humans be taught something based on copyrighted materials in the training set/curriculum?

replies(1): >>danari+JQ1
◧◩◪◨⬒⬓⬔
23. danari+JQ1[view] [source] [discussion] 2023-01-15 00:08:52
>>yazadd+fH1
I think this is a reasonable question for the uninitiated—those for whom "training a neural network" seems like it would be a lot like "teaching a human"—but for those with deeper understanding (tbh, I would only describe my knowledge in both these areas as that of an interested amateur), it is a) a poor analogy, and b) already a settled question in law.

To address (b) first: Fair Use has long held that educational purposes are a valid reason for using copyrighted materials without express permission—for instance, showing a whole class a VHS or DVD, which would technically require a separate release otherwise.

For (a): I don't know anything about your background in ML, so pardon if this is all obvious, but at least current neural nets and other ML programs are not "AI" in anything like the kind of sense where "teaching" is an apt word to describe the process of creating the model. Certainly the reasoning behind the Fair Use exception for educating humans does not apply—there is no mind there to better; no person to improve the life, understanding, or skills of.

◧◩◪◨⬒⬓
24. astran+D02[view] [source] [discussion] 2023-01-15 01:57:58
>>ghaff+PV
That's extremely not how it works. If there's only one training example it's not going to remember anything like actual visual details of it.
replies(2): >>ghaff+r43 >>SillyU+wa4
◧◩◪
25. rlt+du2[view] [source] [discussion] 2023-01-15 08:39:51
>>Aerroo+ll1
> if you prompt it with enough specificity you might be able to just recreate the image you had

At some point the input must be considered part of the work. At the limit you could just describe every pixel, but that certainly wouldn’t mean the model contained the work.

◧◩
26. neuah+z03[view] [source] [discussion] 2023-01-15 14:57:33
>>smusam+IV
Does SD have to recreate the entire image for it to violate copyright?

As a thought experiment, imagine a variant of something like SD was used for music generation rather than images. It was trained on all the music on Spotify and it is marketed as a paid tool for producers and artists. If the model reproduces specific sounds from certain songs, e.g. the specific beat from a song, a hook, or a melody, it would seem pretty straightforward that the generated content was derivative, even though only a feature of it was precisely reproduced. I could be wrong but as far as I am aware you need to get permission to use samples. Even if the content is not published, those sounds are being sold by the company as inspiration, and therefore that should violate copyright. The training data is paramount because if you trained the model on stuff you generated yourself or on stuff with an appropriate CC license, the resulting work would not violate copyright, or you could at least argue independent creation.

In the feature space of images and art, SD is doing something very similar, so I can see the argument that it violates copyright even without reproducing the whole training data.

Overall, I think we will ultimately need to decide how we want these technologies used, what restrictions should be on the training data, etc., and then create new laws specifically for the new technology, rather than trying to shoehorn it into existing copyright law.

replies(1): >>smusam+s35
◧◩◪◨⬒⬓⬔
27. ghaff+r43[view] [source] [discussion] 2023-01-15 15:32:14
>>astran+D02
I realize that's not how it works. My point was that they're apparently taking deliberate steps to try to make sure the model trains over a large number of images and doesn't overfit on a small sample given a sufficiently specific "in the style of," etc.
◧◩◪◨⬒⬓⬔
28. SillyU+wa4[view] [source] [discussion] 2023-01-15 22:30:39
>>astran+D02
Actually that's partly how it works.

A trained model holds relationships between patterns/colours in artwork and their affinity to the other images in the model (ignoring the English tagging of image data within this model for a minute). To this degree, it holds relationships between millions of images and the degrees of similarity between them (i.e. affinity weighting of the patterns within them) in a big blob (the model).

When you ask for a dragon by $ARTIST it will find within its model an area of data with high affinity to a dragon and that of $ARTIST. What has been glossed over in discussion here is that there are millions of other bits of related images - that have lower affinity - from lots of unrelated artwork, which give the generated image uniqueness. Because of this, you can never recreate 1:1 the original image; it's always diluted by the relationships from the huge mass of other training data, e.g. a colour from a dinosaur exhibit in a museum may also be incorporated as it looks like a dragon, along with many other minor traits from millions of other images, chosen at random (and other seed values).

Another interesting point is that a picture of a smiling dark haired woman would have high affinity with Mona Lisa, but when you prompt for Mona Lisa you may get parts of that back and not the patterns from the Mona Lisa*, even though it looks the same. That arguably (not getting Mona Lisa) is no longer the copyrighted data.

* Nb. this is a contrived example, since in SD the real Mona Lisa weightings will outnumber the individual dark-haired woman's many times; however, this concept might be (more) appropriate for minor artists whose work is not popular enough to form a significantly large amount of weighting in the training data.
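One way to get a feel for that "affinity" idea is to compare CLIP-style text embeddings (SD's prompt conditioning comes from a CLIP text encoder; the small model and toy prompts here are just for illustration, not SD's actual internals):

    import torch
    from transformers import CLIPModel, CLIPTokenizer

    model_id = "openai/clip-vit-base-patch32"
    model = CLIPModel.from_pretrained(model_id)
    tokenizer = CLIPTokenizer.from_pretrained(model_id)

    prompts = ["a dragon", "a dinosaur skeleton in a museum", "a bowl of soup"]
    inputs = tokenizer(prompts, padding=True, return_tensors="pt")
    with torch.no_grad():
        emb = model.get_text_features(**inputs)
    emb = emb / emb.norm(dim=-1, keepdim=True)
    print(emb @ emb.T)  # "dragon" vs "dinosaur" should score higher than "dragon" vs "soup"

This only shows the similarity structure of the conditioning space, not the model's internal weights, but it is the mechanism by which unrelated-looking training images can still pull on a generation.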

◧◩◪◨
29. anothe+pM4[view] [source] [discussion] 2023-01-16 04:06:18
>>bluefi+JP
It's not really. It's more like making an entire compression scheme that is very good at compressing images encountered in real life, rather than say, noisy images.
◧◩◪◨
30. anothe+yM4[view] [source] [discussion] 2023-01-16 04:07:40
>>danari+JE
Stable Diffusion is essentially a Compression Codec though. It's one optimised to compress real world images and art, by using statistics gathered from real world images and art.

It's like the compression that occurs when I say "Mona Lisa" and you read it, and can know many aspects of that painting.

replies(1): >>danari+U16
◧◩◪
31. smusam+s35[view] [source] [discussion] 2023-01-16 07:00:31
>>neuah+z03
Do you know that the final trained model is only 2GB? There is no way it can reproduce anything verbatim. There is also Riffusion, which can generate music after being trained on spectrograms of music.
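Back-of-envelope on that, taking the parent's ~2 GB figure and the roughly 2.3 billion images of LAION-2B(en) the model was trained on (both figures approximate):

    model_bytes = 2e9          # ~2 GB checkpoint, per the parent comment
    training_images = 2.3e9    # LAION-2B(en), roughly
    print(model_bytes / training_images)  # ~0.9 bytes of model per training image

Even a tiny thumbnail is thousands of bytes, so verbatim storage of typical training images is ruled out; only heavily duplicated images (the Mona Lisa case above) get enough repetition to be memorized.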
◧◩◪◨⬒
32. danari+U16[view] [source] [discussion] 2023-01-16 15:19:12
>>anothe+yM4
I will admit to knowing the overall underlying technology better than the details of what specific implementations consist of. My understanding is, though, that "Stable Diffusion" is both a specific refinement (or set of refinements) of the same ML techniques that created DALL-E, Midjourney, and other ML art generators, and the trained model that the group working on it created to go with it.

So while it would be possible to create a "Public Diffusion" that took the Stable Diffusion refinements of the ML techniques and created a model built solely out of public-domain art, as it stands, "Stable Diffusion" includes by definition the model that is built from the copyrighted works in question.
