zlacker

[parent] [thread] 6 comments
1. danari+(OP)[view] [source] 2023-01-14 15:04:51
> It's equivalent to trying to sue a compression codec because a specific archive contains a copyrighted image.

That's plainly untrue, as Stable Diffusion is not just the algorithm, but the trained model—trained on millions of copyrighted images.

replies(2): >>yazadd+8i >>anothe+P74
2. yazadd+8i[view] [source] 2023-01-14 17:20:38
>>danari+(OP)
But in fairness, even a human who knows how to violate copyright can't be sued until they actually do it.

SD might know how to violate copyright, but is that enough to sue over? Or can you only sue over the violations it helps create?

replies(1): >>danari+bn
◧◩
3. danari+bn[view] [source] [discussion] 2023-01-14 17:57:20
>>yazadd+8i
I would assert (with no legal backing, since this is the first suit that actually attempts to address the issue either way) that the trained model is a copyright infringement in itself. It is a novel kind of copyright infringement, to be sure, but I believe that use of copyrighted material in a neural net's training set without the creator's permission should be considered copyright infringement without any further act required to make it so.
replies(1): >>yazadd+w21
◧◩◪
4. yazadd+w21[view] [source] [discussion] 2023-01-14 22:36:31
>>danari+bn
I think that is a very fair argument. It may win in court, or it may lose. I'm excited for the precedent either way.

That said, it does raise the question: "should this precedent be extended to humans?"

i.e. Can humans be taught something based on copyrighted materials in the training set/curriculum?

replies(1): >>danari+0c1
◧◩◪◨
5. danari+0c1[view] [source] [discussion] 2023-01-15 00:08:52
>>yazadd+w21
I think this is a reasonable question for the uninitiated—those for whom "training a neural network" seems like it would be a lot like "teaching a human"—but for those with deeper understanding (tbh, I would only describe my knowledge in both these areas as that of an interested amateur), it is a) a poor analogy, and b) already a settled question in law.

To address (b) first: Fair Use has long held that educational purposes are a valid reason for using copyrighted materials without express permission—for instance, showing a whole class a VHS or DVD, which would technically require a separate release otherwise.

For (a): I don't know anything about your background in ML, so pardon if this is all obvious, but at least current neural nets and other ML programs are not "AI" in anything like the kind of sense where "teaching" is an apt word to describe the process of creating the model. Certainly the reasoning behind the Fair Use exception for educating humans does not apply—there is no mind there to better; no person to improve the life, understanding, or skills of.

6. anothe+P74[view] [source] 2023-01-16 04:07:40
>>danari+(OP)
Stable Diffusion is essentially a compression codec, though: one optimised to compress real-world images and art, using statistics gathered from real-world images and art.

It's like the compression that occurs when I say "Mona Lisa" and you read it, and can know many aspects of that painting.
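
The degree of "compression" in that analogy can be made concrete: a short prompt is a handful of bytes standing in for megabytes of pixels. A rough sketch (the image dimensions are an illustrative assumption, not a measurement of any real file):

```python
# Rough illustration of the "prompt as lossy compression" analogy.
# The image size below is an assumed example, not a real measurement.
prompt = "Mona Lisa"
prompt_bytes = len(prompt.encode("utf-8"))   # 9 bytes of text
image_bytes = 1024 * 768 * 3                 # ~2.3 MB of uncompressed RGB pixels
ratio = image_bytes / prompt_bytes

print(f"{prompt_bytes} bytes of prompt vs {image_bytes} bytes of pixels "
      f"(~{ratio:,.0f}x)")
```

The "decompression" is wildly lossy, of course: the reader (or the model) reconstructs the painting only because the statistics of what "Mona Lisa" means were learned beforehand.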

replies(1): >>danari+bn5
◧◩
7. danari+bn5[view] [source] [discussion] 2023-01-16 15:19:12
>>anothe+P74
I will admit to knowing the overall underlying technology better than the details of what specific implementations consist of. My understanding is, though, that "Stable Diffusion" is both a specific refinement (or set of refinements) of the same ML techniques that created DALL-E, Midjourney, and other ML art generators, and the trained model that the group working on it created to go with it.

So while it would be possible to create a "Public Diffusion" that took the Stable Diffusion refinements of the ML techniques and created a model built solely out of public-domain art, as it stands, "Stable Diffusion" includes by definition the model that is built from the copyrighted works in question.
