zlacker

[return to "We've filed a lawsuit challenging Stable Diffusion"]
1. supriy+c3[view] [source] 2023-01-14 07:30:50
>>zacwes+(OP)
Sometimes I have to wonder about the hypocrisy you can see on HN threads. When it's software development, many here seem to understand the merits of a similar lawsuit against Copilot [1], but as soon as it's a different group, such as artists, then it's "no, that's not how an NN works" or "the NN model understands art and style just the same way a human would."

[1] https://news.ycombinator.com/item?id=34274326

2. TheMid+w4[view] [source] 2023-01-14 07:49:18
>>supriy+c3
I believe Copilot was giving exact copies of large parts of open source projects, without the license. Are image generators giving exact (or very similar) copies of existing works?

I feel like this is the main distinction.

3. rivers+T5[view] [source] 2023-01-14 08:06:53
>>TheMid+w4
> Are image generators giving exact (or very similar) copies of existing works?

um, yes.[1][2] What else would they be trained on?

According to the model card [1], it was trained on this data set [2], which has hyperlinks to images, so feel free to peruse:

[1] https://github.com/CompVis/stable-diffusion/blob/main/Stable...

[2] https://huggingface.co/datasets/laion/laion2B-en
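
If you'd rather peruse it programmatically, here's a minimal sketch using the Hugging Face datasets library in streaming mode. The "train" split and the URL/TEXT column names are assumptions taken from the dataset card, so double-check them:

    import itertools
    from datasets import load_dataset

    # Stream the LAION-2B-en metadata instead of downloading billions of rows.
    # The dataset stores image URLs plus captions, not the images themselves.
    ds = load_dataset("laion/laion2B-en", split="train", streaming=True)

    # Print a handful of URL/caption pairs (column names assumed, not verified).
    for row in itertools.islice(ds, 5):
        print(row["URL"], "->", row["TEXT"])

Streaming keeps this to a few rows of metadata rather than the full parquet dump, which is far too large to pull just to have a look.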

4. chii+M6[view] [source] 2023-01-14 08:14:32
>>rivers+T5
> What else would they be trained on?

Why does it matter how it was trained? The question is: does the generative AI _output_ copyrighted images?

Training is not a right that the copyright holder owns exclusively. Reproducing the works _is_, but if the AI only reproduces a style and not a copy, then it isn't infringing any copyright.
