zlacker

[return to "Who knew the first AI battles would be fought by artists?"]
1. cardan+G3[view] [source] 2022-12-15 12:15:07
>>dredmo+(OP)
I don't see the point. There is copyright (and in that regard most of these images are fine), and then there is trademark, which they might violate.

Regardless, the human generating and publishing these images is obviously responsible for ensuring they are not violating anyone's IP. So they might get sued by Disney. I don't get why the AI companies would be affected in any way. Disney is not suing Blender if I render an image of Mickey Mouse with it.

Though I am sure that artists might find an unlikely ally in Disney against the "AIs" when they tell them about their idea of making art styles copyrightable. Being able to monopolize art styles would indeed be a dream come true for those huge corporations.

◧◩
2. xg15+P7[view] [source] 2022-12-15 12:39:56
>>cardan+G3
If those mouse images are generated, that implies that Disney content is already part of the training data and models.

So in effect, they are pitting Disney's understanding of copyright (maximally strict) against that of the AI companies (maximally loose).

Even if it's technically the responsibility of the user not to publish generated images that contain copyrighted content, I can't imagine that Disney is very happy with a situation where everyone can download Stable Diffusion and generate their own arbitrary artwork of Disney characters in a few minutes.

So that strategy might actually work. I wish them good luck and will restock my popcorn reserves just in case :)

The problem I see though is that both sides are billion dollar companies - and there is probably a lot of interest in AI tech within Disney themselves. So it might just as well happen that both sides find some kind of agreement that's beneficial for both of them and leaves the artists holding the bag.

◧◩◪
3. astran+69[view] [source] 2022-12-15 12:46:18
>>xg15+P7
> If those mouse images are generated, that implies that Disney content is already part of the training data and models.

It doesn't mean that. You could "find" Mickey in the latent space of any model using textual inversion and an hour of GPU time. He's just a few shapes.

(Case in point: the most popular artist StableDiffusion 1 users like to imitate is not in the StableDiffusion training images. His name just happens to work in prompts by coincidence.)
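
If you're curious what "textual inversion" means in practice, here is a rough sketch using the Hugging Face diffusers library: everything stays frozen, and only one new token embedding is optimized so the unchanged model reconstructs a handful of reference images when prompted with that token. The checkpoint name, placeholder token and reference_images folder are made up for illustration; this is the general technique, not anyone's actual pipeline.

    # Textual inversion in a nutshell: the model weights never change; we only
    # learn where in the existing embedding space a concept already sits.
    import torch
    import torch.nn.functional as F
    from pathlib import Path
    from PIL import Image
    from torchvision import transforms
    from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler
    from transformers import CLIPTextModel, CLIPTokenizer

    model_id = "runwayml/stable-diffusion-v1-5"  # any SD 1.x checkpoint
    tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
    text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
    vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
    unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
    scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

    # Register a new placeholder token; only its embedding row will be trained.
    placeholder = "<target-concept>"
    tokenizer.add_tokens(placeholder)
    text_encoder.resize_token_embeddings(len(tokenizer))
    token_id = tokenizer.convert_tokens_to_ids(placeholder)
    embeddings = text_encoder.get_input_embeddings()

    for p in [*vae.parameters(), *unet.parameters(), *text_encoder.parameters()]:
        p.requires_grad_(False)
    embeddings.weight.requires_grad_(True)
    optimizer = torch.optim.AdamW([embeddings.weight], lr=5e-4, weight_decay=0.0)

    preprocess = transforms.Compose([
        transforms.Resize((512, 512)),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ])
    # Hypothetical folder with a handful of images of the concept to learn.
    images = [preprocess(Image.open(p).convert("RGB"))
              for p in sorted(Path("reference_images").glob("*.png"))]

    for step in range(1000):
        pixels = images[step % len(images)].unsqueeze(0)
        latents = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor
        noise = torch.randn_like(latents)
        t = torch.randint(0, scheduler.config.num_train_timesteps, (1,))
        noisy = scheduler.add_noise(latents, noise, t)

        ids = tokenizer(f"a picture of {placeholder}", padding="max_length",
                        max_length=tokenizer.model_max_length, truncation=True,
                        return_tensors="pt").input_ids
        cond = text_encoder(ids)[0]

        # Standard denoising loss; gradient flows back only into the new row.
        pred = unet(noisy, t, encoder_hidden_states=cond).sample
        loss = F.mse_loss(pred, noise)
        loss.backward()

        # Keep gradients only for the new token's row; everything else stays put.
        mask = torch.zeros_like(embeddings.weight.grad)
        mask[token_id] = 1.0
        embeddings.weight.grad.mul_(mask)
        optimizer.step()
        optimizer.zero_grad()

Nothing about the model itself changes; you just find a point in its embedding space that produces the concept, which is the point above.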

◧◩◪◨
4. Taywee+Od[view] [source] 2022-12-15 13:11:29
>>astran+69
If you can find a copyrighted work in that model that wasn't put there with permission, then why would that model and its output not violate the copyright?
◧◩◪◨⬒
5. mcv+0u[view] [source] 2022-12-15 14:29:03
>>Taywee+Od
The idea behind that is probably that any artist learns from seeing other artists' copyrighted art, even if they're not allowed to reproduce it. This is easily seen from the fact that art goes through fashions; artists copy styles and ideas from each other and expand on them.

Of course that probably means that those copyrighted images exist in some encoded form in the data or neural network of the AI, and also in our brains. Is that legal? With humans it's unavoidable, but that doesn't have to mean it's also legal for an AI. And even though those copyrighted images exist in some form in our brains, we know not to reproduce them and pass them off as original; the AI has no such restraint. Maybe it needs a feedback mechanism (see the sketch below) to ensure its generated images don't look too much like copyrighted images from its data set. Maybe art-AI necessarily also has to become a bit of a legal-AI.
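
As a rough sketch of what such a feedback mechanism could look like (not anything any vendor actually ships; the threshold, file names and model choice are made up): embed the generated image and a set of known reference images with CLIP, and reject any output that lands too close to one of them.

    # Similarity gate: compare a candidate output against a reference corpus
    # and refuse it if the best cosine match exceeds a threshold.
    import torch
    from pathlib import Path
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def embed(paths):
        images = [Image.open(p).convert("RGB") for p in paths]
        inputs = processor(images=images, return_tensors="pt")
        with torch.no_grad():
            feats = model.get_image_features(**inputs)
        return feats / feats.norm(dim=-1, keepdim=True)

    refs = embed(sorted(Path("reference_set").glob("*.png")))  # hypothetical corpus
    candidate = embed([Path("generated.png")])                 # hypothetical output

    best_match = (candidate @ refs.T).max().item()             # best cosine similarity
    if best_match > 0.92:                                      # made-up threshold
        print(f"Too close to a known image (cos={best_match:.3f}); rejecting.")
    else:
        print(f"No near-duplicate found (cos={best_match:.3f}).")

Of course that only catches near-duplicates of whole images, not styles, which is where the legal-AI part would really start.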

[go to top]