For example, I know artists who are vehemently against DALL-E, Stable Diffusion, etc. and regard them as theft, but who view Copilot and GPT-3 as merely useful tools. I also know software devs who are extremely excited about AI art and GPT-3 but are outraged by Copilot.
For myself, I am skeptical of intellectual property in the first place. I say go for it.
When Microsoft steals all code on their platform and sells it, they get lauded. When "Open" AI steals thousands of copyrighted images and sells them, they get lauded.
I am skeptical of imaginary property myself, but fuck having one set of rules for the rich and another set of rules for the masses.
I haven't been following super closely, but I don't know of any claims or examples where input images were recreated to a significant degree by Stable Diffusion.
You frame it as a remix, but remixes are credited and labeled as such.
I don’t see Midjourney (et al) as remixes, myself. More like “inspired by.”
https://twitter.com/ebkim00/status/1579485164442648577
Not sure if this was fed the original image as an input or not.
I've also seen a couple of cases where people explicitly trained a network to imitate a specific artist's work, like that of the late Kim Jung Gi.
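For what it's worth, imitating an artist no longer even requires training a whole network; loading a small style embedding into a stock Stable Diffusion pipeline gets you most of the way there. Here's a minimal sketch using Hugging Face's diffusers library, assuming a hypothetical embedding file (artist-style.bin) that someone has already trained on the artist's images:

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a stock Stable Diffusion checkpoint (example model id).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Hypothetical textual-inversion embedding trained on one artist's work.
    # "<artist-style>" is whatever placeholder token the embedding was trained with.
    pipe.load_textual_inversion("./artist-style.bin", token="<artist-style>")

    # Any prompt that mentions the token now pulls in the learned style.
    image = pipe("a rainy street market, ink drawing, in the style of <artist-style>").images[0]
    image.save("imitation.png")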
I think over time we are going to see the following:
- If you take, say, a Star Wars poster, inpaint a trained face over Luke's, and sell that to people as a service, you will probably be approached for copyright and trademark infringement.
- If you are doing the above with a satirical take, you might be able to claim fair use.
- If you are using AI as a "collage generator" to smash together a ton of prompts into a "unique" piece, you may be safe from infringement, but you are taking a risk, since you don't know what percentage of source material your new work contains. I'd like to imagine that if you inpaint, say, 20 details with various sub-prompts, you are getting "safer" (a rough sketch of that inpainting step follows this list).
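For anyone curious what that inpainting step actually looks like, here's a rough sketch using the diffusers inpainting pipeline; the checkpoint name is just an example, and the poster, mask, and prompt are hypothetical placeholders:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    # Example inpainting checkpoint; other SD inpainting models work the same way.
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    # Hypothetical inputs: the original poster and a white-on-black mask
    # covering only the region to be repainted (the face).
    poster = Image.open("star_wars_poster.png").convert("RGB").resize((512, 512))
    mask = Image.open("face_mask.png").convert("RGB").resize((512, 512))

    # Only the masked region is regenerated; the rest of the poster is kept
    # as-is, which is exactly why the derivative-work question is so direct here.
    result = pipe(
        prompt="portrait of a young man, heroic lighting, movie poster style",
        image=poster,
        mask_image=mask,
    ).images[0]
    result.save("customized_poster.png")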
So much for “generation”: it seems as if these models are just overfitting on the extremely small subset of the input data that they did not utterly fail to train on, almost as if there could be geniuses who could write the weight data directly from those images without any of the gradient descent.