zlacker

[return to "AI companies have all kinds of arguments against paying for copyrighted content"]
1. andy99+gf 2023-11-05 18:17:33
>>rntn+(OP)
Copyright holders make all kinds of arguments for why they should get money for incidental exposure to their work. This is all about greed and jealousy. If someone uses AI to make infringing content, existing laws already cover that. The fact that an ML model could be used to generate infringing content, and has exposure to or "knowledge" of some copyrighted material, is immaterial. People just see someone else making money and want to try to get a piece of it.
2. rvz+rh 2023-11-05 18:28:56
>>andy99+gf
All I see is AI companies poorly justifying their grift: they know they don't want to pay for the content they are commercializing without permission, so they pull out the fair use excuses.

It is no wonder that OpenAI had to pay Shutterstock for training on their data, that Getty is suing Stability AI for training on their watermarked images and using them commercially without permission, and that actors and actresses are filing lawsuits against commercial voice cloners, which costs them close to nothing, since those companies either take down the cloned voice offering or shut down.

These weak arguments from these AI folks sound like excuses justifying a newfound grift.

3. Tadpol+sj 2023-11-05 18:39:01
>>rvz+rh
When you're viewing everyone with a different opinion than you as a grifter, corporate rat, or some other malicious entity, you've disabled the ability or desire for people to try to engage with you. You won't be convinced, and you're already being uncivil and fallacious.

AI outputs should be regulated, of course. Obviously impersonation and copyright law already apply to AI systems. But a discussion on training inputs is entirely novel to man and our laws, and it's a very nuanced and important topic. And as AI advances, it becomes increasingly difficult because of the diminishing distinction between "organic" learning and "artificial" learning. It gets harder still when stopping AI from learning from, say, research papers means we miss out on life-saving medication. Where do property rights conflict with human rights?

They're important conversations to have, but you've destroyed the opportunity to have them from the starting gun.

4. rvz+2r 2023-11-05 19:20:26
>>Tadpol+sj
> When you're viewing everyone with a different opinion than you as a grifter, corporate rat, or some other malicious entity, you've disabled the ability or desire for people to try to engage with you.

I think we have given it plenty of time for such a discussion. Enough events and actions have played out around training on copyrighted works, from images and songs to deepfakes, for the lawsuits and licensing deals to happen, and it is all converging on paying for the data; hence OpenAI and many others doing so because of the risk of such lawsuits.

> AI outputs should be regulated, of course. Obviously impersonation and copyright law already apply to AI systems. But a discussion on training inputs is entirely novel to man and our laws, and it's a very nuanced and important topic. And as AI advances, it becomes increasingly difficult because of the diminishing distinction between "organic" learning and "artificial" learning.

Copyright law does not care, nor is the underlying problem about using such a generative AI system for non-commercial purposes such as education or private use-cases. The line is drawn as soon as it is commercialized, and the fair use excuses fall apart. Even as the AI advances, so do the traceability methods and the questions about the dataset being used. [0]

It costs musicians close to nothing to target and file lawsuits against commercial voice cloners. Training on copyrighted songs was not even an option for tools like DanceDiffusion [1] because of that same risk, which is why training on public domain audio was the safer alternative, rather than running the risk of lawsuits and questions from tons of musicians about the training set.

[0] https://c2pa.org

[1] https://techcrunch.com/2023/09/13/stability-ai-gunning-for-a...

5. Tadpol+Ju 2023-11-05 19:42:20
>>rvz+2r
> I think we have given it plenty of time for such a discussion

I don't see how this justifies needlessly divisive rhetoric.

No matter how long the disagreement lasts, you aren't my enemy because you have a different opinion on how we should handle this conundrum. I know you mean well and are trying to help.

> Copyright law does not care

Copyright law works fine with AI outputs. As does trademark law. I don't see an AI making a fan-art Simpsons drawing as any more novel a legal problem than the myriad of humans who already do it on YouTube, or the people who sell handmade Pokemon plushies on Etsy without Nintendo's permission.

But the question is about inputs and how the carve-outs for "transformative" and "educational use" can be interpreted; model training may very well be considered education or research. I think it's been made rather clear that nobody has a real answer to this. Copyright law never particularly set out to address whether an artist is "stealing" when they borrow influence from other artists and use similar styles or themes (without consent) for their own career.

I don't envy the judges or legislators involved in making these future-defining decisions.
