zlacker

[parent] [thread] 23 comments
1. lp4vn+(OP)[view] [source] 2023-12-27 14:27:26
For me it's quite obvious that if you make a profit from an engine that takes copyrighted material as input, then you owe something to the owner of that copyrighted content. We have seen this same problem with artists claiming Stable Diffusion engines were using their art.
replies(4): >>theonl+v2 >>tiahur+93 >>gwrigh+84 >>hacker+Tk
2. theonl+v2[view] [source] 2023-12-27 14:41:50
>>lp4vn+(OP)
Do all automakers that now develop electric cars owe Tesla something, since they cashed in once they saw Tesla's success? A model is semantic: it contains the idea, which is not copyrightable. Only how it is expressed can be copyrighted (i.e. if it outputs the copyrighted work verbatim). If this were not the case we would have plenty of monopolies and the world would fall apart.
replies(2): >>noitpm+g4 >>pastor+q6
3. tiahur+93[view] [source] 2023-12-27 14:45:06
>>lp4vn+(OP)
Should Stranger Things have to pay Goonies and Stephen King?
replies(1): >>jprete+Y8
4. gwrigh+84[view] [source] 2023-12-27 14:51:07
>>lp4vn+(OP)
If you study copyrighted material for four years at a university and then go on to earn money based on your education, do you owe something to the authors of your textbooks?

I'm not sure how we should treat LLMs with respect to publicly accessible but copyrighted material, but it seems clear to me that "profiting" from copyrighted material isn't a sufficient criterion to cause me to "owe something to the owner".

replies(4): >>CJeffe+f6 >>ametra+k9 >>ttypri+Sb >>sensan+Yd
◧◩
5. noitpm+g4[view] [source] [discussion] 2023-12-27 14:52:01
>>theonl+v2
If Tesla thinks their competitors have violated any of their patents they are well within their rights to seek damages...
replies(1): >>theonl+H8
◧◩
6. CJeffe+f6[view] [source] [discussion] 2023-12-27 15:02:03
>>gwrigh+84
We don’t, and shouldn’t, give LLMs the same rights as people.
replies(3): >>PH95Vu+bf >>gwrigh+Ap >>Boiled+2E
◧◩
7. pastor+q6[view] [source] [discussion] 2023-12-27 15:03:16
>>theonl+v2
https://www.tesla.com/blog/all-our-patent-are-belong-you
◧◩◪
8. theonl+H8[view] [source] [discussion] 2023-12-27 15:13:51
>>noitpm+g4
Of course. My comment needs to be read in the context of what I'm responding to: they said input, which I disagree with. On output, maybe there's a slight chance they have a case (depending on how OpenAI has programmed it to output), and even then it's doubtful.
◧◩
9. jprete+Y8[view] [source] [discussion] 2023-12-27 15:15:30
>>tiahur+93
If they used copyrighted material or trademarks, they almost certainly _did_ pay the Goonies rights holders and Stephen King for the privilege. Why would you think they didn't?
replies(1): >>tiahur+Rk
◧◩
10. ametra+k9[view] [source] [discussion] 2023-12-27 15:17:28
>>gwrigh+84
“Teh al al m is just leik a people. Check and mate.”
◧◩
11. ttypri+Sb[view] [source] [discussion] 2023-12-27 15:33:07
>>gwrigh+84
The reproduction of that material in an educational setting is protected by Fair Use.
replies(1): >>gwrigh+Ro
◧◩
12. sensan+Yd[view] [source] [discussion] 2023-12-27 15:43:36
>>gwrigh+84
Do people ever get tired of this argument that relies on anthropomorphizing these AI black boxes?

A computer isn't a human, and we already have laws that have a different effect depending on whether it's a computer or a human doing it. LLMs are no different, no matter how catchy hyping them up as being == Humans may be.

replies(3): >>gwrigh+tq >>skepti+bC >>pauldd+Th1
◧◩◪
13. PH95Vu+bf[view] [source] [discussion] 2023-12-27 15:51:55
>>CJeffe+f6
this seems so obvious and yet people miss it.
◧◩◪
14. tiahur+Rk[view] [source] [discussion] 2023-12-27 16:25:21
>>jprete+Y8
The writers were obviously trained on the copyrighted material of Goonies and Stephen King, and there has never been any reporting that Netflix paid those copyright holders. This isn't surprising, because copyright violation requires copying.

My understanding is that GPT is a word probability lookup table based on a review of the training material. A statistical analysis of NYT is not copying.

And this doesn't even get to whether fair use might apply. Since tabulating word frequencies isn't copying, GPT isn't violating anyone's copyright.
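
To illustrate the kind of next-word frequency table I mean, here is a toy sketch in Python (the corpus is made up, and this is a sketch of the statistical-table idea, not of GPT's actual implementation):

    from collections import Counter, defaultdict

    # Hypothetical toy corpus standing in for the training material.
    corpus = "the cat sat on the mat and the cat ran".split()

    # Bigram table: previous word -> frequency of each possible next word.
    table = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        table[prev][nxt] += 1

    # The table holds statistics about the text, not the text itself.
    print(table["the"].most_common())  # [('cat', 2), ('mat', 1)]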

15. hacker+Tk[view] [source] 2023-12-27 16:25:30
>>lp4vn+(OP)
I think we're in a new paradigm and need to look at this differently. The end goal is to train models on all the output of humanity. Everyone will have contributed to it (artists, writers, coders on github... the people who taught the writers, the people who invented the English language, the people who created the daily events that were reported on, etc). We're better off giving ML companies free access to almost everything, while taxing the output. The bargain is "you took from everyone, so you give to everyone". This is probably a win-win setup that respects the reality that it's really the public commons that is generating the value here.
replies(1): >>Captai+4C
◧◩◪
16. gwrigh+Ro[view] [source] [discussion] 2023-12-27 16:45:08
>>ttypri+Sb
I don't think that is relevant to my comment. Whether the material is purchased, borrowed from a library, or legally reproduced under "fair use", I'm still asserting that I don't "owe" the creators any of my profit that I earn from taking advantage of what I learned.
replies(1): >>ttypri+FYa
◧◩◪
17. gwrigh+Ap[view] [source] [discussion] 2023-12-27 16:48:28
>>CJeffe+f6
I think this is a misleading way to frame things. It is people who build, train, and operate the LLM. It isn't about giving "rights" to the LLM, it is about constructing a legal framework for the people who are creating LLMs and businesses around LLMs.
◧◩◪
18. gwrigh+tq[view] [source] [discussion] 2023-12-27 16:53:54
>>sensan+Yd
I didn't anthropomorphize the LLMs. It isn't about laws for the LLM, it is about laws for the people who build and operate the LLM.

If you want to assert that groups of people that build and operate LLMs should operate under a different set of laws and regulations than individuals that read books in the library regarding "profit", I'm open to that idea. But that is not at all the same as "anthropomorphizing these AI black boxes".

◧◩
19. Captai+4C[view] [source] [discussion] 2023-12-27 17:58:48
>>hacker+Tk
"Copyright Is Brain Damage" by Nina Paley [1] claims that culture is like a bunch of neurons passing and evolving data between each other, and copyright is like severing the ties between the neurons, like brain damage. It also presents [2] an alternative way of viewing art and science: as products of the common culture, not products purely of the creator, to be privatised. This sounds really relevant to your comment.

Furthermore, if we manage to "untrain" AI on certain pieces of content, then copyright would really become "brain" damage too. Like, the perceptrons and stuff.

[1] https://www.youtube.com/watch?v=XO9FKQAxWZc

[2] No, I'm not an AI, just autistic.

◧◩◪
20. skepti+bC[view] [source] [discussion] 2023-12-27 18:00:11
>>sensan+Yd
Great comment. The amount of anthropomorphizing that goes on in these threads is just baffling to me.

It seems obvious to me that, despite what current law says, there is something not right about what large companies are doing when they create LLMs.

If they are going to build off of humanity's collective work, their product should benefit all of humanity, and not just shareholders.

◧◩◪
21. Boiled+2E[view] [source] [discussion] 2023-12-27 18:08:48
>>CJeffe+f6
We're not "giving them the same rights as people", we're trying to define the rights of the set of "intelligent" things that can learn (regardless of if their conscious or not). And up until recently, people were the only members of that set.

Now there are (or very, very soon there will be) two members in that set. How do we properly define the rules for members of that set?

If something can learn from reading, do we ban it from reading copyrighted material, even if it can memorize some of it? Clearly a ban of that form would be a failure for humans. Should we have that ban for all things that can learn?

There is a reasonable argument that if you want things to learn, they have to learn on a wide variety of works, including our best works (which are often copyrighted).

And the statements above imply nothing about whether that access should be free of cost (or not); just that I think blocking "learning programs / LLMs" from being able to access, learn from, or reproduce copyrighted text is a net loss for society.

◧◩◪
22. pauldd+Th1[view] [source] [discussion] 2023-12-27 21:46:31
>>sensan+Yd
> we already have laws that have a different effect depending on if it's a computer doing it or a human

which laws?

we generally accept computers as agents of their owners.

for example, a law that applies to a human travel agent also applies to a computerized travel agency service.

◧◩◪◨
23. ttypri+FYa[view] [source] [discussion] 2023-12-31 14:50:20
>>gwrigh+Ro
The parent comment asks whether an "engine" trained on copyrighted data is entitled to derive profit. Your comment is about a human receiving knowledge that facilitates profit. Of course, these are legally independent scenarios.

Take a college student who scans all her textbooks, relying on fair use. If she is the only user, is she obligated to pay a premium for mining?

What about the scenario in which she sells that engine to other book owners? What if they only owned the book a short time in school?

replies(1): >>gwrigh+ySg
◧◩◪◨⬒
24. gwrigh+ySg[view] [source] [discussion] 2024-01-02 20:58:32
>>ttypri+FYa
I agree that they are different scenarios that may lead to different legal frameworks. My point though was that asserting that the "profit" motive is sufficient to conclude something is owed to the creators is faulty logic. Individuals can generate profit from what they learn and we don't generally require them to share their profits with the creators of copyrighted material that they used to educate themselves.