zlacker

[parent] [thread] 13 comments
1. gwrigh+(OP)[view] [source] 2023-12-27 14:51:07
If you study copyrighted material for four years at a university and then go on to earn money based on your education, do you owe something to the authors of your text books?

I'm not sure how we should treat LLMs with respect to publicly accessible but copyrighted material, but it seems clear to me that "profiting" from copyrighted material isn't a sufficient criterion to cause me to "owe something to the owner".

replies(4): >>CJeffe+72 >>ametra+c5 >>ttypri+K7 >>sensan+Q9
2. CJeffe+72[view] [source] 2023-12-27 15:02:03
>>gwrigh+(OP)
We don’t, and shouldn’t, give LLMs the same rights as people.
replies(3): >>PH95Vu+3b >>gwrigh+sl >>Boiled+Uz
3. ametra+c5[view] [source] 2023-12-27 15:17:28
>>gwrigh+(OP)
“Teh al al m is just leik a people. Check and mate.”
4. ttypri+K7[view] [source] 2023-12-27 15:33:07
>>gwrigh+(OP)
The reproduction of that material in an educational setting is protected by Fair Use.
replies(1): >>gwrigh+Jk
5. sensan+Q9[view] [source] 2023-12-27 15:43:36
>>gwrigh+(OP)
Do people ever get tired of this argument that relies on anthropomorphizing these AI black boxes?

A computer isn't a human, and we already have laws that have a different effect depending on whether it's a computer doing it or a human. LLMs are no different, no matter how catchy it may be to hype them up as being == Humans.

replies(3): >>gwrigh+lm >>skepti+3y >>pauldd+Ld1
◧◩
6. PH95Vu+3b[view] [source] [discussion] 2023-12-27 15:51:55
>>CJeffe+72
this seems so obvious and yet people miss it.
◧◩
7. gwrigh+Jk[view] [source] [discussion] 2023-12-27 16:45:08
>>ttypri+K7
I don't think that is relevant to my comment. Whether the material is purchased, borrowed from a library, or legally reproduced under "fair use", I'm still asserting that I don't "owe" the creators any of my profit that I earn from taking advantage of what I learned.
replies(1): >>ttypri+xUa
◧◩
8. gwrigh+sl[view] [source] [discussion] 2023-12-27 16:48:28
>>CJeffe+72
I think this is a misleading way to frame things. It is people who build, train, and operate the LLM. It isn't about giving "rights" to the LLM, it is about constructing a legal framework for the people who are creating LLMs and businesses around LLMs.
◧◩
9. gwrigh+lm[view] [source] [discussion] 2023-12-27 16:53:54
>>sensan+Q9
I didn't anthropomorphize the LLMs. It isn't about laws for the LLM, it is about laws for the people who build and operate the LLM.

If you want to assert that groups of people that build and operate LLMs should operate under a different set of laws and regulations than individuals that read books in the library regarding "profit", I'm open to that idea. But that is not at all the same as "anthropomorphizing these AI black boxes".

◧◩
10. skepti+3y[view] [source] [discussion] 2023-12-27 18:00:11
>>sensan+Q9
Great comment. The amount of anthropomorphizing that goes on in these threads is just baffling to me.

It seems obvious to me that, despite what current law says, there is something not right about what large companies are doing when they create LLMs.

If they are going to build off of humanity's collective work, their product should benefit all of humanity, and not just shareholders.

◧◩
11. Boiled+Uz[view] [source] [discussion] 2023-12-27 18:08:48
>>CJeffe+72
We're not "giving them the same rights as people", we're trying to define the rights of the set of "intelligent" things that can learn (regardless of whether they're conscious or not). And up until recently, people were the only members of that set.

Now there are (or very, very soon there will be) two members in that set. How do we properly define the rules for members of that set?

If something can learn from reading, do we ban it from reading copyrighted material, even if it can memorize some of it? Clearly a ban of that form would be a failure for humans. Should we have that ban for all things that can learn?

There is a reasonable argument that if you want things to learn, they have to learn from a wide variety of material, including our best works (which are often copyrighted).

And the statements above carry no implication about it being free of cost (or not), just that I think blocking "learning programs / LLMs" from being able to access, learn from, or reproduce copyrighted text is a net loss for society.

◧◩
12. pauldd+Ld1[view] [source] [discussion] 2023-12-27 21:46:31
>>sensan+Q9
> we already have laws that have a different effect depending on if it's a computer doing it or a human

which laws?

we generally accept computers as agents of their owners.

for example, a law that applies to a human travel agent also applies to a computerized travel agency service.

◧◩◪
13. ttypri+xUa[view] [source] [discussion] 2023-12-31 14:50:20
>>gwrigh+Jk
The parent comment asks whether an “engine” trained on copyrighted data is entitled to derive profit. Your comment is about a human receiving knowledge that facilitates profit. Of course, these are legally independent scenarios.

Take a college student who scans all her textbooks, relying on fair use. If she is the only user, is she obligated to pay a premium for mining?

What about the scenario in which she sells that engine to other book owners? What if they only owned the book a short time in school?

replies(1): >>gwrigh+qOg
◧◩◪◨
14. gwrigh+qOg[view] [source] [discussion] 2024-01-02 20:58:32
>>ttypri+xUa
I agree that they are different scenarios that may lead to different legal frameworks. My point though was that asserting that the "profit" motive is sufficient to conclude something is owed to the creators is faulty logic. Individuals can generate profit from what they learn and we don't generally require them to share their profits with the creators of copyrighted material that they used to educate themselves.