zlacker

[parent] [thread] 3 comments
1. CJeffe+(OP)[view] [source] 2023-12-27 15:02:03
We don’t, and shouldn’t, give LLMs the same rights as people.
replies(3): >>PH95Vu+W8 >>gwrigh+lj >>Boiled+Nx
2. PH95Vu+W8[view] [source] 2023-12-27 15:51:55
>>CJeffe+(OP)
this seems so obvious and yet people miss it.
3. gwrigh+lj[view] [source] 2023-12-27 16:48:28
>>CJeffe+(OP)
I think this is a misleading way to frame things. It is people who build, train, and operate the LLM. It isn't about giving "rights" to the LLM, it is about constructing a legal framework for the people who are creating LLMs and businesses around LLMs.
4. Boiled+Nx[view] [source] 2023-12-27 18:08:48
>>CJeffe+(OP)
We're not "giving them the same rights as people", we're trying to define the rights of the set of "intelligent" things that can learn (regardless of whether they're conscious or not). And up until recently, people were the only members of that set.

Now there are (or very, very soon there will be) two members in that set. How do we properly define the rules for members of that set?

If something can learn from reading, do we ban it from reading copyrighted material, even if it can memorize some of it? Clearly a ban of that form would be a failure if applied to humans. Should we have that ban for all things that can learn?

There is a reasonable argument that if you want things to learn well, they have to learn from a wide variety of material, including our best works (which are often copyrighted).

And the statements above imply nothing about whether that access should be free of cost (or not), just that I think blocking "learning programs / LLMs" from being able to access, learn from, or reproduce copyrighted text is a net loss for society.
