zlacker

[parent] [thread] 15 comments
1. logicc+(OP)[view] [source] 2023-12-27 15:09:23
By that logic you should have to pay the copyright holder of every library book you ever read, because you could later produce some content you memorised verbatim.
replies(6): >>passwo+j1 >>nullin+x1 >>macNch+u2 >>Wiggly+74 >>alexey+Zj >>015a+Om
2. passwo+j1[view] [source] 2023-12-27 15:15:33
>>logicc+(OP)
> the copyright holder of every library book

gets paid

3. nullin+x1[view] [source] 2023-12-27 15:17:18
>>logicc+(OP)
Copyright holders do get paid for library copies, in the US.
replies(1): >>exitb+O6
4. macNch+u2[view] [source] 2023-12-27 15:22:47
>>logicc+(OP)
The rules we have now were made in the context of human brains doing the learning from copyrighted material, not machine learning models. The limitations on what most humans can memorize and reproduce verbatim are extraordinarily different from an LLM. I think it only makes sense to re-explore these topics from a legal point of view given we’ve introduced something totally new.
replies(1): >>whichf+Qv
5. Wiggly+74[view] [source] 2023-12-27 15:32:37
>>logicc+(OP)
The difference here is scale. For someone to reproduce a book verbatim from memory it would take years of studying that book. For an LLM this would take seconds.

The LLM could reproduce the whole library quicker than a person could reproduce a single book.

6. exitb+O6[view] [source] [discussion] 2023-12-27 15:46:48
>>nullin+x1
You make it seem as if the copyright holder makes more money on a library book than on one sold at retail, which does not appear to be the case in the US.
replies(1): >>willse+Ge
7. willse+Ge[view] [source] [discussion] 2023-12-27 16:32:31
>>exitb+O6
The library pays for the books and the copyright holder gets paid. This is no different from buying a book at retail, which you can read and share with family and friends, or sell, so it can be read and sold again. The book is the product, not a license for one person to access the book.
8. alexey+Zj[view] [source] 2023-12-27 17:00:33
>>logicc+(OP)
That is the case. It's just that the fair price is fairly low and is often covered by the government in the name of the greater good.

When for-profit companies seek access to library material, they pay a much, much higher price.

9. 015a+Om[view] [source] 2023-12-27 17:17:32
>>logicc+(OP)
What do you actually believe, with that statement? Do you believe libraries are operating illegally? That they aren't paying rightsholders?

Also: GPT is not a legal entity in the United States. Humans have different rights than computer software. You are legally allowed to borrow books from the library. You are legally allowed to recite the content you read. You're not allowed to sell verbatim recitations of what you read. This is obvious, I think? But it's exactly what LLMs are doing right now.

replies(1): >>stale2+mF
10. whichf+Qv[view] [source] [discussion] 2023-12-27 18:06:28
>>macNch+u2
Human brains are still the main legal agents in play. LLMs are just computer programs used by humans.

Suppose I'm researching a book that I'm writing. It doesn't matter whether I type it on a Mac, PC, or typewriter. It doesn't matter if I use the internet or the library. It doesn't matter if I use an AI-powered voice-to-text keyboard or an AI assistant.

If I release a book that has a chapter which was blatantly copied from another book, I might be sued under copyright law. That doesn't mean that we should lock me out of the library, or prevent my tools from working there.

replies(2): >>macNch+aC >>015a+AX1
11. macNch+aC[view] [source] [discussion] 2023-12-27 18:41:32
>>whichf+Qv
I see two separate issues. The one you describe is maybe slightly more clear cut: if a person uses an AI trained on copyrighted works as a tool to create and publish their own works, they are responsible if those resulting works infringe.

The other question, which I think is more topical to this lawsuit, is whether the company that trains and publishes the model itself is infringing, given they're making available something that is able to reproduce near-verbatim copyrighted works, even if they themselves have not directly asked the model to reproduce them.

I certainly don't have the answers, but I also don't think that simplistic arguments that the cat is already out of the bag or that AIs are analogous to humans learning from books are especially helpful, so I think it's valid and useful for these kinds of questions to be given careful legal consideration.

12. stale2+mF[view] [source] [discussion] 2023-12-27 18:57:35
>>015a+Om
> Humans have different rights than computer software

Fortunately, the computer isn't the one being sued.

Instead it is the humans who use the computer. And those humans maintain their existing rights, even if they use a computer.

replies(1): >>015a+dX1
13. 015a+dX1[view] [source] [discussion] 2023-12-28 04:55:15
>>stale2+mF
Maybe (though there exist plenty of examples to the contrary). However, the NYT isn't suing you, ChatGPT user; they're suing OpenAI.
replies(1): >>stale2+E52
14. 015a+AX1[view] [source] [discussion] 2023-12-28 04:58:46
>>whichf+Qv
> Human brains are still the main legal agents in play.

No, they're not. This is The New York Times (a corporation) vs OpenAI and Microsoft (two more corporations).

replies(1): >>rajama+ia2
15. stale2+E52[view] [source] [discussion] 2023-12-28 06:38:37
>>015a+dX1
Gotcha.

OpenAI is run by humans as well though.

So the same argument applies.

Those humans have fair use rights as well.

16. rajama+ia2[view] [source] [discussion] 2023-12-28 07:36:26
>>015a+AX1
Aren't corporations considered 'persons' in the US?