zlacker

[return to "The New York Times is suing OpenAI and Microsoft for copyright infringement"]
1. Aurorn+84[view] [source] 2023-12-27 14:26:49
>>ssgodd+(OP)
The arguments about being able to mimic New York Times “style” are weak, but the fact that they got it to emit verbatim NY Times content seems bad for OpenAI:

> As outlined in the lawsuit, the Times alleges OpenAI and Microsoft’s large language models (LLMs), which power ChatGPT and Copilot, “can generate output that recites Times content verbatim”

2. iandan+rb[view] [source] 2023-12-27 15:06:58
>>Aurorn+84
Arguing over whether it can is not a useful discussion. You can absolutely train a net to memorize and recite text, and as these models get more powerful they will memorize more of it. The critical question is how hard it is to make them recite copyrighted works: did the developers put reasonable guardrails in place to prevent it?
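To illustrate the memorization point above (my own sketch, not from the thread): even a trivial character-level lookup "model" trained on a single article will recite it verbatim when prompted with a short prefix. The article text and function names here are made up for the example; real LLMs memorize statistically rather than via an exact table, but the recitation behavior is analogous.

```python
def train(text, n=8):
    """Map each n-character context to the character that follows it."""
    model = {}
    for i in range(len(text) - n):
        model[text[i:i + n]] = text[i + n]
    return model

def generate(model, prompt, length, n=8):
    """Greedily extend the prompt; on a memorized corpus this is verbatim recall."""
    out = prompt
    for _ in range(length):
        nxt = model.get(out[-n:], "")
        if not nxt:
            break  # unseen context: nothing memorized to recite
        out += nxt
    return out

article = "Copyright protects original works of authorship fixed in a tangible medium."
model = train(article)
# An 8-character prefix is enough to trigger recitation of the rest.
print(generate(model, article[:8], len(article)))
```

The guardrail question is then exactly the one raised above: nothing in `generate` refuses to complete a copyrighted prefix; preventing that would have to be an added layer, not a property of the model itself.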

If a person with a very good memory reads an article, they only violate copyright if they write it out and share it, or perform the work publicly. With a reasonable understanding of the law, they won't do so. A malicious person could still trick or force them into reproducing the copyrighted work, but in that case the blame falls not on the person who read and recited the article but on the person who tricked them.

That distinction is one we're going to have to codify all over again for AI.

3. JW_000+in[view] [source] 2023-12-27 16:15:28
>>iandan+rb
> If a person with a very good memory reads an article, they only violate copyright if they write it out and share it, or perform the work publicly. With a reasonable understanding of the law, they won't do so. A malicious person could still trick or force them into reproducing the copyrighted work, but in that case the blame falls not on the person who read and recited the article but on the person who tricked them.

Is that really true? And what if the second person is not malicious? In the case of ChatGPT, a user may accidentally write a prompt that causes the model to recite copyrighted text. I don't think a judge will look at this through the same lens as you do.
