
The New York Times is suing OpenAI and Microsoft for copyright infringement
1. kbos87+Na 2023-12-27 15:03:43
>>ssgodd+(OP)
Solidly rooting for NYT on this - it’s felt like many creative organizations have been asleep at the wheel while their lunch gets eaten for a second time (the first being at the birth of modern search engines).

I don’t necessarily fault OpenAI’s decision to initially train their models without entering into licensing agreements - they probably wouldn’t exist, and the generative AI revolution might never have happened, if they had waited to secure licenses first. But I do think they should course-correct quickly at this point and accept that they clearly owe something to the creators of the content they are consuming. If they don’t, they are setting themselves up for a bigger loss down the road and leaving the door open for a more established competitor (Google) to do it the right way.

2. theGnu+Ri 2023-12-27 15:47:48
>>kbos87+Na
It’s likely fair use.
3. spunke+sv 2023-12-27 16:57:52
>>theGnu+Ri
> It's likely fair use.

I agree. You can even listen to the NYT Hard Fork podcast (which I recommend, btw: https://www.nytimes.com/2023/11/03/podcasts/hard-fork-execut...), where they recently had Harvard copyright law professor Rebecca Tushnet on as a guest.

They asked her about the issue of copyrighted training data. Her response was:

""" Google, for example, with the book project, doesn’t give you the full text and is very careful about not giving you the full text. And the court said that the snippet production, which helps people figure out what the book is about but doesn’t substitute for the book, is a fair use.

So the idea of ingesting large amounts of existing works, and then doing something new with them, I think, is reasonably well established. The question is, of course, whether we think that there’s something uniquely different about LLMs that justifies treating them differently. """

Now for my take: proving that OpenAI trained on NYT articles is not sufficient, IMO. The Times would need to show that OpenAI is providing a substitutable good via verbatim copying, which I don't think is easy to demonstrate. It takes a lot of prompt engineering and luck to pull out any article verbatim; LLMs are well known to garble even famous facts, let alone reproduce their training data word for word.
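
To make "verbatim" concrete, here's one crude way you could quantify it: the longest run of consecutive words a model output shares with an article. This is just an illustrative Python sketch under my own assumptions - the texts are made up, and no particular run length is a legal standard:

    def longest_common_word_run(output: str, article: str) -> int:
        """Length, in words, of the longest verbatim run shared by both texts."""
        a, b = output.lower().split(), article.lower().split()
        best = 0
        prev = [0] * (len(b) + 1)  # word-level longest-common-substring DP
        for i in range(1, len(a) + 1):
            curr = [0] * (len(b) + 1)
            for j in range(1, len(b) + 1):
                if a[i - 1] == b[j - 1]:
                    curr[j] = prev[j - 1] + 1
                    best = max(best, curr[j])
            prev = curr
        return best

    # Made-up texts: a short shared phrase proves little, while hundreds of
    # consecutive shared words would start to look like a substitutable copy.
    article = "The quick brown fox jumps over the lazy dog near the river"
    output = "Witnesses said the quick brown fox jumps over the lazy dog again"
    print(longest_common_word_run(output, article))  # -> 9

A 9-word run out of a dozen words means little; the interesting question is how often, and how easily, a model produces runs of hundreds of words from a real article.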

4. pagane+sy 2023-12-27 17:15:59
>>spunke+sv
Genuinely asking: is the “verbatim” standard set in stone? An entity spewing out NYTimes-like articles after having been trained on lots of NYTimes content sounds like a very grey zone; judged by the “spirit” of copyright law, some might consider it unlawful.

Of course, I’m not a lawyer, and I know that in the US sticking to precedent (which is where the “verbatim” standard comes from) usually wins out over judging something by the spirit of the law, but stranger things have happened.

5. spunke+xJ 2023-12-27 18:16:17
>>pagane+sy
There's already precedent for this in news: news outlets constantly report on each other's stories. That's why they care so much about being first on a story - once they break it, it is fair game for everyone else to report on it too.

Here's a hypothetical: suppose there is a random fact about some news event that has only been reported in a single article. Do they suddenly have a monopoly on that fact, and deserve compensation whenever that fact gets picked up and repeated by other news articles or books or TV shows or movies (or AI models)?
