zlacker

[return to "The shady world of Brave selling copyrighted data for AI training"]
1. 6gvONx+qs 2023-07-15 15:13:30
>>rand0m+(OP)
> Fair use is a doctrine in the law of the United States that allows limited use of copyrighted material without requiring permission from the rights holders. It provides for the legal, non-licensed citation or incorporation of copyrighted material in another author's work under a four-factor balancing test:

> 1) The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes

> 2) The nature of the copyrighted work

> 3) The amount and substantiality of the portion used in relation to the copyrighted work as a whole

> 4) The effect of the use upon the potential market for or value of the copyrighted work

[emphasis from TFA]

HN always talks about derivative works and transformativeness, but never about these four factors. The fourth one especially seems to have clear implications for models.

Regardless, it makes the question seem much less clear-cut than people here often claim.

2. _fbpp+dw 2023-07-15 15:38:41
>>6gvONx+qs
The entire fair use claim derives not from any legal basis, but from the position that "it has to be fair use", because it would be legally catastrophic for OpenAI et al. if it weren't.

If you look at the core argument in favour of fair use, it's that "LLMs do not copy the training data", yet this is obviously false.

For GitHub Copilot and ChatGPT, examples of them reciting large sections of training data are well known; plenty can be found on HN. The model doesn't generate a new valid Windows serial key on the fly; it has memorized them.
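
As a concrete (and entirely hypothetical) illustration, recitation is straightforward to test for: seed the model with the opening of a known document and measure how much of the rest comes back verbatim. A minimal sketch, where "generate" is a stand-in for whatever completion API is under test and the file name is made up:

    # Minimal sketch: does a model recite a document it may have trained on?
    from difflib import SequenceMatcher

    def generate(prompt: str) -> str:
        """Hypothetical stand-in for whatever completion API is under test."""
        raise NotImplementedError("wire the model under test in here")

    def longest_verbatim_overlap(output: str, source: str) -> str:
        """Longest substring shared by the model output and the source text."""
        m = SequenceMatcher(None, output, source)
        hit = m.find_longest_match(0, len(output), 0, len(source))
        return output[hit.a : hit.a + hit.size]

    source = open("suspect_training_doc.txt").read()  # hypothetical document
    output = generate(source[:200])                   # seed with the opening
    print(len(longest_verbatim_overlap(output, source)), "chars reproduced verbatim")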

If one wants to be cynical, it's not hard to see OpenAI et al. patching in filters to strip copyrighted content from the output, precisely because it is legally catastrophic for their "fair use" claim to have the model spit out copyrighted content. Such output is both copyright infringement in itself, and evidence that no matter how the internals of these models work, they store some of the training data anyway.
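
Notably, such a filter wouldn't need to touch the model at all; a post-hoc check against an index of protected text would do. A rough sketch, where the 8-token window, the threshold, and the corpus file are all assumptions for illustration rather than anything the vendors have disclosed:

    # Sketch of a post-hoc output filter: suppress completions that share
    # long token runs with an index of protected text.
    def ngrams(text: str, n: int = 8) -> set:
        tokens = text.split()
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    protected = ngrams(open("protected_corpus.txt").read())  # hypothetical index

    def filter_output(completion: str, threshold: int = 3) -> str:
        # Too many shared 8-grams: block the text rather than emit it.
        if len(ngrams(completion) & protected) >= threshold:
            return "[filtered: overlaps protected text]"
        return completion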

3. twoodf+9F 2023-07-15 16:22:48
>>_fbpp+dw
It actually doesn’t even matter if LLMs reproduce copyrighted data from their training. The issue is that a human copied the data from its source into memory for use in training, and this copy was likely not fair use under cases like MAI Systems.

The Supreme Court hasn’t ruled on a software case like this, as far as I know. But given the recent 7-2 decision against the Andy Warhol Foundation over its use of a photograph of Prince, this doesn’t seem like a Court that’s ready to say copying terabytes of unlicensed material for a commercial purpose is OK.

I’m going to guess this ends with Congress setting up some kind of clearinghouse for copyrighted training material: you opt in to be included, and you collect fees from OpenAI when they use what you contributed. This wouldn’t be unprecedented: Congress has repeatedly set up special rules and processes for things like music recordings over the years.

https://scholarship.law.edu/cgi/viewcontent.cgi?referer=&htt...

4. gyudin+eS 2023-07-15 17:31:16
>>twoodf+9F
So how is that supposed to work when people send it legally obtained copyrighted material for analysis?