zlacker

1. ethbr1+(OP) 2023-12-27 16:20:12
At the root, it seems like there's also a gap in copyright with respect to AI around what counts as transformative use.

Is using something, in its entirety, as a tiny bit of a massive data set, in order to produce something novel... infringing?

That's a pretty weird question that never existed when copyright was defined.

replies(2): >>layer8+F2 >>bawolf+U6
2. layer8+F2 2023-12-27 16:33:01
>>ethbr1+(OP)
Replace the AI model with a human, and it becomes pretty clear what is allowed and what isn't in terms of published output. The issue is that an AI model is like a human you can force to produce copyright-infringing output, or at least one where you have little control over whether the output is infringing or not.
replies(1): >>bawolf+K7
3. bawolf+U6 2023-12-27 16:55:18
>>ethbr1+(OP)
I think it did sort of come up back in the day, for example with libraries.

More importantly, every case is unique, so what really emerged was a set of principles for what defines fair use, and those will definitely guide this.

4. bawolf+K7 2023-12-27 16:59:49
>>layer8+F2
It's less clear than you think, and it comes down more to how OpenAI is commercially benefiting from and competing with the NYT than to what they actually did. (See the four factors of fair use.)