Take the estimated losses of the NYT from this "innovation" and multiply by 10^x, where "x" is high enough to make tech companies stop and think before they break laws next time. That would be my approach, at least.
>>necrof+O
The training isn't the issue per se; it's the regurgitation of verbatim text (or close enough to be immediately identifiable) within a for-profit product. Worse still, the regurgitation is done without attribution.
>>necrof+O
The legal argument, which I'm sure you are very well aware of, is that training a model on data, reorganizing it, and then presenting that data as your own is copyright infringement.
>>LargeT+Y1
Can you elaborate a bit more? That’s actually just a claim, not a legal argument.
Copyright law allows for transformative uses that add something new, with a further purpose or different character, and do not substitute for the original use of the work. Are LLMs not transformative?
>>profes+jc
Agreed, it is unclear. It's also a very commonly discussed issue with generative AI and there's been a significant amount of buzz around this. Is the NYT testing the legal waters? Maybe. Will this case set precedent? Yes. Is this a silly, random, completely unhinged case to bring?