>>necrof+(OP)
The training isn't the issue per se; it's the regurgitation of verbatim text (or close enough to be immediately identifiable) within a for-profit product. Worse still, the regurgitation is done without attribution.
>>necrof+(OP)
The legal argument, which I'm sure you are very well aware of, is that training a model on data, reorganizing it, and then presenting that data as your own is copyright infringement.
>>LargeT+a1
Can you elaborate a bit more? That’s actually just a claim, not a legal argument.
Copyright law allows for transformative uses that add something new, with a further purpose or different character, and do not substitute for the original use of the work. Are LLMs not transformative?
>>profes+vb
Agreed, it is unclear. It's also a very commonly discussed issue with generative AI, and there's been a significant amount of buzz around it. Is the NYT testing the legal waters? Maybe. Will this case set precedent? Yes. Is this a silly, random, completely unhinged case to bring? No.