zlacker

[return to "A federal judge sides with Anthropic in lawsuit over training AI on books"]
1. 3PS+V1[view] [source] 2025-06-24 16:32:07
>>moose4+(OP)
Broadly summarizing:

This is OK and fair use: training LLMs on copyrighted work, since it's transformative.

This is not OK and not fair use: pirating data, or creating a big repository of pirated data that isn't necessarily for AI training.

Overall seems like a pretty reasonable ruling?

2. ticula+J7[view] [source] 2025-06-24 17:01:36
>>3PS+V1
Definitely seems reasonable to say "you can train on this data, but you have to have a legal copy."

Personally I like to frame most AI problems by substituting a human (or humans) for the AI. Works pretty well most of the time.

In this case: if you hired a bunch of artists/writers who had somehow never seen a Disney movie, and to train them to make crappy Disney clones you made them watch all the movies, it would certainly be legal to do so, but only if you had legit copies in the training room. Pirating the movies would be illegal.

Though the downside is it does create a training moat: if you want to create the super-brain AI that's conversant with the corpus of copyrighted human literature, you're going to need a training library worth millions.

3. martin+oL[view] [source] 2025-06-24 20:40:16
>>ticula+J7
> Personally I like to frame most AI problems by substituting a human (or humans) for the AI. Works pretty well most of the time.

Human time is inherently valuable; computer time is not.

The issue with LLMs is that they allow doing things at a massive scale which would previously have been prohibitively time-consuming. (You could argue, but then: how much electricity is worth one human life?)

If I "write" a book by taking another and replacing every word with a synonym, that's obviously plagiarism and obviously copyright infringement. How about also changing the word order? How about rewording individual paragraphs while keeping the general structure? It's all still derivative work but as you make it less detectable, the time and effort required is growing to become uneconomical. An LLM can do it cheaply. It can mix and match parts of many works but it's all still a derivative of those works combined. After all, if it wasn't, it would produce equally good output with a tiny fraction of the training data.

The outcome is that a small group of people (those making LLMs and selling access to their output) get to make huge amounts of money off of the work of a group that is several orders of magnitude larger (essentially everyone who has written something on the internet) without compensating the larger group.

That is fundamentally exploitative, whether the current laws accounted for that situation or not.
