zlacker

[return to "A federal judge sides with Anthropic in lawsuit over training AI on books"]
1. 3PS+V1[view] [source] 2025-06-24 16:32:07
>>moose4+(OP)
Broadly summarizing.

This is OK and fair use: training LLMs on copyrighted work, since it's transformative.

This is not OK and not fair use: pirating data, or creating a big repository of pirated data that isn't necessarily for AI training.

Overall seems like a pretty reasonable ruling?

2. SoKami+W8[view] [source] 2025-06-24 17:09:39
>>3PS+V1
What if I overfit my LLM so that it spits out copyrighted work with special prompting? Where do you draw the line in training?
3. ninety+Ad[view] [source] 2025-06-24 17:37:58
>>SoKami+W8
I mean, the human brain can memorize things as well, and that's not illegal. It's only illegal if the memorized thing is distributed.
4. martin+1O[view] [source] 2025-06-24 20:55:35
>>ninety+Ad
Humans don't scale. LLMs do.

Even if LLMs were actual human-level AI (they are not, by far), a small group of rich people could use them to make enormous amounts of money without putting in the enormous amounts of work humans would have to.

All the while "training" (i.e., precomputing transformations that, among other things, make plagiarism detection difficult) on work that took enormous amounts of human labor, without compensating those workers.
