zlacker

[return to "A federal judge sides with Anthropic in lawsuit over training AI on books"]
1. 3PS+V1[view] [source] 2025-06-24 16:32:07
>>moose4+(OP)
Broadly summarizing.

This is OK and fair use: Training LLMs on copyrighted work, since it's transformative.

This is not OK and not fair use: pirating data, or creating a big repository of pirated data that isn't necessarily for AI training.

Overall seems like a pretty reasonable ruling?

◧◩
2. ninety+Nc[view] [source] 2025-06-24 17:33:46
>>3PS+V1
Agreed. If I memorize a book and then go out into the world to talk about what I memorized, that is not a violation of copyright. The ruling follows logically, because that is essentially what an LLM is doing.
◧◩◪
3. layer8+he[view] [source] 2025-06-24 17:41:47
>>ninety+Nc
It might be different if you are a commercial product which couldn’t have been created without incorporating the contents of all those books.

Humans, animals, hardware and software are treated differently by law because they have different constraints and capabilities.

◧◩◪◨
4. ninety+Di[view] [source] 2025-06-24 18:09:45
>>layer8+he
But a commercial product is reaching parity with human capability.

Let's be real: humans have special treatment (more special than animals, since we can eat and slaughter animals but not other humans) because WE created the law to serve humans.

So in terms of being fair across the board, LLMs are no different. But there's no harm in giving ourselves special treatment.

◧◩◪◨⬒
5. layer8+Fm[view] [source] 2025-06-24 18:33:10
>>ninety+Di
Generative AIs are very different from humans because they can be copied losslessly and scaled tremendously, and they also have no individual liability, nor any awareness of how similar their output is to something in their training material. They differ from humans in constraints and capabilities in all sorts of ways. For one, a human will likely never reproduce a book they read without being aware that that's what they are doing.