zlacker

[return to "A federal judge sides with Anthropic in lawsuit over training AI on books"]
1. Nobody+fc[view] [source] 2025-06-24 17:29:23
>>moose4+(OP)
One aspect of this ruling [1] that I find concerning: on pages 7 and 11-12, it concedes that the LLM does substantially "memorize" copyrighted works, but rules that this doesn't infringe the authors' copyrights because Anthropic has server-side filtering to avoid reproducing memorized text. (Alsup compares this to Google Books, which keeps searchable full-text copies of copyrighted books server-side, but only lets users access snippets in a non-infringing manner.)

Does this imply that distributing open-weights models such as Llama is copyright infringement, since users can trivially run the model without output filtering to extract the memorized text?

[1]: https://storage.courtlistener.com/recap/gov.uscourts.cand.43...
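
For anyone wondering what such an output filter looks like mechanically, here's a rough sketch of the general n-gram-overlap idea (my own illustration, not Anthropic's actual implementation): before returning a completion, check it for long verbatim runs against an index built from the protected texts, and block it on a hit. With open weights, of course, nothing obliges whoever runs the model to apply any check like this.

    # Rough sketch of a server-side verbatim-overlap filter (illustrative only).
    NGRAM = 12  # block the response if 12+ consecutive words match a protected work

    def build_index(protected_texts):
        """Index every NGRAM-word window of the protected corpus."""
        index = set()
        for text in protected_texts:
            words = text.split()
            for i in range(len(words) - NGRAM + 1):
                index.add(tuple(words[i:i + NGRAM]))
        return index

    def is_allowed(generated, index):
        """Return False if the generated text contains a long verbatim run."""
        words = generated.split()
        return not any(
            tuple(words[i:i + NGRAM]) in index
            for i in range(len(words) - NGRAM + 1)
        )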

◧◩
2. riskab+5J[view] [source] 2025-06-24 20:25:05
>>Nobody+fc
A judge already ruled that models themselves don't constitute copyright infringement in Kadrey v. Meta Platforms, Inc. (https://casetext.com/case/kadrey-v-meta-platforms-inc). The EFF has a good summary about it:

> the court dismissed “nonsensical” claims that Meta’s LLaMA models are themselves infringing derivative works.

See: https://www.eff.org/deeplinks/2025/02/copyright-and-ai-cases...

◧◩◪
3. qoez+ZC2[view] [source] 2025-06-25 14:46:17
>>riskab+5J
Time to overfit a model on some books and publish it as a libgen mirror.
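
Mechanically that's just an ordinary causal-LM fine-tune run way past convergence. Rough sketch with Hugging Face transformers, where "book.txt", the base model, and the hyperparameters are all placeholders:

    # Deliberately overfit a small causal LM on one text so it memorizes it verbatim.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Load the book as one example per line, dropping empty lines.
    dataset = load_dataset("text", data_files="book.txt")["train"]
    dataset = dataset.filter(lambda row: row["text"].strip() != "")
    dataset = dataset.map(
        lambda row: tokenizer(row["text"], truncation=True, max_length=256),
        remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="overfit-book",
                               num_train_epochs=100,  # far past convergence
                               per_device_train_batch_size=2,
                               learning_rate=5e-5),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()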
◧◩◪◨
4. london+2t4[view] [source] 2025-06-26 07:22:15
>>qoez+ZC2
I think this could lead to interesting results, legalities aside.

Imagine you're getting it to spit out Lord of the Rings, but midway through you inject into the output: 'Suddenly, the ring split in two. No longer one ring to rule them all, but two!'

You then let the model write the rest of the story!
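
Something like this with a locally run open-weights model (the model name and the opening line are just placeholders; a small model like gpt2 won't actually have the book memorized, but the mechanics are the same):

    # Generate a memorized passage, splice in the twist, then let the model continue.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def continue_from(prefix, max_new_tokens=200):
        """Sample a continuation of the given prefix and return the full text."""
        inputs = tokenizer(prefix, return_tensors="pt")
        output = model.generate(**inputs, max_new_tokens=max_new_tokens,
                                do_sample=True, top_p=0.9)
        return tokenizer.decode(output[0], skip_special_tokens=True)

    # Step 1: let the model reproduce the (supposedly memorized) opening.
    story = continue_from("Three Rings for the Elven-kings under the sky,")

    # Step 2: inject the twist, then hand the modified prefix back to the model.
    story += " Suddenly, the ring split in two. No longer one ring to rule them all, but two!"
    print(continue_from(story))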

[go to top]