zlacker

[return to "A federal judge sides with Anthropic in lawsuit over training AI on books"]
1. Nobody+fc[view] [source] 2025-06-24 17:29:23
>>moose4+(OP)
One aspect of this ruling [1] that I find concerning: on pages 7 and 11-12, it concedes that the LLM does substantially "memorize" copyrighted works, but rules that this doesn't infringe the authors' copyrights because Anthropic applies server-side filtering to avoid reproducing memorized text. (Alsup compares this to Google Books, which keeps server-side searchable full-text copies of copyrighted books but only allows users to access snippets, in a non-infringing manner.)

Does this imply that distributing open-weights models such as Llama is copyright infringement, since users can trivially run the model without output filtering to extract the memorized text?
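
To make that concrete: with local weights there is no server in the loop, so a greedy continuation of a known prefix comes back with nothing filtering it. A minimal sketch with Hugging Face transformers (the model name and prefix are illustrative placeholders; whether any given passage was actually memorized is an empirical question):

    # Sketch: generate from an open-weights model with no output filter.
    # Model name and prefix are placeholders, not a claim about Llama.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "meta-llama/Llama-2-7b-hf"  # any local open-weights checkpoint
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Feed the opening of a candidate passage and decode greedily; nothing
    # client- or server-side intervenes in what the model emits.
    prefix = "It was the best of times, it was the worst of times,"
    inputs = tok(prefix, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=200, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))

If the continuation tracks the source verbatim for long stretches, that's exactly the memorization the ruling describes, with no filter in place to stop it.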

[1]: https://storage.courtlistener.com/recap/gov.uscourts.cand.43...

2. ticula+hf[view] [source] 2025-06-24 17:46:41
>>Nobody+fc
Yep, broadly capable open models are on track for annihilation. Legally obtaining all the training materials will be expensive enough that only players with hefty backing can afford it.

There's also the question of the weights themselves: if you download a model file that contains enough of the source material to be considered infringing (assume you can extract the contents directly out of the weights, without even running the LLM), then it might as well be a .zip with a PDF in it, and the model file itself becomes an infringing object. Closed models, by contrast, can be held accountable not for what they store but for what they produce.
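
That "contains enough of the source material" claim is at least testable. A rough sketch (placeholder model name and passage; a low score is suggestive, not a legal finding): score a candidate excerpt's per-token loss under the weights, since text the model has memorized scores far better than chance.

    # Sketch: gauge memorization by scoring a passage's mean negative
    # log-likelihood under an open-weights model. Placeholders throughout.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "meta-llama/Llama-2-7b-hf"
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    passage = "..."  # paste a candidate copyrighted excerpt here
    ids = tok(passage, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL per token
    print(f"mean NLL: {loss.item():.3f}")  # unusually low => likely memorized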

3. bonobo+JN[view] [source] 2025-06-24 20:54:15
>>ticula+hf
This technology is a really bad way of storing, reproducing, and transmitting the books themselves. It's probabilistic and lossy. It may be possible to reproduce some paragraphs, but no reasonable person would expect to read The Da Vinci Code by prompting the LLM. Surely the marketed use cases and the observed real use by users have to make it clear that the intended and vastly overwhelming use of an LLM is transformative: a "digestive" synthesis of many sources into a merged, abstracted, generalized system that can function in novel settings, answering never-before-seen prompts usefully, overwhelmingly without reproducing existing written works. It surely matters what the purpose of the thing is, both in intention and in observed practice. It's not a viable competing alternative to reading the actual book.
4. spit2w+oX[view] [source] 2025-06-24 22:02:09
>>bonobo+JN
Not The Da Vinci Code, but I recently tried reading "OCaml Programming: Correct + Efficient + Beautiful" through Gemini. The book is openly available, so I assumed it was "in there". I read by saying "Give me the first paragraph of Chapter 6" and then something like "Next 3 paragraphs". If I had a question, I could ask it, get some more info, and have something like a dialog.
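
That read-by-dialog loop is easy to script, for what it's worth. A sketch against the google-generativeai Python package (the model name is a placeholder, and whatever comes back may be a confabulation rather than the real text):

    # Sketch: page through a chapter via a chat session. Placeholder model
    # name; the replies may be reconstructions, not the actual book.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-1.5-pro")
    chat = model.start_chat()

    print(chat.send_message(
        'Give me the first paragraph of Chapter 6 of '
        '"OCaml Programming: Correct + Efficient + Beautiful".').text)
    for _ in range(3):
        print(chat.send_message("Next 3 paragraphs.").text)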

As far as I could tell, what I got didn't match what's posted online today. The text was somewhat on-topic but poorly written, and it referenced sections that I don't think existed; no amount of prompting could locate them. I'm not convinced the material presented to me was actually the book, although it was consistent with the subject of the chapter.

I tried to ascertain when the book had been scraped, but couldn't find a matching version on Archive.org or in the book's git repo.

Eventually I gave up and just continued reading the PDF.
