zlacker

[return to "The New York Times is suing OpenAI and Microsoft for copyright infringement"]
1. dissid+B6[view] [source] 2023-12-27 14:41:17
>>ssgodd+(OP)
Even if they win against OpenAI, how would this prevent something like a Chinese or Russian LLM from “stealing” their content and making their own superior LLM that isn’t weakened by regulation like the ones in the United States?

And I say this as someone who is extremely bothered by how easily mass amounts of open content can be vacuumed up into a training set with reckless abandon. There isn’t much you can do other than put everything you create behind some kind of authentication wall, and even then it’s only a matter of time until it leaks anyway.

Pandora’s box is really open; we need to figure out how to live in a world with these systems, because it’s an unwinnable arms race where only bad actors will benefit from everyone else being neutered by regulation, especially given the massive pace of open-source innovation in this space.

We’re in a “mutually assured destruction” situation now, but instead of bombs the weapon is information.

◧◩
2. llm_ne+97[view] [source] 2023-12-27 14:43:57
>>dissid+B6
I don't think they're looking to prevent the inevitable, but rather see a target with a fat wallet from which a lot of money can be extracted. I'm not saying this in a negative way, but much of the "this is outrageous!" reaction to AI hasn't been about the building of models, but rather the realization that a few players are arguably getting very rich on those models, so other people want their piece of the action.
◧◩◪
3. dissid+S8[view] [source] 2023-12-27 14:53:29
>>llm_ne+97
If NYT wins this, then there is going to be a massive push for payouts from basically everyone ever… I don’t see that wallet staying fat for long.
◧◩◪◨
4. alexey+ub[view] [source] 2023-12-27 15:07:16
>>dissid+S8
If LLMs actually create added value and don't just burn VC money, then they should be able to pay a fair price for the work of the people they're relying upon.

If your business is profitable only when you get your raw materials for free it's not a very good business.

◧◩◪◨⬒
5. logicc+Ub[view] [source] 2023-12-27 15:09:23
>>alexey+ub
By that logic you should have to pay the copyright holder of every library book you ever read, because you could later produce some content you memorised verbatim.
◧◩◪◨⬒⬓
6. macNch+oe[view] [source] 2023-12-27 15:22:47
>>logicc+Ub
The rules we have now were made in the context of human brains doing the learning from copyrighted material, not machine learning models. The limits on what most humans can memorize and reproduce verbatim are extraordinarily different from those of an LLM. I think it only makes sense to re-explore these topics from a legal point of view, given that we’ve introduced something totally new.
◧◩◪◨⬒⬓⬔
7. whichf+KH[view] [source] 2023-12-27 18:06:28
>>macNch+oe
Human brains are still the main legal agents in play. LLMs are just computer programs used by humans.

Suppose I'm researching a book that I'm writing: it doesn't matter whether I type it on a Mac, PC, or typewriter. It doesn't matter if I use the internet or the library. It doesn't matter if I use an AI-powered voice-to-text keyboard or an AI assistant.

If I release a book with a chapter that was blatantly copied from another book, I might be sued under copyright law. That doesn't mean we should lock me out of the library, or prevent my tools from working there.

◧◩◪◨⬒⬓⬔⧯
8. 015a+u92[view] [source] 2023-12-28 04:58:46
>>whichf+KH
> Human brains are still the main legal agents in play.

No, they're not. This is The New York Times (a corporation) vs OpenAI and Microsoft (two more corporations).

◧◩◪◨⬒⬓⬔⧯▣
9. rajama+cm2[view] [source] 2023-12-28 07:36:26
>>015a+u92
Aren't corporations considered 'persons' in the US?
[go to top]