zlacker

[return to "The New York Times is suing OpenAI and Microsoft for copyright infringement"]
1. dissid+B6 2023-12-27 14:41:17
>>ssgodd+(OP)
Even if they win against OpenAI, how would this prevent something like a Chinese or Russian LLM from "stealing" their content and making their own superior LLM that isn't weakened by regulation like the ones in the United States?

And I say this as someone who is extremely bothered by how easily mass amounts of open content can just be vacuumed up into a training set with reckless abandon. There isn't much you can do other than put everything you create behind some kind of authentication wall, and even then it's only a matter of time until it leaks anyway.

Pandora's box is really open. We need to figure out how to live in a world with these systems, because it's an unwinnable arms race where only bad actors benefit from everyone else being neutered by regulation, especially given the massive pace of open-source innovation in this space.

We’re in a “mutually assured destruction” situation now, but instead of bombs the weapon is information.

2. ndsipa+D7 2023-12-27 14:47:30
>>dissid+B6
This suggests to me that copyright laws are becoming out of date.

The original intent was to provide an incentive for human authors to publish work, but the law has grown increasingly out of touch since the internet made publishing and copying virtually free. I think with the dawn of LLMs, copyright law is now mainly incentivising lawyers.

3. phone8+I8 2023-12-27 14:52:33
>>ndsipa+D7
What incentive do people have to publish work if their work is going to primarily be consumed by an LLM and spat out, without attribution, at people who are using the LLM?
4. repels+5c 2023-12-27 15:10:31
>>phone8+I8
I think you're making a profound point here.

I believe you equate incentive with monetary rewards. And while that is probably true for the majority of news outlets, money isn't necessarily what motivates journalists.

So consider the hypothetical situation where journalists (or, more generally, people who publish things) were somehow compensated. In this hypothetical, though, they would not be attributed (or only to a very limited extent), because LLMs are just bad at attribution.

Shouldn't, in that case, the fact that the LLM distributes the information "better" be enough to satisfy the deeper goal of publishing? I.e., reaching as many people looking for that information as possible, without blasting it out or targeting and tracking audiences?
