zlacker

[return to "The New York Times is suing OpenAI and Microsoft for copyright infringement"]
1. solard+Aj[view] [source] 2023-12-27 15:53:06
>>ssgodd+(OP)
I hope this results in Fair Use being expanded to cover AI training. This is way more important to humanity's future than any single media outlet. If the NYT goes under, a dozen similar outlets can replace them overnight. If we lose AI to stupid IP battles in its infancy, we end up handicapping probably the single most important development in human history just to protect some ancient newspaper. Then another country is going to do it anyway, and still the NYT is going to get eaten.
2. aantix+1l[view] [source] 2023-12-27 16:01:23
>>solard+Aj
Why can't AI at least cite its source? This feels like a broader problem, nothing specific to the NYTimes.

Long term, if no one is given credit for their research, either the creators will start to wall off their content or not create at all. Both options would be sad.

A human-readable attribution note from the AI could go a long way - "I think I read something about this <topic X> in the NYTimes <link> on January 3rd, 2021."

It appears that without attribution, long term, nothing moves forward.

AI loses access to the latest findings from humanity. And so does the public.

3. make3+5m[view] [source] 2023-12-27 16:07:46
>>aantix+1l
"Why can't AI at least cite its source" each article seen alters the weights a tiny, non-human understandable amount. it doesn't have a source, unless you think of the whole humongous corpus that it is trained on
4. pxoe+Wo[view] [source] 2023-12-27 16:24:44
>>make3+5m
that just sounds like "we didn't even try to build those systems in that way, and we're all out of ideas, so it basically will never work"

which is really just a very, very common story with ai problems, be it sources/citations/licenses/usage tracking/etc. it's all just 'too complex, if not impossible, to solve', which at this point seems like a facade for intentionally ignoring those problems for profit. those problems definitely exist, so why not try to solve them? because, well... actually trying to solve them would mean using data properly and paying creators, and that would cut into the bottom line. the point is free data use without having to pay, so why would they ruin that for themselves?

5. KHRZ+us[view] [source] 2023-12-27 16:42:31
>>pxoe+Wo
Just a question: do you remember a source for all the knowledge in your mind, or have you ever even tried to remember one?
6. pxoe+zt[view] [source] 2023-12-27 16:47:13
>>KHRZ+us
a computer isn't a human. aren't computers good at storing data? why can't they just store that data? the sources are literally right there in the training datasets. why can't they just reference those sources?

human analogies are cute, but they're completely irrelevant. this is specifically about computers, and the analogy doesn't change or excuse how computers work.
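[ed.: the simplest version of the system pxoe is asking for can be sketched (hypothetical code, invented names): index the training corpus by word n-grams and look generated text up in it. This catches near-verbatim overlap, but says nothing about where a paraphrased or blended answer "came from" - which is the gap make3 points at.]

```python
# Hypothetical dataset-side attribution: map every 5-word phrase in the
# corpus to the documents it appears in, then check model output against
# that index. Only verbatim-ish reuse is detectable this way.

from collections import defaultdict

def ngrams(text, n=5):
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def build_index(corpus, n=5):
    index = defaultdict(set)
    for doc_id, text in corpus.items():
        for gram in ngrams(text, n):
            index[gram].add(doc_id)
    return index

def attribute(output, index, n=5):
    # every source document that shares a 5-gram with the output
    sources = set()
    for gram in ngrams(output, n):
        sources |= index[gram]
    return sources

corpus = {  # invented doc IDs and text, for illustration only
    "nyt-2021-01-03": "the committee voted to approve the measure after a long debate",
    "blog-774": "a long debate about training data and attribution is coming",
}
index = build_index(corpus)
```

[ed.: `attribute("the committee voted to approve the measure quickly", index)` flags the NYT doc, while a paraphrase of the same fact matches nothing - so this gives citations for copied phrasing, not for learned knowledge.]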

7. Levitz+Nb1[view] [source] 2023-12-27 20:44:08
>>pxoe+zt
I'm sorry if this is too callous, but if you don't understand what you are talking about, you should first familiarize yourself with the problem, and only then make claims about what should be done.

It would be great if we could tell specifically how something like ChatGPT creates its output - it would be great for research, so it's not as if there is no interest in it - but it's just not an easy thing to do. The question is more "Where did you get your identity from?" than "Who is the author of that book?". You might object that sometimes what the machine gives back CAN literally be the answer to "Who is the author of that book?", but even in those cases the answer doesn't come from that one work alone; there is an entire background that lets it understand that this is what you want.
