zlacker

"The New York Times is suing OpenAI and Microsoft for copyright infringement"
1. solard+Aj 2023-12-27 15:53:06
>>ssgodd+(OP)
I hope this results in Fair Use being expanded to cover AI training. This is way more important to humanity's future than any single media outlet. If the NYT goes under, a dozen similar outlets can replace it overnight. If we lose AI to stupid IP battles in its infancy, we end up handicapping probably the single most important development in human history just to protect an ancient newspaper. And another country will do it anyway, and the NYT will still get eaten.
2. aantix+1l 2023-12-27 16:01:23
>>solard+Aj
Why can't AI at least cite its source? This feels like a broader problem, nothing specific to the NYTimes.

Long term, if no one is given credit for their research, creators will either start to wall off their content or stop creating altogether. Both outcomes would be sad.

Even a simple, human-readable attribution from the AI could go a long way - "I think I read something about this <topic X> in the NYTimes <link> on January 3rd, 2021."

Without attribution, long term, nothing moves forward: AI loses access to the latest findings from humanity, and so does the public.
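
Roughly something like this - a sketch where attribution is bolted on outside the model via a retrieval index, so the citation is looked up rather than recalled. Every name here (Doc, search, fake_llm, answer_with_attribution) is made up for illustration, not any real API:

    # Hypothetical sketch: provenance lives in index metadata, not in weights.
    from dataclasses import dataclass

    @dataclass
    class Doc:
        text: str
        outlet: str
        date: str
        url: str

    docs = [
        Doc("Regulators weighed new AI rules...", "NYTimes", "2021-01-03",
            "https://example.com/a"),
        Doc("A study examined model training data...", "Reuters", "2022-06-10",
            "https://example.com/b"),
    ]

    def search(query, k=1):
        # Toy relevance: shared-word count (a real system would use embeddings).
        words = set(query.lower().split())
        score = lambda d: len(words & set(d.text.lower().split()))
        return sorted(docs, key=score, reverse=True)[:k]

    def fake_llm(prompt):
        # Stand-in for a model call that answers from the provided context.
        return "An answer grounded only in the retrieved context."

    def answer_with_attribution(question):
        passages = search(question, k=1)
        context = "\n".join(p.text for p in passages)
        answer = fake_llm(f"Context:\n{context}\n\nQ: {question}")
        # The citation comes from stored metadata, which survives even though
        # the model's weights record no provenance at all.
        cites = [f"{p.outlet} ({p.date}): {p.url}" for p in passages]
        return answer + "\nSources: " + "; ".join(cites)

    print(answer_with_attribution("What AI rules did regulators weigh?"))

The model never has to "remember" where it read something; the attribution comes from data that sits next to the model.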

3. make3+5m 2023-12-27 16:07:46
>>aantix+1l
"Why can't AI at least cite its source" each article seen alters the weights a tiny, non-human understandable amount. it doesn't have a source, unless you think of the whole humongous corpus that it is trained on
4. pxoe+Wo 2023-12-27 16:24:44
>>make3+5m
That just sounds like "we didn't even try to build those systems that way, and we're all out of ideas, so it basically will never work."

Which is really just a very, very common story with AI problems - sources, citations, licenses, usage tracking, etc. are all "too complex, if not impossible, to solve." At this point that reads like a facade for intentionally ignoring the problems because ignoring them is profitable. The problems definitely exist, so why not try to solve them? Because actually solving them would mean using data properly and paying creators, and that would cut into the bottom line. The whole point is free data use without having to pay, so why would they ruin that for themselves?

5. simonw+Pp 2023-12-27 16:28:28
>>pxoe+Wo
What makes you think AI researchers (including the big labs like OpenAI and Anthropic) aren't trying to solve these problems?
6. pxoe+Rs 2023-12-27 16:44:14
>>simonw+Pp
The solutions haven't arrived. Neither have any interim changes in lieu of solutions. "Trying" isn't an actual, present, functional change, and it just gets passed around as an excuse for companies to keep doing whatever they're doing.
7. pama+UI 2023-12-27 18:13:04
>>pxoe+Rs
Please recall how much the world changed in just the last year. What would be your expected timescale for solving this particular problem, and why is it more important than instilling models with the ability to plan logically and answer correctly?
8. pxoe+lN8 2023-12-30 15:02:50
>>pama+UI
The timeline for LLMs and image generation is 6+ years. This isn't something that arrived just this year and is only now changing; it's been in development for a long time. And yet.