zlacker

[parent] [thread] 7 comments
1. konsch+(OP)[view] [source] 2023-07-08 08:17:14
I don't think that's problematic. That's how societies work. They learn.
replies(2): >>gumbal+d1 >>manuel+02
2. gumbal+d1[view] [source] 2023-07-08 08:30:53
>>konsch+(OP)
AI is not “societies” or “people”, and it most certainly doesn't “learn” as the two would. Perhaps that's what OpenAI's effective marketing campaign taught gullible folks, but that's not how it works at all. A“I” ingests massive amounts of people's intellectual work, often without consent, mixes it, and resells it without royalties.
replies(1): >>chii+y6
3. manuel+02[view] [source] 2023-07-08 08:41:01
>>konsch+(OP)
“How societies work” can be used to justify essentially anything, so I don't think it's a good argument.
4. chii+y6[view] [source] [discussion] 2023-07-08 09:36:57
>>gumbal+d1
> ingests massive amounts of people’s intellectual work, often without consent, mixes it and resells it without royalties.

but when people do that, it is allowed, isn't it? So what is special about AI, other than the scale?

replies(1): >>gumbal+k7
5. gumbal+k7[view] [source] [discussion] 2023-07-08 09:46:38
>>chii+y6
This debate is becoming tiring - yes, humans are allowed to, according to terms and conditions. We could use the same argument to claim that a database is just human memory at scale, and thus should be allowed to store any data it wants and then serve it, yet we don't permit that. Similarly, a laptop can sing, because just like a human it emits sound, yet you have to pay for what it emits.

AI is software; it doesn't “learn” as a human does, and even if it did, it would still have to be bound by the same rules as any other piece of software and any human.

replies(1): >>chii+Da
6. chii+Da[view] [source] [discussion] 2023-07-08 10:32:26
>>gumbal+k7
> it would still have to be bound by the same rules as any other piece of software and human.

exactly, so there's zero reason to prevent anyone from using a piece of software (which slurps a lot of information off the internet) to produce new works that do not infringe on currently copyrighted content.

replies(1): >>gumbal+iy
7. gumbal+iy[view] [source] [discussion] 2023-07-08 14:08:08
>>chii+Da
Well, that goes without saying. The issue is not the tool; the issue is how it's created and used. There's no problem in using publicly available, AI-friendly licensed content. The issue is using copyrighted content without consent and without honouring licensing terms.
replies(1): >>chii+Sb2
8. chii+Sb2[view] [source] [discussion] 2023-07-09 02:16:33
>>gumbal+iy
> ai friendly licensed content

> The issue is using copyrighted content without consent

the consent is given implicitly if the content is available to the public for viewing. The copyright isn't being violated by an AI training model, as the content isn't copied. The information contained within the works is not what's copyrighted - it's the expression.

If the AI training algorithm is capable of extracting the information out of the works and using it in another environment as part of some other work, you cannot claim copyright over such information.

This applies to style, patterns, and other abstract information that can be extracted from works. It's as if a chef, upon reading many recipe books, produces a new recipe book (one that contains information extracted from them) - the original creators of those recipe books cannot claim said chef violated any copyright.
