zlacker

[return to "Who knew the first AI battles would be fought by artists?"]
1. meebob+kc[view] [source] 2022-12-15 13:03:10
>>dredmo+(OP)
I've been finding that the strangest part of discussions around art AI among technical people is the complete lack of identification or empathy: it seems to me that most computer programmers should be just as afraid as artists in the face of technology like this!!! I am a failed artist (read: I studied painting in school, tried to make a go of being a commercial artist in animation, and couldn't make the cut), so I decided to do something easier and became a computer programmer, working for FAANG and other large companies and making absurd (to me!!) amounts of cash. In my humble estimation, making art is vastly more difficult than the huge majority of computer programming that gets done. Art AI is terrifying if you want to make art for a living -- and if AI is able to do these astonishingly difficult things, why shouldn't it, with some finagling, also be able to do the dumb, simple things most programmers do for their jobs?

The lack of empathy is incredibly depressing...

◧◩
2. Alexan+Xh1[view] [source] 2022-12-15 17:47:59
>>meebob+kc
Setting aside questions of whether there is copyright infringement going on, I think this is an unprecedented case in the history of automation replacing human labor.

Jobs have been automated since the industrial revolution, but this usually takes the form of someone inventing a widget that makes human labor unnecessary. From a worker's perspective, the automation comes from "the outside". What's novel with AI models is that the workers' own work is used to create the thing that replaces them. It's one thing to be automated away; it's another to have your own work used against you like this, and I'm sure it feels extra-shitty as a result.

◧◩◪
3. wwwest+gr1[view] [source] 2022-12-15 18:32:55
>>Alexan+Xh1
Absolutely this -- and in many (maybe most) cases, there was no consent for the use of the work in training the model, and quite possibly no notice or compensation at all.

That's a huge ethical issue, whether or not it's explicitly addressed in copyright/IP law.

◧◩◪◨
4. api+zw1[view] [source] 2022-12-15 18:57:49
>>wwwest+gr1
I really think there are likely to be gigantic class-action lawsuits in the near future, and I support them. People did not consent to their data and work being used in this way. In many cases, people have already demonstrated with custom-tailored prompts that these models were trained on copyrighted works that are not in the public domain.
◧◩◪◨⬒
5. archon+5x1[view] [source] 2022-12-15 18:59:34
>>api+zw1
Consent isn't required if they're making their work available for public viewing.
◧◩◪◨⬒⬓
6. gransh+hR1[view] [source] 2022-12-15 20:29:43
>>archon+5x1
For VIEWING. This is like blatantly taking your GPL-licensed code and using it for commercial purposes.
◧◩◪◨⬒⬓⬔
7. archon+732[view] [source] 2022-12-15 21:25:52
>>gransh+hR1
A thing that can be viewed can be learned from.

I can't copy your GPL code. I might be able to write my own code that does the same thing.

I'm going to defend this statement in advance. A lot of software developers white-knight more than they strictly have to; they claim that learning from GPL code unavoidably results in infringing reproduction of that code.

Courts, however, apply a test [1] to determine the degree to which an idea is separable from the expression of that idea. Copyright protects the particular expression, not the idea, and where the idea cannot be separated from the expression, the expression cannot be copyrighted. So either I'm able to produce a non-infringing expression of the idea, or the expression cannot be copyrighted and the GPL license is redundant.

[1] https://en.wikipedia.org/wiki/Abstraction-Filtration-Compari...
