zlacker

[parent] [thread] 4 comments
1. bsder+(OP)[view] [source] 2023-01-14 23:14:11
> My assumption would be 'fair use'.

Why? That's not obvious to me at all.

These algorithms take the entire image and feed it into their maw to train their neural network. That doesn't really sound like "fair use".

If these GPT systems were only doing scholarly work, there might be an argument. However, the moment the outputs are destined for somewhere other than scholarly publications, that "fair use" goes right out the window.

If these algorithms took a 1% chunk of the image, like a collage would, and fed only that into training, they'd have a better argument for "fair use". But then you don't have crowdsourced labelling that you can harvest for your training set, since the cut-down image probably doesn't correspond to all the prompts that the full image does.

> Stable Diffusion does not create 1:1 copies of artwork it has been trained on

What people aren't getting is that what the output looks like doesn't matter. This is a "color of your bits" problem--intent matters.

This was covered when colorizing old black-and-white films: https://chart.copyrightdata.com/Colorization.html -- "The Office will register as derivative works those color versions that reveal a certain minimum amount of individual creative human authorship." (Edit: And note that they were colorizing public-domain films to dodge the question of the original copyright.)

The current algorithms ingest entire images with the intent of generating new images from them. There is no "extra thing" being injected by a human--the same inputs always produce the same outputs: the result is deterministically derived from the training images, the text prompt, and the state of any internal random number generators.
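
As a minimal sketch of that determinism point (assuming the Hugging Face diffusers/torch APIs and the stable-diffusion-v1-5 checkpoint, purely for illustration): once the weights, the prompt, and the RNG seed are pinned, regenerating the image gives the same output.

    # Illustrative sketch only -- assumes torch + Hugging Face diffusers are installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
    prompt = "a painting of a lighthouse at dusk"

    def generate(seed):
        # Fixing the seed pins down the "internal random number generators".
        g = torch.Generator(device="cuda").manual_seed(seed)
        return pipe(prompt, generator=g).images[0]

    img_a = generate(42)
    img_b = generate(42)
    # On the same hardware, with the same weights, prompt, and seed,
    # the two runs should produce pixel-identical images.
    assert list(img_a.getdata()) == list(img_b.getdata())

Change the seed and you get a different image, which is exactly why the RNG state counts as part of the input rather than as human authorship.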

You don't get to claim a new copyright or fair use just because you bumped the red channel by 1%. GPT is a bit more complicated than that, but not very different in spirit.

replies(1): >>EMIREL+22
2. EMIREL+22[view] [source] 2023-01-14 23:36:34
>>bsder+(OP)
The amount of the work taken is just one of the fair use factors. Courts often perform a holistic analysis of all of them to decide whether fair use applies.
replies(1): >>bsder+d7
3. bsder+d7[view] [source] [discussion] 2023-01-15 00:28:48
>>EMIREL+22
That is why I pointed out both the scholarly exemption and the collage exception.

There are arguments to be made for fair use--I'm just not sure the current crop of GPT systems falls under any of them.

replies(1): >>EMIREL+H9
4. EMIREL+H9[view] [source] [discussion] 2023-01-15 00:56:51
>>bsder+d7
But the point is that fair use is almost completely principles-based rather than rules-based. Besides the four factors in the statute and some judicial precedent, it's pretty much at the discretion of the court.
replies(1): >>bsder+bh
5. bsder+bh[view] [source] [discussion] 2023-01-15 02:19:00
>>EMIREL+H9
So? Copyright is a social construct. Fair use is a social construct.

Social constructs are not computer programs. Social constructs concern messy, unpredictable computing units called humans.

Precedent and continuity are things that US courts normally try to value. Yes, the rules can be fuzzy, but the courts generally try to balance the needs of the competing parties. Unfortunately, there will never be a purely "rules-based" decision tree for this kind of fuzzy thing.

Of course, recent Republican court appointments have torn up the idea of respecting precedent and minimizing disruption in favor of partisan principles, so your concerns aren't unwarranted.
