zlacker

[parent] [thread] 8 comments
1. TaupeR+(OP)[view] [source] 2022-12-15 12:55:35
Not at all, for many reasons.

1) The artist is not literally copying the copyrighted pixel data into their "system" for training.

2) An individual artist is not a multi-billion-dollar company with a computer system that spits out art rapidly using copyrighted pixel data. A categorical difference.

replies(3): >>brushf+41 >>endorp+e1 >>astran+k1
2. brushf+41[view] [source] 2022-12-15 13:01:06
>>TaupeR+(OP)
Those reasons don't make sense to me.

On 1, human artists are copying copyrighted pixel data into their system for training. That system is the brain. It's organic RAM.

On 2, money shouldn't make a difference. Jim Carrey should still be allowed to paint even though he's rich.

If Jim uses Photoshop instead of brushes, he can spit out the style ideas he's copied and transformed in his brain more rapidly - but he should still be allowed to do it.

replies(3): >>astran+04 >>Taywee+35 >>Alexan+Vj
3. endorp+e1[view] [source] 2022-12-15 13:02:13
>>TaupeR+(OP)
Have to disagree with point 1; often this is exactly what artists are doing. It's more literal in music (literally playing others' songs), less literal in drawing. But copying, incorporating, and developing existing work are core foundations of art.
4. astran+k1[view] [source] 2022-12-15 13:02:29
>>TaupeR+(OP)
Diffusion models don't copy the pixels you show them. You cannot generally tell which training images inspired which output images.

(That's as opposed to a large language model, which does memorize text.)

Also, you can train it to imitate an artist's style just by showing it textual descriptions of the style. It doesn't have to see any images.
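
To make "training" concrete, here's a simplified sketch of what a diffusion training step does, in PyTorch-style Python. The toy model and the linear noise schedule are illustrative only, not any real system's code; a real denoiser is a U-Net and the schedule is more involved.

    import torch
    import torch.nn as nn

    # Stand-in for a real U-Net denoiser; the architecture here is a toy.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1),
        nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(images):
        # images: (B, 3, H, W), scaled to [-1, 1]
        noise = torch.randn_like(images)          # fresh Gaussian noise
        t = torch.rand(images.shape[0], 1, 1, 1)  # random noise level per image
        noisy = (1 - t) * images + t * noise      # blend each image toward pure noise
        pred = model(noisy)                       # network predicts the injected noise
        loss = (pred - noise).pow(2).mean()       # denoising (noise-prediction) loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # The weights end up encoding statistics of "what denoised images look
    # like", not the pixels of any particular training image; at sampling
    # time you start from pure noise and iteratively denoise.
    print(train_step(torch.randn(4, 3, 32, 32).clamp(-1, 1)))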

replies(1): >>mejuto+Io
5. astran+04[view] [source] [discussion] 2022-12-15 13:16:33
>>brushf+41
> On 1, human artists are copying copyrighted pixel data into their system for training. That system is the brain. It's organic RAM.

They probably aren't doing that. Studying the production methods and works in progress is more useful for a human. (ML models basically guess how to make images until they produce one that "looks like" something you show them.)

replies(1): >>Mezzie+Eb1
6. Taywee+35[view] [source] [discussion] 2022-12-15 13:21:15
>>brushf+41
A human can grow and learn from their own experiences, separate from the art they take in as input. They'll sometimes get creative and develop their own unique style. Whatever analogies you draw, the AI is still a program with input and output. That objection to point 1 doesn't hold, for the same reason it wouldn't hold for any compiler: until an AI can innovate on its own and hold its own copyright, its output is still a machine transformation.
7. Alexan+Vj[view] [source] [discussion] 2022-12-15 14:32:18
>>brushf+41
I think the parent's point about (2) wasn't about money but about category. A human is a human and has rights; an AI model is a tool and does not. The two would not be treated equally under the law in any other circumstance, so why equate them when discussing copyright?
8. mejuto+Io[view] [source] [discussion] 2022-12-15 14:49:32
>>astran+k1
> Also, you can train it to imitate an artist's style just by showing it textual descriptions of the style. It doesn't have to see any images.

And the weights. The weights it has learned originally come from the images.

9. Mezzie+Eb1[view] [source] [discussion] 2022-12-15 18:10:50
>>astran+04
They do sometimes, or at least they used to. I have some (very limited) visual art training, and one of the things we did in class was manually mash up existing works. In my case I smushed together The Persistence of Memory and the Arnolfini Portrait. It was pretty clearly a copycat piece: the work was divided into squares, and I poorly replicated the Arnolfini Portrait square by square.