zlacker

[return to "GitHub Copilot, with “public code” blocked, emits my copyrighted code"]
1. kweing+v6[view] [source] 2022-10-16 20:27:21
>>davidg+(OP)
I’ve noticed that people tend to disapprove of AI trained on their profession’s data, but are usually indifferent or positive about other applications of AI.

For example, I know artists who are vehemently against DALL-E, Stable Diffusion, etc. and regard it as stealing, but they view Copilot and GPT-3 as merely useful tools. I also know software devs who are extremely excited about AI art and GPT-3 but are outraged by Copilot.

For myself, I am skeptical of intellectual property in the first place. I say go for it.

2. tpxl+O7[view] [source] 2022-10-16 20:39:26
>>kweing+v6
When Joe Rando plays a song from 1640 on a violin, he gets a copyright claim on YouTube. When Jane Rando uses devtools to check a website's source code, she gets sued.

When Microsoft steals all the code on their platform and sells it, they get lauded. When "Open" AI steals thousands of copyrighted images and sells them, they get lauded.

I am skeptical of imaginary property myself, but fuck this one set of rules for the rich, another set of rules for the masses.

3. rtkwe+Te[view] [source] 2022-10-16 21:45:01
>>tpxl+O7
I think Copilot is a clearer copyright violation than any of the stable diffusion projects, though, because code has a much narrower band of expression than images. It's really easy to look at Copilot's output, match it back to the original source, and say these are the same. With stable diffusion it's much closer to someone remixing and aping the images than to reproducing originals.

I haven't been following super closely, but I don't know of any claims or examples where input images were recreated to a significant degree by stable diffusion.

4. makeit+jn[view] [source] 2022-10-16 23:01:38
>>rtkwe+Te
I think this is exactly the gap the GP is mentioning: to a trained artist it is clear as day that the original image has been lifted wholesale, even if, for instance, the colors are adjusted here and there.

You put it as a remix, but remixes are credited and acknowledged as such.

5. omnimu+Qo[view] [source] 2022-10-16 23:14:16
>>makeit+jn
Exactly. To a programmer, Copilot is a clear violation; to a writer, GPT-3 is a clear violation; to an artist, DALL-E 2 is a clear violation. The artist might love Copilot, the writer might love DALL-E, the programmer might love GPT-3.

It's all the same; they just don't realize this.

6. sidewn+Yu[view] [source] 2022-10-17 00:08:48
>>omnimu+Qo
Does DALL-E 2 reproduce artwork verbatim? I have never used it.
7. CapsAd+Kx[view] [source] 2022-10-17 00:39:31
>>sidewn+Yu
It's kind of like having millions of parameters you can tweak to get to an image. So an image does not really exist in the model.

I can imagine Mona Lisa in my head, but it doesn't really "exist" verbatim in my head. It's only an approximation.

I believe copilot works the same way (?)

8. heavys+1z[view] [source] 2022-10-17 00:51:09
>>CapsAd+Kx
NNs can and do encode information from their training sets in the models themselves, sometimes verbatim.

Sometimes the original information is there in the model, encoded/compressed/however you want to look at it, and can be reproduced.
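A toy sketch of the point, assuming nothing about how Copilot or DALL-E actually work internally: a tiny softmax "model" (just one weight matrix) trained on a single string will, after enough gradient steps, regenerate that string verbatim from its weights alone. The training data has effectively moved into the parameters.

```python
import numpy as np

# Hypothetical toy example: memorize one string in a weight matrix.
text = "fast inverse square root"
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
n, V = len(text), len(chars)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(n, V))  # the entire "model"

X = np.eye(n)                            # input: position as a one-hot vector
Y = np.array([idx[c] for c in text])     # target: the character at that position

for _ in range(300):                     # plain softmax-regression training
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= X.T @ (p - np.eye(V)[Y])        # gradient step on cross-entropy loss

# Reconstruct the training string from the weights alone.
recovered = "".join(chars[j] for j in (X @ W).argmax(axis=1))
print(recovered == text)
```

Real models are vastly larger and trained on far more data than they could store losslessly, so most inputs are not recoverable like this; the point is only that nothing *prevents* parameters from holding a training example verbatim, which is what the memorization findings show.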
