zlacker

[return to "GitHub Copilot, with “public code” blocked, emits my copyrighted code"]
1. kweing+v6 2022-10-16 20:27:21
>>davidg+(OP)
I’ve noticed that people tend to disapprove of AI trained on their profession’s data, but are usually indifferent or positive about other applications of AI.

For example, I know artists who are vehemently against DALL-E, Stable Diffusion, etc. and regard it as stealing, but they view Copilot and GPT-3 as merely useful tools. I also know software devs who are extremely excited about AI art and GPT-3 but are outraged by Copilot.

For myself, I am skeptical of intellectual property in the first place. I say go for it.

2. tpxl+O7 2022-10-16 20:39:26
>>kweing+v6
When Joe Rando plays a song from 1640 on a violin, he gets a copyright claim on YouTube. When Jane Rando uses devtools to check a website's source code, she gets sued.

When Microsoft steals all code on their platform and sells it, they get lauded. When "Open" AI steals thousands of copyrighted images and sells them, they get lauded.

I am skeptical of imaginary property myself, but fuck this one set of rules for the poor, another set of rules for the rich.

3. rtkwe+Te 2022-10-16 21:45:01
>>tpxl+O7
I think Copilot is a clearer copyright violation than any of the Stable Diffusion projects, though, because code has a much narrower band of expression than images. It's really easy to look at the output of Copilot, match it back to the original source, and say these are the same. With Stable Diffusion it's much closer to someone remixing and aping the images than it is reproducing the originals.

I haven't been following super closely, but I don't know of any claims or examples where input images were recreated to a significant degree by Stable Diffusion.

4. mr_toa+Xn 2022-10-16 23:07:35
>>rtkwe+Te
> I haven't been following super closely, but I don't know of any claims or examples where input images were recreated to a significant degree by Stable Diffusion.

I think that the argument being made by some artists is that the training process itself violates copyright just by using the training data.

That’s quite different from arguing that the output violates copyright, which is what the tweet in this case was about.
