zlacker

GitHub Copilot, with “public code” blocked, emits my copyrighted code
1. _ryanj+2z 2022-10-17 00:51:24
>>davidg+(OP)
Howdy, folks. Ryan here from the GitHub Copilot product team. I don’t know how the original poster’s machine was set up, but I’m gonna throw out a few theories about what could be happening.

If similar code is open in your VS Code project, Copilot can draw context from those adjacent files. This can make it appear that the public model was trained on your private code, when in fact the context is drawn from local files. For example, this is how Copilot includes variable and method names relevant to your project in suggestions.
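
To make that concrete, here’s a rough sketch of how context from neighboring open files could be stitched into a prompt. This is purely illustrative – the function name and windowing heuristics below are assumptions for explanation, not Copilot’s actual implementation:

    # Hypothetical sketch of prompt assembly from neighboring editor tabs.
    # build_prompt and its heuristics are illustrative assumptions, not
    # Copilot's real code.
    def build_prompt(current_file: str, open_files: list[str],
                     max_context_chars: int = 2000) -> str:
        """Prefix the current file with snippets from other open files,
        so local variable and method names can surface in suggestions."""
        parts = []
        budget = max_context_chars
        for neighbor in open_files:
            snippet = neighbor[:500]  # leading window of each neighbor file
            if len(snippet) > budget:
                break
            parts.append(snippet)
            budget -= len(snippet)
        parts.append(current_file)  # the file being edited comes last
        return "\n".join(parts)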

It’s also possible that your code – or very similar code – appears many times over in public repositories. While Copilot doesn’t suggest code from specific repositories, it does repeat patterns. The OpenAI Codex model (from which Copilot is derived) works a lot like a translation tool. When you use Google to translate from English to Spanish, it’s not that the service has seen that particular sentence before. Instead, the translation service understands language patterns (e.g., syntax, semantics, common phrases). In the same way, Copilot translates from English to Python, Rust, JavaScript, etc. The model learns language patterns from vast amounts of public data. Especially when a code fragment appears hundreds or thousands of times, the model can interpret it as a pattern. We’ve found this happens in under 1% of suggestions. To keep suggestions from matching public code verbatim, Copilot offers a filter that blocks suggestions longer than 150 characters that match public data. If you’re not already using the filter, I recommend turning it on by visiting the Copilot tab in user settings.
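
Mechanically, you can picture that filter as something like the following – a simplified sketch where the set-based verbatim lookup and the function name are assumptions for illustration, not the shipped implementation:

    # Simplified, hypothetical sketch of the "matches public code" filter.
    # The 150-character threshold is the product setting; the set-based
    # verbatim lookup is an illustrative assumption.
    def should_block(suggestion: str, public_code_index: set[str],
                     threshold: int = 150) -> bool:
        """Suppress a suggestion that is longer than the threshold and
        matches known public code verbatim (after whitespace normalization)."""
        normalized = " ".join(suggestion.split())
        if len(normalized) <= threshold:
            return False  # short fragments pass through
        return normalized in public_code_index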

This is a new area of development, and we’re all learning. I’m personally spending a lot of time chatting with developers, copyright experts, and community stakeholders to understand the most responsible way to leverage LLMs. My biggest takeaway: LLM maintainers (like GitHub) must transparently discuss the way models are built and implemented. There’s a lot of reverse-engineering happening in the community, which leads to skepticism and the occasional misunderstanding. We’ll be working to improve on that front with more blog posts from our engineers and data scientists over the coming months.

2. svnt+pD 2022-10-17 01:37:10
>>_ryanj+2z
This doesn’t at all address the primary issue, which is one of licensing.

Is it a valid defense against copyright infringement to say “we don’t know where we got it, maybe someone else copied it from you first?”

If someone violated the copyright of a song by sampling too much of it, then released it into the public domain (or failed to claim copyright at all), and you took the entire sample from them, would that hold up in a legal setting? I doubt it.

3. dark-s+Q81 2022-10-17 08:06:47
>>svnt+pD
> Is it a valid defense against copyright infringement to say “we don’t know where we got it, maybe someone else copied it from you first?”

If you do something, it's ultimately you who has to make sure that it is not against the law. "I didn't know" is never a good defense. If you pay with counterfeit cash, it is you who will be arrested, even if you didn't know it was counterfeit. If you use code from somewhere else (whether by copy/pasting or by using Copilot), it is you who has to make certain that it doesn't infringe on any copyright.

Just because a tool can (accidentally) make you break the law doesn't mean the tool is to blame (cf. BitTorrent, Tor, Kali Linux, ...).

4. unsafe+ba1 2022-10-17 08:21:46
>>dark-s+Q81
BitTorrent doesn't automatically download a pirated copy of The Lion King when you ask it for something to watch...
5. dark-s+Do1 2022-10-17 10:56:31
>>unsafe+ba1
BitTorrent (and, to an even greater degree, eDonkey) did and still does exactly that. Who tells you that what you're downloading is indeed what you think it is? You can click on a magnet link that claims to download a Debian ISO, only to find out later that it's something else entirely. To make matters worse, BitTorrent even uploads to potentially hundreds of other clients while you're still downloading. So while downloading something might not be illegal in your jurisdiction, uploading/distributing it most certainly is, and you can get into lots of trouble for uploading (parts of) a copyrighted work to hundreds or thousands of other users.
6. unsafe+0C1 2022-10-17 12:44:02
>>dark-s+Do1
BitTorrent is certainly not a good example to follow, but I do think that Copilot is more in the wrong.

BitTorrent clients should definitely include disclaimers and make seeding opt-in (though I don't know how safe you are legally when you download a copy of The Lion King labeled Debian.iso). That said, they don't have the information necessary to tell whether what you're doing is legal or not.

Copilot _has_ that information. The model spits out code that it has read. GitHub could disallow publishing or commercially using code generated by it while they sort this out, but they made the decision not to.

AI is hard, but the model is clearly handing out literal copies of GPL code. GitHub knows this, and they still don't tell you about it when you click install.

7. dark-s+WA4 2022-10-18 07:40:33
>>unsafe+0C1
It doesn't matter whether the information is there or not, since an algorithm cannot commit a copyright violation. There is at least one human involved, and that human is the one who is responsible.

A car has all the information needed to know that it's going faster than the speed limit, or that it just ran a red light. But in the end, it's the driver who is responsible. It's not the tool (car, Copilot) that commits the illegal act; it's the user using that tool.

8. unsafe+nY8 2022-10-19 12:17:39
>>dark-s+WA4
In the case of Copilot, you don't even have a speedometer.