zlacker

[return to "GitHub Copilot, with “public code” blocked, emits my copyrighted code"]
1. _ryanj+2z[view] [source] 2022-10-17 00:51:24
>>davidg+(OP)
Howdy, folks. Ryan here from the GitHub Copilot product team. I don’t know how the original poster’s machine was set up, but I’m gonna throw out a few theories about what could be happening.

If similar code is open in your VS Code project, Copilot can draw context from those adjacent files. This can make it appear that the public model was trained on your private code, when in fact the context is drawn from local files. For example, this is how Copilot includes variable and method names relevant to your project in suggestions.
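
As a rough sketch of the idea (the function name, the character budget, and the prompt layout below are illustrative only, not the actual implementation), context from adjacent open files could be folded into a completion prompt something like this:

    # Hypothetical illustration of "context from adjacent files".
    # Nothing here reflects Copilot's real prompt format or limits.

    def build_prompt(current_file: str,
                     open_files: dict[str, str],
                     max_context_chars: int = 4000) -> str:
        """Concatenate snippets of other open editor files ahead of the
        file being edited, so the model can reuse local names."""
        parts = []
        budget = max_context_chars
        for path, text in open_files.items():
            snippet = text[:budget]
            parts.append(f"# Context from {path}\n{snippet}")
            budget -= len(snippet)
            if budget <= 0:
                break
        parts.append(current_file)
        return "\n\n".join(parts)

The point being: a suggestion that echoes your private variable names is more likely explained by this kind of local context than by the training set.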

It’s also possible that your code – or very similar code – appears many times over in public repositories. While Copilot doesn’t suggest code from specific repositories, it does repeat patterns. The OpenAI Codex model (from which Copilot is derived) works a lot like a translation tool. When you use Google to translate from English to Spanish, it’s not like the service has ever seen that particular sentence before. Instead, the translation service understands language patterns (e.g., syntax, semantics, common phrases). In the same way, Copilot translates from English to Python, Rust, JavaScript, etc. The model learns language patterns based on vast amounts of public data. Especially when a code fragment appears hundreds or thousands of times, the model can interpret it as a pattern. We’ve found this happens in <1% of suggestions.

To ensure every suggestion is unique, Copilot offers a filter to block suggestions of more than 150 characters that match public data. If you’re not already using the filter, I recommend turning it on by visiting the Copilot tab in user settings.
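
As a toy picture of what such a filter does (the snippet index, the matching strategy, and the use of 150 characters as a hard threshold are simplifications for illustration, not the actual implementation):

    # Toy model of a "block suggestions matching public code" filter.
    # PUBLIC_SNIPPETS and the matching logic are invented for illustration.

    PUBLIC_SNIPPETS = [
        "def quicksort(arr):\n    if len(arr) <= 1:\n        return arr",
        # ... imagine an index built from public repositories
    ]

    MATCH_THRESHOLD = 150  # characters, per the setting described above

    def longest_public_match(suggestion: str) -> int:
        """Length of the longest substring of `suggestion` that also
        appears verbatim in the public-snippet index (naive scan)."""
        best = 0
        for snippet in PUBLIC_SNIPPETS:
            for start in range(len(suggestion)):
                # Only consider spans longer than the best match so far.
                for end in range(len(suggestion), start + best, -1):
                    if suggestion[start:end] in snippet:
                        best = end - start
                        break
        return best

    def filter_suggestion(suggestion: str) -> str | None:
        """Suppress a suggestion whose overlap with public code exceeds
        the threshold; otherwise pass it through unchanged."""
        if longest_public_match(suggestion) >= MATCH_THRESHOLD:
            return None  # blocked before the user ever sees it
        return suggestion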

This is a new area of development, and we’re all learning. I’m personally spending a lot of time chatting with developers, copyright experts, and community stakeholders to understand the most responsible way to leverage LLMs. My biggest takeaway: LLM maintainers (like GitHub) must transparently discuss the way models are built and implemented. There’s a lot of reverse-engineering happening in the community, which leads to skepticism and the occasional misunderstanding. We’ll be working to improve on that front with more blog posts from our engineers and data scientists over the coming months.

◧◩
2. briffl+rP[view] [source] 2022-10-17 04:01:45
>>_ryanj+2z
There are lots of properly licensed sheets that show the notes to play ‘Stairway to Heaven’. Many intro-to-guitar books, etc. If I publish myself playing that song without the copyright owner’s permission (and typically attribution), I am looking at some very, very negative outcomes. The fact that there are many copies correctly licensed (or not) does not absolve me of anything. Curious how this is any different?
◧◩◪
3. zimpen+M91[view] [source] 2022-10-17 08:17:49
>>briffl+rP
> If I publish myself playing that song without the copyright owners permission

Music licensing is bonkers, but AFAIR (at least in the UK) you're allowed to do covers without explicit permission[1], though you'll have to give the original writers/composers the appropriate share of any money you make.

[1] Which is why you (used to?) get, e.g., supermarkets playing covers of songs rather than the originals because it's cheaper.

◧◩◪◨
4. Timwi+bA1[view] [source] 2022-10-17 12:29:52
>>zimpen+M91
What's the appropriate share?
◧◩◪◨⬒
5. zimpen+Cx2[view] [source] 2022-10-17 16:52:51
>>Timwi+bA1
In the UK, at least, it seemed to depend on several decades of accumulated rules and whatnot that only the PRS understood[1] (but I haven't been involved in anything related to music licensing for a few years, and even then it was baffling).

[1] Things like "Was it on the radio, a TV show, a live performance, or a recording? Who was the composer? Which licensing region was it in?" etc.
