For example, I know artists who are vehemently against DALL-E, Stable Diffusion, etc. and regard it as stealing, but they view Copilot and GPT-3 as merely useful tools. I also know software devs who are extremely excited about AI art and GPT-3 but are outraged by Copilot.
For myself, I am skeptical of intellectual property in the first place. I say go for it.
I celebrate Microsoft's shameless plundering of GitHub to create new products that increase productivity. The incredible thing is that people trusted Microsoft to use their code on their terms to begin with. This is a company that has been finding ways to turn open source code into proprietary products since the 90s.
Nobody can stop people from replicating what Microsoft did in the long run anyway. Eventually any consumer with enough access to source code will be able to make their own Copilot. Even if Copilot is criminalised, Microsoft can just sell access to the entire GitHub dataset and let other people commit the "crime". Then we're right back where we started: suing the end users of Copilot for infringement instead of Microsoft.
Use private repos, or face the inevitability that Copilot-like products will scrape your code.
There is no arguing against it, though; you can't stop it. All this stuff is coming to all of these areas eventually, so you might as well find ways to use the opportunities while some of this is still new.
Lawmakers need to jump on this stuff ASAP. Some say that it's no different from a person looking at existing code or art and recreating it from memory or using it as inspiration. But the law changes when technology gets involved already, anyway. There's no law against you and I having a conversation, but I may not be able to record it depending on the jurisdiction. Similarly, there's no law against you looking at artwork that I post online, but it's not out of question that a law could exist preventing you from using it as part of an ML training dataset.
Suing Github = signing up for a ~decade long incredibly expensive and time-consuming legal battle against one of the richest companies in the world
There may be a slight difference in effort between these two options.
Oh, the comments! Those are covered by copyright for sure.
But to be honest, if your code is open source, I'm pretty sure Microsoft doesn't care about the licence; they'll just use it because "reasons". Same with Stable Diffusion: they don't give a fuck about the data. If it's on the internet, they'll use it, so it's a topic that will probably be regulated in a few years.
Until then, let's hope they get milked (both Microsoft and NovelAI) for illegal content usage, and I seriously hope at least a few lawyers will try milking them ASAP, especially NovelAI, which illegally used a lot of copyrighted art in its training data.
When Microsoft steals all code on their platform and sells it, they get lauded. When "Open" AI steals thousands of copyrighted images and sells them, they get lauded.
I am skeptical of imaginary property myself, but fuck having one set of rules for the rich and another set of rules for the masses.
I dunno, the title says it used public code when it was meant to block public code.
We need regulation around it.
Nope. DALL-E generates images with the Getty watermark, so clearly there are copyrighted materials in its training set: https://www.reddit.com/r/dalle2/comments/xdjinf/its_pretty_o...
I am also not a hypocrite; I do not like DALL-E or Stable Diffusion either.
As a sibling comment implies, these AI tools give more power to people who control data, i.e., big companies or wealthy people, while at the same time, they take power away from individuals.
Copilot is bad for society. DALL-E and Stable Diffusion are bad for society.
I don't know what the answer is, but I sure wish I had the resources to sue these powerful entities.
For something to show up verbatim in the output of a textual AI model, it needs to be an input many times.
I wonder if the problem is not Copilot, but many people using this person's code without license or credit, and Copilot being trained on those pieces of code as well. Copilot may just be exposing a problem rather than creating one.
I don't know much about AI, and I don't use copilot.
What did the photograph do to the portrait artist? What did the recording do to the live musician?
Here’s some highfalutin art theory on the matter, from almost a hundred years ago: https://en.wikipedia.org/wiki/The_Work_of_Art_in_the_Age_of_...
If you try to outlaw it, the day before the laws come into effect, I'm going to download the very best models out there and run it on my home computer. I'll start organising with other scofflaws and building our own AI projects in the fashion of leelachesszero with donated compute time.
You can shut down the commercial versions of these tools. You can scare large corporations into banning the use of these tools internally. You can pull an uno reverse card and use modified versions of the tools to CHECK for copyright infringement and sue people under existing laws, AND you'll probably even be able to statistically prove somebody is an AI user. But STOPPING the use of these tools? Go ahead and try; it won't happen.
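The "uno reverse" idea of checking for infringement can be sketched with a toy fingerprinting pass: break code into overlapping token n-grams and compare shingle sets. Everything here (function names, the threshold, the snippets) is invented for the sketch, not any real tool's API.

```python
import re

def shingles(code: str, n: int = 5) -> set:
    """Break code into overlapping n-token windows ("shingles")."""
    tokens = re.findall(r"\w+|[^\w\s]", code)
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two snippets' shingle sets."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original  = "for i in range(10): total += values[i] * weights[i]"
suspect   = "for i in range(10): total += values[i] * weights[i]"
unrelated = "def greet(name): return 'hello ' + name"

# Identical snippets score 1.0; unrelated code scores near 0.
assert similarity(original, suspect) == 1.0
assert similarity(original, unrelated) < 0.1
```

Real plagiarism detectors (MOSS-style winnowing, for instance) are far more robust to renaming and reordering, but the principle is the same: verbatim reuse leaves a statistical fingerprint.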
Not to detract from the well founded licensing discussion, but who is it that finds this madlibs approach useful in coding?
Can these enterprises really make sure that their code won't be used to train Copilot? I am skeptical.
Conservatism consists of exactly one proposition, to wit:
There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect.
—Composer Frank Wilhoit[1]
[1]: https://crookedtimber.org/2018/03/21/liberals-against-progre...
Those pretty little licenses are a waste of storage if no one enforces them.
For what it's worth, the people I know who are opposed to this sort of "useful tool" don't discriminate by profession.
For some reason, code seems to lend itself to exact copying by AIs (and also some humans) rather than comprehension and imitation.
That can't possibly be a valid claim, right? AFAIK copyright is "gone" after the original author dies + ~70 years. Until fairly recently it was even shorter. Something from 1640 surely can't be claimed under copyright protection. There are much more recent changes where that might not be the case, but 1640?
> When Jane Rando uses devtools to check a website source code she gets sued.
Again, that doesn't sound like a valid suit. Surely she would win? In the few cases I've heard of where suits like this are brought against someone they've easily won them.
They are still their own separate works!
If a painter paints a person for commission, and then that person also commissions a photographer to take a picture of them, is the photographer infringing on the copyright of the painter? Absolutely not; the works are separate.
If a recording artist records a public domain song that another artist performs live, is the recording artist infringing on the live artist? Heavens, no; the works are separate.
On the other hand, these "AIs" are taking existing works and reusing them.
Say I write a song, and in that song, I use one stanza from the chorus of one of your songs. Verbatim. Would you have a copyright claim against me for that? Of course, you would!
That's what these AIs do; they copy portions and mix them. Sometimes the portions are not substantial. Sometimes they are: verbatim comments (code), identical structure (also code), watermarks (images), composition (also images), lyrics (songs), or motifs (also songs).
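The "copy portions and mix them" claim can be illustrated with a deliberately crude toy: a greedy bigram model trained on a tiny corpus will spit its most-duplicated phrase back word for word. This is nothing like how the real systems work internally; it only shows that duplication in training data invites verbatim reproduction.

```python
from collections import Counter, defaultdict

# A tiny "training set" where one sentence is duplicated five times.
corpus = (
    "the quick brown fox jumps over the lazy dog . " * 5
    + "a quick red fox naps . "
).split()

# Count next-word frequencies for each word (a bigram model).
next_counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_counts[a][b] += 1

def generate(word: str, length: int) -> list:
    """Greedily decode: always pick the most frequent next word."""
    out = [word]
    for _ in range(length - 1):
        word = next_counts[word].most_common(1)[0][0]
        out.append(word)
    return out

# A phrase from the duplicated sentence comes back verbatim.
assert generate("quick", 5) == ["quick", "brown", "fox", "jumps", "over"]
```

The rarer alternative continuations ("red", "naps") are drowned out by the duplicated material, which is roughly the dynamic the memorization studies on large models describe.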
In the reverse of your painter and photographer example, we saw US courts hand down judgment against an artist who blatantly copied a photograph. [1]
Anyway, that's the difference between the tools of photography (creates a new thing) and sound recording (creates a new thing) versus AI (mixes existing things).
And yes, sound mixing can easily stray into copyright infringement. So can other copying of various copyrightable things. I'm not saying humans don't infringe; I'm saying that AI does by construction.
[1]: https://www.reuters.com/world/us/us-supreme-court-hears-argu...
I can write an HTML form, then prompt copilot to generate a serializable class that can be used to deserialize that form on the server. I can write a test for one of our internal apis, and for every subsequent test I can just write the name of what I expect it to check and it generates a test that correctly uses our internal APIs and verifies the expected behavior.
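For a flavor of the first workflow, here is the sort of boilerplate being described. The form, class, and field names are all invented for this sketch; a real Copilot completion would be shaped by the surrounding project.

```python
from dataclasses import dataclass

@dataclass
class SignupForm:
    """Typed representation of a hypothetical signup form."""
    username: str
    email: str
    newsletter: bool

    @classmethod
    def from_form(cls, form: dict) -> "SignupForm":
        """Deserialize POSTed form fields (all strings) into typed values."""
        return cls(
            username=form["username"].strip(),
            email=form["email"].strip(),
            # Checkboxes arrive as "on" when ticked, absent otherwise.
            newsletter=form.get("newsletter", "") == "on",
        )

form = {"username": " ada ", "email": "ada@example.com", "newsletter": "on"}
parsed = SignupForm.from_form(form)
assert parsed.username == "ada"
assert parsed.newsletter is True
```

Tedious to write by hand, trivial to prompt for: that is the niche being described.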
You can have problems with the ethics of how GitHub and OpenAI produced what they did, but to describe it the way that you did requires never having really attempted to use it seriously.
Is it fine if an author publishes a short story publicly on the web for someone else to submit it to a contest as their own work?
Not sure I agree, but I can at least see the point for Copilot and DALL-E - but Stable Diffusion? It's open source, it runs on (some) home-use laptops. How is that taking away power from indies?
Just look at the sheer number of apps building on or extending SD that were published on HN, and that's probably just the tip of the iceberg. Quite a few of them at least looked like side projects by solo devs.
"There is no such thing as liberalism — or progressivism, etc.
There is only conservatism. No other political philosophy actually exists; by the political analogue of Gresham’s Law, conservatism has driven every other idea out of circulation."
If I take a song, cut it up, and sing over it, my release is valid. If I parody your work, that's my work. If you paint a picture of a building and I go to that spot and take a photograph of that building it is my work.
I can derive all sorts of things, things that I own, from things that others have made.
Fair use is a thing: https://www.copyright.gov/fair-use/
As for talking about the originals, would an artist credit every piece of inspiration they have ever encountered over a lifetime? Publishing a seed seems fine as a nice thing to do, but pointing at the billion pictures that went into the drawing seems silly.
[0] https://techcrunch.com/2012/03/22/microsoft-and-tivo-drop-th...
If a company or person assumes they have copyright permission for any publicly accessible work, they will quickly find out that the assumption is wrong, and that they require explicit permission.
I imagine that Disney would take issue with SD if material that Disney owned the copyright to was used in SD. They would sue. SD would have to be taken off the market.
Thus, Disney has the power to ensure that their copyrighted material remains protected from outside interests, and they can still create unique things that bring in audiences.
Any small-time artist that produces something unique will find their material eaten up by SD in time, and then, because of the sheer number of people using SD, that original material will soon have companions that are like it because they are based on it in some form. Then, the original won't be as unique.
Anyone using SD will not, by definition, be creating anything unique.
And when it comes to art, music, photography, and movies, uniqueness is the best selling point; once something is not unique, it becomes worth less because something like it could be gotten somewhere else.
SD still has the power to devalue original work; it just gives normal people that power on top of giving it to the big companies, while the original works of big companies remain safe because of their armies of lawyers.
To be fair I thought it might be at least a week or two.
There is no such thing as being a Liberal or Progressive; there is only being a Conservative or anti-Conservative, and while there is much nuance and policy to debate about that, it boils down to deciding whether you actually support or abhor the idea of "the law" (which is a much broader concept than just the legal system) existing to enforce or erase the distinction between in-groups and out-groups.
But that's just my read on it. Getting back to intellectual property, it has become a bitter joke on artists and creatives, who are held up as the beneficiaries of intellectual property laws in theory, but in practice are just as much of an out-group as everyone else.
We are bound by the law—see patent trolls, for example—but not protected by it unless we have pockets deep enough to sue Disney for not paying us.
They are absolutely, completely, and utterly bullshit. Nobody with half an ear for music will mistake my playing of Bach's G Minor Sonata for Arthur Grumiaux's (too many out-of-tune notes :-D). Yet YouTube still manages to match it to my playing, probably because they had never heard mine before now (I recorded it mere minutes ago).
So no, it isn't a valid claim, but this algorithm, trained on certain examples of work, manages to make bad classifications with potentially devastating ramifications for the creator. (I'm not a monetized YouTube artist, but if this triggered a complete lockout of my Google account(s), it would likely end Very Badly.)
I think it's a very relevant comparison to the GP's examples.
It's not, but good luck talking to a human at Youtube when the video gets taken down.
> Again, that doesn't sound like a valid suit. Surely she would win?
Assuming she could afford the lawyer, and that she lives through the stress and occasional mistreatment by the authority, yes, probably. Both are big ifs, though.
I wonder if there is a crowdfunding platform like GoFundMe for lawsuits. Or can GoFundMe itself be used for this purpose? It would be fantastic to sue the mega-polluters, lying media like Fox, etc.
That said, even with a lot of money, are these cases winnable? Especially given the current state of Supreme Court and other federal courts?
I'm not a lawyer, but my understanding is that while the "1640's violin composition" itself may be out of copyright, if I record myself playing it, my recording of that piece is my copyright. So if you took my file (somehow) and used it without my permission, and I could prove it, I could claim copyright infringement.
That's my understanding, and I've personally operated that way to avoid any issues, since it errs on the side of safety. (If you want to use old music, make sure the license of the recording explicitly says public domain or includes license info.)
But it can still be weaponized to prevent legitimate resubmissions of parallel works, which can potentially deplatform legitimate users, depending on the reviewer and the clarity of the rebuttal.
BTW, what happened after the photograph is that there were fewer portrait artists. And after the recording, there were fewer live musicians. There are certainly no fewer artists or musicians overall, though!
Let me be perfectly clear. I'm all for the tech. The capabilities are nice. The thing I'm strongly against is training these models on any data without any consent.
GPT-3 is OK, training it with public stuff regardless of its license is not.
Copilot is OK, training it with GPL/LGPL-licensed code without consent is not.
DALL-E/MidJourney/Stable Diffusion is OK. Training it with non public domain or CC0 images is not.
"We're doing something amazing, hence we need no permission" is ugly to put it very lightly.
I've left GitHub because of Copilot. I will leave any photo hosting platform if they hint at any similar thing with my photography, period.
It has indeed happened.
https://boingboing.net/2018/09/05/mozart-bach-sorta-mach.htm...
Sony later withdrew their copyright claim.
There are two pieces to copyright when it comes to public domain:
* The work (song) itself -- can't copyright that
* The recording -- you are the copyright owner. No one, without your permission, can re-post your recording
And of course, there is derivative work. You own any portion that is derivative of the original work.
Will they “accidentally” include proprietary code from, say, Oracle? Nope. They’ll make sure of it. But Joe Random’s? Sure.
I've been finding co-pilot really useful but I'll be pausing it for now, and I'm glad I have only been using it on personal projects and not anything for work. This crosses the line in my head from legal ambiguity to legal "yeah that's gonna have to stop".
> with "public code" blocked

What does that mean? Are you able to set a setting in GitHub to tell GitHub that you don't want your code used for Copilot training data? Is this an abuse of the license you sign with GitHub, or did they update it at some point to allow your code to be automatically used in Copilot? I'm not crazy about the idea of paying GitHub for them to make money off of my code/data.
That's freedom of speech for everyone who can afford a lawyer to bring suit against a music rights-management company.
I haven't been following super closely but I don't know of any claims or examples where input images were recreated to a significant degree by stable diffusion.
https://blog.barac.at/a-business-experiment-in-data-dignity
Yes, I am quoting myself.
Do you have any evidence for those claims, or anything resembling those examples?
Music copyright has long expired for classical music, and big shots are definitely not exempt where it still applies. Just look at how much heat Ed Sheeran, one of the biggest contemporary pop stars, got for "stealing" a phrase that was literally just chanting "Oh-I" a few times. (To be clear, I am familiar with the case and find it infuriating that this petty rent-seeking attempt went to trial at all, even though Sheeran ended up completely cleared, at great personal distress as he said afterwards.)
And who ever got sued for using dev tools? Is there even a way to find that out?
So even if we assume these were wholly original works that the author placed under something like a Creative Commons license, the fact that they incorporated an image the author had no rights to would at the very least create a tangled copyright situation; any really rigorous evaluation of the copyright status of every image in the training set would tend toward rejecting it as not worth the risk of litigation.
But the more likely scenario here is that they did, at best, minimal filtering of the training set for copyright.
I disagree, but this is a debate worth having.
This is why I disagree: humans don't copy just copyrighted material.
I am in the middle of developing and writing a romance short story. Why? Because my writing has a glaring weakness: characters, and romance stands or falls on characters. It's a good exercise to strengthen that weakness.
Anyway, both people in the (eventual) couple are drawn from my real life, not from any copyrighted material. For instance, the man will basically be a less autistic and less selfish version of myself. The woman will basically be the kind of person that annoys me the most in real life: bright, bubbly, always touching people, etc.
There is no copyrighted material I am getting these characters from.
In addition, their situation is not typical of such stories, but it does have connections to my life. They will (eventually) end up in a ballroom dance competition. Why that? So the male character can hate it. I hate ballroom dance: during a three-week ballroom dancing course in 6th grade, the girls made me hate it. I won't say how, but they did.
That's the difference between humans and machines: machines can only copy and mix other copyrightable material; humans can copy real life. In other words, machines can only copy a representation; humans can copy the real thing.
Oh, and the other difference is emotion. I've heard that people without the emotional center of their brains can take six hours to choose between blue and black pens. There is something about emotions that drives decision-making, and it's decision-making that drives art.
When you consider that brain chemistry, which is a function of genetics and previous choices, is a big part of emotions, then it's obvious that those two things, genetics and previous choices, are also inputs to the creative process. Machines don't have those inputs.
Those are the non-religious reasons why I think humans have more originality than machines, including neural networks.
https://en.m.wikipedia.org/wiki/Viacom_International_Inc._v.....
I don't see how copilot or similar tools can solve this problem without vetting each project.
Do you have the "don't reproduce code verbatim" preference set?
I suspect he has a different problem which (thanks to Microsoft) is now a problem he has to care about: his code probably shows up in one or more repos copy-pasted with improper LGPL attribution. There'd be no way for Copilot to know that had happened, and it would have mixed in the code.
(As a side note: understanding why an ML engine outputs a particular result is still an open area of research AFAIK.)
https://www.radioclash.com/archives/2021/05/02/youtuber-gets...
As for being sued for looking at source, here is the first result on Google:
https://www.wired.com/story/missouri-threatens-sue-reporter-...
These things are tools to make more involved things. You're not going to be remembered for all the AI art you prompted into existence, no matter how many "good ones" you manage to generate. No one is going to put you into the Guggenheim for it.
Likewise, programmers aren't going to become more depraved or something by using Copilot. I think that kind of prescriptive purism needs to Go Away Forever, personally.
Since I’m posing the question, here’s my guess:
- Their stock would take at least a short term hit because it’s an unconventional and uncharacteristic move
- The code would reveal more about their strategic interests to competitors than they’d like, but probably nothing revelatory
- It might confirm or reinforce some negative perceptions of their business practices
- It might dispel some too
- It may reduce some competitive advantage amongst enormous businesses, and may elevate some very large firms to potential competitors
- It would provide little to no new advantage to smaller players who aren’t already in reach of competing with them and/or don’t have the resources to capitalize on access to the code
- It would probably significantly improve public perception of the company and its future intentions, at least among developers and the broader tech community
In other words, a wash. Overall business impact would be roughly neutral. The code has more strategic than technical value, there are few who could leverage the technical value that is any kind of revenue center with growth potential. Any disadvantage would be negated by the public image goodwill it generated.
Maybe my take is naive though! Maybe it would really hurt Microsoft long term if suddenly everyone can fork Windows 11, or steal ideas for their idiosyncratic office suite, or get really clever about how to get funded to go head to head with Azure armed with code everyone else can access too.
Those are effectively cases of cryptomnesia[0]. Part and parcel of learning.
If you don't want broad access to your work, don't upload it to a public repository. It's very simple. Good on you for recognising that you don't agree with how GitHub treats data in public repos, but it's not their problem.
In other words: the banal observation that people care far more when their stuff is stolen than when some stranger has their stuff stolen.
For copyright, the act of me creating something doesn't deprive you of anything, except the ability to consume or use the thing I created. If I were influenced by something, you can still be influenced by that same thing - I do not exhaust any resources I used.
This is wholly different from physical objects. If I create a knife, I deprive you of the ability to make something else from those natural resources: resources that I didn't create but merely exploited.
Because of this, I'm fine with copyright (patents are another story). But I have some issues with physical property.
And why should opt-out be a reasonable norm? To be clear, the internet (among many other things) breaks down if every exchange of information is opt-in. Sharing of photographs taken in public places is another example. But the internet basically functions because people share information on an opt-out basis (that may or may not even be respected).
Not to mention this code wasn't public, so it's kind of moot; having someone's private code generated into my project is very bad.
As to the option: I don't have it set, and I wasn't even aware of it. It's pretty silly to me that it's not on by default, or even really an option; it should probably be enabled with no way to toggle it short of editing the extension.
So? No one needs to stop it totally. The world isn't black and white, pushing it to the fringes is almost certainly a sufficient success.
Outlawing murder hasn't stopped murder, but no one's given up on enforcing those laws because of the futility of perfect success.
> If you try to outlaw it, the day before the laws come into effect, I'm going to download the very best models out there and run it on my home computer. I'll start organising with other scofflaws and building our own AI projects in the fashion of leelachesszero with donated compute time.
That sounds like a cyberpunk fantasy.
Are you sure?
I'm not familiar with the exact data set they used for SD and whether or not Disney art was included, but my understanding is that their claim to legality comes from arguing that the use of images as training data is 'fair use'.
Anyone can use Disney art for their projects as long as it's fair use, so even if they happened to not include Disney art in SD, it doesn't fully validate your point, because they could have done so if they wanted. As long as training constitutes fair use, which I think it should - it's pretty much the AI equivalent of 'looking at others' works', which is part of a human artist's training as well.
The first is that people only recognize the problems with the things that they're familiar with, which you would kind of expect.
The other option is that there's a difference in the thing that people object to. My impression is that artists seem to be reacting to the idea that they could be automated out of a job, where programmers are mostly objecting to blatant copyright violation. (Not universally in either case, but often.) If that is the case, then those are genuinely different arguments made by different people.
I understand there's no way for the model to know, but then it's really on Microsoft to ensure no private, poorly licensed, or proprietary code is included in the training set. That sounds like a very tall order, but I think they're going to have to; otherwise they're eventually going to run into legal problems with someone who has enough money to make it hurt for them.
Among many others. Classical music may have fallen into the public domain, but modern performances of it are copyrightable, and some of the big companies use copyright-matching systems, including YouTube's, that often flag new performances as copies of existing recordings.
People upthread have reproduced and demonstrated that that's not the issue here.
EDIT: Actually, OP says "The variant it produces is not on my machine." - https://twitter.com/DocSparse/status/1581560976398114822
> Wish people who don't know at all how it works stopped acting all outraged when they're laughably wrong.
Physician, heal thyself.
What exactly gives Davis a better claim to the copyright than the inventors of the algorithm? Yes, I know software is copyrightable while algorithms are not, but it is not at all clear to me why that should be the case. The effort of translating an algorithm into code is trivial compared to designing the algorithm in the first place, no?
I grant that if most people are using it that way, I was likely wrong about how it is typically used by the normal open source community; I followed up with a reply saying it would have been more correct for me to say "improperly licensed" code was included in the training set.
Still, its being private means it probably shouldn't be in the training set anyway, regardless of license, because in the future truly proprietary code could be included, or code without any license, which reserves all rights to the creator.
But surely the answer should be to fix the broken YT system and to educate politicians to abstain from baseless threats, not to make AI researchers pay for it?
Right, that's my point... I can sue anyone for anything, doesn't mean I'll win.
You would have to just hope that you can take down every instance of your code and keep it down, all while copilot keeps making more instances for the next version to train on and plagiarize.
Do you have examples? Because SD will generate photoreal outputs and then get subtle details (hands, faces) wrong, but unless you have the source image in hand then you've no way of knowing whether it's a "source image" or not.
If you can't trust that the code in a project is compatible with the license of the project then the only option I see is that copilot cannot exist.
I love free software and whatnot, but I have a feeling this situation would've been quite different if copilot was made by the free software community and accidentally trained on some non free code..
Precisely. Would it be okay for me to publish some code as GPL because my buddy gave it to me and promised that it was totally legit and I could use it and it definitely wasn't copy-pasted from one of the Windows source leaks?
> If you can't trust that the code in a project is compatible with the license of the project then the only option I see is that copilot cannot exist.
It might be possible to feed it only manually-vetted inputs, but yes; as it currently is, Copilot appears to be little but a massive copyright-infringement engine.
It’s so annoying that they are sooooo slow at this and we have to keep our users from upgrading after every release.
Trademarks require active defense to avoid genericization. Copyright may be asserted at the holder's discretion.
Only then will we see an answer to the question "is making an AI write your stolen code a viable excuse".
I very much approve of the idea of Copilot as long as the copied code is annotated with the right license. I understand this is a difficult challenge but just because this is difficult doesn't mean such a requirement should become optional; rather, it should encourage companies to fix their questionable IP problems before releasing these products into the wild, especially if they do so in exchange for payment.
This becomes particularly onerous when trolls claim copyright over published recordings of environmental sounds that are similar, but not identical, to their own; they do have a legitimate claim on the original recording, which makes the abuse harder to contest.
If the issue is more specifically copyright infringement, then leverage the legal apparatus in place for that. Their lawyers might listen better.
This is not a strongly held opinion and if you disagree I would love to hear your constructive thoughts!
And as computers get more powerful and the models get more efficient it'll become easier and easier to self host and run them on your own dime. There are already one click installers for generative models such as stable diffusion that run on modest hardware from a few years back.
I work for a large tech company whose lawyers definitely care that my code doesn't train an AI model somewhere much more than I do. On the contrary, I would really like to open source all of my work - it would make it more impactful and would demonstrate my skills. It makes me a bit sad that my life's work is going to be behind lock and key, visible to relatively few people. Not to mention that the hundreds of thousands of work hours, energy and effort that will be spent to replicate it all over my industry in all other lock-and-key companies makes the industry as a whole tremendously inefficient.
I hope that AI models like Copilot will finally show to the very litigious tech companies that their intellectual property has been all over the public domain from the start. And we can get over a lot of the petty algorithm IP suits that probably hold back all tech in aggregate. We should all be working together, not racing against each other in the pursuit of shareholder value.
Historically, mathematicians in the Middle Ages kept their solutions secret to protect their employment. So there were mathematicians who could, for example, solve certain quadratic equations, yet it took centuries before all humanity could benefit from this knowledge. I believe this is what is happening with algorithms now, and it is very counter-progress in my opinion.
I agree with you that it is also possible that people posted Getty thumbnails to some sites as though they are public domain, and that is how the AIs learned the watermark.
If we didn't live in a capitalist society, that would be fair. But we currently do, and that capitalist society cares little about the well-being of artists unless it can find a way to make their art profitable. Projects like DALL-E and Midjourney pillage centuries of human art and sell it back to us for a profit, while taking away work from artists who struggle to make ends meet as it is. Software developers are generally less concerned about Copilot because they're still making six figures a year, but they'll start to get concerned if the technology gets smart enough that society needs fewer developers.
An automated future should be a good thing. It should mean that computers can take care of most tasks and humans can have more leisure time to relax and pursue their passions. The reason that artists and developers panic over things like this is that they are watching themselves be automated out of existence, and have seen how society treats people who aren't useful anymore.
But where do you draw the line? What if you accidentally came up with the same or a similar solution to something in Windows? The code might not be from your friend either; it could be from N steps of copy-paste, rework, reformatting, refactoring, etc.
> For myself, I am skeptical of intellectual property in the first place. I say go for it.
I'm in a similar boat but this is precisely the reason I object so strongly to Copilot. IP has been invented & perpetuated/extended to protect large corporate interests, under the guise of protecting & sustaining innovators & creative individuals. Copilot is a perfect example of large corporate interest ignoring IP when it suits them to exploit individuals.
In other words: the reason I'm skeptical of IP is the same reason I'm skeptical of Copilot.
Anyone with a mouth can run it and threaten a lawsuit. In fact, I threaten to sue you for misinformation right now unless you correct your post. Fat lot of good my threat will do, because no judge in their right mind would entertain said lawsuit; it's baseless.
Because it exposes their direct hypocrisy in this: it's fair use when applied to OSS, but not when applied to their code.
The questions here are very important, and it's no surprise GitHub avoided answering anything about Copilot's legality:
Hah, no, the model encodes the code that it was trained on. This is not "recreating from memory"; this is "making a copy of the code in a different format." (Modulo some variable renaming, which it's probably programmed to do in order to obscure the source of the code.)
Yes, I agree that it's unclear how to deal with that in the general case at scale. Although cases like OP make me think that we could maybe worry about the grey area after we've dealt with the blatant copies.
> The code might not be from your friend either; it could be from N steps of copy-paste, rework, reformatting, refactoring, etc.
Well, my personal tendency would be to apply the same standard to Microsoft that they would apply to us. How many steps of removal is needed to copy MS proprietary code and it be okay?
Disagree: outputting training data as-is is not cryptomnesia. This is not Copilot's first such case. It also reproduced id Software's fast inverse square root function as-is, including its comments, but without its license.
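For context, the function referenced above is the widely published fast inverse square root from the Quake III Arena source release. Here's a sketch of it in portable C; I've swapped the original's type-punning pointer cast for memcpy to avoid undefined behavior, but the magic constant and the single Newton-Raphson step are as commonly documented:

```c
#include <stdint.h>
#include <string.h>

/* Fast inverse square root, as popularized by Quake III Arena.
 * The original reinterpreted float bits via a pointer cast;
 * memcpy is used here for well-defined behavior. */
float q_rsqrt(float number)
{
    const float threehalfs = 1.5F;
    float x2 = number * 0.5F;
    float y = number;
    uint32_t i;

    memcpy(&i, &y, sizeof i);            /* float bits -> integer      */
    i = 0x5f3759df - (i >> 1);           /* the famous magic constant  */
    memcpy(&y, &i, sizeof i);            /* integer bits -> float      */
    y = y * (threehalfs - (x2 * y * y)); /* one Newton-Raphson step    */
    return y;
}
```

With one Newton iteration the result is accurate to within roughly 0.2% of 1/sqrt(x), which was good enough for real-time lighting calculations.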
> If you don't want broad access to your work, don't upload it to a public repository. It's very simple.
This is actually both funny and absurd. This is why we have licenses at this point. If all the licenses are moot, then this opens a very big can of worms...
My terms are simple: if you derive, share the derivation under the same license (xGPL). Copilot is deriving from my code. If you use my code as a derivation point, honor the license and mark the derivation with the GPL license. This voids your business case? I don't care. These are my terms.
If any public item can be used without any limitations, then Getty Images (or any other stock photo business) is illegal, CC licensing shouldn't exist, the GPL is moot, and even the most litigious software companies' cases (Oracle, SCO, Microsoft, Adobe, etc.) are moot. Just don't put it on public servers, eh?
Similarly, music and other fine arts are generally publicly accessible. So copyright on any and every production is also invalid as you say, because it's publicly available.
Why not put your case forward to the attorneys of Disney, WB, Netflix and others? I'm sure they'll provide all their archives for training your video/image AI. Similarly, Microsoft, Adobe, Mathworks, et al. will be thrilled to support your Copilot competitor with their code, because a) any similar code will be just cryptomnesia, and b) the software produced from that code is publicly accessible anyway.
At this point, I haven't even touched on the fact that humans learn very differently from neural networks.
That's not considering any legal/license issues; it's just a simple statement about the data used to train Copilot.
The recording destroyed the occupation of being a live musician. People still do it for what amounts to tip money, but it used to be a real job that people could make a living off of. If you had a business and wanted to differentiate it by having music, you had to pay people to play it live. It was the only way.
It's quite a common complaint because some of the most popular prompts involve just appending an artist's name to something to get it to copy their style.
You can't sue if you don't have money; a big corp can sue even when it knows it's wrong.
If I take a song, cut it up, and sing over it, my release is valid
"valid", how? You still have to pay royalties to the copyright holder of the original song, and you don't get to claim it as your own.
It completely destroyed the jobs of photorealistic portrait artists. Only stylised portrait painting remains, and now that is going to be ripped off too.
Me too. I think copyright and these silly restrictions should be abolished.
At the same time, I can't get over the fact these self-serving corporations are all about "all rights reserved" when it benefits them while at the same time undermining other people's rights. Microsoft absolutely knows that what they're doing is wrong. Recently it was pointed out to me that Microsoft employees can't even look at GPL source code, lest they subconsciously reproduce it. Yet they think their software can look at other people's code and reproduce it? What a load of BS.
I'll forgive them for going for it the second copyright is gone. Then it won't be a crime for any of us to copy Windows and Office either. You bet we're gonna go for it too.
Yes, I'm sure.
> I'm not familiar with the exact data set they used for SD and whether or not Disney art was included, but my understanding is that their claim to legality comes from arguing that the use of images as training data is 'fair use'.
They could argue that. But since the American court system is currently (almost) de facto "richest wins," their argument will probably not mean much.
The way to tell if something was in the dataset would be to use the name of a famous Disney character and see what it pulls up. If it's there, then once the Disney beast finds out, I'm sure they'll take issue with it.
And by the way, I don't buy all of the arguments for machine learning as fair use. Sure, for the training itself, yes, but once the model is used by others, you now have a distribution problem.
More in my whitepaper against Copilot at [1].
Presumably by "the masses" you meant "the large corporations"?
Usually, "the masses" means "the common people" ... i.e. not much different from "the poor."
If you meant corporations, I'm 100% behind this comment.
You put it as a remix, but remixes are credited and expressed as such.
They already have one open source part I know of, the new conhost[0].
Outputting training data as-is without attribution is just plain plagiarism. You don't get to put verbatim text from textbooks in your academic papers either.
I think that the argument being made by some artists is that the training process itself violates copyright just by using the training data.
That’s quite different from arguing that the output violates copyright, which is what the tweet in this case was about.
That sounds like the pro-innovation bias: https://en.m.wikipedia.org/wiki/Pro-innovation_bias
But your reasoning boils down to "I don't like it, so it mustn't be that way." That has never necessarily been true.
At any rate, piracy is rampant, so clearly a large body of people don't think even direct copies are morally wrong, let alone something similar.
You're acting as though there are constant won and lost cases over plagiarism. Ed Sheeran seems to defend his work weekly. Every case that goes to court means reasonable minds differ on the interpretation of plagiarism legally.
So what's your point?
Because it seems the main thrust of your argument is that I should argue with Microsoft instead (*who own GitHub, lol*)? That's all you've got to hold back the tide of AI? An appeal to authority?
The way I see Copilot's output, it's already in the grey zone. As with other models like this, there are no snippets stored in the model. I can, for example, generate code that looks similar to the cs_transpose function in Lua if I nudge it a bit. To me this seems equivalent to someone remembering exactly how a function works (to some extent..) and being able to write it in whatever language without copy-pasting.
So the output as far as I understand is very grey. Maybe there's something in the training part that can be discussed, but as I mentioned earlier I'm not sure what else you can do other than check the license of some code or avoid creating copilot in the first place.
I don’t see Midjourney (et al) as remixes, myself. More like “inspired by.”
I am against Copilot because Microsoft is training the models with public data while disregarding copyright (and, notably, doesn't include its own code).
To add to that, there are provisions to lock her out of pushing new videos to the platform if the number of unresolved copyright claims passes some low number (3?).
So she loses new revenue until her claims prevail, and of course the entity the claim is made for knows that and has no incentive to help her (don't they even get the monetization from her videos in the meantime?).
Put another way, AI's are tools that give more power to already powerful entities.
When it’s a faceless mass of 100k employees…? Not so much.
It's all the same; they just don't realize this.
However, until that happens, Microsoft and GitHub cannot get away with blatant copyright infringement like this. No one is interested in their poor excuses either. People get sued and DMCA'd out of existence for far lesser offenses, yet Microsoft gets away with violating the license of every free software and open source project out there? That's fucked up.
If you asked every developer on earth to implement FizzBuzz, how many actually different implementations would you get? Probably not very many. Who should own the copyright for each of them? Would the outcome be different for any other product feature? If you asked every dev on earth to write a function that checked a JWT claim, how many of them would be more or less exactly the same? Would that be a copyright violation? I hope the courts answer some of these questions one day.
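To make the convergence concrete: the FizzBuzz label for each number is fully determined by two divisibility tests, so independent implementations almost inevitably collapse into the same shape. A minimal sketch in C (the helper name and buffer interface are my own choices, not from any particular repository):

```c
#include <stdio.h>
#include <string.h>

/* Write the FizzBuzz label for i into buf and return it.
 * There is essentially one natural structure for this logic,
 * which is why independent implementations look near-identical. */
const char *fizzbuzz(int i, char *buf, size_t len)
{
    if (i % 15 == 0)
        snprintf(buf, len, "FizzBuzz");
    else if (i % 3 == 0)
        snprintf(buf, len, "Fizz");
    else if (i % 5 == 0)
        snprintf(buf, len, "Buzz");
    else
        snprintf(buf, len, "%d", i);
    return buf;
}
```

Beyond naming and the order of the branches, there is very little room for independent expression here, which is exactly the point the question is getting at.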
So obviously the source of the error is one or more third parties, not Microsoft, and it's obviously impossible for Microsoft to be responsible in advance for what other people claim to license.
It does make you wonder, however, if Microsoft ought to be responsible for obeying a type of "DMCA takedown" request applicable to ML models -- not on all 32,000 sources, but rather on a specified text snippet -- to be implemented the next time the model is trained (or, if practical, as filters placed on the output of the existing model). I don't know what the law says, but it certainly seems like a takedown model would be a good compromise here.
This is a fascinating observation and I think there's a lot of truth to it. But maybe our inference should be that these systems mistreat each of us, even if it's difficult to see unless it's falling on you.
Maybe a more important question than whether or not this is a violation of intellectual property is whether this is a violation of human dignity, not that it's illegal (though in this case, it may be) but that it's extremely rude in a way that we don't necessarily have the vocabulary for yet.
Depending on your preferred telemetry settings, GitHub Copilot may also collect and retain the following, collectively referred to as “code snippets”: source code that you are editing, related files and other files open in the same IDE or editor, URLs of repositories and file paths.
"How Does Copilot Work"
If I create something, I get to define the terms of its use, reproduction, distribution, etc. "Value" plays no part in whether someone can appropriate and distribute that creation without permission from the creator.
https://twitter.com/ebkim00/status/1579485164442648577
Not sure if this was fed the original image as an input or not.
I've also seen a couple of cases where people explicitly trained a network to imitate an artist's work, like that of the late Kim Jung Gi.
No. Look at the insane YouTube copystrike situation. Why shouldn't Microsoft be held to the same standards?
Thousands at least. Some of which would actually work.
Not a lawyer, of course, but I think slapping the Getty logo on a work claiming "fair use" and then releasing the work under public domain would be a case of misrepresentation, because Getty still has a copyright claim on your work. Regardless of the copyright status, it's still a clear trademark violation to me.
Does it matter? If you examined every copyright lawsuit on earth over code, how many of them would actually be over FizzBuzz?
It’s only logical that people twist and bend the rules.
Anyway, this kind of bs will go on until people start bringing companies to court over this.
I still don’t get why lawyers don’t start offering their services for a cut of the damages… there is probably good money to be made suing companies that put copyright-infringing AI in production.
From the FAQ https://github.com/features/copilot/
When it comes to being an AI that understands coding concepts, I don't want it to regurgitate code verbatim.
When it comes to being a product, I don't want it to plagiarize.
For Copilot, is there a similar argument? Or is its model large enough to contain the training set verbatim?
All I can take away from this is the absurdity of intellectual property laws in general. I agree with the GP: if people are sharing stuff, it's fair game. If you don't want people using stuff you made, keep it to yourself. Pretending we can control how freely available info is used is silly anyway.
Also, a major problem with YouTube is not the DMCA itself, but how YouTube implements the system to allow abusive takedowns without repercussions for the abusers.
I think over time we are going to see the following:
- If you take, say, a Star Wars poster, inpaint a trained face over Luke's, and sell that to people as a service, you will probably be approached for copyright and trademark infringement.
- If you are doing the above with a satirical take, you might be able to claim fair use.
- If you are using AI as a "collage generator" to smash together a ton of prompts into a "unique" piece, you may be safe from infringement but you are taking a risk as you don't know what % of source material your new work contains. I'd like to imagine if you inpaint in say 20 details with various sub-prompts that you are getting "safer".
So question one in the back of our heads should be "Are we promoting progress here?" That most often means protecting the little guy, and that's why I think it's mostly necessary, and also must be evaluated very skeptically.
Left: “Girl with a Pearl Earring, by Johannes Vermeer” by Stable Diffusion
Right: Girl with a Pearl Earring by Johannes Vermeer
This specific one is not copyright violation as it is old enough for copyright to expire. But the same may happen with other images.
from https://alexanderwales.com/the-ai-art-apocalypse/ and https://alexanderwales.com/addendum-to-the-ai-art-apocalypse...
If you “trace” another artist's work, the hammer comes down though. For Copilot it’s way easier to get it to obviously trace.
After all, Microsoft may not itself be infringing, so there may be no cause of action against them by the copyright holders -- but there's probably cause against the (unknowing) infringers.
Take a single C file or even a long function from the leaked Windows NT codebase and include it in your code. See how happy Microsoft will be with it. They spent millions of dollars on their legal teams. Eroding copyright protections will harm the weakest most. How many open source contributors can afford copyright lawyers?
Even if deduplication efforts are done, that painting will still be in the background of movie shots etc.
The best way to make sure your code isn’t copied is not to publish it.
Pirating Windows is already utterly trivial with KMS activation, so it's not like they'd lose anything there.
Stable Diffusion and DALL-E give a ton of power to individuals, hence why they are popular.
It feels like you're doing a cost analysis instead of a cost-benefit analysis, i.e. you're only looking at the negatives. It's a bit like saying cars are bad because they give more power to the big companies who sell them + put horse and buggy operators out of a job.
Code is only protected to the degree it is creative and not functionally driven anyway.
So the reduced band of possible expression often directly reduces the protectability-through-copyright.
There is absolutely zero doubt in my mind that copilot et al will lead to the absolute proliferation of half baked code even more than all the other mundane ways to copy&paste do.
Even that, if done by a person, would as far as I understand not constitute copyright infringement. It's a separate work mimicking Vermeer's original. The closest real-world equivalent I can think of is probably the Obama "Hope" case, AP vs. Shepard Fairey, but that settled out of court, so we don't really know the legal status of that kind of reproduction. On top of that, the SD image isn't just a recoloring with some additions like Fairey's was, so it's not quite as close to the original as that case.
Neither GPT nor Dall-e produces content that anyone can point to and say “they are laundering MY work”.
The closest we’ve been to that point is the image generators spitting out copyright watermarks, but they are not clearly attributable to any one single image (afaik).
> So obviously the source of the error is one or more third parties, not Microsoft, and it's obviously impossible for Microsoft to be responsible in advance for what other people claim to license.
Copilot is Github's product, and Microsoft owns Github. They are responsible for how that product functions. In this case, they should be held responsible for the training data they used for the ML model. Giving them the benefit of the doubt here, at minimum they chose which random third parties to believe were honest and correct. Without giving them the benefit of the doubt, they lied about what data sets they used to train it.
To put it more simply: say a company comes along and tells the world, "We're selling this cool book-writing robot; don't worry, it won't ever spit out anyone else's books," and then the robot regurgitates an entire chapter from Stephen King's Pet Sematary. Is that the fault of Stephen King, or of the person selling the robot?
This could be terribly fun.
In this way, making sure I'm not writing proprietary code is important.
Fortunately it looks like Copilot only uses open repositories so that's good.
That's a really hard undersell of responsibility on the part of Microsoft/Github.
It seems as though they did approximately zero work to verify any of the code wasn't infringing. Things they could have tried but apparently didn't:
1) Ask developers to opt-in to copilot scanning of their repositories, and alongside that have them certify that they hold copyright over all lines of code included in the repository.
2) Use a training dataset of only public repositories listed under applicable pre-identified licensing schemes, from established groups. e.g.: *bsd licensed code from the various BSD OSes.
3) Seek out examples from standard libraries in other programming languages with suitable licenses.
It seems like they did nothing and just hoped. I can't see how anyone would try to rely on this thing in a commercial context after it's proven to do this over and over. The well has been poisoned.
Please refrain from this kind of blatant gaslighting. You're not the one to assess its value or usefulness, and your point is at most tangential to the issue. The problem is that the model systematically took non-public-domain code without any permission from the authors, not whether it's useful. This complaint is worth hearing, and the Copilot team should be held more accountable for this problem, since it could lead to more serious copyright-infringement fights for its users.
I didn’t intend to argue anything or draw any conclusions. Just making an observation based on conversations with friends and coworkers.
The big difference is that cars were a tool that helped regular people by being a force multiplier. Stable Diffusion and DALL-E are not force multipliers in the same way. Sure, you may now produce images that you couldn't before, but there are far fewer profitable uses for images than for cars. Images don't materially affect the world, but cars can.
Yes, the way Copilot was trained was morally questionable, but probably legally fine (GitHub terms of service).
There is no doubt the result is extremely helpful though.
That’s actually fine (kind of the idea of specifying a license). What is not fine is using that code in non-GPL licensed code.
We are talking ‘de facto’ here, not ‘de jure’. It may be legally problematic, but anything made public once is never going back in the box.
Statically linked binaries, for example, have parts of libraries embedded in them. There exist tools that can analyze a binary and try to detect signatures from a shared library in it.
In the past there were (and probably still are) companies who provided services to help with finding people who have linked in your code so you could take whatever action you wanted against them. I can't recall a specific company name right now but a little bit of Googling would likely bring up some examples.
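The core of such a tool can be sketched as a byte-signature scan: look for sequences characteristic of a particular library build (symbol names, constants, compiled code fragments) inside the binary. A naive sketch in C; the function name and interface are my own illustration, and real tools use much fuzzier matching:

```c
#include <stddef.h>
#include <string.h>

/* Return 1 if the byte sequence sig (length sig_len) occurs anywhere
 * in the binary blob bin (length bin_len), else 0. Real detectors
 * normalize for relocation and compiler differences; this is the
 * bare idea only. */
int contains_signature(const unsigned char *bin, size_t bin_len,
                       const unsigned char *sig, size_t sig_len)
{
    if (sig_len == 0 || sig_len > bin_len)
        return sig_len == 0;
    for (size_t i = 0; i + sig_len <= bin_len; i++)
        if (memcmp(bin + i, sig, sig_len) == 0)
            return 1;
    return 0;
}
```

Interestingly, this is the same basic question being asked of Copilot's output: does a known protected sequence appear verbatim inside a larger artifact?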
I can imagine Mona Lisa in my head, but it doesn't really "exist" verbatim in my head. It's only an approximation.
I believe copilot works the same way (?)
Here’s like the first link after a DuckDuckGo search for “copyright utilitarian”:
However, copyright law does not extend to useful items. Therefore, complications may arise when sculptural works are also “useful” items. In these instances, copyright law will protect purely artistic elements of a useful article as long as the useful item can be identified and exists independently of the utilitarian aspects of the article (this concept is sometimes called the “separability test”). 17 U.S.C. §. A “useful article” is an article that has a purpose beyond pure aesthetic value.
https://www.rtlawoffices.com/articles/can-i-copyright-my-des...
Is it safe to assume the rest of the downvotes were from people who were also incorrect?
So much for “generation” - it seems as if these models are just overfitting on the extremely small subset of the input data that they didn't utterly fail to train on; almost as if a genius could directly generate the weight data from said images without all the gradient-descent machinery.
So very similar to how the music industry treats sampling then?
Everybody using CoPilot needs to get "code sample clearance" from the original copyright holder before publishing their remix or new program that uses snippets of somebody else's code...
Try explaining _that_ to your boss and legal department.
"To: <all software dev> Effective immediately, any use of GitHub is forbidden without prior written approval from both the CTO and General Counsel."
I guess GitHub is violating it the instant their servers send verbatim snippets like this to developers without the copyright notice. And then the Copilot users are also violating it when (if) they release their code that contains verbatim snippets and no notice.
This is a pretty good live experiment in how useful these open source licenses really are. This guy finds his license being violated, so what does he do? If complaining on Twitter seems like the best course of action, then maybe we need to rethink the system.
This was of course a leading question. The point was to get you to think about what artists did in response to the photograph. They changed the way they paint.
I'm positive that machine learning will also change the way that people create art, and I am positive that it will only add to the rich tapestry of creative possibilities. There are still realistic portrait painters, after all; they're just not as numerous.
Without that context, fizzbuzz is not that different from a matrix transpose function to me.
[0] https://en.m.wikipedia.org/wiki/Quod_licet_Iovi,_non_licet_b...
Sometimes the original information is there in the model, encoded/compressed/however you want to look at it, and can be reproduced.
If similar code is open in your VS Code project, Copilot can draw context from those adjacent files. This can make it appear that the public model was trained on your private code, when in fact the context is drawn from local files. For example, this is how Copilot includes variable and method names relevant to your project in suggestions.
It’s also possible that your code – or very similar code – appears many times over in public repositories. While Copilot doesn’t suggest code from specific repositories, it does repeat patterns. The OpenAI codex model (from which Copilot is derived) works a lot like a translation tool. When you use Google to translate from English to Spanish, it’s not like the service has ever seen that particular sentence before. Instead, the translation service understands language patterns (i.e. syntax, semantics, common phrases). In the same way, Copilot translates from English to Python, Rust, JavaScript, etc. The model learns language patterns based on vast amounts of public data. Especially when a code fragment appears hundreds or thousands of times, the model can interpret it as a pattern. We’ve found this happens in <1% of suggestions. To ensure every suggestion is unique, Copilot offers a filter to block suggestions >150 characters that match public data. If you’re not already using the filter, I recommend turning it on by visiting the Copilot tab in user settings.
This is a new area of development, and we’re all learning. I’m personally spending a lot of time chatting with developers, copyright experts, and community stakeholders to understand the most responsible way to leverage LLMs. My biggest take-away: LLM maintainers (like GitHub) must transparently discuss the way models are built and implemented. There’s a lot of reverse-engineering happening in the community which leads to skepticism and the occasional misunderstanding. We’ll be working to improve on that front with more blog posts from our engineers and data scientists over the coming months.
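Conceptually, the duplication filter described in the reply above amounts to suppressing any sufficiently long suggestion that matches public code verbatim. A toy sketch in C, assuming for illustration that the "corpus" can be queried as a single string (the real filter presumably matches against an index of the public training data, not a flat buffer):

```c
#include <stdbool.h>
#include <string.h>

/* Toy verbatim-match filter: block a suggestion if it is longer than
 * the threshold (e.g. 150 characters) and appears verbatim in the
 * public corpus. Everything about this interface is an assumption
 * for illustration; only the threshold idea comes from GitHub's
 * description of the feature. */
bool should_block(const char *suggestion, const char *public_corpus,
                  size_t threshold)
{
    return strlen(suggestion) > threshold &&
           strstr(public_corpus, suggestion) != NULL;
}
```

Note that a filter like this only catches exact matches; the thread's examples of lightly renamed variables would slip straight through it.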
Don't worry. At that time all of the available hardware will refuse to run any software unless it comes with a signed license from one of the big three.
https://www.nytimes.com/2022/09/30/books/early-cormac-mccart...
I suppose whoever wants to pay the fees would “own” these things?
If a small time artist has their work stolen, they probably won't be able to fight it very well. They might be able to get a few taken down, but the sheer number will make it impossible to keep up.
Disney, on the other hand, will have armies of lawyers going after any copyright violation.
It seems the same whether AI is involved or not.
https://laion.ai/blog/laion-5b/
Not exactly what you asked, but hopefully useful? The model weights are about 4 GiB I believe.
Public code is a weird name to use for that use case.
I have also never heard of “public code” being used in that way.
If that's the case then it makes a great case to break the DMCA and steal all the content out there.
That sounds terrible in theory but it's one way to put some of these big money systems in check.
What about all the source code leaked in early 2022? It would make perfect sense for the hackers to remove the copyright notices and put it on GitHub for karma. That way, the IP of all now belongs to all. I'm quite sure a few won't agree with this method, but that's exactly how bad it is when licenses are removed from software used for code prediction.
Finally, Git was meant to allow people to host anywhere, so if the worry is about IP theft, one should stop hosting on GitHub and move it someplace else if they want it fully private.
But that doesn't make it any better.
Nonetheless that’s problematic for folks relying on the copyright as of now.
I feel for artists here; devs won’t go hungry or jobless.
Copilot training data should have been sanitized better.
In addition: any code that is produced by copilot that uses a source that is licensed, MUST follow the practices of that license, including copyright headers.
Until there are a large number of court cases, the burden of proof is on you to show that this is copyright infringement.
That sounds like a you problem, not an us problem.
As of yet, no court has said that any of this is illegal.
So tough luck. Go take it to the supreme court if you disagree, because right now it actually seems like people can do almost whatever they want with these AI tools.
Your objection simply doesn't matter until there is a court case that supports you. You can't do anything about it if that doesn't happen.
Those two things exist at the same time.
try reading a licence now and again!
An example: https://twitter.com/DaveScheidt/status/1578411434043580416
> I also know software devs who are extremely excited about AI art and GPT-3 but are outraged by Copilot.
The fear is not unwarranted, though. I can clearly see AI replacing most jobs, not just in tech but in art, crafts, music, and even science. There will probably be no field untouched by AI in this decade, and none not completely replaced by the next.
We have multiple extinction events for humanity lined up: climate change, nuclear apocalypse, and now AI.
We will have to work not just towards reducing harm to the planet, but also towards stopping meaningless wars and figuring out how to deal with the unemployment and economic crisis looming on the horizon. The only ones not to suffer in the end would be the "elites" (or will they be the first, depending on how quickly civilization goes towards anarchy?).
Can't say for sure. But definitely gloomy days ahead.
> Yes, many of us will turn into cowards when automation starts to touch our work, but that would not prove this sentiment incorrect - only that we're cowards.
>> Dude. What the hell kind of anti-life philosophy are you subscribing to that calls "being unhappy about people trying to automate an entire field of human behavior" being a "coward". Geez.
>>> Because automation is generally good, but making an exemption for specific cases of automation that personally inconvenience you is rooted in cowardice/selfishness. Similar to NIMBYism.
It's true cowardice to assume that our own profession should be immune from AI while other professions are not. Either dislike all AI, or like it. To be in between is to be a hypocrite.
For me, I definitely am on the side of full AI, even if it automates my job away, simply because I see AI as an advancing force on mankind.
Because you are right: against a few copies, even a small-time artist can fight. Against hundreds of thousands of copies, or millions, even Disney struggles. That's why Disney would go after the model itself; it scales better.
Is it a valid defense against copyright infringement to say “we don’t know where we got it, maybe someone else copied it from you first?”
If someone violated the copyright of a song by sampling too much of it and released it into the public domain (or failed to claim it at all), and you take the entire sample from them, would that hold up in a legal setting? I doubt it.
One of the reasons Roald Dahl was such a great writer is his life experiences. Read his books Boy and Solo.
Here is some reading material for those of you who disagree with reality:
https://en.wikipedia.org/wiki/Abstraction-Filtration-Compari...
https://en.wikipedia.org/wiki/Idea–expression_distinction
https://h2o.law.harvard.edu/cases/5004
https://www.loeb.com/en/insights/publications/2020/04/johann...
Stable Diffusion actually has a similar problem. Certain terms that directly call up a particular famous painting by name - say, the Mona Lisa[0] - will just produce that painting, possibly tiled on top of itself, and it won't bother with any of the other keywords or phrases you throw at it.
The underlying problem is that the AI just outright forgets that it's supposed to create novel works when you give it anything resembling the training set data. If it was just that the AI could spit out training set data when you ask for it, I wouldn't be concerned[1], but this could also happen inadvertently. This would mean that anyone using Copilot to write production code would be risking copyright liability. Through the AI they have access to the entire training set, and the AI has a habit of accidentally producing output that's substantially similar to it. Those are the two prongs of a copyright infringement claim right there.
[0] For the record I was trying to get it to draw a picture of the Mona Lisa slapping Yoshikage Kira across the cheek
[1] Anyone using an AI system to "launder" creative works is still infringing copyright. AI does not carve a shiny new loophole in the GPL.
I will admit that I am conflicted, because I can see some really cool potential applications of Copilot, but I can't say I am not concerned if what Tim maintains is accurate for several different reasons.
Let's say Copilot becomes the way of the future. Does it mean we will be able to trust the code more, or less? We already have people who copy-paste from Stack Overflow without trying to understand what the code does. This is a different level, where machine learning suggests a snippet. If it works 70% of the time, we will have the new generation of programmers management always wanted.
As I understand, this isn't proven is it?
We don't know that the model isn't simply stitching and approximating back to the closest combination of all the data it saw, versus actually understanding the concepts and logic.
Or is my understanding already behind times?
Given that there have been major concerns about copyright infringements and license violations since the announcement of Copilot, wouldn't it have been better to do some more of this "learning", and determine what responsibilities may be expected of you by the broader community, before unleashing the product into the wild? For example, why not train it on opt-in repositories for a few years first, and iron out the kinks?
Either we get to have things like Copilot, or we loosen copyright protections a great deal. Is there a third way?
[1] https://en.wikipedia.org/wiki/SCO_Group,_Inc._v._Internation....
In my opinion, the only thing that should be an infringement regarding code is copying entire non-trivial files or entire projects outright.
A 100-line snippet should not be copyrightable. Only the entire work, which you could think of as the composition of many such snippets.
All the research suggests that AI-assisted auto-complete merely helps developers go faster with more focus/flow. For example, there's an NYU study that compared security vulnerabilities produced by developers with and without AI-assisted auto-complete. The study found that developers produced the same number of potential vulnerabilities whether they used AI auto-complete or not. In other words, the judgement of the developer was the stronger indicator of code quality.
The bottom line is that your expertise matters. Copilot just frees you up to focus on the more creative work rather than fussing over syntax, boilerplate, etc.
The scenes à faire doctrine would certainly let you paint your own picture of a pretty girl with a large earring, even a pearl one. That, however, is definitely the same person, in the same pose/composition, in the same outfit. The colors are slightly off, but the difference feels like a technical error rather than an expressive choice.
"Copying" a style is not a derivative work:
> Why isn't style protected by copyright? Well for one thing, there's some case law telling us it isn't. In Steinberg v. Columbia Pictures, the court stated that style is merely one ingredient of expression and for there to be infringement, there has to be substantial similarity between the original work and the new, purportedly infringing, work. In Dave Grossman Designs v. Bortin, the court said that:
> "The law of copyright is clear that only specific expressions of an idea may be copyrighted, that other parties may copy that idea, but that other parties may not copy that specific expression of the idea or portions thereof. For example, Picasso may be entitled to a copyright on his portrait of three women painted in his Cubist motif. Any artist, however, may paint a picture of any subject in the Cubist motif, including a portrait of three women, and not violate Picasso's copyright so long as the second artist does not substantially copy Picasso's specific expression of his idea."
https://www.thelegalartist.com/blog/you-cant-copyright-style
For literally everything but music, yes.
Even by the standards of copyright technicality, music copyright is weird. For example, if you ask a lawyer[0] what parts of copyright set it apart from other forms of property law[1], they would probably answer that it's federally preempted[2] and that it has constitutionally-mandated term limits.
Which, of course, is why music has a second "recording copyright", which was originally created by states assigning perpetual copyright to sound recordings. I wish I was making this up.
So the musical arrangement that constitutes that song from 1640? Absolutely public domain. You can tell people how to play Monteverdi all damned day. But every time you record that song being played, that creates a new copyright on that recording only. This is analogous to how making a cartoon of a public-domain fairy tale gives you ownership over that cartoon only. Except because different performers are all trying to play the same music as perfectly as possible, the recordings will sound the same and trip a Content ID match.
Oh, and because music copyright has two souls, the Sixth Circuit said there's no de minimis exception for sampling. That's why sample-happy rap is dead.
If you want public domain music on your YouTube video you either record it yourself or license a recording someone else did. I think there are CC recordings of PD music but I'm not sure. Either way you'll also need to repeatedly prove this to YouTube staff that would much rather not have to defend you against a music industry that's been out for blood for half a century at this point.
[0] Who, BTW, I am very much NOT
[1] Yes, yes, I know I'm dangerously close to uttering the dangerous propaganda term "intellectual property". You can go back to bed Mr. Stallman.
[2] Which means states can't make their own supra-federal copyright law and any copyright suit immediately goes to federal court.
Now a human can take inspiration from like 100 different sources and probably end up with something that no one would recognize as derivative to any of them. But it also wouldn't be obvious that the human did that.
But with an ML model, it's clearly a derivative in that the learned function is mathematically derived from its dataset and so is all the resulting outputs.
I think this brings a new question though. Because till now derivative was kind of implied that the output was recognizable as being derived.
With AI, you can tweak it so the output doesn't end up being easily recognizable as derived, but we know it's still derived.
Personally, I think what really matters is the question of what the legal framework around it should be. How do we balance the interests of AI companies against those of the developers, artists, and citizens who authored the dataset that enabled the AI to exist? And what rights should each party be given?
It's similar to saying that a digital representation of an image isn't an image, just a dataset that represents it.
If what you said were any sort of defense, copyright would never apply to any digital image, because images can be saved in different resolutions, different file formats, or re-encoded. E.g., if a JPEG "image" were only an image at one exact set of bits, I could save it again with a different quality setting and end up with a different set of bits.
But everyone still recognises when an image looks the same, and courts will uphold copyright claims regardless of the digital encoding of an image. So good luck with that spurious argument that it's not infringement because "it's on the internet" (or "it's with AI", etc.).
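To make that concrete: perceptual hashing is (roughly) how automated systems decide that two differently-encoded files are "the same image". A toy sketch, with a deliberately tiny invented "image"; real systems like pHash or Content ID are far more robust, but the principle is the same:

```python
# Illustrative sketch only: a tiny "average hash" shows why two different byte
# encodings of the same picture are still recognizably the same image.
# The image data and quantization step are invented for the example.

def average_hash(pixels):
    """Hash a grayscale image (rows of 0-255 ints) to a bit string:
    each bit says whether a pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def lossy_reencode(pixels, step=16):
    """Simulate lossy compression by quantizing pixel values."""
    return [[(p // step) * step for p in row] for row in pixels]

original = [
    [250, 240, 10, 5],
    [245, 230, 12, 8],
    [20, 15, 200, 210],
    [18, 12, 220, 215],
]
reencoded = lossy_reencode(original)

assert reencoded != original                              # the raw "bytes" differ...
assert average_hash(reencoded) == average_hash(original)  # ...the hash doesn't
```

The comparison targets what the image looks like, not which bits encode it, which is exactly how courts (and Content ID) treat the question.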
An example: a dyslexic friend and a dyslexic family member. The written communication skills of both are now fine, partly because their jobs required it of them (and partly because technology helps). I also had an illiterate friend who taught himself to read and write as an adult (basic written communication) because his job demanded it. Learn by doing, and add observation of others as an adjunct. Even better if you can get good coaching (which requires effort at your craft or sport).
Disclaimer: never a writer. Projecting from my other crafts/sports. Terribly written comment!
The reason doesn't really matter...
But the machine learning model has studied every single one of them.
And maybe more preposterous: if its dataset had no FizzBuzz implementation, would it even be able to re-invent it?
I feel this is the big distinction that probably annoys people.
That, and the general fact that everyone is worried it'll devalue the worth of an experienced developer: AI will make hard things easier and require less effort and talent to learn, making developers less in demand and probably lower paid.
Well, probably not: they didn't pick and choose at all; they just "chose" everyone who put code online with a license. Which is a legal statement of ownership by each of those people, and implies legal liability as well.
> is that the fault of Stephen King or the person selling the robot?
Well, there's certainly an argument to be made that it's neither -- it's the fault of the person who claimed Stephen King's work as their own with a legal notice that it was licensed freely to anyone. That person is the one committing theft/fraud.
The point is that with ML training data, such a vast quantity is required that it's unreasonable to expect humans to be able to research and guarantee the legal provenance of it all. A crawler simply believes that licenses, which are legally binding statements, are made by actual owners, rather than being fraud. It does seem reasonable to address the issue with takedowns, however.
But that is exactly how it works. Translation companies license (or produce) huge corpuses of common sentences across multiple languages that are either used directly or fed into a model.
Third party human translators are asked to assign rights to the translation company. https://support.google.com/translate/answer/2534530
A broadcaster of copyrighted works is not protected against infringement just because they expect viewers to only watch programming they own.
This is a mostly irrelevant red herring pitting professions against each other. Instead we should cooperate on the costly yet necessary decision of instituting a basic income, especially prioritizing professions about to be superseded by modern ML.
Obviously, our decision-making class views the topic of instituting a realistic basic income right now as something extremely unpleasant, and so it goes.
People who helped to bootstrap the AI should be compensated, at the very least by being able to live a modest lifestyle without having to work. Simple as.
An artist should credit when they are directly taking from another artist. Erasure poems don’t quite work if the poet runs around claiming they created the poem that was being erased.
But more importantly, SD allows you to take existing copyrighted works, effectively launder them, and pass them off as your own, even though you don't own the rights to that work. This would be more akin to my taking a photograph you made and selling it on a t-shirt on Redbubble. I don't actually own the IP to do that.
Sounds like MS has devised a massive automated code laundering racket.
This means MS really shouldn't have used copyleft code at all, and really shouldn't be selling Copilot in this state, but "luckily" for them, short of a class-action suit I don't really see any recourse for the programmers whose work they're reselling.
(Sorry I didn't log my experiment results at the time. None of it was related to work I'd done - I used time adjustment functions if I remember correctly)
But anyway, how I see stable diffusion being different is that it's a tool to generate all sorts of images, including copyrighted images.
It's more like a database of *how to* generate images rather than a database *of* images. Maybe there isn't that much of a difference when it comes to copyright law. If you ask an artist to draw a copyrighted image for you, who should be in trouble? I'd say the person asking most of the time, but in this case we argue it's the people behind the pencil or whatever. Why? Because it's too easy? Where does a service like fiver stand here?
So if a tool is able to generate something that looks indistinguishable from some copyrighted artwork, is it infringing on copyright? I can get on board with yes if it was trained on that copyrighted artwork, but otherwise I'm not so sure.
If Microsoft’s fair use analysis is correct, not actionably (at least under US law) because it is within the fair use exception to copyright.
> Or is it Copilot users who end up using the regurgitated code?
These aren't exclusive options; either, both, or neither could be true.
Current AI is not replacing anything yet, but I feel we are only a few years away from AI doing a better job at drawing or programming than someone with years of practice. Sure, you can utilise those tools to stay ahead. But will AI prompt engineering be as emotionally satisfying as drawing for real?
To make it concrete, imagine the latest Disney movie poster. You redraw it 95% close to the original, changing just the title. Then you sell your poster on Amazon at half the price of the actual one. Would you get a copyright strike?
Ha ha. Because then the product couldn’t be built. Better to steal now and ask forgiveness later, or better yet, deny the theft ever occurred.
I have to assume this is just people being protective of their own profession and, consequently, setting a high bar for what constitutes performance in that profession.
Oh, actually I remember now -- I think the copyright complaint specifically said what recording they thought I was infringing, and it was the correct piece.
I mean, in humans it's just referred to as 'experience', 'training', or 'creativity'. Unless your experience is job-only, all the code you write is based on some source you can't attribute combined with your own mental routine of "i've been given this problem and need to emit code to solve it". In fact, you might regularly violate copyright every time you write the same 3 lines of code that solve some common language workaround or problem. Maybe the solution is CoPilot accompanying each generation with a URL containing all of the run's weights and traces so that a court can unlock the URL upon court order to investigate copyright infringement.
> If someone violated the copyright of a song by sampling too much of it and released it in the public domain (or failed to claim it at all), and you take the entire sample from them, would that hold up in a legal setting? I doubt it.
In general you're not liable for this. While you still will likely have to go to court with the original copyright holder's work, all the damages you pay can be attributed to whoever defrauded or misrepresented ownership over that work. (I am not your lawyer)
Agreed. That was my point.
What you're describing is a choice. They chose which people to believe, with zero vetting.
> The point is that with ML training data, such a vast quantity is required that it's unreasonable to expect humans to be able to research and guarantee the legal provenance of it all.
I'm not sure what you're presenting here is actually true. A key part of ML training is the training part. Other domains require a pass/fail classification of the model's output (see image identification, speech recognition, etc.) so why is source code any different? The idea that "it's too much data" is absolutely a cop-out and absurd, especially for a company sitting on ~$100B in cash reserves.
Your argument kind of demonstrates the underlying point here: They took the cheapest/easiest option and it's harmed the product.
> A crawler simply believes that licenses, which are legally binding statements, are made by actual owners, rather than being fraud. It does seem reasonable to address the issue with takedowns, however.
Yes, and to reiterate, they chose this method. They were not obligated to do this, they were not forced to pick this way of doing things, and given the complete lack of transparency it's a large leap of faith to assume that their training data simply looked at LICENSE files to determine which licenses were present.
For what it's worth, it doesn't seem that that's what OpenAI did when they trained the model initially in their paper[1]:
> Our training dataset was collected in May 2020 from 54 million public software repositories hosted on GitHub, containing 179 GB of unique Python files under 1 MB. We filtered out files which were likely auto-generated, had average line length greater than 100, had maximum line length greater than 1000, or contained a small percentage of alphanumeric characters. After filtering, our final dataset totaled 159 GB.

I have not seen anything concrete about any further training after that, largely because it isn't transparent.

The interesting part is whether AI will be considered a tooling mechanism, much like the tooling used to record and manipulate a music sample into a new composition.
It is currently before SCOTUS, so we should see a ruling for the USA sometime in the next year or so.
https://en.m.wikipedia.org/wiki/Andy_Warhol_Foundation_for_t...
Obviously, that would not entitle anyone to rip those elements from your work and use them in a way that was not fair use. The Getty watermark could fall into this category: public domain pictures using the watermark fairly (for transformative commentary/satire purposes) could go into the network, which uses that information to produce infringing images.
Trademarks are a different story, but trademark protections are a lot narrower than you might think.
The point is that it's very conceivable that the neural network is being trained to infringe copyrights by training entirely with public-domain images.
Warhol’s estate seems likely to lose and their strongest argument is that Warhol took a documentary photo and transformed it into a commentary on celebrity culture. Here, I don’t even see that applying: it just looks like a bad copy.
https://www.scotusblog.com/2022/10/justices-debate-whether-w...
As GP says, no one really cares, but it seems hard to satisfy SA... even if you are pasting into open source, is your license compatible with CC?
Perhaps I'm over-thinking this.
Morally I'd say you should make a reasonable good faith effort to verify that you have a real license for everything you're using. When you're importing something on the scale of "all of Github" that means a bit more effort than just blindly trusting the file in the repository. When I worked with an F500 we would have a human explicitly review the license of each dependency; the review was pretty cursory, but it would've been enough to catch someone blatantly ripping off a popular repo.
This is why we have a market. We let billions of individuals vote on what they think is useful or not, in real-time, multiple times a day, every day. If AI-generated images are less desirable than what came before, then people won't use them or pay to use them in the long run. They'll die like other flash-in-the-pan fads have died, artists will retain their jobs en masse, and OpenAI won't gain much if any power.
The entire idea of the market is to ensure that if some entity is gaining money/power, that's happening as a result of it providing some commensurate good to the people. And if that's not happening, or if the power is too great, that's why we have laws and regulatory bodies.
The main problem I see with generating attribution is that the algorithm obviously doesn't "know" that it's generating identical code. Even in the original Twitter post, the algorithm makes subtle and essentially semantically synonymous changes (like changing the commenting style). So for all intents and purposes it can't attribute the function, because it doesn't know _where_ it's coming from, and copied code is indistinguishable from de novo code. Copilot will probably never be able to attribute code short of exhaustively checking the outputs, using some symbolic approach, against a database of copyleft/copyrighted code.
If we have 32,000 copies of the same code in a large database with a linking structure between the records, then we should be able to discern which are the high-provenance sources in the network and which are the low-provenance copies. The problem is, after all, remarkably similar to building a search engine.
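As a rough sketch of that search-engine idea (all repo names, dates, and thresholds here are invented for illustration): detect near-duplicate snippets by shingling normalized lines, then treat the earliest-published match as the likely original.

```python
# Hedged sketch: near-duplicate detection plus a crude provenance heuristic.
# Real systems would use MinHash/LSH at scale and richer link signals.

def shingles(code, k=3):
    """Break code into overlapping k-line windows, ignoring indentation."""
    lines = [l.strip() for l in code.splitlines() if l.strip()]
    return {tuple(lines[i:i + k]) for i in range(max(1, len(lines) - k + 1))}

def similarity(a, b):
    """Jaccard similarity between the shingle sets of two snippets."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Invented corpus: repo -> (first-published date, code)
corpus = {
    "alice/original":  ("2014-03-01", "def f(x):\n    y = x * x\n    return y + 1\n"),
    "bob/fork":        ("2016-07-12", "def f(x):\n    y = x * x\n    return y + 1\n"),
    "carol/unrelated": ("2015-01-05", "def g(s):\n    return s.upper()\n"),
}

query = "def f(x):\n    y = x * x\n    return y + 1\n"
matches = [(repo, date) for repo, (date, code) in corpus.items()
           if similarity(query, code) > 0.8]
likely_source = min(matches, key=lambda m: m[1])  # earliest copy wins
assert likely_source[0] == "alice/original"
```

Publication date is only one provenance signal; link structure (forks, stars, imports) could rank sources the way PageRank ranks pages.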
Copyright is formed when a human makes a choice about equivalent ways of implementing an algorithm.
If your standard is “Github should have an oracle to the US court system and predict what the outcome of a lawsuit alleging copyright infringement for a given snippet of code would be” then it is literally impossible for anyone to use any open source code ever because it might contain infringing code.
There is no chain of custody for this kind of thing which is what it would require.
> I mean, in humans it's just referred to as 'experience', 'training', or 'creativity'. Unless your experience is job-only, all the code you write is based on some source you can't attribute combined with your own mental routine of "i've been given this problem and need to emit code to solve it". In fact, you might regularly violate copyright every time you write the same 3 lines of code that solve some common language workaround or problem.
Aren't you moving the goalposts? This is not 3 lines; it's a 1-to-1 reproduction of a complex function that definitely has enough invention height to be copyrightable.
To be safe, we'd have to get Microsoft to agree to indemnify users (if they really believe using this is safe, they should do so), or wait until a court case on copyright as it regard to training corpus for large models is decided and appeals are exhausted.
1. You make it sound like a translation from e.g. English to Spanish wouldn't fall under copyright. That's incorrect: in most jurisdictions I am aware of, a translation falls under the copyright of the original work and also carries its own copyright.
2. When will Copilot be released open source? It is pretty clear by now that it is a derivative of all the OSS code, so how about following the licensing?
If someone has lied about the license of something down the chain of links, he's the one on the hook for it.
If you have licensed code in your software and no license to show for it or cannot produce the link to it then you're on the hook.
And here's the issue at hand: Copilot must have seen that code under a permissive license somewhere, but now cannot produce a link to it.
No court is ever going give you that subpoena nor would it even be possible to comply with it even if granted. You might get “show me all the repositories used in the training data for Copilot that contain that snippet.”
You’re really not going to solve this problem with marketing (“blog posts”) or some pro-Github story from data scientists. You need a DMCA / removal request feature akin to Google image search and you need work on understanding product problems from the customer perspective.
They were highly skilled laborers who knew how to operate complex looms. When auto looms came along, factory owners decided they didn't want highly trained, knowledgeable workers; they wanted highly disposable workers. The Luddites were happy to operate the new looms; they just wanted to realize some of the profit from the savings in labor along with the factory owners. When the factory owners said no, the Luddites smashed the new looms.
Genuinely, and I'm not trying to ask this with any snark, do you view the work you do as similar to the manufacturers of the auto looms? The opportunity to reduce labor but also further the strength of the owner vs the worker? I could see arguments being made both ways and I'm curious about how your thoughts fall.
Thank you for your input.
I'd like you to inspect the issue and explain what happened and why (and start to fix that if that's not intended) rather than sharing what you think could have happened.
Unless you're not in position to do that, in which case it doesn't matter you're on the Copilot team (anyone can throw hypotheses like that).
Please also don't tell me we're at the point where we can't tell why AI works in a particular way and we cannot debug issues like this :-(
Suppose we trained the open AI model on the entire corpus of pop hits from about 1960 onwards.
What are the chances it would get sued for copyright infringement?
If the derivative nature is clear in the same model being trained on popular song, then it should be the same for code (or visual art, or a number of other domains).
Not arguing for current copyright law, just pointing out the inconsistencies.
For that matter, what would happen if you asked Copilot for a set of Java headers. Asking for a friend!
What about people forking/mirroring your code? Or people merely contributing code? There is no one-to-one correspondence between copyright holders and Github users.
Copilot should just comply with the license, that's it.
The copyright infringement might not matter if code from individual developers is being used - they usually don't sue. But once this happens to say Oracle's copyrighted code... Well, that is going to be interesting.
Takes practice, but it's a skill that can be mastered like any other.
The vast majority of people who would use a matrix transform function they got from code completion (or from a GitHub or Stack Overflow search) probably don't care what the license is. They'll just paste in the code. To many developers, publicly viewable code is in the public domain. Copilot just shortens the search by a few seconds.
Microsoft should try to do better (I'm not sure how), but the sad fact is that trying to enforce a license on a code fragment is like dropping dollar bills on the sidewalk with a note pinned to them saying "do not buy candy with this dollar".
It is, yes. For example, a neural network can't invent a new art style on its own, or at least existing models can't, they can only copy existing art styles, invented by humans.
It looks like it wouldn't in the UK, probably wouldn't in the US, but would in Germany. The cases seem to hinge on the level of intellectual creativity of the photograph involved. The UK said that trying to create an exact copy was not an original endeavour, whereas Germany said the task of exact replication requires intellectual/technical effort of its own merit.
https://www.theipmatters.com/post/are-photographs-of-public-...
Things turned out pretty great economy-wise for people in the UK. So that's a poor example even if Luddites didn't hate technology. Not working on the technology wouldn't have done the world any favours (nor the millions of people who wore the more affordable clothes it produced).
I personally think it'd be rewarding to make developers' lives easier, essentially just saving the countless hours we spend googling and copy/pasting Stack Overflow answers.
Copilot is merely one project in this technological development; even if a mega-corp like Microsoft doesn't do it, ML is here to stay.
If you're concerned that software developers' job security is at all at risk from Copilot, then you greatly misunderstand how software engineering works.
Auto-completing a few functions you'd copy/paste otherwise (or rewrite for the hundredth time) is a small part of building a piece of software. If they struggle with self-driving cars, I think you'll be alright.
At the end of the day, there's a big incentive for GitHub et al. to solve this problem; a class-action lawsuit is always an overhanging threat. Even if Copilot doesn't make sense as a business and this pushback shuts it down, I doubt the idea will go away.
I'm personally confident the industry will eventually figure out the licensing issues. The industry will develop better automated detection systems and if it requires more explicit flagging, no-one is better positioned to apply that technologically than Github.
The claim that language models actually understand syntax and semantics is still the subject of significant debate. Look at all the discussion around the "horse riding astronaut" for Stable Diffusion models, and the prompts with geometric shapes, which clearly show that the language model does not semantically understand the prompt.
It doesn't change the licensing issue, but it does mean people are already copying and using copyrighted code without respecting the original license, no AI involved.
There should be a way to reverse engineer code LLMs to see which core bits of memorized code they build on. Another complex option is a combination of provenance tracking and semantic hashing on all functions in code used for training. Another option (non-technical) is a rethinking of IP.
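A toy version of the semantic-hashing idea, assuming Python source and using only identifier normalization (the `Normalizer` class and examples are invented; real provenance tracking would need much more, e.g. handling reordered statements):

```python
# Sketch: hash a function's normalized AST so that renamed copies of training
# code map to the same fingerprint, while different logic maps to a new one.
import ast
import hashlib

class Normalizer(ast.NodeTransformer):
    """Replace every variable, argument, and function name with a placeholder."""
    def visit_Name(self, node):
        return ast.copy_location(ast.Name(id="_", ctx=node.ctx), node)
    def visit_arg(self, node):
        node.arg = "_"
        return node
    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        node.name = "_"
        return node

def semantic_hash(source):
    """Parse, normalize, and hash; comments and whitespace vanish at parse time."""
    tree = Normalizer().visit(ast.parse(source))
    return hashlib.sha256(ast.dump(tree).encode()).hexdigest()

a = "def area(w, h):\n    # rectangle\n    return w * h\n"
b = "def size(x, y):\n    return x * y\n"      # renamed copy of a
c = "def area(w, h):\n    return w + h\n"      # genuinely different logic

assert semantic_hash(a) == semantic_hash(b)
assert semantic_hash(a) != semantic_hash(c)
```

Indexing such fingerprints for every function in the training set would at least let a model flag "this output matches memorized code" before emitting it.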
Either that, or we effectively get rid of software copyright, since Copilot can be used (or even claimed to have been used) to launder code of license restrictions. E.g., "No, I didn't copy your code; I used Copilot, and it copied your code, so I did nothing wrong."
This stance allows me to do whatever I want with any software or work you put out there, regardless of the license you attach to it, since it's your problem, not mine.
However, this is not the mode in which I operate ethically.
> As of yet, no court has said that any of this is illegal.
I assume this will be tested somehow, sometime. So I'm investing in popcorn futures.
> Your objection simply doesn't matter, until there is a court case that supports you. You can't do anything about it, if that doesn't happen.
You know, this goes both ways. The same will be equally valid for your works, by your own reasoning.
If CoPilot makes everyone see how ridiculous that is, that's a win in my book.
Instead, they scoured and plagiarized everyone's source code without their consent.
I'm not claiming that they did. What I said is that Copilot emitted the exact implementation in id Software's repository, including all comments and everything.
> But your reasoning boils down to I don't like it so it mustn't be that way. That's never been necessarily true.
If you interpret my comment with that depth and breadth, I can only say that you are misinterpreting completely. It's not about my personal tastes, it's about ethical frameworks and social contracts.
> At any rate piracy is rampant so clearly a large body of people don't think even a direct copies is morally wrong. Let alone something similar.
I believe if you listen to a street musician for a minute, you owe them a dollar. Scale up from there. BTW, I'm a former orchestra player, so I know what making and performing music entails.
> You're acting as though there are constant won and lost cases over plagiarism. Ed Sheeran seems to defend his work weekly. Every case that goes to court means reasonable minds differ on the interpretation of plagiarism legally.
When there's a strict license on how a work can be used, and the license is violated, it's a clear case. The AI is just a derivation engine, and derivations carry the same license. I don't care if you derive from my code. I care that you derive from my code and hide the derivations from the public.
It's funny that you're defending closed-sourcing free software at this point. This is a neat full circle.
> So what's your point?
All research and science should be ethical. AI research is not something special that allows these ethical norms and guidelines (which have been established over decades, if not centuries) to be suspended. If medical researchers acted with a quarter of this laissez-faire attitude, they'd be executed with a slow death. If security researchers acted with an eighth of this recklessness, their careers would be ruined.
> That's all you got to hold back the tide of AI?
As I said before, I'm not against AI. It just doesn't excite me as a person who knows how it works and what it does, and the researchers' attitude leaves a bad taste in my mouth.
Actually, no it doesn't. This topic is about AI training on code.
Courts have not held that this is illegal.
But there are absolutely other things, that people might do with code, that break copyright law.
> it's your problem, not mine.
Oh, but it would be your problem as well, if you break the law, and someone else sues you for it.
That's the difference. AI training is not against the law. Other things, that you are imagining in your head right now, very well could be, and you could lose.
> Same will be very valid for your works
Not if what you are hypothetically doing breaks the law, and AI training doesn't break the law.
So that's the difference, which makes the reasoning legitimate.
Actually yes. I'm not against the tech. I'm against using my code without consent for a tool which allows to breach the license I put my code under.
IOW, if Copilot understood code licenses and prevented intermixing incompatibly licensed code while emitting results for my repository, I might have a slightly different stance on the issue.
I would put money on it also containing GPLv3 code, which I suspect means that the model itself is probably also required to be public under the terms of the GPLv3.
I look forward to the entire product you have made being available, as is required for any product built using gpl3’d software.
The original poster said it was in a private repository.
>It doesn't change licensing issue but it does mean people are already copying and using copyrighted code without respecting original license and no AI involved.
I don't get the argument. Many people are copying/pirating MS windows/MS office. What do you think MS would say to a company they caught with unlicensed copies and they used the excuse "the PCs came preinstalled with Windows and we didn't check if there was a valid license"?
What if a particular piece of code is licensed restrictively, and then (assuming without malice) accidentally included in a piece of software with a permissive license?
What if a particular piece of code is licensed permissively (in a way that allows relicensing, for example), but then included in a software package with a more restrictive licence. How could you tell if the original code is licensed permissively or not?
At what point do Github have to become absolute arbiters of the original authorship of the code in order to determine who is authorised to issue licenses for the code? How would they do so? How could you prove ownership to Github? What consequences could there be if you were unable to prove ownership?
That's before we even get to more nuanced ethical questions like a human learning to code will inevitably learn from reading code, even if the code they read is not permissively licensed. Why then, would an AI learning to code not be allowed to do the same?
Then there's also parallel discovery. People frequently come to the same solution at roughly the same time, completely independently. And this is nothing new. For instance, who discovered calculus: Newton or Leibniz? This was a roaring controversy at the time, with both claiming credit. The reality is that they both likely discovered it, completely independently, at about the same time. And there are a whole lot more people working on stuff now than in Newton's time!
There's also just parallel creation. Task enough people with creating an octree based level-of-detail system in computer graphics and you're going to get a lot of relatively lengthy code that is going to look extremely similar, in spite of the fact that it's a generally esoteric and non-trivial problem.
In this case, all you have on them is an email address. Pretty sure you're still on the hook.
There is no “I don’t know who owns the IP” defense: the image has a copyright, a person owns that copyright, publishing the image without licensing or purchasing the copyright, is a violation. The fine is something like $100k per offense for a business.
I tried out of curiosity. Here[1] are the first 8 images that came up with the prompt "Disney mickey mouse" using the stable diffusion V1.4 model. Personally I don't really see why Disney or any other company would take issue with the image generation models, it just seems more or less like regular fan art.
So in this case Copilot just looks at the situation as though someone gifted me this, and does not question whether the person gifting it was the real owner of the gift.
Laws are just a codified version of ethics. Just because something is not codified in law doesn't mean it's ethically correct, and I hold ethics over laws. Some people call this conscience, others call it honor.
Just because something is not deemed illegal doesn't mean it's ethical. These are different things. The world worked under honor and ethical codes for a very long time, and it still works under these unwritten laws in a lot of areas.
Science, software and other frontiers value ethics and principles a great deal. Some niches like AI largely ignore these, and I find this disturbing.
However, some people prefer to play the game with the written rules only, and as I said, I'm investing in popcorn futures to see what's gonna happen to them.
I might tank and go bankrupt of course, but I will sleep better at night for sure, and this is more important for me at the end.
I'm passionate about computers, yes. This is also my job, yes, but I'm not the person who'll do reckless things just because an incomplete code of written ethics doesn't prevent me from doing them.
I'd rather not do anything to anyone which I don't want to receive. IOW, I sow only the seeds which I want to reap.
Copilot doesn't obey the GPL, so they need to obtain written permission and pay license fees to be able to use the code in their product.
See https://en.m.wikipedia.org/wiki/Peterloo_Massacre for example
If we hold reproductions of a single repository to a certain standard, the same standard should probably apply to mass reproductions. For a single repository, it’s your responsibility to make sure it’s used according to the license.
Are there slightly gray edge cases? Of course, but they’re not -that- grey. If I reproduced part of a book from a source that claimed incorrectly it was released under a permissive license, I would still be liable for that misuse. Especially if I was later made aware of the mistake and didn’t correct it.
If something is prohibitively difficult maybe we should sometimes consider that more work is required to enable the circumstances for it to be a good idea, rather than starting from the position that we should do it and moulding what we consider reasonable around that starting assumption.
But it IS possible to train a model for that. In fact, I believe ML models can be fantastic "code archaeologists", giving us insights into not just direct copying, but inspiration and idioms as well. They don't just have the code, they have commit histories with timestamps.
A causal fact which these models could incorporate, is that we know data from the past wasn't influenced by data from the future. I believe that is a lever to pry open a lot of wondrous discoveries, and I can't wait until a model with this causal assumption is let loose on Spotify's catalog, and we get a computer's perspective on who influenced who.
But in the meantime, discovering where copy-pasted code originated should be a lot easier.
Sure glad these Luddites didn't get their way
Of course, if someone does manage to set a precedent that including copyrighted works in AI training data without an explicit license is infringement, GitHub Copilot would be screwed and at best have to start over with a blank slate if it can't be grandfathered. But this would affect almost all products based on the recent advancements in AI, and they're backed by fairly large companies (after all, GitHub is owned by Microsoft, a lot of the other AI stuff traces back to Alphabet, and there are a lot of startups funded by huge and influential VC companies). Given the US's history of business-friendly legislation, I doubt we'll see copyright laws being enforced against training data unless someone upsets Disney.
I think you are vastly underestimating how many professionally employed software developers are replaceable by copilot at this very moment. The managers are not caught up yet, and you seem to be lucky not having to work with this type of dev, but I have interacted with thousands of people in a professional capacity over the decades who could be replaced today. Some of those realised this and moved to different positions (for instance, advising how to use ML to replace themselves: if you can't beat them…).
I mean of course you are right in general but there are millions of ‘developers’ who just look everything up with Google/SO, copy paste and change until it works. You are saying this will make their lives better, I say it will terminate their employment.
Anecdote: I know a guy who makes a boatload of money in London programming but has no understanding of things like classes, functional constructs, functions, iterators (he kind of, sometimes, understands loops) etc. He simply copies things and changes them until it works: he moved to frontend (react) as there he is almost not distinguishable from his more capable colleagues because they are all in a ‘put code and see the result’ type of mode anyway and all structures look the same in that framework, so the skeleton function, useXXX etc is all copy paste mostly anyway.
This is why I'm gnashing my teeth whenever I hear companies being fine with their employees using Copilot for public-facing code. In terms of liability, this is like going back from package managers to copying code snippets of blogs and forum posts.
If you do something, it's ultimately you who has to make sure that it is not against the law. "I didn't know" is never a good defense. If you pay with counterfeit cash, it is you who will be arrested, even if you didn't know it was counterfeit. If you use code from somewhere else (no matter if it's by copy/pasting or by using Copilot), it is you who has to make certain that it doesn't infringe on any copyright.
Just because a tool can (accidentally) make you break the law, doesn't mean the tool is to blame (cf. BitTorrent, Tor, KaliLinux, ...)
How is that a solution though? OP isn't upset that he's regenerated his own work via Copilot; he's upset that others can do so unknowingly and without attribution.
Music licensing is bonkers but AFAIR (at least in the UK) I think you're allowed to do covers without explicit permission[1] - you'll have to give the original writers/composers the appropriate share of any money you make.
[1] Which is why you (used to?) get, e.g., supermarkets playing covers of songs rather than the originals because it's cheaper.
I'm also not sure that Copilot is just reproducing code, but that's a separate discussion.
> If I reproduced part of a book from a source that claimed incorrectly it was released under a permissive license, I would still be liable for that misuse. Especially if I was later made aware of the mistake and didn’t correct it.
I don't believe that's correct in the first instance (at least from a criminal perspective). If someone misrepresents to you that they have the right to authorise you to publish something, and it turns out they don't have that right, you did not willfully infringe and are not liable for the infringement from a criminal perspective[1]. From a civil perspective, the copyright owner could likely still claim damages from you if you were unable to reach a settlement. A court would probably determine the damages to award based on real damages (including loss of earnings for the content creator), rather than anything punitive, if it's found that the infringement was not willful.
Further, most jurisdictions have exceptions for short extracts of a larger copyrighted work (e.g. quotes from a book), which may apply to Copilot.
This is my own code, I wrote it myself just now. Can I copyright it?
```
function isOdd(num) {
  if (num % 2 !== 0) {
    return true;
  } else {
    return false;
  }
}
```
What about the following:
```
function isOddAndNotSunday(num) {
  const date = new Date();
  if (num % 2 !== 0 && date.getDay() > 0) {
    return true;
  } else {
    return false;
  }
}
```
Where do we draw the line?
[0]: https://docs.github.com/en/site-policy/github-terms/github-t... [1]: https://www.law.cornell.edu/uscode/text/17/506
I would never pay for Co-pilot since it's Microsoft who owns Github, and now my fears about the product seem to have been proven.
Why this restriction on public-facing code? Are you OK with Copilot being used for "private"/closed source code? I get that it would be less likely to be noticed if the code is not published, but (if I understand right) is even worse for license reasons.
So I had something similar to what the OP describes happen a couple of days ago. I'm on friendly terms with the developer of a competing codebase and have confirmed the following with them: both mine and theirs are closed source and hosted on Github.
Halfway through building something I was given a block of code by copilot, which contained a copyright line with my competitors name, company number and email address.
Those details have never, ever been published in a public repository.
How did that happen?
Then they decided to wade in and build a house of cards where the cards are everyone else’s code, just waiting for the grenade pin puller and we’ve potentially witnessed the moment?
That’s the only thing that makes sense to me here. They don’t care because opening the issue will bring down everyone else with them.
I will admit I’m kind of a “throw stuff at the wall and see what sticks” kind of coder but nobody is paying me boatloads of money to poke at some program until it stops segfaulting, would be nice though.
The first C developers wrote C code despite lacking a training set of C code.
AI can't do that. It needs C code to write C code.
See the difference here?
But also this point is silly. Plenty of money and effort is risked and lost with no bailout. Bailouts are extremely unusual in the grand scheme of things.
Since copilot famously outputs GPL covered code… no, we have proof they didn't do that.
Even if CHOLMOD is easily the best sparse symmetric solver, it is notoriously not used by scipy.linalg.solve, because numpy/scipy developers are anti-copyleft fundamentalists and have chosen not to use this excellent code for merely ideological reasons... but this will not last: thanks to the copilot "filtering" described here, we can now recover a version of CHOLMOD unencumbered by the license that the author originally distributed it under! O brave new world, that has such people in it!
Similarly, the reason Europe put 30% of its populace "out of work" by industrialising agriculture is why we don't have to all go work in fields all day. It is a massive net positive for us all.
Moving ice from the arctic into America quickly enough before it melted was a big industry. The refrigerator put paid to that, and improved lives the world over.
Monks retained knowledge through careful copying and retransmission of knowledge during the medieval times in the UK. That knowledge was foundational in the incredible acceleration of development in the UK and neighbouring countries in the 18th and 19th centuries. But the printing press, that rendered those monks much less relevant to culture and academia, was still a very good idea that we all still benefit from today.
Soon, millions of car mechanics who specialise in ICE engines will have to retrain or, possibly, just be made redundant. That may be required for us to reduce our pollution output by a few percent globally, and we may well need to do that.
The exact moment in history when workers who've learned how to do one job are rendered obsolete is painful, yes, and they are well within their rights to what they can to retain a living. But that doesn't mean those workers are somehow right; nor that all subsequent generations should have to delay or forego the life improvement that a useful advance brings, nor all of the advances that would be built on that advance.
All lines are associated with a commit, which has an author/commit date. A reasonable guess as to which snippet was made first can be made.
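Concretely, given candidate matches for a snippet across repositories, the earliest commit timestamp is the best guess for the original, since a copy cannot predate its source. A tiny sketch with made-up data:

```javascript
// Each match carries its commit's author date (all data here is illustrative).
const matches = [
  { repo: "alice/solver", committed: "2014-03-02T10:00:00Z" },
  { repo: "bob/fork",     committed: "2019-07-21T08:30:00Z" },
  { repo: "carol/utils",  committed: "2016-11-05T17:45:00Z" },
];

// The earliest timestamp wins: later matches are presumed copies.
const probableOrigin = matches.reduce((oldest, m) =>
  Date.parse(m.committed) < Date.parse(oldest.committed) ? m : oldest
);

console.log(probableOrigin.repo); // "alice/solver"
```

Timestamps can of course be forged on push, so this is a heuristic, not proof.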
I'm concerned that "draw context from" is a euphemism. Does it mean it uses code that's only on your laptop to train its AI?
The most simple answer would be that this is false, it was published somewhere but you are not aware of it.
It would be a lot less trouble for everyone if it was just a per repository setting.
If this is the case, I can imagine people migrating of GitHub very quickly. I can also imagine some pretty nice lawsuits opening up.
This would basically kill github as an idea. I like the ability to be able to push some personal project to github and don't really give a fuck about technical copyright violations and I think the same is true for 90% of developers.
The world seems slightly mad about these things that produce "almost" pictures from text. We forgive DALL-E when it produces a twisted eye or an impossible perspective, because its result is "close enough" that we recognise something and grant the image intention.
So now you've got me waiting for DALL-Ecode. Give DALL-Ecode a description, it produces code.
"DALL-Ecode: Code that is sufficiently close to what you'd expect that you'll try to use it."
"DALL-Ecode: Code that looks like it does what is needed."
"DALL-Ecode: Good enough to compile, good enough to get through a code review (just not good enough to get through testing)."
For example, if I copy pasted code from someone in my open source project, and the copied code was subjected to required attribution will Copilot keep that attribution when it copies my code again?
If you write some code and release it under the GPL. Then I take your code, integrate it into my project, and release my project with the MIT licence (for example), it may be that Copilot was only trained on my repo (with the MIT licence)
The fault there is not on Github, it's on me. I was the one who incorrectly used your code in a way that does not conform to the terms of your licence to me.
I don't think the fact that Copilot outputs code which seems to be covered under the GPL proves that Github did not only crawl repositories with permissive licences when training Copilot.
```
export default class USERCOMPONENT extends REACTCOMPONENT<IUSER, {}> {
  constructor(oProps: IUSER) {
    super(oProps);
  }

  render() {
    return (
      <div>
        <h1>User Component</h1>
        Hello, <b>{this.props.sName}</b>
        <br/>
        You are <b>{this.props.dwAge} years old</b>
        <br/>
        You live at: <b>{this.props.sAddress}</b>
        <br/>
        You were born: <b>{this.props.oDoB.toDateString()}</b>
      </div>
    );
  }
}
```
I find it very hard to believe you didn't understand the suggestion.
Github can only trust push timestamps.
IT Crowd Piracy Warning https://www.youtube.com/watch?v=ALZZx1xmAzg
The issue is in how it creates the output. Both DALL-E and Copilot can work only by taking the work of people in the past, sucking up their hard-earned know-how and creations, and remixing it. All that while not crediting (or paying) anyone. The software itself might be great, but it only works because it was fed loads of quality material.
It's smart copy & paste with obfuscation, if that's OK legally. You can imagine it could soon be used to rewrite whole codebases while avoiding any copyright. All the code will technically be different, but also the same.
Although we can't rule out a common origin of shared code, including a common origin off github, we can know for sure that old code doesn't copy code from the future.
As to Microsoft and human developers having no clue about a piece of code's origin, that's especially false: not only do we have timestamps on repositories, we can also easily verify that the code first appeared in the context of the CSparse library, by Tim Davis, a CS professor at Texas A&M who has worked on sparse matrix numerical methods his entire career.
As to the argument that the machine is just learning like any other human: that is absolute nonsense. It makes me feel ashamed to work in software, the way people can take advantage of other people's hard work to make a buck without even asking or showing the slightest bit of remorse.
A real good example is mapping objects: let’s say you have a deep nested object from an ERP and you need to map that to another system(s). This is horrible work and copilot just generates almost everything for it if it knows the input and output objects; it ‘knows’ that address = street and if it is not it will deduct it from the models or comments or both; if there is a separate house number and stuff, it’ll generate code to translate that. I used to hire people for that; no longer; it just pops, I run the tests and fix some thing here and there.
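The kind of mapping boilerplate described above looks something like this sketch (the ERP field names and target shape are entirely hypothetical):

```javascript
// Map a nested ERP record to a flat target schema, with defaults for
// missing fields. This is the tedium a completion tool can generate.
function mapCustomer(erp) {
  const addr = (erp.Partner && erp.Partner.Addr) || {};
  return {
    name:        (erp.Partner && erp.Partner.FullName) || "",
    street:      addr.Street || "",
    houseNumber: addr.HouseNo || "",
    city:        addr.City || "",
  };
}

const mapped = mapCustomer({
  Partner: { FullName: "Ada", Addr: { Street: "Main St", HouseNo: "5", City: "Oslo" } },
});
console.log(mapped.street); // "Main St"
```

There is nothing clever here, which is exactly the point: it is mechanical, voluminous, and easy to check with tests.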
Right or wrong, copyright doesn't care about how valuable something is. Everything is equally (not in reality but in theory) protected. GitHub is a platform many people have trusted with protecting ownership of their copyrighted code through reasonable levels of security.
I think the big discussion point here is around ensuring that this tool is acting correctly and respecting rights of an individual. It's very easy for a large company to accidentally step on people and not realise it or brush it away. People want to make sure that isn't happening and right now there are some very compelling examples where it looks like this is happening. The fact that this isn't opt-in and there's no way to opt-out your public repositories means the choice has been taken away from people. Previously you were free to license your code as you see fit, now we have some examples of where that license may not be being respected as a result of an expensive GitHub feature.
I think this is where the conversation is centring. It's not about whether your code is valuable or not. It's whether a large company is making profit by stepping on an individuals right of ownership or not.
On the note of leveraging legal apparatus to figure it out I think you're right. The problem is what individual open source maintainer is going to have the funds to bring a reasonable equal legal challenge to such a large organisation? I maintain a relatively well used open source project and I sure as hell don't. Realistically my option is to either spend a lot of personal time and resources to challenge it (if I think wrong-doing is happening) or just suck it up. Given that there's no easy way to figure out if wrong-doing is happening because it's all in the AI soup, it makes it even harder to consider that approach.
I think the point is a lot less about the value of the code, and much more about a massive organisation playing hard and fast with an individual's rights.
None of this is to say GitHub have actually done anything wrong here. I'm sure we'll figure that out in time, but it would be great if they could figure out a way to provide more concrete explanations.
Stealing, scamming, gambling, inheriting, collecting interest, price gouging, slavery, underpaying workers, supporting laws to undermine competitors… Plenty of ways to make money without being useful—or by being actively harmful—to someone else.
> Almost all of the clothing industry companies make money from large numbers of people buying their clothes. So they are useful to us.
We don’t need all that clothing, made by monetarily exploiting people in poor countries and sold by emotionally exploiting people in rich countries under the guise of “fashion”. The usefulness line has long been crossed, it’s about profit profit profit.
But Copilot now does the search/copy/paste for me and suddenly uproar? I'm not sure I follow, the capability/possibility hasn't changed, just the convenience has.
So you write tests and copilot generates code you shove into production with little overhead ?
Do you read the code thoroughly (kind of negating having it generated for you?), or just have blind faith in it because tests are green and just YOLO it into production ?
I'd feel pretty uneasy deploying code that:
* I, or a trusted peer has not written.
* Hasn't been reviewed by my peers.
* Code I, or my peers don't understand fairly well.
That's not to say I think my colleagues or I write code that doesn't have problems, but I like to think we at least understand the code we work with, and I believe this has benefits beyond just getting stuff done quickly and cheaply. In other words, I have no problem using code generated by co-pilot, but I'd feel the need to read and review it quite thoroughly, and then I sort of feel that negates the purpose; it also means it pulls me back into the role of doing work I'd hire someone else to do.
Most highly qualified workers love what they do and would stand for keeping their output quality up. On the contrary, interchangeable cheap workers have no real incentive to do that. The factory's manager is left alone in charge of balancing quality against cheapness, and the latter comes with obsolescence (planned or not), which is good for business.
Sarcasm aside I think there are several possible legal viewpoints here (IANAL):
1. copilot is distributing copies of code and it's not a safe harbor: Microsoft is directly liable to copyright infringement by copilot producing code without appropriate license/attribution.
2. copilot is distributing copies of code and it's a safe harbor: Microsoft is not directly liable, but it should comply with DMCA requests. Practically that would mean retraining with mentioned code snippets/repositories excluded in a timely manner, otherwise I don't see a way how they could disentangle the affected IP from the already trained models.
3. copilot produces novel work by itself not subject to copyright of the training data: I think this is really a stretch. IANAL, but I think producing novel creative work is a right exclusive to living human beings, so machines can't produce them almost by definition. (There is the monkey selfie copyright case, but at least the "living" there was ticked off).
4. the user of copilot is producing novel work by prompting copilot: it's like triggering a camera. The copyright of the resulting picture is fully owned by the operator, even though much of the heavy lifting is done by the camera itself. Even then, this very much depends on the subject.
IMO option 3 doesn't have legal standing. Microsoft and users of copilot would very much like it if option 4 always applied, but this particular case clearly falls under option 1 or option 2, in which case Microsoft should hold some legal liability, even if they can't always track the correct license ahead of time.
The best way to be transparent about a software implementation is to open source the thing. If that's your take away, this is the only logical thing to do. Blogs posts would be appreciated but are not enough. We can only trust what you say, we cannot verify anything.
Of course, that would have to be a decision by the copyright holders, which is possibly difficult to provide on GitHub given how easy it makes it to upload other people's work, and you would need the agreement of every contributor unless you do copyright assignment.
And you wouldn't be able to license a project using this service (i.e. with Copilot-generated code) under the GPL. But it's already unclear if you can legally do that today.
Imagine a person who would want to implement the same function in their project. They could look at the open source implementation to learn how the algorithm is supposed to work, and would write their implementation. They could end up with the implementation on the right.
Sadly that's probably a modern thing and not something that people wanted / cared about immediately once everyone lost their jobs.
Of course, someone else can still upload your elsewhere-published code to GH. You cannot win.
It doesn't matter if I think my code is valuable, it's that Github is using everyone's code for their own profit - without opt-in, attribution, or paying a license.
It’s not satisfying to painstakingly work on something that I could have generated with an AI in seconds.
A more analogous situation would be if the AI model occasionally returned the entirety of "Baby One More Time" by Britney Spears. Yes, I think you'd be sued if you passed off Baby One More Time as an original work just because you got it from an open AI tool.
If that is true then one way to get around copyright restrictions on existing code is to create a new language.
Sure the legal framework can change, but such profound change will have surely many consequences we won't foresee, for good or bad.
There you have the "most responsible way".
The GPL should be updated to prohibit code to be used for "learning" (i.e., regurgitating copyrighted fragments).
Isn't this basically all UI programming? :D
Joking aside, I see this 'person X doesn't know anything, but they are still delivering' attitude quite a bit on HN now. They clearly know something, and projects like co-pilot will make them even more effective.
I think the opposite of you - that projects like co-pilot will further lower the barriers of entry to programming and expand those who program. I also think that like all ease of programming advances in the past, business requirements will continue to grow at the edges where those who care about the craft will still be required.
In this case, what the model is doing is clearly (to me as a non-lawyer) not transformative enough to count as fair use, but it's possible that the co-pilot folks will be able to fix this kind of thing with better output filtering.
Most of the time when it's made, it's just papering over yet another situation where a surplus is being squeezed out of a transaction by a parasitic manager class using principal-agent problem dynamics.
The people who invented this stuff are always trying to tell you they’ve invented the cotton gin or something when in fact they’ve just come up with a clever way to take someone else’s work and exploit it.
only emotionally crippled people like fashion, if they were healthy they would all dress in gray unitards and march in formation towards the glorious future!
hey I too have often been carried away by my own rhetoric but come on!
From an AI safety perspective, I'm also worried it will accelerate the transition to self-learning code, ie. the model both generating and learning from source code, which is a crucial step on the way to general artificial intelligence that we are not ready for.
For example, if algorithms were copyrightable, it would interfere with the copyright of scientific/mathematical papers, as mathematicians would not be able to extend another mathematician's ideas without first gaining permission.
AI could be used to create languages based on design criteria and constraints like C was, but it does bring up the question of why one of the constraints should be character encodings from human languages if the final generated language would never be used by humans...
I mainly think it's funny watching all of these Randian objectivists reusing every excuse used by every craftsman that was excised from working life... machines need a machinist, they don't have souls or creativity, etc.
Industry always saw open source as a way to cut cost. ML trained from open source has the capability to eliminate a giant sink of labor cost. They will use it to do so. Then they will use all of the arguments that people have parroted on this site for years to excuse it.
I'm a pessimist about the outcomes of this and other trends along with any potential responses to them.
Can Copilot prove that and link to the source LGPL code whenever it reproduces more than half a line of code from such a source?
Because without that clear attribution trail, nobody in their right mind would contaminate their codebase with possibly stolen code. Hell, some bad actor might purposefully publish a proprietary base full of stolen LGPL code, and run scanners on other products until they get a Copilot "bite". When that happens and you get sued, good luck finding the original open source code both you and your aggressor derive from.
I think the real lesson is that if you look at the sheer amount of energy (wattage) used to replace humans, it's clear that brains are really calorie-efficient at doing things like producing the kinds of code Copilot creates. But it doesn't matter, because eliminating labor cost will always be attractive no matter the up-front cost. They literally can't NOT do it, based on the rules of our game.
If it wasn't MS it would be someone else, and it is: you think IBM isn't doing this? Amazon? GTFOH. So is every other large company that has a pool of labor valued as a cost.
Maybe a better question would be how and why major parts of human life are organized in ways that are bad for the bulk of humanity.
Being a new area of development doesn't release you from your obligation to make sure what you're doing is ethical and legal FIRST.
> I’m personally spending a lot of time chatting with developers, copyright experts, and community stakeholders to understand the most responsible way to leverage LLMs.
And yet oddly, nowhere did the phrase "I reached out to OP to discuss this with them" appear anywhere in your response. Nope. Being part of GitHub's infamous Social Media "incident response" team was more important than actually figuring out what was going on.
You don't even say that you will look into the situation with OP, or speak to them.
waves to all the github employees who will be reading this comment because someone on Github's marketing team links to it
First consider that you made a mistake yourself, _then_ ask whether the fault could be on the other side. I really dislike this high-horse, down-talking tone. Maybe it was not meant to sound like that; maybe this kind of talk has become a habit without anyone noticing. Let's assume that, giving the benefit of the doubt.
Onto the actual matter:
> If similar code is open in your VS Code project, Copilot can draw context from those adjacent files. This can make it appear that the public model was trained on your private code, when in fact the context is drawn from local files. For example, this is how Copilot includes variable and method names relevant to your project in suggestions.
How come Copilot hasn't indicated where the code came from? How can it ever seem like the code came from elsewhere? That is the actual question. We still need Copilot to point us to repositories or snippets on Github when it suggests copies of code (including copies with merely renamed variables). Otherwise the human is taken out of the loop and no one is checking for copyright infringements and license violations. This has been requested for a long time. Time for Copilot to actually respect the rights of developers and users of software.
> It’s also possible that your code – or very similar code – appears many times over in public repositories.
So basically it propagates license violations. Great. Like I said, the human needs to be kept in the loop and Copilot needs to empower the user to check where the code came from.
> This is a new area of development, and we’re all learning.
The problem is not that this is a new development or that we are all learning. That is fine. Sure, we all need to learn. However, when there is clearly a problem with how Copilot works, it is the responsibility of the Copilot development team to halt any further violations and fix that problem first, before letting the train roll on and violating more people's rights. The way this is being handled, by just shrugging and rolling on, maybe at some point fixing things, is simply not acceptable.
[0] https://www.statista.com/statistics/817918/number-of-busines...
I have lower expectations of the rigor with which companies police their internal codebases, though. Seeing Copilot banned for internal use too is a pleasant surprise. Companies tend to be a lot more "liberal" in what kind of legal liabilities they accept for their internal tooling in my experience.
It would be much more valuable for people who care about the truth.
And a quite reasonable code of ethics is that people do not have absolute, complete control over their intellectual property, and instead only have the ability to control it in certain circumstances.
Things like fair use, which makes this legal, exist for many very good reasons.
So yes, the code of ethics that society has decided on includes perfectly reasonable exceptions, such as fair use, and it is your problem, not ours, that you have some ridiculous idea that people should have complete, 100% authoritarian control over their IP.
And no, people not having infinite control over IP does not allow you to extend this reasonable exception into doing literally anything to other people's IP.
This claim rings extremely hollow when your team refuses to do any of the obvious things that developers, experts and community stakeholders in this very thread (and the rest of this website) are telling you. You still haven't open-sourced Copilot. You still haven't trained it on Microsoft internal code such as Windows and Office. You still haven't made the model freely available for anyone to run locally. Until you do any of these things, you are not acting in the interest of the community and you are just exploiting people and their code for your own profit.
Is this the tack your organization would take if someone else's code completion software were generating Microsoft's proprietary code?
There are statutory damages on top of your actual damages. $50k per act of infringement. No reason for the copyright holder to settle for less when it's an open and shut case.
> Further, most jurisdictions have exceptions for short extracts of a larger copyrighted work (e.g. quotes from a book), which may apply to Copilot.
Quotes do not automatically get an exception just because they're taken from a larger work, they might be excepted either because they were de minimis (essentially because they were too short to be copyrightable) or because they were fair use (which is a complex question that takes into account the purpose and context, which Copilot is very unlikely to satisfy because it's not quoting other code for the purpose of saying something about it).
> Where do we draw the line?
Circuit specific; some but not all circuits use the AFC test. It sounds like this code was both long enough and creative/innovative enough to be well on the wrong side of it though.
It's not a binary of doing everything perfectly or nothing at all. The law looks at intent, and so doesn't punish mistakes or errors so long as you aren't being malicious, reckless, or negligent.
They should definitely include disclaimers and make seeding opt-in (though I don't know how safe you are legally when you download a Lion King copy labeled Debian.iso). That said, they don't have the information necessary to tell whether what you're doing is legal or not.
Copilot _has_ that information. The model spits out code that it read. They could disallow publishing or commercially using code generated by it while they're sorting it out, but they made the decision not to.
AI is hard, but the model is clearly handing out literal copies of GPL code. Github knows this and they still don't tell you about it when you click install.
What I say with the GPL license is clear:
If you derive anything from this code base, you're agreeing and obliged to carry this license to the target code base (The logical unit in this case is a function in most cases).
So the case is clear. AI is a derivation engine. What you obtain is a derivation of my GPL licensed code. Carry the license, or don't use that snippet, or in AI's case, do not emit GPL derived code for non-GPL code bases.
This is all within the accepted ethics & law. Moreover, it's court tested too.
As I understand it, the complainant may CHOOSE to request the court to levy statutory damages rather than actual damages at any point, but is not entitled to both actual AND statutory (17 U.S. Code § 504)
It also seems to be absolutely capped at 30K per infringement, not 50, and ranges up from $750. It also seems that if the "court finds, that such infringer was not aware and had no reason to believe that his or her acts constituted an infringement of copyright, the court in its discretion may reduce the award of statutory damages to a sum of not less than $200."
I think you are probably right that this specific function is copyrightable though, but taken overall, I think Microsoft's lawyers have probably concluded that they would win any challenge on this. Microsoft have lost court battles before though, so who knows?
But they aren't some kind of special villains; it's today's monopoly market kicking in. Selling startups to Yahoo comes with consequences.
> capable of laundering open source code
That's an exaggeration. Copilot is still a dumb machine which accidentally learned to mimic the practice of borrowing intellectual property from human coders.
That is why Copilot should have always been opt-in (explicitly ask original authors to provide their code to copilot training). Instead, they are simply stealing the code of others.
People are not agreeing though.
They are not agreeing, because there is a perfectly reasonable ethical and legal principle called fair use, which society has determined allows people to engage in limited use of other people's IP, no matter what the license says.
> Carry the license, or don't use
Or, instead of that, people could reasonably use fair use, and ignore the license, as fair use exists for many good legal and ethical reasons.
And no, you do not get to extend that out, to doing anything you want to do, just because there is a reasonable exception called fair use.
> do not emit GPL derived code for non-GPL code bases
Or, actually, yes do this. This is allowed because of the reasonable ethical and moral principle called fair use, which allows people to ignore your license.
I don't equate, say, "making money" with "stealing money". I mean the way people do things within the law. Inheriting is different; the money is already made. Interest is being useful to someone else, via the loan of capital.
On the subject of trademarks, the issue is, as far as I know, even more on the end user, because the protections on them are around use in commerce and consumer confusion, not about just recreating them like copyright protections.
That's actually a very real problem that mega money has been spent on. The same legal problem appears on sites like YouTube around fair use and copyright. In terms of fair use that doesn't apply here see:
https://softwareengineering.stackexchange.com/questions/1217...
Regardless platforms are partially responsible for the content that their users upload into them. Most try to absolve themselves of this responsibility with their terms of service but legally that's just not possible.
Personally I'm an advocate for fair use but I'm also an advocate for strong copyright laws and their enforcement. In the short time the internet has been available to most people in the world there is a habit of stealing others work and claiming it as your own. Quite often this is for some financial gain.
Thanks for the discussion, and have a nice day.
I may not further comment on this thread from this point.
Copilot on Python makes me 5x more productive. I used Copilot in beta for a year and continue paying for it now.
For example: I can make a command line data wrangling script for a novel data set in a few minutes with a few prompts, with a full complement of extras (proper argparse parameters with sane defaults, ready to import, etc.). # reasonable comments included for free as well
Before Copilot I could do the same in about 20-30 minutes, but my code would be a mess with little commenting. I would spend 30-60 minutes just looking up docs for various libraries.
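A minimal sketch of the kind of script being described (the file names, flags, and defaults are all hypothetical, just to illustrate the boilerplate Copilot typically fills in):

```python
import argparse
import csv


def parse_args(argv=None):
    # Hypothetical flags and defaults: the kind of scaffolding Copilot generates.
    p = argparse.ArgumentParser(description="Wrangle a CSV of measurements")
    p.add_argument("input", help="path to the input CSV")
    p.add_argument("-d", "--delimiter", default=",", help="field delimiter")
    p.add_argument("-c", "--column", default="value", help="numeric column to summarize")
    return p.parse_args(argv)


def summarize(rows, column):
    # Mean of a numeric column, skipping missing or non-numeric cells.
    vals = []
    for row in rows:
        try:
            vals.append(float(row[column]))
        except (KeyError, TypeError, ValueError):
            continue
    return sum(vals) / len(vals) if vals else float("nan")


def main(argv=None):
    args = parse_args(argv)
    with open(args.input, newline="") as f:
        rows = list(csv.DictReader(f, delimiter=args.delimiter))
    print(f"{args.column}: mean={summarize(rows, args.column):.3f} over {len(rows)} rows")
```

The argparse scaffolding and the defensive parsing are exactly the parts that used to cost the 20-30 minutes; the problem-specific logic is a few lines.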
Now without Copilot, if all I was doing was writing data wrangling scripts 4 hours a day I could approach this Copilot like productivity for a single task.
However with Copilot I can switch problem domains very quickly and remain productive.
Interestingly, on something like CSS or Javascript - Copilot is helping only slightly, maybe because my local training set is insufficient and my web-dev prompts are too generic.
So I think AI can be a fantastic force multiplier in a skillset you are already reasonably familiar with. I can handle the 5-10% wtf Python code that Copilot produces.
I do not particularly like copyrights and do wish Copilot had been trained on private Microsoft code as well.
This is the problem of applying the idea of ownership to ideas and expression like art. Art in particular is a very remix and recombination driven field.
the problem isn't even that this technology will eventually replace programmers: the problem is that it produces parts of the training set VERBATIM, sans copyright.
No, I am pretty optimistic that we will quickly come to a solution when we start using this to void all microsoft/github copyright.
Huh? "Legal universal agreement" has never been required to push something to the fringes in a particular country.
If (in the US) these models were declared to be copyright infringement, or the users were required to pay license fees to the creators of the data that was used to build the models, they would vanish from the public sphere. GitHub/Microsoft's legal department would pull Copilot down immediately, and development would effectively cease. No US company would sponsor development, and no company would allow in-house use. It would be dead.
Some dude might still run the model in his bedroom in his spare time on his own hardware, but that's what irrelevance looks like.
> And as computers get more powerful and the models get more efficient it'll become easier and easier to self host and run them on your own dime. There are already one click installers for generative models such as stable diffusion that run on modest hardware from a few years back.
If that's the only way you can run something, because it's illegal, you're describing a fringe technology right there.
The fantasy is the idea that doing what you describe will matter.
No, that's not true. Capitalists make money from simply owning things, not because they're necessarily doing anything useful.
Quite possible, but the details could be tweaked to be more viable. The underlying message of "your tools have done us wrong and we're going to drag your customers who benefitted in" could still get through.
You might think it's unreasonable to build such a house-burning robot, but you have to realize that I actually designed it as a lawn-mowing robot. The robot will simply not value your life or property because its utility function is descended from my own, so may burn your house down in the regular course of its duties (if it decides to use a powerful laser to trim the grass to the exact nanometer). Sorry neighbor.
What do you expect me to do? NOT build this robot? How dare you stand in the way of progress!
How has your team defined, specified and clearly articulated these issues with generation?
How do you test your generation to distinguish between fixing a problem vs reducing obvious true positives (i.e. unintentionally making the problem less visible without eliminating it)?
Without some communication on those fronts (which maybe I've just not seen yet), I'm not surprised that you get pushback against your product from people who feel like you're taking a cover-our-ass-and-YOLO approach.
Maybe not right this moment but our actions have consequences in the future.
For those who only see the next quarter, they're stoked.
For those who understand infinite growth is impossible and would simply like a livable world, they're horrified.
Currently, everything is extraction and the US is rotting from the inside out because of it.
That something is effectively public domain does not make it legal to use. This movie was in a thousand torrents, yet one gets still sued for uploading a kilobyte of it.
That it is hard or impossible to know if it is legal to use does not mean it is ok to do so. You need a source for the license that is able to compensate you for the damages you incur in case their license was invalid.
I'm not happy about either of these points, but that's how it is currently and just closing your eyes and hoping it will go away won't work.
Laws shouldn't be equated to ethics. There have been and will be countless ways to make money legally and unethically in any society.
I know cognitive biases are strong, but amongst a community that is at least reputed to be somewhat intellectual, the lack of similar sympathy for artists who say their work is being stolen is a bit too much of an irony here to ignore.
Unless you can figure out a method of determining whether someone owns the code that doesn't involve "try suing in court for copyright infringement and see if it sticks", we're kinda stuck. Just because a codebase contains an exact or similar snippet from another codebase doesn't mean that snippet reaches the threshold of copyrightable work. And the reverse: just because two code snippets look wildly different doesn't mean it's not infringement, and detecting that automatically is like solving the halting problem.
The thing you want for software to actually solve this is chain of custody which we don't have. If you require everyone assume everyone else could be lying or mistaken about infringement then using any open source project for anything becomes legal hot water.
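To make the detection problem concrete, here is a toy k-gram fingerprinter in the spirit of similarity tools like MOSS (a sketch under simplifying assumptions: real tools tokenize properly and winnow the fingerprint set, and even then renamed identifiers or reordered statements defeat naive matching):

```python
import hashlib
import re


def fingerprints(code, k=5):
    # Crude normalization: strip #-comments, lowercase, collapse whitespace.
    # A real tool would tokenize and replace identifiers with placeholders,
    # since a simple variable rename already defeats this scheme.
    stripped = [re.sub(r"#.*", "", line) for line in code.splitlines()]
    tokens = " ".join(stripped).lower().split()
    # Hash every window of k consecutive tokens into a set of fingerprints.
    return {
        hashlib.sha1(" ".join(tokens[i:i + k]).encode()).hexdigest()[:12]
        for i in range(max(len(tokens) - k + 1, 1))
    }


def overlap(a, b, k=5):
    # Jaccard similarity of the two fingerprint sets: 1.0 = identical.
    fa, fb = fingerprints(a, k), fingerprints(b, k)
    return len(fa & fb) / len(fa | fb) if fa | fb else 0.0
```

Even this toy version shows the asymmetry: a high overlap score proves nothing about copyrightability, and a low score proves nothing about non-infringement.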
In fact, when you upload code to Github you grant them a license to do things like "display it", which you can't do if you don't actually own the copyright or have a license. So even before the code is ever slurped into Copilot, the same exact legal situation arises as to whether Github is legally allowed to host the code at all. Can you imagine if, when you uploaded code to Github, you had to sign a document saying you owned the code and indemnifying Microsoft against any lawsuit alleging infringement? Oh boy, people would not enjoy that.
This is just fear mongering, the same exact thing can happen with a web browser, I click a link to view an image of a cat but... oops, it was actually a Getty copyrighted picture of a dog! Oh nooooo.
On the web that sort of thing is actually common, but bit torrent? I have never downloaded a torrent to find it was something other than what I expected. Never have I seen a movie masquerading as a Debian ISO. That's nothing more than a joke people use to make light of their (deliberate) copyright infringement.
Furthermore, is there even any bit torrent client that will recommend copyrighted content to you, rather than merely download what you tell it to? I've not seen one. Search engines, in my browser, do that sort of recommendation but bit torrent clients do what I tell them to. Including seeding to others, which is optional but recommended for obvious reasons.
The code in question is not something that anyone needs to own. Rather, it's what anyone would write, faced with the same problem. It's stupid to make humans do a robot's job in the name of preserving meaningless "IP rights".
[1] Things like "was it on the radio or a TV show or a live performance or a recording? who was the composer? which licensing region was it in?" etc.
Proposition: "They don't use private code".
Proof: "They said they don't use private code. Either the private code appearing is published somewhere else, or they are using private code. Lying would be bad. Therefore the code is published somewhere else, and they don't use private code".
Did you read the comment you're replying to at all? It says
>The Luddites were happy to operate the new looms, they just wanted to realize some of the profit from the savings in labor along with the factory owners.
Now maybe you agree maybe you disagree. But if you're just talking past the person you're replying to... what's the point?
In other words: things improved because of technology and despite the societal/economic framework, not because of it.
And how many workers even have the possibility of an arrangement like this, i.e. a worker-owned cooperative?
Yes, that is exactly the point. When a labour-saving technological development comes along, it's payday to the capital-having class and dreary times for the labour-doing class.
>hey I too have often been carried away by my own rhetoric but come on!
Because that's what people want. You can get high quality clothes for much cheaper than you could in 1816, but people prefer disposable clothes so they can change their look more often. This is just producers responding to demand.
I always found this weird while I was working at this company, but then, they have no reason to care about ephemeral threats that have never been brought to bear in a meaningful way. No consequences = no reason to spend literal billions retooling the entire tech side of your company over a decade.
Sorry, what?
Downloading copyrighted content is very, very rarely the problem.
It's the uploading (the sharing!) of copyrighted content where you actually get into trouble.
But more to the point, getting tricked into seeding a copyrighted movie by a torrent masquerading as a Debian ISO isn't something that actually happens. That's absurd FUD.
I agree, but I somehow doubt that will ever happen. Partly because MS is motivated to muddy the waters and shift norms towards allowing more of this kind of license-defying copying (because they make money from a product that does just that), and partly because the market for the most part doesn't think clearly about these issues. Many commenters here seem really fuzzy on the fact that nearly all code is, with or without an explicit statement as such, copyrighted (thanks, Berne convention), that that copyright (with or without documentation) is owned by someone, and that it is licensing which grants use of copyrighted work under specific circumstances. So as you say, the real problem is losing information about the license.
I'm grateful that the author of some LGPL'd code has triggered this discussion, since it's a more consequential license w.r.t. code reuse.
The repo: https://github.com/Shreeyak/cleargrasp
https://github.com/Shreeyak/cleargrasp/blob/master/api/depth...
It looks like the license of the repo is Apache 2.0
If your hope is that saying "it came out of our ML model" somehow removes Copilot from the well-established legal framework of licensing, I think you're wrong, and you are creating a minefield that I and others choose to stay well clear of. The revenue from Copilot, and the rest of MS, can probably pay your legal bills, but certainly not mine.
J. Random Hacker acquires and uses a copy of some of GitHub's, or Microsoft's source. When sued, the defense says that the code was not taken directly from GH/MS, just copied from a newsgroup where it had been posted. Does this get J. off the hook?
As a thought experiment, if one were to train a model on purely leaked and/or stolen source code, would the use of model step effectively "launder" the code and make later partial reuse legit?
My real worry is downstream infringement risk, since fair use is non-transitive. Microsoft can legally provide you a code generator AI, but you cannot legally use regurgitated training set output[1]. GitHub Copilot is creating all sorts of opportunities to put your project in legal jeopardy and Microsoft is being kind of irresponsible with how they market it.
[0] Note that we're assuming published work. Doing the exact same thing Microsoft did, but on unpublished work (say, for irony's sake, the NT kernel source code) might actually not be fair use.
[1] This may give rise to some novel inducement claims, but the irony of anyone in the FOSS community relying on MGM v. Grokster to enforce the GPL is palpable.
> "This is just fear mongering, the same exact thing can happen with a web browser, I click a link to view an image of a cat but... oops, it was actually a Getty copyrighted picture of a dog! Oh nooooo."
No-one cares whether you download an open-sourced photo of a cat or a copyrighted photo of a dog.
Why would anyone claim that?
It's a terrible comparison to torrents.
What a nightmare.
I'd say that constant code copying is massively pervasive, with no regard to licensing, always has been, and that's not really a bad thing, and attempts to stop it are going to be far more harmful than helpful.
Please don’t straw man¹. That’s neither what I said, nor what intended to convey, nor what I believe.
If this can leak so easy, it makes me wonder how safe api keys are. They are supposed to be hidden away, we know, but so is proprietary code.
The examples considered that: gambling, collecting interest, price gouging, underpaying workers, supporting laws to undermine competitors.
Maybe not as soon as possible, probably a lot of other problems that need immediate attention.
But some attention maybe? Nobody likes it if someone does a repost of a post, but some reposting could be seen as going "viral" so as a creator you're cool with it. If someone else does the whole going "viral" with your work, maybe it's less cool.
In spirit of the thread, if co-pilot generates some code to create a stable diffusion prompt explicitly for "Greg Rutkowski" art and it writes code your friend wrote for a small gig, well can you re-use his code for your own gig?
Can "Greg" even claim it was his art, in your opinion, even though it's probably not a verbatim copy?
I’m not saying they’re intentionally lying, but that one possible explanation is it looking through non public repositories
They also built a program that outputs open source code without tracking the license.
This isn't a human who read something and distilled a general concept. This is a program that spits out a chain of tokens. This is more akin to a human who copied some copyrighted material verbatim.
Also, register your code with the copyright office.
Edit: Apparently, with the #1 post on HN right now, you could also just go here: https://githubcopilotinvestigation.com/
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider
So the act of hosting copyrighted content is not actually a copyright violation for Github. They're not obligated to preemptively determine who the original copyright owner of some piece of code is, as they're not the judge of that in the first place. Even if you complain that someone stole your code, how is Github supposed to know who's lying? Copyright is a legal issue between the copyright holder and the copyright infringer. So the only thing Github is required to do is to respond to DMCA takedown notices.
What I am referring to is GitHub claiming that you are using their resources so they can break your license, when in fact you are not using their resources so they never made that agreement with you.
A car has all the information that it's going faster than the speed limit, or that it just ran a red light. But in the end it's the driver who is responsible. It's not the tool (car, Copilot) that commits the illegal act, it's the user using that tool
I can't say for sure about copilot but in general you don't have that kind of information. The problem is a bit like trying to add debug symbols back to some highly optimized binary program.
I'm from the UK, and we used to make motorbikes. They got - correctly - outcompeted by Japanese bikes in the 1950s that were built with more modern investment and tooling. If Japan hadn't done that, we'd have more motorcycle jobs in the UK, and terrible motorcycles that still leaked oil because the seam of the crankcase would still be vertical and not horizontal.
I'm not saying anything about this process is perfect and pain-free, but it seems that a lot of the things we have now are because of processes like this. Should Tesla sell through dealerships instead of direct to consumers? I think the answer is, "Tesla should do what's best for its customers", and not "Tesla should act to keep dealership jobs and not worry about what's best for its customers."
Businesses exist for their customers and not their employees, and having just been part of a business that, shall we say, radically downsized, I've seen a little of the pain of that. Thankfully it was a high tech business, and as the best employment protection is other employers, and there are loads of employers wanting tech skills I've seen my great colleagues all get new jobs. But I think it's ultimately disempowering to think of your employer like a superior when it should feel like an equal whose goals happen to coincide with yours for a while.
Can you elaborate on this? How can I become a capitalist so all my possessions start earning me money?
Proposition: "They either do not use private code or they did something very very stupid."
Proof: "Not using private code is very easy (for example, google does not train its models on workspace users' data, which is why they get inferior features) and they promised multiple times not to use private code, so doing it would be hard to justify"
Like I said; it is a great thing for me but I don’t believe developers without talent and/or rigorous foundations will make it. Go on Upwork and try to find someone who can do more than the same work (mostly copy paste) that they always did. In an interview when you ask someone to use map/reduce to create a map/dict, they will glaze over. This is the norm, not the exception, no matter the pay. Some of them have 10 years experience but cannot do anything else than make crud pages. This will end as copilot makes lovely .reduce and linq art from a human language prompt.
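For what it's worth, the map/reduce interview exercise described above is tiny; a sketch of one acceptable answer, building a dict from a list of pairs with functools.reduce:

```python
from functools import reduce

pairs = [("a", 1), ("b", 2), ("c", 3)]

# reduce threads an accumulator dict through the list of (key, value) pairs.
as_dict = reduce(lambda acc, kv: {**acc, kv[0]: kv[1]}, pairs, {})

# The same result, spelled the idiomatic way.
assert as_dict == {k: v for k, v in pairs}
```

The dict comprehension is what you'd ship; the point of the question is whether the candidate understands the accumulator pattern at all.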
From other comments, this developer's "private" code was found in 30k+ public repositories with public attribution, which is what created this issue.
Presumably your private code is not also present or leaked to any public repositories.
Capitalists make money from simply owning things, but that doesn't imply in the slightest that everything that can be owned produces income.
The classic example is a landlord: he collects income because he simply owns the land others need or want to use. He doesn't necessarily have to do any work that's useful to anyone else, not even maintenance or "capital allocation."
This specific one would not be a problem, but doing it with a still copyrighted work would be.
Yes, I know that presumably doesn't make them any safer from a Copilot-visibility point of view ... but I had to do something.
It basically happens like this:
"Oh this code solves our problems and has a nice community around it for network effects!"
**developers proceed to adopt codebase without checking the license**
**months later**
"Oh, huh this license has some interesting language in it..."
Then the employee doesn't mention it, because the risk of having to redo a bunch of work feels higher than the risk of getting in trouble for violating a license. Basically, unless it's Oracle, people just kinda shrug it off as a "wontfix".
My whole thing is that any system that depends on people reading and following a license is quite flawed in terms of enforcement, and is largely designed so that powerful incumbents can make claims, not individual developers.
Laws have to be enforced or people will ignore them. If there's no practical way to enforce a law that doesn't involve violating freedoms - you're kinda fucked.
Genuine question, not being snarky.
It is still your responsibility to know and obey the traffic laws, the same as it is your responsibility to obey the copyright laws.
Gambling - I don't do it, but I'd need more specifics to see why gambling is bad in this sense. It's a voluntary pursuit that I think is a bad idea, but that doesn't make it illegal.
Price gouging is still being useful, just at a higher price. Someone could charge me £10 for bread and if that was the cheapest bread available, I'd buy it. If it is excessive and for essential goods, it is increasingly illegal, however. 42 out of 50 states in the US have anti-gouging laws [0], which, as I say, isn't what I'm talking about. I'm talking about legal things.
Underpaying workers - this certainly isn't illegal, unless it's below minimum wage, but also "underpaying" is an arbitrary term. If there's a regulatory/legal/corrupt state environment in which it's hard to create competitors to established businesses, then that's bad because it drives wages down. Otherwise, wages are set by what both the worker and employer sides will bear. And, lest we forget, there is still money coming into the business by it being useful. Customers are paying it for something. The fact that it might make less profit by paying more doesn't undermine that fundamental fact.
As for supporting laws to undermine competitors, that is something people can do, yes. Microsoft, after their app store went nowhere, came out against Apple and Google charging 30% for apps. Probably more of a PR move than a legal one, but businesses trying to influence laws isn't bad, because they have a valid perspective on the world just as we all do, unless it's corruption. Which is (once more, with feeling) illegal, and so out of scope of my comment. And again, unless the laws are there to establish a monopoly incumbent, which is pretty rare, and definitely the fault of the government that passes the laws, the company is still only really in existence because it does something useful enough to its customers that they pay it money.