Jobs have been automated since the industrial revolution, but this usually takes the form of someone inventing a widget that makes human labor unnecessary. From a worker's perspective, the automation is coming from "the outside". What's novel with AI models is that the workers' own work is used to create the thing that replaces them. It's one thing to be automated away, it's another to have your own work used against you like this, and I'm sure it feels extra-shitty as a result.
That's a huge ethical issue whether or not it's explicitly addressed in copyright/IP law.
I'd just say the scale is different. Old-school automation required just one expert to guide the development of a machine; AI art requires the expertise of thousands.
Not if the worker is an engineer or similar. Engineers have always built tools that improved the building of tools.
And this started even earlier than the industrial revolution. Think, for example, of Johannes Gutenberg. His really important invention was not the printing press (which already existed), nor even movable type, but a process by which a printer could mold his own set of identical movable type.
I see a certain analogy between what Gutenberg's invention meant for scribes then and what Stable Diffusion means for artists today.
Another thought: in engineering we do not have extremely long-lasting copyright, but much shorter protection periods via patents. I have never understood why software has to be protected for such long copyright periods rather than for much shorter patent-like periods. Perhaps we should look for something similar for AI and artists: an artist has copyright as usual for close reproductions, but 20 years after publication the work may be used without his or her consent for training AI models.
The whole point of art is human expression. The idea that artists can be "automated away" is just sad and disgusting, and the number of people who want art but don't want to pay the artist is astounding.
Why are we so eager to rid ourselves of what makes us human to save a buck? This isn't innovation, it's self-destruction.
We've just made "learning style" easier, so a thing that was always a risk is now happening.
The real answer is that AI models are not people, and it is OK to have different rules for them; that is where the fight would need to be.
For someone seeking sound/imagery/etc. resulting from human expression (i.e., art), it makes sense that it can't be automated away.
For someone seeking sound/imagery/etc. without caring whether it's the result of human expression (e.g., AI artifacts that aren't art), it can be automated away.
The first is static 2D images that usually serve a commercial purpose, e.g. logos, clip art, game sprites, web page design, and the like.
And the second is pure art whose purpose is more for the enjoyment of the creator or the viewer.
Business wants to fully automate the first case, and most people view it as having nothing to do with the essence of humanity. It's simply dollars for products -- but it's also one of the very few ways that artists can actually have paying careers for their skills.
The second will still exist, although almost nobody in the world can pay bills off of it. And I wouldn't be shocked if ML models start encroaching there as well.
So a lot of what's being referred to is more like textile workers. And anyone who can type a few sentences can now make "art", significantly lowering barriers to entry. Maybe a designer comes in and touches it up.
The short-sighted part is people thinking that this will somehow stay specific to art and that their own cherished field is immune.
Programming will soon follow. Any PM "soon enough" will be able to write text to generate a fully working app. And maybe a coder comes in to touch it up.
Art-as-human-expression isn't going anywhere because it's intrinsically motivated. It's what people do because they love doing it. Just like people still do woodworking even though it's cheaper to buy a chair from Walmart, people will still paint and draw.
What is going to go away is design work for low-end advertising agencies or for publishers of cheap novels or any of the other dozens of jobs that were never bastions of human creativity to begin with.
Oh, life & death is different? Don't be so sure; there are good reasons to believe that livelihood (not to mention social standing) and life are closely related -- and besides, the fundamental point doesn't depend on the specific example: you can't point to an orders-of-magnitude change and then claim we're dealing with a situation that's qualitatively like it's "always" been.
"Easier" doesn't begin to honestly represent what's happened here: we've crossed a threshold where we have technology for production by automated imitation at scale. And where that tech works primarily because of imitation, the work of those imitated has been a crucial part of that. Where that work has a reasonable claim of ownership, those who own it deserve to be recognized & compensated.
Artists are poets, and they're railing against Trurl's electronic bard.
[https://electricliterature.com/wp-content/uploads/2017/11/Tr...]
There are a lot of working commercial artists in between the fine art world and the "cheap novels and low-end advertising agencies" you dismiss, and there's no reason to think AI art won't eat a lot of their employment.
I don't pay someone to run calculations for me, either, also a difficult and sometimes creative process. I use a computer. And when the computer can't, then I either employ my creativity, or hire a creative.
As far as money goes... in the long run artists will still make money, as people will value human-made (artisanal) works. Just as people like hand-made stuff today, even though you can get machine-made stuff way cheaper. You may not have the generic jobs of cranking out stuff for advertisements (and such), but you'll still have artists.
It's not even clear you're correct by the apparent (if limited) support of your own argument. "Transmission" of some sort is certainly occurring when the work is given as input. It's probably even tenable to argue that a copy is created in the representation of the model.
You probably mean to argue something to the effect that dissemination by the model is the key threshold by which we'd recognize something like the current copyright law might fail to apply, the transformative nature of output being a key distinction. But some people have already shown that some outputs are much less transformative than others -- and even that's not the overall point, which is that this is a qualitative change much like those that gave birth to industrial-revolution copyright itself, and calls for a similar kind of renegotiation to protect the underlying ethics.
People should have a say in how the fruits of their labor are bargained for and used. Including into how machines and models that drive them are used. That's part of intentionally creating a society that's built for humans, including artists and poets.
Commercial art needs to be eye catching and on brand if it's going to be worth anything, and a random intern isn't going to be able to generate anything with an AI that matches the vision of stakeholders. Artists will still be needed in that middle zone to create things that are on brand, that match stakeholder expectations, and that stand out from every other AI generated piece. These artists will likely start using AI tools, but they're unlikely to be replaced completely any time soon.
That's why I only mentioned the bottom tier of commercial art as being in danger. The only jobs that can be replaced by AI with the technology that we're seeing right now are in the cases where it really doesn't matter exactly what the art looks like, there just has to be something.
The ones at risk (and complaining the most) are semipro online artists who sell one image at a time, like fanart commissions.
- generic expression: commercial/pop/entertainment; audience makes demands on the art
- autonomous expression: artist's vision is paramount; art makes demands on the audience
Obviously these are idealized antipodes. The question of whether it is the art making demands on the audience or the audience making demands on the art is especially insightful, in my opinion. Given this rubric, I'd say AI-generated art must necessarily belong to "generic expression", simply because its output has to meet fitness criteria.
I can't copy your GPL code. I might be able to write my own code that does the same thing.
I'm going to defend this statement in advance. A lot of software developers white knight more than they strictly have to; they claim that learning from GPL code unavoidably results in infringing reproduction of that code.
Courts, however, apply a test [1], in an attempt to determine the degree to which the idea is separable from the expression of that idea. Copyright protects particular expression, not idea, and in the case that the idea cannot be separated from the expression, the expression cannot be copyrighted. So either I'm able to produce a non-infringing expression of the idea, or the expression cannot be copyrighted, and the GPL license is redundant.
[1] https://en.wikipedia.org/wiki/Abstraction-Filtration-Compari...
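A toy sketch of the idea/expression split the test above formalizes: one idea (computing a factorial) written as two independent expressions. Under that test, copyright would attach to each particular expression, not to the shared idea. (The function names and implementations here are my own illustrative inventions, not taken from any real codebase.)

```python
# Two independently written expressions of the same idea: factorial.
# If an idea admits many distinct expressions like these, protection
# attaches to each expression -- not to the idea itself.

def factorial_iterative(n):
    """One expression: accumulate the product in a loop."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n):
    """A different expression of the very same idea."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# Different code, identical behavior.
assert factorial_iterative(5) == factorial_recursive(5) == 120
```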
In the sense that art is a 2D visual representation of something, or a marketing tool that evokes a biological response in the viewer, art is easy to automate away. This is no different than when the camera replaced portraitists. We've just invented a camera that shows us things that don't exist.
In the sense that art is human expression, nobody has even tried to automate that yet, and I've seen no evidence that artists working in that mode are threatened.
Something being currently legal and possible doesn't mean it's morally right.
Technology enables things and sometimes the change is qualitatively different.
It's not possible for training an AI using data that was obtained legally to be copyright infringement. This is what I was talking about regarding transmission. Copyright provides a legal means for a rights holder to limit the creation of a copy of their image in order to be transmitted to me. If a rights holder has placed their image on the internet for me to view, then copyright does not provide them a means to restrict how I choose to consume that image.
The AI may or may not create outputs that can be considered derivative works, or contain characters protected by copyright.
You seem to be making an argument that we should be changing this somehow. I suppose I'll say "maybe". But it is apparent to me that many people don't know how intellectual property works.
Those engineers consented to creating the new tools, so that's different.
You're in for a rude awakening when you get laid off and replaced with a bot that creates garbage code that is slow and buggy but works, so the boss gets to save on your salary. "But it's slow, redundant, looks like it was made by someone who just copied and pasted endlessly from Stack Overflow" -- your boss won't care, he just needs to make a buck.
A derivative work is a creative expression based on another work that receives its own copyright protection. It's very unlikely that AI weights would be considered a creative expression, and would thus not be considered a derivative work. At this point, you probably can't copyright your AI weights.
An AI might create work that could be considered derivative if it were the creative output of a human, but it's not a human, and thus the outputs are unlikely to be considered derivative works, though they may be infringing.
This was my reply: https://news.ycombinator.com/item?id=34005604
I also agree that artist employment isn't sacred, but after extensive use of the generation tools I don't see them replacing anything but the lowest end of the industry, where they just need something to fill a space. The tools can give you something that matches a prompt, but they're only really good if you don't have strong opinions about details, which most middle tier customers will.
Both personal autonomy and private property are social constructs we agree are valuable. Stealing a car and raping a person are things we've identified as unacceptable and codified into law.
And in stark contrast, intellectual property is something we've identified as being valuable to extend limited protections to in order to incentivize creative and technological development. It is not a sacred right, it's a gambit.
It's us saying, "We identify that if we have no IP protection whatsoever, many people will have no incentive to create, and nobody will ever have an incentive to share. Therefore, we will create some protection in these specific ways in order to spur on creativity and development."
There's no (or very little) ethics to it. We've created a system not out of respect for people's connections to their creations, but in order to entice them to create so we can ultimately expropriate it for society as a whole. And that system affords protection in particular ways. Any usage that is permitted by the system is not only not unethical, it is the system working.
If the original is a creative expression, then recording it using some different tech is still a creative expression. I don't see the qualitative difference between a bunch of numbers that constitutes weights in a neural net, and a bunch of numbers that constitute bytes in a compressed image file, if both can be used to recreate the original with minor deviations (like compression artifacts in the latter case).
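A rough sketch of that analogy (a toy quantizer of my own devising, not any real codec or model): in both cases you store a bunch of numbers from which the original can be recreated with minor deviations.

```python
# Toy "lossy compression": the stored numbers are not the original
# values, but they suffice to recreate it with small deviations --
# the analogue of compression artifacts.

def compress(samples, step=10):
    """Quantize each value; precision is deliberately thrown away."""
    return [round(s / step) for s in samples]

def decompress(codes, step=10):
    """Recreate an approximation of the original from the stored numbers."""
    return [c * step for c in codes]

original = [3, 14, 15, 92, 65, 35]
stored = compress(original)
restored = decompress(stored)

# The reconstruction deviates from the original only slightly,
# bounded by the quantization step.
errors = [abs(a - b) for a, b in zip(original, restored)]
assert max(errors) <= 5
```

Whether the stored numbers are JPEG coefficients or network weights, the question is the same: can they be used to reproduce the protected original?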