HOWEVER, if a person were to ask for permission to use my pictures to feed into an AI to generate a number of images, and that person _selected_ a few and decided to sell them, I wouldn't have a problem with that. Something about the permission granted by the artist, and editing/filtering criteria being applied by a human, makes me feel OK with such use.
Edit: Silicon Valley exceptionalism seems to prevent some thought leaders in the field from remembering the full definition of copyright: it's an artist's exclusive right to copy, distribute, adapt, display, and perform a creative work.
A number of additional provisions, like fair use, are meant to balance artists' rights against public interest. Private commercial interest is not meant to be covered by fair use.
No one is disputing that everyone, including companies in the private sector, is entitled to use artists' images for AI research. But things like using AI-generated images for promotional purposes are not research, and not covered by fair use. You want to use the images for that, great -- ask for permission, and pay royalties. Don't mooch.
Stability AI knew they would be sued into the ground if they trained their music-generating equivalent, the 'Dance Diffusion' model, on thousands of musicians' works without their permission, so they used public domain music instead.
So of course they think it is fine to do it to artists' copyrighted images without their permission or attribution, as many AI grifters continue to drive everything digital to zero. That also includes Copilot being trained on AGPL code.
I would think that generating those images is okay by Disney, the same as if I painted them. The moment Disney would object is when I start selling them on merch, at which point it is irrelevant how they were created.
Am I mistaken?
Regardless, the human generating and publishing these images is obviously responsible for ensuring they are not violating any IP. So they might get sued by Disney. I don't get why the AI companies would be affected in any way. Disney is not suing Blender if I render an image of Mickey Mouse with it.
Though I am sure that artists might find an unlikely ally in Disney against the "AI"s when they tell them about their idea of making art styles copyrightable. Being able to monopolize art styles would indeed be a dream come true for those huge corporations.
On the other hand, nobody owns a copyright on a specific style. If I go study how to make art in the style of my favorite artist, that artist has no standing to sue me for making art in their style. So why would they have standing to sue for art generated by an AI which is capable of making art in their style?
[1] https://fishstewip.com/mickey-mouse-copyright-expires-at-the...
The challenging part is that these artists are protesting the use of 'style' in AI-synthesized media. That is, an artist's style is being targeted (or, even, multiple artists' styles are combined in a prompt to create a new AI-original work). This is not protected by copyright—if you draw a new scene in another artist's style, it would perhaps be unethical, but legally it is derivative work.
If the artists who are challenging these AI systems do get their way, and they are able to legally copy-protect their "style" (like a certain way of brush strokes), this would inevitably backfire against them. To give an example: any artist whose work now too closely resembles the "style" of Studio Ghibli might be liable for copyright infringement, where before the work would clearly be derivative, or just influenced by another work, as is the case with most art over time.
Copyright (in the US) was NOT in fact created to protect creators; it was created to encourage creation and advance science. Today copyright is being used to curb and monopolize creation and prevent advancement (case in point: this very story).
EDIT: Would Andy Warhol be sued by Campbell or Brillo?
If people are happy with metaverse A.I. generated images, projected in their minds, so be it. It is over. The rest is just an echo of human civilization. Transhumanistic clones are coming to town:)
Artists have always been inspired by each other and copied each other's styles and ideas.
I can appreciate that there are all kinds of potential "intellectual property" issues with the current glut of AI models, but the level of misunderstanding in some affected communities is concerning.
I don’t think “real art” will disappear. People will always want to create (although monetising that will now be far more difficult).
It feels like we are ripping the humanity out of life on a greater and greater scale with tech. Instead of replacing crappy jobs and freeing up people’s time to enjoy their lives, we’re actually automating enjoyable pursuits.
NB: when I’m referring to art I mean of all types as that’s where we are heading.
Would you mind if AI starts creating art like yours?
What if your clients tell you they bought the AI generated art instead of yours?
It’s blatantly obvious that, regardless of whether it works or not, they’re trying to get companies with enough money to file lawsuits to make a move and do so.
> I don’t see the point.
…or you don’t agree with the intent?
I’m fine with that, if so, but you’d have to be deliberately trying very hard not to understand what they’re trying to do.
Quite obviously they’re hoping to establish, much as with software that lets you download videos from YouTube, that tools which enable these things are treated as bad, not neutral.
Agree / disagree? Who cares. I can’t believe anyone who “doesn’t get it” is being earnest in their response.
Will it make any difference? Well, it may or may not, but there’s fair precedent of it happening, and bluntly, no one is immune to lawsuits.
This whole copyright / intellectual property idea is something that unfortunately cropped up in the 20th century, and the fact that it was codified into law is certainly not something 20th century humanity should be proud of or regard as progress.
AI replacing artists functionally is just the surface fear. The real problem is using AI as an automated method of copyright laundering. There's only so much hand-waving one can do to excuse dumping tons of art that you didn't make into a program, transforming it into similar art, and pretending you own it. People like to pretend that it's like a person learning and replicating a style, but it's not. It's a computer program and it's automated. That the process is similar is immaterial.
What we’re actually always talking about is “applied computational statistics”, otherwise known as ML.
And if an artist wants to sample from the distribution of beautiful images and painting and photographs as a source of inspiration, why not? We do it in other fields.
But using a computer to sample from that same distribution and adding nothing will be rightly rewarded by nothing.
Good stuff will still be good stuff, and it will keep being rare. The biggest change will be that producing mediocre content will be cheaper and more accessible, but we're already drowning in it, so .. meh?
> Instead of replacing crappy jobs and freeing up peoples time to enjoy their life, we’re actually automating enjoyable pursuits.
That's an interesting observation.
Now, as for training "AI" models, who knows. You can argue it is the same thing a human does, or you can argue it is a new, different quality that should be under different rules. Regardless, the current copyright laws were written before "AI" models were in widespread use, so whatever is allowed or not is more of a historic accident.
So the discussion needs to be about the intention of copyright laws and what SHOULD be.
Yeah, sure you'd mind. However, we have decided as a society that "style" is not protected.
Copying a work itself can be copyright infringement if it’s so close to the original that people may think they’re the same work.
So in effect, they are pitting Disney's understanding of copyright (maximally strict) against that of the AI companies (maximally loose).
Even if it's technically the responsibility of the user not to publish generated images that contain copyrighted content, I can't imagine that Disney is very happy with a situation where everyone can download Stable Diffusion and generate their own arbitrary artwork of Disney characters in a few minutes.
So that strategy might actually work. I wish them good luck and will restock my popcorn reserves just in case :)
The problem I see though is that both sides are billion dollar companies - and there is probably a lot of interest in AI tech within Disney themselves. So it might just as well happen that both sides find some kind of agreement that's beneficial for both of them and leaves the artists holding the bag.
Have you been to the internet?
In all seriousness, the cream will rise to the top. The mediocre “content” will get generated and we will get better at filtering it out which will decrease the value in generating mediocre content, etc etc. The tools being produced just further level the playing field for humanity and allow more people to get “in the arena” more easily.
Humans are still the final judge of the value being produced, and the world/internet will respond accordingly.
For a thought exercise, take your argument and apply it to the internet as a whole, from the perspective of a book or newspaper publisher in the 1990s.
Your line of reasoning sounds like “ah, we already won, so your protest doesn’t matter anyway”, but have you actually won? Do you really not need all of their work to draw at the same level? Then just show that.
And then someone comes along and competes with you?
—
No one is bothered by competition in markets.
Why do we have more or less empathy of this type for some professions?
I think attempting to prevent this is a losing battle.
We might be able to argue that a computer program taking art as input and automatically generating art as output is the exact same as an artist once general intelligence is reached; until then, it's still a machine transformation and should be treated as such.
AI shouldn't be a legal avenue for copyright laundering.
And practically speaking, putting aside whether a government should even be able to legislate such things, enforcing such a law would be near impossible without wild privacy violations.
It doesn't mean that. You could "find" Mickey in the latent space of any model using textual inversion and an hour of GPU time. He's just a few shapes.
(Main example: the most popular artist StableDiffusion 1 users like to imitate is not in the StableDiffusion training images. His name just happens to work in prompts by coincidence.)
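For the curious, here is a minimal sketch of what that looks like in practice, using the Hugging Face diffusers library; the embedding file and the <my-character> token are hypothetical placeholders. The point is that textual inversion maps a new token into the model's existing latent space without retraining it on any image of the concept:

    from diffusers import StableDiffusionPipeline
    import torch

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Load a learned embedding for a concept absent from the training set.
    # "learned_embeds.bin" and <my-character> are placeholder names.
    pipe.load_textual_inversion("./learned_embeds.bin", token="<my-character>")

    image = pipe("a watercolor painting of <my-character>").images[0]
    image.save("out.png")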
Given most of the heavy lifting is already done, this seems like a pretty easy thing for anyone to do.
There is a secondary issue: other people can now craft high-quality images with strong compositions without spending the years of effort/training that artists had to, so they are bitter about that too. But that's generally a minor cross-section of the public outcry, though they are quite vitriolic.
Photobashing, tracing, etc.: there has always been a layer of purists who look down on anyone who doesn't "put the effort in" yet gets great results in a timely manner. These purists will always exist. It was the same when digital painting was starting: digital painters were looked down on by oil painters for not putting the effort in, even though oil painters themselves used tricks like projecting onto the blank canvas to get perspective-perfect images. But that's just human nature to a degree: putting down other people while using tricks yourself to speed up the process.
Simply hiding behind an obsolete technicality is surely the wrong way to handle it.
It's very confusing, especially when you have to consider trademark as related but separate.
edit: the examples are all about objects, but my understanding is that it is capable of style transfers as well.
Go to a baker and commission a Mickey Mouse cake. Is that a violation if the bakery didn't advertise it? (To note, a bakery can't advertise it due to trademark, not copyright. Right?)
For that matter, any privately commissioned art? Is that really what artists want to lock away?
The law isn't there to protect my feelings, so whether I mind or not is irrelevant. Artists have had to deal with shifting art markets for as long as art has been a profession.
> What if your clients tell you they bought the AI generated art instead of yours?
I'd be sad and out of a source of income. Much the same way I would be if my clients hired another similar but cheaper artist. The law doesn't guarantee me a livelihood.
1) the artist is not literally copying the copyrighted pixel data into their "system" for training
2) An individual artist is not a multi billion dollar company with a computer system that spits out art rapidly using copyrighted pixel data. A categorical difference.
> Instead of replacing crappy jobs and freeing up peoples time to enjoy their life, we’re actually automating enjoyable pursuits.
Yeah, you really hit the nail on the head here. I thought a lot of the backlash against AI was due to workers not really reaping the benefits of automation, and that's a solvable problem. But I've seen a lot of artists who are retired or don't need to work dive into despair over this regardless. It's taking their passion away, not just their job.
I don't really know how we could stop it though without doing some sweeping Dune-level "Thou shalt not make a machine in the likeness of the human mind" type laws.
The punishment for breaking any of these rules is a lot of people yell at you on Twitter. Unfortunately, they've been at it so long that they now think these are actual laws of the universe, although of course they have pretty much nothing to do with the actual copyright law.
The actual law doesn't care if you're selling it or not either, at least not as a bright-line test.
(Japanese fanartists have a lot more rules, like they won't produce fan merch of a series if there is official merch that's the same kind of object, or they'll only sell fan comics once on a specific weekend, and the really legally iffy ones have text in the back telling you to burn after reading or at least not resell it. Some more popular series like Touhou have explicit copyright grants for making fanart as long as you follow a few rules. Western fanartists don't read or respect any of these rules.)
What then? Certainly an individual artist can't go and sell images of Mickey Mouse since it's still copyright infringement, but what claim would Disney have against the AI company?
I wrote in another comment that if you make the training of such models illegal regardless of distribution, it's essentially making certain mathematics illegal. That poses some very interesting questions around rights, whether others will do it anyways, and the practicality of enforcing such a rule in the first place.
> automatically generating art as output
The user is navigating the latent space to obtain said output; I don't know if that's transformative or not, but it is an important distinction.
If the program were wholly automated, as in it had a random number/word generator attached and no user navigated the latent space, then yeah, I would agree. But that's not the case, at least so far as ML models like Midjourney or Stable Diffusion are concerned.
In 2018[0], didn't Getty force Google to change how Google Images presented results, following a lawsuit in 2016[1]?
[0] https://arstechnica.com/gadgets/2018/02/internet-rages-after... [1] https://arstechnica.com/tech-policy/2016/04/google-eu-antitr...
On 1, human artists are copying copyrighted pixel data into their system for training. That system is the brain. It's organic RAM.
On 2, money shouldn't make a difference. Jim Carrey should still be allowed to paint even though he's rich.
If Jim uses Photoshop instead of brushes, he can spit out the style ideas he's copied and transformed in his brain more rapidly - but he should still be allowed to do it.
(That's as opposed to a large language model, which does memorize text.)
Also, you can train it to imitate an artist's style just by showing it textual descriptions of the style. It doesn't have to see any images.
The lack of empathy is incredibly depressing...
We already live in a time of artistic stagnation. With how much audio engineers manipulate pop music in Pro Tools, "fake" singers have been a practical reality for 20 years. Look at Marvel movies. Go to any craft fair on a warm day, or any artists' co-op, in a major city and try, try to find one booth that is not exactly like 5 other booths on display.
People have been arguing about what is "real art" for centuries. Rap music wasn't real because it didn't follow traditional, European modes and patterns. Photography wasn't real because it didn't take the skill of a painter. Digital photography wasn't real because it didn't take laboring in a dark room. 3D rendering wasn't real. Digital painting wasn't real. Fractal imagery wasn't real. Hell, anything sold to the mass market instead of one-off to a collector still isn't "real art" to a lot of people.
Marcel Duchamp would like to have a word.
If anything, I think AI tools are one of the only chances we have of seeing anything interesting break out. I mean, 99% of the time it's just going to be used to make some flat-ui, corporate-memphis, milquetoast creative for a cheap-ass startup in a second rate co-working space funded by a podunk city's delusions they could ever compete with Silicon Valley.
But if even just one person uses the tool to stick out their neck and try to question norms, how can that not be art?
No, it would just legislate which images are and are not allowed in the training data to be parsed; artists want a copyright regime that makes their images unusable for machine-learning derivative works.
The trick here is that eventually the algorithms will get good enough that it won't be necessary for said images to even be in the training data in the first place, but we can imagine that artists would be OK with that.
They shouldn't be OK with that and they probably aren't. That's a much worse problem for them!
The reason they're complaining about copyright is most likely coping because this is what they're actually concerned about.
The art equivalent of patent trolling or domain squatting basically. Is that possible legally?
Absolutely. Google previously had a direct link to the full-size image, but it has removed this due to potential legal issues. See [0].
> Is that a violation if the bakery didn't advertise it?
According to Disney, it is. See [1].
> Any privately commissioned art?
Not any art, no. Only that which uses IP/material they do not have a license to.
[0]: https://www.ghacks.net/2018/02/12/say-goodbye-to-the-view-im...
[1]: https://en.wikipedia.org/wiki/Cake_copyright#Copyright_of_ar...
You can, however, disallow Google from indexing your content using robots.txt, a meta tag in the HTML, or an HTTP header.
Or you can ask Google to remove it from their indexes.
Your content will disappear from then on.
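For reference, those three opt-out mechanisms look like this (these are the standard, documented conventions, shown as a quick sketch):

    # robots.txt at the site root: block Google's crawler entirely
    User-agent: Googlebot
    Disallow: /

    <!-- or, per page, a meta tag in the HTML head -->
    <meta name="robots" content="noindex">

    # or, for non-HTML resources, an HTTP response header
    X-Robots-Tag: noindex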
You can't un-train what's already been trained.
You can't disallow scraping for training.
The damage is already done and it's irreversible.
It's like trying to unbomb Hiroshima.
Then again, there should be some sort of solution so this can coexist with artists, and not replace them
At the dawn of mechanization, these same arguments were being made by the Luddites. I'd recommend reading about them; it was quite an interesting situation, same as now.
The reality is that advances such as these can't be stopped. Even if you forbid ML by legislation in the US, there are hundreds of other countries that won't care, same as happens with piracy.
Use these tools to 10x your own output and create new markets that arise due to the 10x modifier.
Going painting > raw photo (derivative work), raw photo > jpg (derivative work), jpg > model (derivative work), model > image (derivative work). At best you can make a fair use argument at that last step, but that falls apart if the resulting images harm the market for the original work.
you have rights.
AIs don't.
Because they don't have will.
It's like arresting a gun for killing people.
So, as a human, the individual(s) training the AI or using the AI to reproduce copyrighted material, are responsible for the copyright infringement, unless explicitly authorized by the author(s).
It's quite possible to apply the same kind of protections to generative models. (I hope this does not happen, but it is fully possible.)
A tool that catalogues attributed links can't really be evaluated the same way as a pastiche machine.
You'd be much closer using the example of Google's first page answer snippets, that are pulled out of a site's content with minimal attribution.
As long as the world is not entirely made of AI, there will always be some expertise to add, so instead of being afraid, you should just evolve with your time
I completely agree with it. Take a contemporary pianist for example, the amount of dedication to both theory and practice, posture, mastering the instrument and what not, networking skills, technology skills, video recording, music recording, social media management, etc.
This has always been the case. Most entertainment regardless of form (music, art, tv, games...) is mediocre or below mediocre, with the occasional good or even rarer exceptional that we all buzz about.
AI image gen is only allowing a wider range of people to express their creativity. Just like every other tools that came before it lowered the bar of entry for new people to get in on the medium (computer graphics for example allowed those who had no talent for pen and paper to flourish).
Yes, there will be a lot of bad content, but that's nothing out of the ordinary.
The matters of the baker and the privately commissioned art are more complicated. The artist and baker hold copyright for their creation, but their products are also derived from copyrighted work, so Disney also has rights here [1]. This is just usually not enforced by copyright holders, because who in their right mind would punish free marketing.
That might be a good way to go about it
A latent space that contains every image contains every copyrighted image. But the concept of sRGB is not copyrighted by Disney just yet.
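For scale (simple counting, nothing model-specific): the number of distinct 512x512 sRGB images is 256 raised to the number of channel values, a number with about 1.9 million digits, so "the latent space contains every image" is true but vacuous:

    import math

    # distinct 512x512 sRGB bitmaps: 256^(3 * 512 * 512).
    # Compute the digit count rather than the number itself.
    digits = 3 * 512 * 512 * math.log10(256)
    print(digits)  # ~1.9 million digits in that count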
It's up to us to distribute those gains back.
Because in my time the stakeholders in companies have never actually been decisive when scoping features.
Co-pilot is indeed the endgame for AI assisted programming. So I would say for art, someone mindful could train an AI on their own dataset and use that to accelerate their workflow. Imagine it drawing outlines instead of the full picture.
That's actually a tricky question and lengthy court battles were held over this in both the US and Europe. In the end, all courts decided that the image result page is questionable when it comes to copyright, but generally covered by fair use. The question is how far fair use goes when people are using the data in derivative work. Google specifically added licensing info about images to further cover their back, but this whole fair use stuff gets really murky when you have automatic scrapers using google images to train AIs who in turn create art for sale eventually. There's a lot of actors in that process that profit indirectly from the provided images. This will probably once again fall back to the courts sooner or later.
The job market will always keep changing; you have to adapt to it to a certain degree.
Now we can talk about supporting art as a public good and I am all for that but I don't see how artists are owed a corporate job. Many of my current programming skill will be obsolete one day, that's part of the game.
They probably aren't doing that. Studying the production methods and WIPs is more useful for a human. (ML models basically guess how to make images until they produce one that "looks like" something you show it.)
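To make that "guessing" concrete, here is a toy sketch in PyTorch (a stand-in convolution, not Stable Diffusion's actual U-Net): the model never stores a training image, it only learns to predict the noise that was mixed into one, scored by how close its guess is.

    import torch

    model = torch.nn.Conv2d(3, 3, 3, padding=1)  # toy stand-in for a U-Net
    opt = torch.optim.Adam(model.parameters())

    for step in range(100):
        image = torch.rand(1, 3, 64, 64)  # stand-in for a training image
        noise = torch.randn_like(image)
        noisy = image + noise             # corrupt the image with noise
        guess = model(noisy)              # model guesses the added noise
        loss = torch.nn.functional.mse_loss(guess, noise)
        opt.zero_grad(); loss.backward(); opt.step()

(Real diffusion training also scales the noise by a timestep schedule; this just shows the shape of the objective.)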
* Can a Copilot-like generator be trained with the GPL code of RMS? What is the license of the output?
* Can a Copilot-like generator be trained with the leaked source code of MS Windows? What is the license of the output?
If an AI will take care of most of the finicky details for me and let me focus on defining what I want and how I want it to work, then that is nothing but an improvement for everyone.
I think this isn't just a simple discussion on competition and copyright, I think it's a much larger question on humanity. It just seems like potentially a bleak future if enjoyable and creative pursuits are buried and even surpassed by automation.
If you have views on whether they'll win, the prediction market is currently at 49%: https://manifold.markets/JeffKaufman/will-the-github-copilot...
High-quality content rarely rises to the top. The internet as of 2022 optimizes for mediocrity: the most popular content is whatever manipulates best psychologically, using things like shock value and sexuality. Just take a look at Twitter, Facebook, or Reddit: it is extremely rare to see genuine masterpieces there. Everything is posted to farm as many shares and likes as possible.
If anything, this will result in the cream getting drowned in shit. Not to mention that artists do not get the space to develop from mediocre to excellent - as the mediocre market will have been replaced with practically free AI.
Automated transformation is not guaranteed to remove the original copyright, and for simple transformations it won't, but it's an open question (no legal precedent, different lawyers interpreting the law differently) whether what these models are doing is so transformative that their output (when used normally, not trying to reproduce a specific input image) passes the fair use criteria.
That said Microsoft didn't allow their kernel developers to look at Linux code for a reason.
Mind you, this is not talking about the usage rights of images generated from such a model, that's a completely different story and a legal one.
I think the two biggest differences between art AI and code AI are that (a) code that's only 95% right is just wrong, whereas art can be very wrong before a client even notices [0]; and (b) we've been expecting this for ages already, to the extent that many of us are cynical and jaded about what the newest AI can do.
[0] for example, I was recently in the Cambridge University Press Bookshop, and they sell gift maps of the city. The background of the poster advertising these is pixelated and has JPEG artefacts.
It's highly regarded, and the shop has existed since 1581, and yet they have what I think is an amateur-hour advert on their walls.
If it's not, why worry about it ?
You can still copyright characters separately. He's feigning ignorance of how copyright works to make a sensationalistic point, which pretty much invalidates and poisons what is otherwise an interesting argument at the boundary between derivative work and generative art.
What they were against, however, was companies using that technology to slash their wages while forcing them into significantly more dangerous jobs.
In less than a decade, textile work went from a safe job with respectable pay for artisans and craftsmen into one of the most dangerous jobs of the industrialised era with often less than a third of the pay and the workers primarily being children.
That's what the luddites were afraid of. And the government response was military/police intervention, breaking of any and all strikes, and harsh punishments such as execution for damaging company property.
Could you please elaborate on why it's "short-sighted"?
> As an artist, would you be much happier if, rather than the AI copying your style, the AI generated infinitudes of pictures in a style that the overwhelming majority of humans prefers to yours, so that you couldn't hope to ever create anything that people outside of a handful of hipsters and personal friends will value?
You mean that any artist should be just happy that his work is used by other people / rich corporation / AI without consent? Cool, cool.
What do you think cloud computing did? A lot of sysadmins, networking, backups, ops went the way of dinosaurs. A lot of programmers have also fallen on the side by being replaced with tech and need to catch up.
Wallowing in pity is not going to help; we saw a glimpse of this with GitHub Copilot. Some people built the hardware and the software behind these AIs; others are constructing the models and applying them to distinct domains. There's work to be done for those who wish to find their place in the new world.
Artists will survive through innovation.
I really doubt that AI will somehow be our successors. Machines and AI need microprocessors so complex that it took us 70 years of exponential growth and multiple trillion-dollar tech companies to train even these frankly quite unimpressive models. These AI are entirely dependent on our globalized value chains with capital costs so high that there are multiple points of failure.
A human needs just food, clean water, a warm environment and some books to carry civilization forward.
I know current AI is very different from an organic brain at many levels, but I don't know if any of those differences really matters.
Code should not need to be done by humans at all. There's no reason coding as it exists today should exist as a job in the future.
Any time I or a colleague are "debugging" something, I'm just sad we are so "dark ages" that the IDE isn't saying "THERE, humans, the bug is THERE!" in flashing red. The IDE has the potential to have perfect information, so where is the bug is solvable.
The job of coding today should continue to rise up the stack tomorrow to where modules and libraries and frameworks are just things machines generate in response to a dialog about “the job to be done”.
The primary problem space of software is in the business domain, today requiring people who speak barely abstracted machine language to implement -- still such painfully early days.
We're cavemen chipping at rocks to make fire still amazed at the trick. No empathy, just, self-awareness sufficient to provoke us into researching fusion.
It seems to me that the online discourse is very US-centric, thinking that the AI regulatory battles are in the future, when in some other countries it’s already over.
It still needs a human to tell it what to paint, and the best outputs generally require hours of refinement and then possibly touch-up in photoshop. It's not generating art on its own.
Artists still have a job in deciding what to make and using their taste to make it look good, that hasn't changed. Maybe the fine-motor skills and hand-eye coordination are not as necessary as they were, but that's it.
I would personally be astonished if any of the distributed systems I've worked on in my career were even close to 95% correct, haha.
It doesn't necessarily matter if they're affected. My thought when seeing this is that they want some legal precedent to be set which determines that this is not fair use.
So I don’t think art is “harder”. It’s just harder for the average practitioner/professional to find “success” (however you like to define it).
Warhol himself said that art "is anything you can get away with." He was clearly very much aware of the dubious legality of some of his work.
I think artists feeling like shit in this situation is totally understandable. I'm just a dilettante painter and amateur hentai sketcher, but some of the real artists I know are practically in the middle of an existential crisis. Feeling empathy for them is not the same as thinking that we should make futile efforts to halt the progress of this technology.
Because of how capitalism works and people always try to corner markets, extract value from other people, etc. etc.?
> If it's not, why worry about it ?
Because we can choose different professions that are less susceptible to automation? Or we can study DL to implement our own AI.
hear hear...
> Passively training a model on an artwork does not change the art in the slightest
copyright holders, I mean individual authors, people who actually produced the content being used, disagree.
They say AI is like a bulldozer destroying the park to them.
Which technically is true: it's a machine that someone (some interested party, maybe?) is trying to disguise as a human doing human stuff.
But it's not.
> passive, non-destructive
Passive, non-destructive, in this context means
- passive: people send the images to you, you don't go looking for them
- non-destructive: people authorized you, otherwise it's destructive of their rights.
It would be great if there was an AI that could be a liaison between developers and stakeholders, translating the languages of each side for mutual understanding.
If an AI were to make it impossible to make a living doing programming, would that be an improvement for most readers of this site?
"Mickey" does work as a prompt, but if they took that word out of the text encoder he'd still be there in the latent space, and it's not hard to find a way to construct him out of a few circles and a pair of red shorts.
It's still worth it on the whole but I have already gotten caught up on subtly wrong Copilot code a few times.
Art and programming are hard for different reasons.
The difference in the AI context is that a computer program has to do just about exactly what's asked of it to be useful, whereas a piece of art can go many ways and still be a piece of art. If you know what you want, it's quite hard to get DALL-E to produce that exactly (or it has been for me), but it still generates something that is very good looking.
Lawyers are going to have a lot of fun$$ with the copyright/trademark violation flood that is coming (and not only for high profiles).
I noticed this mediocrity decades ago, when artists started using computers to create art. For me, that’s when it went downhill.
Targeting and distribution as well. AI has the edge on individual creators here.
Not disagreeing with your comment, but this is not the case with Midjourney. Very little is needed to produce stunning images. But AFAIK they modify/enhance the prompts behind the scenes.
Intellectual property concepts in their current form started to appear as soon as prints, so about the 15th century.
https://en.wikipedia.org/wiki/History_of_copyright#Early_dev...
People keep saying this without defining what exactly they mean. This is a technical topic, and it requires technical explanations. What do you think "mostly copying" means when you say it?
Because there isn't a shred of original pixel data reproduced from training data through to output data by any of the diffusion models. In fact there isn't enough data in the model weights to reproduce any images at all, without adding a random noise field.
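The arithmetic on the (approximate) public figures makes the point: the SD 1.x checkpoint is roughly 4 GB, trained on on the order of two billion LAION image-text pairs.

    # back-of-the-envelope storage budget per training image (rough figures)
    weights_bytes = 4e9      # ~4 GB Stable Diffusion 1.x checkpoint
    training_images = 2e9    # ~2 billion LAION image-text pairs
    print(weights_bytes / training_images)  # ~2 bytes per image: no room to memorize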
> The benefits of allowing this will be had by a very small group of corporations and individuals
You are also grossly mistaken here. The benefits of heavily restricting this, will be had by a very small group of corporations and individuals. See, everyone currently comes around to "you should be able to copyright a style" as the solution to the "problem".
Okay - let's game this out. US Copyright lasts for the life of author plus 70 years. No copyright work today will enter public domain until I am dead, my children are dead, and probably my grandchildren as well. But copyright can be traded and sold. And unlike individuals, who do die, corporations as legal entities do not. And corporations can own copyright.
What is the probability that any particular artistic "style" - however you might define that (whole other topic really) - is truly unique? I mean, people don't generally invent a style on their own - they build it up from studying other sources and come up with a mix. Whatever originality is in there is more a function of mutation of their ability to imitate styles than anything else - art students, for example, regularly do studies of famous artists and intentionally try to copy their style as best they can. A huge amount of content tagged "Van Gogh" in Stable Diffusion is actually Van Gogh look-alikes, or content literally labelled "X in the style of Van Gogh". It had nothing to do with the original man at all.
The answer, for all practical purposes, is zero. There are no truly original art styles. Which means that in a world with copyrightable art styles, all art styles eventually end up as part of corporate-owned styles. Or the opposite is also possible - maybe they all end up in the public domain. But in both cases the answer is the same: if "style" becomes copyrightable, and AIs can reproduce it in some way you can prove, then literal "prior art" for any particular style will invariably be found in an existing AI dataset. Any new artist with a unique style will invariably be found to simply be 95% a blend of other known styles, according to an AI which has existed for centuries and been producing output constantly.
In the public-domain world, we wind up approximately where we are now: every few decades, old styles get new words keyed to them as people want to keep up with some new rising artist who's captured a unique blend in the zeitgeist. In the corporate world, though, the more likely one, Disney turns up with its lawyers and says "we're taking 70% or we're taking it all".
The linked item was posted within the past 24 hours. The referenced images also appear to be current so far as I can tell.
(I'd looked for a more substantial post or article without luck when submitting this.)
The steady progress of the Industrial Revolution that has made the average person unimaginably richer and healthier several times over, looks in the moment just like this:
"Oh no, entire industries of people are being made obsolete, and will have to beg on the streets now".
And yet, as jobs and industries are automated away, we keep getting richer and healthier.
However, art isn't solely interpolation. The critical part is that art styles shift around due to innovations or new viewpoints, often caused by societal development. AI might be able to make a new Mondriaan when trained on pre-existing Mondriaans but it won't suddenly generate a Mondriaan out of a Van Gogh training set - and yet that's still roughly what happened historically.
It may be that one day AI will also make their creators obsolete. But at that point so many professions will be replaced by it already, that we will live in a massively changed society where talking about the "job" has no meaning anymore.
Edit: Typo
Copying an artist's style isn't in and of itself looked down upon; any artist will tell you that doing so is an important part of figuring out which aspects of it one likes for their own style. The problem with AI copying it is that the vast majority of users aren't using it for artistic expression. The majority of them are simply spamming images out in an attempt to gain a popularity "high" from social media, without regard for any of the features of typical creative pursuits (an enjoyment of the process, an appreciation for others' effort, a desire to express something through their creativity, having some unique intentional and unintentional identifying features).
Honestly maybe the West messed up having such broad fair use protections since it seems people really have no respect for any creative effort, judging by all the AI art spam and all the shortsighted people acting smug about it despite the questions around it being pretty important to have a serious conversation about, especially for pro-AI folk.
The AI art issue has several difficult problems that we are seemingly too immature to deal with, it makes it clear how screwed we'd be as a society if anything approaching true AGI happened to be stumbled upon anytime soon.
> Who can't contribute to Wine?
> Some people cannot contribute to Wine because of potential copyright violation. This would be anyone who has seen Microsoft Windows source code (stolen, under an NDA, disassembled, or otherwise). There are some exceptions for the source code of add-on components (ATL, MFC, msvcrt); see the next question.
I've seen a few MIT/BSD projects that ask people not to contribute if they have seen the equivalent GPL project. It's a problem because Copilot has seen "all" GPL projects.
<https://waxy.org/2019/12/how-artists-on-twitter-tricked-spam...>
If art streams are tree-spiked with copyrighted or trademarked works, then AI generators might be a bit more gun-shy about training with abandon on such threads.
It's a form of monkeywrenching.
<https://en.wikipedia.org/wiki/Tree_spiking>
<https://en.wikipedia.org/wiki/Sabotage#As_environmental_acti...>
The through line for a lot of mediocre stuff is the intention of the artist/creator to appeal to as broad a demographic/audience as possible so as to dissolve away anything that makes the art interesting, challenging, and good.
Rendering was only ever a small part of the visual arts process anyway. And you can still manually add pixel perfect details to these images by hand that you wouldn't know how to create an AI prompt for. And further, you can mash together AI outputs in beautifully unique and highly controlled ways to produce original compositions that still take work to reproduce.
To me, these AI's are just a tool for increased speed, like copy and paste.
Copyright is not the same as intellectual property.
Copyright is not an intellectual property concept.
They're very different things, though often conflated.
We can probably do all that well enough (it probably doesn't need to be perfect) by leaning on FAANG, with or without legislation.
But: opt-in by default, or opt-out by default?
And that was the main reason for "modern art". A camera can do a portrait or landscape instantly and more precisely than a painter, but it can't compete on abstract or imagined pictures.
Will something analogous happen when AIs takes over other industries? I have no clue, but it will, as always, be interesting to see what happens.
https://waxy.org/2019/12/how-artists-on-twitter-tricked-spam...
I feel like this about the mostly-human-created fashion. In my not so long lifetime I've seen everything from the 90s making a comeback. Ultimately I guess in terms of clothing that is practical with the materials that are available, we've already cycled through every style there is, such that the cycle time is now <30years.
Training a model with artists' work seems completely fine to me. If something is out in the world and you can see it, you can't really control how that affects a person or a model or whatever.
The actual issue is reproduction of trademarked and copyrighted material. There are already restrictions on how you can use Mickey Mouse's likeness in any derivative work. That's not an AI issue. It's an IP issue. The derivative works are no different than if I, a person, produced the same derivative work.
It would be funny to me if we had to turn our attention to training AIs in IP law.
Of course software gets copied all the time, but we have jobs because so much bespoke software is needed. Looking at some of what AI can do now, I wouldn't be surprised if our floor gets raised a lot in the next few years as well.
Are artists really "doomed"? Or are they just worse at redistribution?
While technically both artists and developers make their living by producing copyrighted works, our relationship to copyright is very different; while artists rely on copyright and overwhelmingly support its enforcement as-is, many developers (including myself) would argue for a significant reduction of its length or scale.
For tech workers (tech company owners could have a different perspective) copyright is just an accidental fact of life, and since most of paid development work is done as work-for-hire for custom stuff needed by one company, that model would work just as well even if copyright didn't exist or didn't extend to software. While in many cases copyright benefits our profession, in many other cases it harms our profession, and while things like GPL rely on copyright, they are also in large part a reaction to copyright that wouldn't be needed if copyright for code didn't exist or was significantly restricted.
-> here is the actual judgement though: https://juris.bundesgerichtshof.de/cgi-bin/rechtsprechung/do...
Art is more difficult than programming for people with talents in programming but not in art. Art is easier than programming for people with talents in art but not in programming. Granted, those two sentences are tautologies, but they are nonetheless a reminder that the difficulty of art and programming does not form a total order.
If you want to give programming work to an AI, give it the things where incorrect behaviour is going to be really obvious, so that it can be fixed. Don't give it the stuff where everyone will just naively trust the computer without thinking about it.
I don't think people are debating fair use for education and research. It's the obvious corporate and for profit use which many see coming that is the issue. Typically, licensing structures were a solution for artists, but "AI" images seem to enable for-profit use by skirting around who created the image by implying the "AI" did, a willful ignorance of the way that the image was generated/outputted.
> code that's only 95% right is just wrong,
I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.
I don't think you were actually thinking of this in your comparison, but I think it's actually a great analogy - code, like art, can be 95% complete, and that's usually enough. (For art, looks good and is what I wanted is enough, for code, does what I want right now, nevermind edge cases is enough.)
ChatGPT isn't threatening programmers, for a couple of reasons. Firstly, its code isn't 95% good, it's more like 80% good.
Secondly, we do a lot more than write one-off pieces of code. We write much, much larger systems, and the connections between different pieces of code, even on a function-to-function level, are very complex.
You'll find out because you're now an enlightened immortal being, or you won't find out at all because the thermonuclear blast (or the engineered plague, or the terminators...) killed you and everybody else.
Does that mean there won't be some enterprising fellas who will hook up a chat prompt to some website thing? And that you can demo something like "Add a banner. More to the right. Blue button under it" and that works? Sure. And when it's time to fiddle with the details of how the bloody button doesn't do the right thing when clicked, it's back to hiring a professional that knows how to talk to the machine so it does what you want. Not a developer! No, of course not, no, no, we don't do development here, no. We do prompts.
What are those? It seems it's low-margin, physical work that's seeing the least AI progress. Like berry picking. Maybe also work that will be kept AI-free longer by regulators like being a judge?
But at least every job I've had so far also entailed understanding the entire system, the surrounding ecosystem, upstream and downstream dependencies and interactions, the overall goal being worked toward, and playing some role in coming up with the requirements in the first place.
ChatGPT can't even currently update its fixed-in-time knowledge state, which is entirely based on public information. That means it can't even write a conforming component of a software system that relies on any internal APIs! It won't know your codebase if it wasn't in its training set. You can include the API in the prompt, but then that is still a job for a human with some understanding of how software works, isn't it?
Ultimately, though, this isn't a technical problem but an economic one about how we as a society decide to share our resources. AI grows the pie but removes the leverage some people have to claim their slice. Automation is why we'll inevitably need UBI at some point.
It's fair to suppose (albeit based on a very small sample size, i.e., the last couple hundred, abnormal years of history) that all sorts of new jobs will arise as a result of these changes- but it seems to me unreasonable to suppose that these new jobs of the future will necessarily be more interesting or enjoyable than the ones they destroyed. I think it's easy to imagine a case in which the jobs are all much less pleasant (even supposing we all are wealthier, which also isn't necessarily going to be true)- imagine a future where the remaining jobs are either managerial/ownership based in nature or manual labor. To me at least, it's a bleak prospect.
Now imagine a future where AI can assist in law. Or should we not have that because lawyers pay so much for education and they work so bitterly? Should we do away with farm equipment as well? Should we destroy musical synths so that we can have more musicians?
It’s one thing to say we should have a government program to ease transitions in industry. It’s something else to say that we should hold back technological progress because jobs will be destroyed.
How do we develop a coherent moral framework to address this matter?
There are non-infringing use cases for generating images containing Mickey Mouse - not least, Disney themselves produce thousands of images containing the mouse's likeness every year; but parody use cases exist as well.
But even if you are just using SD to generate images, if we want to make sure to avoid treading on Disney's toes, the AI would need to know what Mickey Mouse looks like in order to avoid infringing trademark, too. You can feed it negative weights already if you want to get 'cartoon mouse' but not have it look like Mickey.
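As a sketch of that negative-weighting idea with the Hugging Face diffusers API (the prompts are illustrative; this steers sampling away from a concept rather than guaranteeing a non-infringing result):

    from diffusers import StableDiffusionPipeline
    import torch

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # negative_prompt is weighted against during classifier-free guidance
    image = pipe(
        prompt="a cheerful cartoon mouse, vintage rubber-hose style",
        negative_prompt="Mickey Mouse, Disney",
    ).images[0]
    image.save("not-mickey.png")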
The AI draws what you tell it to draw. You get to choose whether or not to publish the result (the AI doesn't automatically share its results with the world). You have the ultimate liability and credit for any images so produced.
To me, code that is 95% correct will either fail catastrophically or give very wrong results. Imagine if the code you wrote was off 5% for every number it was supposed to generate. Code that is 99.99% correct will introduce subtle bugs.
* No shade to ChatGPT; writing a function that calculates SHAP values is tough, lol, I just wanted to see what it could do. I do think that, given time, it'll be able to write a day's worth of high-quality code in a few seconds.
Which is what most humans do, and what most humans need.
But currently, first, there is a reasonable argument that the model weights may be not copyrightable at all - it doesn't really fit the criteria of what copyright law protects, no creativity was used in making them, etc, in which case it can't be a derivative work and is effectively outside the scope of copyright law. Second, there is a reasonable argument that the model is a collection of facts about copyrighted works, equivalent to early (pre-computer) statistical ngram language models of copyrighted books used in e.g. lexicography - for which we have solid old legal precedent that creating such models are not derivative works (again, as a collection of facts isn't copyrightable) and thus can be done against the wishes of the authors.
Fair use criteria comes into play as conditions when it is permissible to violate the exclusive rights of the authors. However, if the model is not legally considered a derivative work according to copyright law criteria, then fair use conditions don't matter because in that case copyright law does not assert that making them is somehow restricted.
Note that in this case the resulting image might still be considered derivative work of an original image, even if the "tool-in-the-middle" is not derivative work.
I like the explanation a lot, and I think the timelines line up pretty well.
But sure, it could be one of those stories that sound true, but isn't.
In any case, in the example images here, the AI clearly knew who Mickey is and used that to generate Mickey Mouse images. Mickey has got to be in the training data.
Imagine you are a painter and you have developed your expertise in photorealistic painting over your entire lifetime.
Would you mind if someone snaps a photograph of the same subject you just painted?
What if your commissioners tell you they decided to buy a photograph instead of your painting because it looked more realistic?
Every argument I've seen against AI art is an appeal to (human) ego or an appeal to humanity. I don't find either argument compelling. Take this video [0] for example and half of the counterarguments are an appeal to ego - and one argument tries to paint the "capped profit" as a shady dealing of circumventing laws without realizing (1) it's been done before, OpenAI just tried slapping a label on it and (2) nonprofits owning for-profit subdivisions is commonplace. Mozilla is both a nonprofit organization (the Foundation) and a for-profit company (the Corporation).
E:
I'm going to start a series of photographs that are intentionally bad and poorly taken. Poor framing, poor lighting, poor composition. Boring to look at, poor white balance, and undersaturated photos like the kind taken on overcast days. With no discernable subjects or points of interest. I will call the photos art - things captured solely with the press of a button by pointing my camera in a direction seemingly at random. I'm afraid many won't understand the point I am making but if I am making a point it does make the photographs art - does it not? I'm pretty sure that is how modern art works. I will call the collection "Hypocrisy".
E2:
The first photo of the collection to set the mood - a picture of the curtain in my office: https://kimiwo.aishitei.ru/i/mUjQ5jTdeqrY3Vn0.jpg
Chosen because it is grey and boring. The light is not captured by the fabric in any sort of interesting manner - the fabric itself is quite boring. There is no pattern or design - just a bland color. There is nothing to frame - a section of the curtain was taken at random. The photo isn't even aligned with the curtain - being tilted some 40 odd degrees. Nor is the curtain ever properly in focus. A perfect start for a collection of boring, bland photos.
[0] https://www.youtube.com/watch?v=tjSxFAGP9Ss&feature=youtu.be
The role that is possibly highly streamlined by a near-future ChatGPT/Copilot is the requirements-gathering business analyst, but developers at Staff level and up sit closer to requiring AGI to even become 30% good. We'll likely see a bifurcation/barbell: Moravec's Paradox on one end, AGI on the other.
An LLM that can transcribe a verbal discussion with a domain expert about a particular business process with high fidelity, give a precis of domain jargon to a developer in a sidebar, extract further jargon created by the conversation, summarize the discussion into documentation, and capture the hows and whys like a judicious editor might at 80% fidelity, then put out semi-working code at even 50% fidelity, all while working 24x7x365 and automatically incorporating everything from GitHub it created for you before and that your team polished into working code and final documentation?
I have clients who would pay for an initial deployment of that: an appliance/container head-end that transits the processing through the vendor SaaS's GPU farm but holds the model data at rest within their network / cloud account boundary. Being able to condense weeks or even months of a team's work into several hours, with a handful of developers to tighten and polish the result, would be interesting to explore as a new way to work.
I don't think that is something a society absolutely must guarantee. People are made obsolete all the time.
What needs to be done is to produce new needs that cannot currently be serviced by the new AIs. I'm sure that will come - as it has for the past hundred years whenever technology supplanted an existing corpus of workers. A society can make this transition smoother - with things such as a decent social safety net and low-cost/free education for retraining into a different field.
In fact, these things are all sorely needed today, without having the AIs' disruptions.
But that is giving AI too much credit. As advanced as modern AI models are, they are not AGIs comparable to human cognition. I don't get the impulse to elevate/equate the output of trained AI models to that of human beings.
But what if AI generates art where humans do not scale?
For example, what if the AAA game you are expecting gets done in half the time, or has ten times the explorable area, because it is cheap and fast to generate much of the needed art with AI?
Or if some people who are excellent at storytelling but mediocre at drawing can now produce world-class manga with the assistance of AI?
This is the main reason I haven't actually incorporated any AI tools into my daily programming yet - I'm mindful that I might end up spending more time tracking down issues in the auto-generated code than I saved using it in the first place.
[0] You can see the results here https://twitter.com/NickFisherAU/status/1601838829882986496
In the same way, making the model deliberately unable to generate Mickey Mouse images would be much more far-reaching than just removing Mickey imagery from the training set.
Of course, that probably means those copyrighted images exist in some encoded form in the data or neural network of the AI - and also in our brains. Is that legal? With humans it's unavoidable, but that doesn't have to mean it's also legal for AI. Even though copyrighted images exist in some form in our brains, we know not to reproduce them and pass them off as original; the AI does exactly that. Maybe it needs a feedback mechanism to ensure its generated images don't look too much like copyrighted images from its data set. Maybe art-AI necessarily also has to become a bit of a legal-AI.
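A minimal sketch of what that feedback mechanism could look like, assuming some perceptual embedding model (the embeddings are taken as given here, and the 0.95 threshold is an arbitrary illustrative choice):

    # Sketch: flag generated images whose embedding is suspiciously close
    # to the embedding of a known copyrighted image. The embedding model
    # itself (e.g. something CLIP-like) is assumed and not shown.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def too_similar(generated: np.ndarray,
                    references: list[np.ndarray],
                    threshold: float = 0.95) -> bool:
        # True if the generated image sits within `threshold` cosine
        # similarity of any known copyrighted work's embedding.
        return any(cosine_similarity(generated, r) >= threshold
                   for r in references)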
I'm sure your employer would love that more than you. That's the issue here.
> That said while these tools are incredibly impressive, having messed with this for a few days to try to even do basic stuff, what am I missing here? It is a nice starting point and can be a productivity boost but the code produced is often wrong and it feels a long way away from automating my day to day work.
This is the first iteration of such a tool and it's already very competent. I'm not even sure I'm better at writing code than GPT; the only thing I can do that it can't is compile and test the code I produce. If you asked me to create a React app from a two-sentence prompt and didn't allow me to search the internet, compile, or test it, I'm sure I'd make more mistakes than GPT, to be honest.
But that is not "programming". That is gluing together bullshit until it works, and the results of that "work" are "blessing" us every day. The gift that keeps on giving. You FAANG people are indeed astronomically, immorally overpaid and actively harm the world.
But, luckily, the world has more layers than that. Programming for Facebook is not the same as programming for a small chemical startup or programming in any resource-restricted environment where you can't just spin up 1000 AWS instances at your leisure and you actually have to know what you're doing with the metal.
Be entertaining. Be outrageous. Be endearing. An AI can't cut off their ear.
Clearly ChatGPT is going to improve, and AI development is moving at a breakneck pace and accelerating. Dinging it for totally fumbling 5% or 10% of written code is completely missing the forest for the trees.
Sounds like you are, because in copyright law there is no carve-out for only non-profit education/research. Research and education can be either for-profit or non-profit; copyright law does not distinguish between the two. But it sounds like your claim is that research can only ever be non-profit, which I find a bit odd given that the entire computing sector in large part owes itself to commercial research (e.g. Bell Labs).
I disagree that there is no originality in art styles; human creativity amounts to more than just copying other people. There is no way a current-gen AI model could create truly original mathematics or physics; it is just able to reproduce facsimiles and convincing bullshit that looks like it. Before long the models will probably be able to do formal reasoning in a system like Lean 4, but that is a long way off from truly inventive mathematics or physics.
Art is more subtle, but what these models produce is mostly "kitsch". It is telling that their idea of "aesthetics" involves anime fan art and other commercial work. Anyways, I don't like the commercial aspects of copyright all that much, but what I like is humans over machines. I believe in freely reusing and building on the work of others, but not on machines doing the same. Our interests are simply not aligned at this point.
The original copyright term was 14 years, not the life of the author.
Copyright infringement generally requires you to have been aware of the work you were copying. So there is certainly an issue with using AI to generate art: the tool could generate an image that you think looks original, because you are unaware of a similar original work, so you could not be guilty of copyright infringement yourself - but if the AI model was trained on a dataset that includes a similar original copyrighted work, it seems obvious that someone has infringed something there.
But that's not what we're talking about in the case of mickey mouse imagery, is it? You're not asking for images of 'utterly original uncopyrighted untrademarked cartoon mouse with big ears' and then unknowingly publishing a mouse picture that the evil AI copied from Disney without your knowledge.
Say it with me: Computer algorithms are NOT people. They should NOT have the same rights as people.
I fully expect there will be zero reciprocation. There will, instead, be a strong expectation that that empathy turns into centering of fear and a resulting series of economic choices. AI systems are now threatening the ability of some artists to get paid and those artists would like that to stop.
I think we're seeing it right now. You shift effortlessly from talking about empathy to talking about the money. You consider the one the way to get the other, so you deplore the horrifying lack of empathy.
Let me put it another way. Would you be happy if you saw an outpouring of empathy, sympathy, and identification with artists coupled with exactly the same decisions about machine learning systems?
So any transformativity of the action should be attributed to the human and the same copyright laws would apply.
The question is perhaps not if we should have empathy for them. The question is what we should do with it once we have it. I have empathy for the cabbies with the Knowledge of London, but I don't think making any policy based on or around that empathy is wise.
This is tricky in practice. A surprising number of people regard prioritizing the internal emotional experience of empathy in policy as experiencing empathy.
Art can also be extremely wrong in a way everyone notices and still be highly successful. For example: Rob Liefeld.
I watched a documentary in roughly the early oughts about AI. The presenter might have been Alan Alda.
In one segment, he visited some military researchers who were trying to get a vehicle to drive itself. It would move only a few inches or feet at a time as it had to stop to recalculate.
In another segment, he visited some university researchers who set up a large plotter printer to make AI-generated art. It was decent. He saw it could depict things like a person and a pot, so he asked if it would ever do something silly to us like put a person in a pot. The professor said not to be silly.
To jokingly answer the title question: everyone who saw that one specific documentary 20 years ago knew that AI art was way ahead of AI machines.
Art is useful when someone subjectively finds it enjoyable or meaningful. While it might not achieve all of what humans can, the barrier to entry is relatively lower.
In a program, you can't really afford that. A small mistake can have dramatic consequences. Now, maybe in the next few years you'll only need one human supervisor fixing AI bugs where you used to need 10 high-end developers, but you probably won't be able to make reliable programs just by typing a prompt, the way you can currently generate a cover for an e-book just by asking midjourney.
As for the political consequences of all of this, this is yet another issue.
It would be very easy to make training ML models on publicly available data illegal. I think that would be a very bad thing because it would legally enshrine a difference between human learning and machine learning in a broader sense, and I think machine learning has huge potential to improve everyone's lives.
Artists are in a similar position to grooms and farriers demanding the combustion engine be banned from the roads for spooking horses. They have a good point, but could easily screw everyone else over and halt technological progress for decades. I want to help them, but want to unblock ML progress more.
Which is pretty close to the actual issue here, that artists did not give their permission to use their own work to generate their competition.
https://en.m.wikipedia.org/wiki/Scientific_American_Frontier...
Edit: confused SAF with Nova!
This feels like the natural outcome of Moravec's paradox[1]. I can imagine a grim future where most intellectually stimulating activities are done by machines and most of the work that's left for humans is building, cleaning, and maintaining the physical infrastructure that keeps these machines running. Basically all the physical grunt work that has proven hard to find a general technological solution for.
And the weights. The weights it has learned come originally from the images.
Now, I have empathy. I paused a moment before writing this comment to identify with artists, art students, and those who have been unable to reach their dreams for financial reasons. I emphatically empathize with them. I understand their emotional experiences and the pain of having their dreams crushed by cold and unfeeling machines and the engineers who ignore whom they crush.
Yet I must confess I am uncertain how this is supposed to change things for me. I have no doubt that there used to be a lot of people who deeply enjoyed making carriages, too.
Programming is definitely easier to make a living from. I'm a very mediocre artist and developer and I'm never making enough off of art to live on, but I could get a programming job at a boring company and it would pay a living wage. In that sense, it's definitely 'easier'.
I would turn this around on you: if a braindead AI can produce this astonishingly difficult art, maybe art was never difficult to begin with, and artists are merely finagling dumb, simple things in their work. Sounds annoying and condescending, right? If you disagree with what I said about art, maybe you ought to be more aware of your own lack of empathy.
I'll go so far as to say that in many cases, displaying empathy for the artists without also advocating for futile efforts to halt the progress of this technology will be regarded as a lack of empathy.
I think this is exactly the problem that many artists have with image generators. Yes, we could all easily identify a generated artwork containing popular Disney characters - but that's because it's Disney, owner of some of the most well-known IP in the world. The same isn't true for small artists: there is a real risk that a model reproduces parts of a lesser-known copyrighted work and the user doesn't realise it.
I think this is what artists are protesting: Their works have been used as training data and will now be parts of countless generated images, all with no permission and no compensation.
Here is an example for keras (a popular ML framework). https://keras.io/guides/transfer_learning/
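Condensed, the pattern from that guide looks roughly like this (a minimal sketch; preprocessing and the actual dataset are omitted):

    # Transfer learning in Keras, condensed from the linked guide:
    # reuse a pretrained base, freeze it, and train only a new head.
    import keras
    from keras import layers

    base = keras.applications.Xception(
        weights="imagenet",       # weights learned on the original task
        include_top=False,        # drop the original classifier head
        input_shape=(150, 150, 3),
    )
    base.trainable = False        # freeze the pretrained weights

    inputs = keras.Input(shape=(150, 150, 3))
    x = base(inputs, training=False)      # keep batch norm in inference mode
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(1)(x)          # new task-specific head
    model = keras.Model(inputs, outputs)

    model.compile(
        optimizer=keras.optimizers.Adam(),
        loss=keras.losses.BinaryCrossentropy(from_logits=True),
    )
    # model.fit(new_dataset, epochs=5)    # train only the new head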
Ever wondered why artists have to show up at gallery parties to sell their stuff?
Art currently requires two skills - technical rendering ability, and creative vision/composition. AI tools have basically destroyed the former, but the latter is still necessary. Professional artists will have to adjust their skillset, much like they had to adjust their skillset when photography killed portrait painting as a profession.
It can't? I could've sworn I've seen (cherry-picked) examples of it doing exactly that, when prompted. It even explains what the bug is and why the fix works.
Because it's barely been a year since we went from people confidently asserting that AI won't be able to produce visual art at the level of human professionals at all to the current situation. Predictions about the ways AI performance will not catch up to or overtake human performance have a bad track record at the moment, and it hasn't been long enough to even suspect that the current increase in performance is plateauing. Cutting-edge image-generation AI often imitates human artists in obvious ways now, but it seems quite plausible that the gap between this and being "original" - imitating other humans as non-obviously as the high-performing human artists we consider original do - is merely quantitative and will be closed soon enough.
> You mean that any artist should be just happy that his work is used by other people / rich corporation / AI without consent? Cool, cool.
I don't know how you get that out of what I said. Rather, I'm claiming that artists will have enough to be unhappy about being obsoleted, and the current direction of their ire at being "copied" by AI may be a misdirection of effort, much as if makers of horse-drawn carriages had tried to forestall the demise of their profession by complaining that the design of the Ford Model T was ripped off of theirs (instead of, I don't know, lobbying to ban combustion engines altogether, or sponsoring Amish proselytism).
In other words, I am wondering if the current issue here is the model being trained or the model being able to generate images.
Coming back to my example: if the car displayed the closest vehicle on the HUD, would Honda ask the car company to replace the likeness of their car with a generic car icon, or would they ask for the model to be scrubbed?
Extreme specialists are found everywhere. Mastering skateboarding at world level will eat your life too, but it's not "harder" than programming. At least, for any commonsensical interpretation of "harder".
All the rest, we do too. Except I don't record videos and I'm sure it is not childishly easy, but it will not eat my life.
Technologists acting like technocrats and expecting everyone to give them sympathy, empathy and identification is laughably rude and insulting.
Several open source licenses do not agree with this (they enforce restrictions on how it is to be shared).
At the end of the day, though, I think I'm an oddball in this camp. I just don't think there's that much difference between ML and human learning (HL). I believe we are nearly infinitely more complex, but as time goes on I think the gulf between ML and HL complexity will shrink.
I recently saw some of MKBHD's critiques of ML, and my takeaway was that he believes ML cannot possibly be creative - that it's just inputs and outputs. And, well, isn't that what I am? Would the art I create (I am also trying to get into art) not be entirely influenced by my experiences in life, the memories I retain from it, etc.? Humans also unknowingly reproduce work all the time. "Inspiration" sits in the back of our minds and then we regurgitate it, thinking it original - but often it's not; it's derivative.
Given that all creative work is learned, though, the line between derivative and originality seems to just be about how close it is to pre-existing work. We mash together ideas, and try to distance it from other works. It doesn't matter what we take as inspiration, or so we claim, as long as the output doesn't overlap too much with pre-existing work.
ML is coming for many jobs and we need to spend a lot of time and effort thinking about how to adapt. Fighting it seems an uphill battle. One we will lose, eventually. The question is what will we do when that day comes? How will society function? Will we be able to pay rent?
What bothers me personally is just that companies get so much free rein in these scenarios. To me it isn't about ML vs HL. Rather, it's that companies get to use all our works for their profit.
We're like people getting the very first electric light bulbs in their home, trying to speculate how electricity will change the world. The pace of change however will be orders of magnitude faster than that.
Shoe's on the other foot now and they don't like it.
In the former case, I'd agree.
In the second, there's a clear violation of 17 USC 506(a)(1)(A).
The solution isn't to halt technological progress to try to defend the few jobs that are actually available in that sector, the solution is to fight forward to a future where no one has to do dull and boring things just to put food on the table. Fight for future where people can pursue what they want regardless of whether it's profitable.
Most of that fight is social and political, but progress in ML is an important precursor. We can't free everyone from the dull and repetitive until we have automated all of it.
My empathy for artists is aligned with my concern for everyone else's future.
> I want to help them, but want to unblock ML progress more.
But progress towards what end? The ML future looks very bleak to me, the world of "The Machine Stops," with humans perhaps reduced to organic effectors for the few remaining tasks that the machine cannot perform economically on its own: carrying packages upstairs, fixing pipes, etc.
We used to imagine that machines would take up the burden our physical labor, freeing our minds for more creative and interesting pursuits: art, science, the study of history, the study of human society, etc. Now it seems the opposite will happen.
To some. To others, the artistic object is all that matters.
This is giving ML models more credit than they are due. They are unable to imagine; they might convincingly seem to produce novel outputs, but their outputs are ultimately circumscribed by their inputs, datasets, and programming. They're machines. Humans can learn like machines, but humans are also able to imagine as agents. "AI" "art" is neither of its namesakes. That doesn't mean it isn't impressive, but implying they are the same grants ML more powers and abilities than it is capable of.
Meanwhile, where is my levy of custom artists willing to do free commission work for me? It’s enjoyable, right?
I see a lot of discussion about money and copyright, and little to no discussion about the individual whose life is enriched by access to these tools and technologies.
As for your bleak future… will that even come to pass? I don’t know. Maybe it depends on your notion of “surpass”, and what that looks like.
- government could treat open ai like an electricity utility, with regulated profits
- open ai could be forced to come up with compensation schemes for the human source images. The more the weights get used, the higher the payout
- the users of the system could be licensed to ensure proper use and that royalties are paid to the source creators. We issue driving licenses, gun licenses, factory permits etc. Licenses are for potentially dangerous activities and powers. This could be one of those.
- special taxation class for industries like this that are more parasitic and less egalitarian than small businesses or manufacturing
- outright ban on using copyrighted work in ai training
- outright ban on what can be considered an existential technology. This has been the case for some of the most important technologies in the last 100 years including nuclear weapons.
That is based on the fallacy that derivative creativity is somehow lesser than so-called “original” creativity.
Comparing these is very "apples and oranges", but I think you'd better have a strong background in both if you're gonna try.
I have no idea if the AI that's getting code 80% right today will get it 95% right in two years, but given current progress, I wouldn't bet against it. I don't think there's any fundamental reason it can't produce better code than I can, at least not at the "write a function that does X" level.
Whole systems are a way harder problem that I wouldn't even think of making guesses about.
For years DeBeers and other diamond moguls have run extensive propaganda campaigns to try to convince people that lab-grown diamonds are physically, emotionally, and morally inferior. They had a lot of success at first. Based on lobbying, the US FTC banned referring to lab-grown diamonds as "real", "genuine", or even "stone". It required the word "diamond" be prefixed with "lab-grown" or "synthetic" in any marketing materials.
Technology kept improving, economies of scale applied, and consumer demand eventually changed the balance. The FTC reversed its rulings and in 2022 demand for lab-grown stones (at small fractions of equivalent natural prices) is at an all-time high.
Artists (and writers, and programmers) can fight against this all they like, and may win battles in the short term. In the end, the economic benefits accruing to humankind as a result of these technologies are inexorably going to normalize them.
"AI will outdo us at repetitve, mindless tasks, but it will NEVER be able to compete with humans at, like, ART, and stuff"
Also, a JPG seemingly fits your definition, as "no creativity was used in making it", yet it clearly embodies the original work's creativity. Similarly, a model can't be trained on random data; it needs to extract information from its training data to be useful.
The specific choice of algorithm used to extract information doesn’t change if something is derivative.
Most of coding is routine patterns that are only perceived as complex because of the presence of other coders and the need to "talk" with them, which creates a need for reference materials (common protocols, documentation, etc.).
Likewise, most of painting is routine patterns complicated by a mix of human intent(what's actually communicated) and the need for reference materials to make the image representational.
Advancements in Western painting between the Renaissance and the invention of photography track with developments in optics; the Hockney-Falco thesis is the "strong" version of this, asserting that specific elements in historical paintings had to have come through the use of optical projections, not through the artist's eyes. A weaker form of this would say that the optics were tools for study and development of the artist's eye, but not always the go-to tool, especially not early on when their quality was not good.
Coding has been around for a much shorter time, but mostly operates on the assumptions of bureaucracy: that information can be modelled, sorted, and searched. And the need for more code exists relative to having more categories of modelled data.
Art already faced its first crisis of purpose with the combination of photography and mass reproduction. Photos produced a high level of realism, and as it became cheaper to copy and print them, the artist moved from a necessary role towards a specialist one - an "illustrator" or "fine artist".
What an AI can do - given appropriate training, prompt interfaces, and a supplementary ability to test and validate its output - is produce a routine result in a fraction of the time. This means it can sidestep the bureaucratic mode entirely in many circumstances and be instructed "more of this, less of that" - which produces features like spam filters and engagement-based algorithms, but also means that entire protocols are reduced to output data if the AI is a sufficiently good compiler. If you can tell the AI what you want the layout to look like and it produces the necessary CSS, then CSS is more of a commodity. You can just draw a thing, possibly add some tagging structure, and use that as the compiler's input. Visual coding.
But that makes the role a specialized one; nobody needs a "code monkey" for such a task, they need a graphic designer...which is an arts job.
That is, the counterpoint to "structured, symbolic prompts generating visual data" is "visual prompts generating structured, symbolic data". ML can be structured in either direction, it just takes thoughtful engineering. And if the result is a slightly glitchy web site, it's an acceptable tradeoff.
Either way, we've got a pile of old careers on their way out and new careers replacing them.
The vast majority of AI art I've seen on sites like Pixiv has been 'generic' to the level of the 'artist' being completely indistinguishable from any other AI-using 'artist'. There has been very little of the sort where the AI seemed to truly just be a tool and there was enough uniqueness to the result that it was easy to guess who the creator was. The former is definitely less creative than the latter.
Part of my job is something like that. I make custom programs for my department at the university. I don't care how long the copyright is. Anyway, I like to milk the work for a few years. There are some programs I made 5 or 10 years ago that we are still using; they save my coworkers time, and I like to use that leverage to get more freedom with my own time. (How many 20% projects can I have?) Most of them need some updating anyway, because the requirements change or the environment changes, so it's not zero work.
There are very few projects that have long-term value. Games sell a lot of copies in a short time. MS Office gets an update every other year (Hello Clippy! Bye Clippy!), and the online version is eating them. I think it's very hard to think of programs that will still have a lot of value in 50 years, though I'm still running some code in Classic VB6.
People are mad because job and portfolio sites are being flooded with AI shit, which is making them unusable for both artists and clients.
People are mad because their copyrighted work is being scraped and resold for profit by third parties without their consent.
Whether AI is the future is an utterly meaningless distraction until these concerns are addressed. As an aside, AI evangelists telling working professionals that they 'simply don't get' their own field of expertise has been an incredibly poor tack for generating goodwill toward this technology, or toward the operations attempting to extract massive profit from its implementation.
Technological progress is not a linear deterministic progression. We decide how to progress every step of the way. The problem is that we are making dogshit decisions for some reason
Maybe we lack the creativity to envision alternative futures. How does a society become so uncreative, I wonder?
The value of work is not measured by its difficulty. There's a small amount of people who make a living doing contract work that may be replaced by an AI, but these people were in a precarious position in the first place. The well-to-do artists are not threatened by AI art. The value of their work is derived from them having put their name on it.
If you assume that most programming work could be done by an AI "soon", then we really have to question what sort of dumb programming work people are doing today, and whether it wouldn't have disappeared anyway once funding runs dry. Mindlessly assembling snippets from Stack Overflow may well be threatened by AI very soon, so if that's your job, consider the alternatives.
What about the horse-drawn carrioles devastated by cars!!
That said, I don't think AI's ability to generate art is a major milestone in the progress of things; I think it's more of the same, automating low-value-add processes.
I agree that AI is/will-be an incredibly disruptive technology. And that automation in general is putting more and more people out of jobs, and extrapolated forward you end up in a world where most humans don't have any practical work to do other than breed and consume resources at ever increasing rates.
As much as I'm impressed by AI art (it's gorgeous), at the end of the day it's mainly just copying/pasting/smoothing out objects it's seen before (its training set). We don't think of it as clipart, but that's essentially what it is underneath it all: a new form of clipart. Amazing in its ability to reposition, adjust, and smooth images, with some sense of artistic placement - lightyears beyond where clipart started (small vector and bitmap libraries). But at the end of the day it's just automating the creation of images from clipart, and rearranging images it has seen before is not going to make anyone big $$$. The quality of the output is entirely subjective; just about anything reasonable will do.
This reminds me a lot of GPT-3... it looks like it has substance, but not really. GPT-3 is great at making low-value clickbait articles of cut-and-paste information on your favorite band or celebrity. GPT-3 will never be able to do the job of a real journalist, pulling pieces together to identify and expose deeper truths - to, say, uncover the Theranos fraud. It's just Eliza [1] on steroids.
The AI parlor tricks started with Eliza, and have gotten quite elaborate as of late. But they're still just parlor tricks.
Comparing it to the challenges of programming, well yes I agree AI will automate portions of it, but with major caveats.
A lot of what people call "programming" today is really just plumbing. I'm a career embedded real-time firmware engineer, and it continues to astonish me that there's an entire generation of young "programmers" who don't understand basic computing principles: stacks, interrupts, I/O operations. At the end of the day their knowledge base seems to consist of knowing which tool to use where in an orchestration and how to plumb it all together, and if they don't know the answer, they simply google it and Stack Overflow will tell them. Low code, no code, etc. (Python is perfect for quickly plumbing two systems together.) This skill set is very limited and wouldn't even have gotten you a junior dev position when I started out. I'm not surprised it's easy to automate, as it will generally produce the same quality of code (and make the same mistakes) as a human dev who simply copies/pastes Stack Overflow solutions.
This is in stark contrast to the types of problems that most programmers used to solve in the old days (and a smaller number still do). Stuff that needed an engineering degree and complex problem solving skills. But when I started out 30 years ago, "programmers" and "software engineers" were essentially the same thing. They aren't now, there is a world of difference between your average programmer and a true software engineer today.
Not saying plumbers aren't valuable - they absolutely are, as more and more of the modern world is built on plumbing things together. Highly skilled software engineers are needed less and less, and that's a net good for humanity. No one needs to write operating systems anymore; let's add value building on top of them. The people making the big $$$ have quite valuable skill sets, but we're in the middle of a bifurcation of software engineering careers: more and more positions will require only limited skills, and fewer and fewer (as a percentage) will remain highly skilled.
So is AI going to come in and help automate the plumbing? Heck yes, and rightly so... They've automated call centers, warehouse logistics, click-bait article writing, carry-out order taking, the list goes on and on. I'd love to have an AI plumber I could trust to do most of the low-level work right (and in CI/CD world you can just push out a fix if you missed something).
I don't believe for a second that today's latest and greatest "cutting edge" AI will ever be able to solve the hard problems that keep highly skilled people employed. New breakthroughs are needed, but I'm extremely skeptical. Like fusion promises, general purpose AI always seems just a decade or two away. Skilled labor is safe, for now.. maybe for a while yet.
The real problem, as I see it, is that AI automation is on course to eliminate most low-skilled jobs in the next century, which puts it on a collision course with the fact that most humans aren't capable of performing highly skilled work (half are below average, by definition). A single parent working the GM line in the '50s could afford an average family a decent life. Not so much where technology is going. At the end of the day the average human will have little to contribute to civilization, but will still expect to eat and breed.
Universal basic income has been touted as a solution to the coming crisis, but all that does is kick the can down the road. It leads to a world of too much idle time (and the devil will find work for idle hands) and ever growing resource consumption. A perfect storm.... at the end of the day what's the point of existing when all you do is consume everything around you and don't add any value? Maybe that's someone's idea of utopia, but not mine.
This has been coming for a long time, AI art is just a small step on the current journey, not a big breakthrough but a new application in automation.
/rant
The people who generated the training data should have a say in how their work is used. Opt-in, not opt-out.
How about we legally enshrine a difference between human learning and corporate product learning? If you want to use things others made for free, you should give back for free. Otherwise if you’re profiting off of it, you have to come to some agreement with the people whose work you’re profiting off of.
Developers will be fine because software engineering is an arms race - a rather unique position to be in as a professional. I saw this play out during the 2000s offshoring scare when many of us thought we'd get outsourced to India. Instead of getting outsourced, the industry exploded in size globally and everything that made engineers more productive also made them a bigger threat to competitors, forcing everyone to hire or die.
Businesses only need so much copy or graphic design, but the second a competitor gains a competitive advantage via software, they have to respond in kind - even if it's a marginal advantage - because software costs so little to scale out. As the tech debt and the revenue that depends on it grow, the baseline number of staff required for maintenance and upkeep grows, because our job is to manage the complexity.
I think software is going to continue eating the world at an accelerated pace because AI opens up the uncanny valley: software that is too difficult to implement using human developers writing heuristics but not so difficult it requires artificial general intelligence. Unlike with artists, improvements in AI don’t threaten us, they instead open up entire classes of problems for us to tackle
You're comparing apples to oranges. Digging a trench by hand is also vastly more difficult than art or programming.
There's just as much AI hype around code generation, and some programmers are also complaining (https://www.theverge.com/2022/11/8/23446821/microsoft-openai...).
Overall though the sentiment is that AI tools are useful and are a sign of progress. The fact that they are stirring so much contention and controversy is just a sign of how revolutionary they are.
I'm nonplussed by ChatGPT because the hype around it is largely the same as it was for GitHub Copilot, and Copilot fizzled badly. (Full disclosure: I pay for Copilot because it is somewhat useful.)
I feel a big part of what makes it okay or not okay here is intention and capability. Early in an artistic journey things can be highly derivative, but that's due to the student's capabilities. A beginner may not intend to be derivative but can't do better.
I see pages of ML applications out there being derivative on purpose (edit: seemingly trying to 'outperform' particular freelance artists, in their own styles, with glee).
So Disney don’t need to worry about AI art tools - so ‘attacking’ them with such tools does nothing.
Obviously a lot of money will be lost for artists in a variety of commercial fields, but the ultimate "success of art" will be unapproachable by AI given its subjective nature.
Developers though will be struggling to compete from both a speed and technical point of view, and those hurdles can't be simply overcome with a shift in how someone feels. And you're right about the arms race, it just won't be happening with humans. It'll be computing power, AIs and the people capable of programming those AIs.
That depends on what the layer was, and there are current cases heading to the Supreme Court that involve something similar, so we may see.
However, commentary is just one type of fair use and would not be a factor here, nor is anyone claiming the AI is reselling the original work. The claim is that copyright law prevents unauthorized use of a work in the training of AI; AI training could (and likely would) be treated as research, and the result of the research is a derivative work wholly separate from the original and created under fair use.
As a developer/manager, I am not yet scared of AI, because just this week I have had to correct multiple people who tried to use ChatGPT to figure something out.
It's actually pretty good, but when it's wrong it tends to be really wrong, and when you don't have the background to spot that, a ton of time is wasted. It's just a better Stack Overflow at the end of the day, imo.
People should stop giving work all this meaning and also they should study economics so they chill.
Learn and chill.
I don't think this is going to put developers out of work, however. Instead, lots of small businesses that couldn't afford to be small software companies suddenly will be able to. They'll build 'free puppies,' new applications that are easy to start building, but that require ongoing development and maintenance. As the cambrian explosion of new software happens we'll only end up with more work on our hands.
The Mickey Mouse case though is obviously bs, the training data definitely does just have tons of infringing examples of Mickey Mouse, it didn't somehow reinvent the exact image of him from first principles.
With no training, I, or even a 1 year old, could make something and call it art. I wouldn't claim it's very good but I think most people would accept it as art. The same cannot be said for programming.
I'm just as excited for myself as I am for artists. The current crop of these tools look like they could be powerful enablers for productivity and new creativity in their respective spaces.
I happen to also welcome being fully replaced, which is another conversation and isn't really where I see these current tools going, though it's hard to extrapolate.
It’s a stupid concept. It would never work. Even the visualizations we see that are explicitly attempting to copy another artist’s style are often still clearly not exactly the same.
That is absurd. Sure some basic AI tools have been helpful like co-pilot and it's sometimes really impressive how it can help me autofill some code instead of typing it out... but come on, there is no way we are anywhere close to AI replacing 99.99% of developers.
>making art is vastly more difficult than the huge majority of computer programming that is done
I don't know.. art is "easy" in the sense that we all know what art looks like. You want a picture of a man holding a cup with a baby raven in it? I can picture that in my head to some degree right away, and then it's just "doing the process" to draw it in some way using shapes we know.
How in the heck can you correlate that to 99% of business applications? Most of the time no one even knows exactly what they want out of a project, so first there is the massive amount of constant change that comes just from using the thing. Then there is how the code itself is created. Let's even say you could tell it "Make me an Angular website with two pages and live chat functionality" and it worked. Great, it got you a starting template - but maybe the code is so weird or unintuitive that it's almost impossible to keep building upon. Not helpful. Now let's say it is "decent enough"; fine, then it's basically an advanced Copilot at this point. It helps with boring boilerplate.
But comparing all this to art is still just ridiculous. Again, everyone can look at a picture and say "this is what I wanted" or "this is not what I wanted at all". Development is so intricate that it's nothing like art. I could look at two websites (similar to art) and say "these look the same", but under the hood they could be a million times different in functionality, in how they work, and in how well they're structured to evolve over time. Whereas if I look at two pictures that look exactly the same, I don't care how they got there or how they were created - they're done and identical. Not true of development in 99% of cases.
This ongoing discussion feels classist. I've never seen such strong emotions about AI (and automation) taking blue-collar jobs, some shrugs at most. It's considered an unavoidable given, even though it has been happening for decades. The only difference now is that AI is threatening middle-upper class jobs, which nobody saw coming.
I do not see the difference between the two. Can somebody who does explain to me why now is "critical" when it wasn't so much before?
Both answers were orders of magnitude wrong, and vastly different from each other.
JS code suggested for a simple database connection had glaring SQL injection vulnerabilities.
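To illustrate the class of bug (a hypothetical reconstruction in Python rather than the JS it actually suggested):

    # Hypothetical reconstruction of the bug class, using sqlite3.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    user_input = "'; DROP TABLE users; --"   # attacker-controlled string

    # Vulnerable pattern (what the generated code did, in spirit):
    # query = f"SELECT * FROM users WHERE name = '{user_input}'"

    # Safe pattern: let the driver bind the parameter instead.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()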
I think it's an ok tool for discovering new libraries and getting oriented quickly to languages and coding domains you're unfamiliar with. But it's more like a forum post from a novice who read a tutorial and otherwise has little experience.
We developers are hired because our coworkers can't express what they really want. No one pays six figures to solve glorified Advent of Code prompts. The real prompts are much more complex, ever-changing as more information comes in, and locked in someone's head, waiting to be coaxed out by another human and iterated on together. Those coworkers are no more going to become prompt engineers than they became backend engineers.
I say this as someone who used TabNine for over a year before Copilot came out and now uses ChatGPT for architectural explorations and code scaffolding/testing. I'm bullish on AI, but I just don't see the threat.
There's no fundamental reason it can't be the world expert at everything, but that's not a reason to assume we know how to get there from here.
Jobs have been automated since the industrial revolution, but this usually takes the form of someone inventing a widget that makes human labor unnecessary. From a worker's perspective, the automation is coming from "the outside". What's novel with AI models is that the workers' own work is used to create the thing that replaces them. It's one thing to be automated away, it's another to have your own work used against you like this, and I'm sure it feels extra-shitty as a result.
I see this as another step toward having a smaller and smaller space in which to find our own meaning or "point" to life, which is the only option left after the march of secularization. Recording and mass media / reproduction already curtailed that really badly on the "art" side of things. Work is staring at glowing rectangles and tapping clacky plastic boards—almost nobody finds it satisfying or fulfilling or engaging, which is why so many take pills to be able to tolerate it. Work, art... if this tech fulfills its promise and makes major cuts to the role for people in those areas, what's left?
The space in which to find human meaning seems to shrink by the day; the circle in which we can provide personal value and joy to others without it becoming a question of cold economics shrinks by the day, etc.
I don't think that's great for everyone's future. Though admittedly we've already done so much harm to that, that this may hardly matter in the scheme of things.
I'm not sure the direction we're going looks like success, even if it happens to also mean medicine gets really good or whatever.
Then again I'm a bit of a technological-determinist and almost nobody agrees with this take anyway, so it's not like there's anything to be done about it. If we don't do [bad but economically-advantageous-on-a-state-level thing], someone else will, then we'll also have to, because fucking Moloch. It'll turn out how it turns out, and no meaningful part in determining that direction is whether it'll put us somewhere good, except "good" as blind-ass Moloch judges it.
I hear these sorts of statements a lot, and always wonder how people come to the conclusion that "people who said A were the ones who were saying B". Barring survey data, how would you know that it isn't just the case that it seems that way?
The idea that people who would tell someone else to learn to code are now luddites seems super counter-intuitive to me. Wouldn't people opposing automation now likely be the same ones opposing it in the past? Why would you assume they're the same group without data showing it?
I know a bunch of artists personally and none of them seem to oppose blue-collar work
1) It'll no longer be possible to work as an artist without being incredibly productive. Output, output, output. The value of each individual thing will be so low that you have to be both excellent at what you do (which will largely be curating and tweaking AI-generated art) and extremely prolific. There will be a very few exceptions to this, but even fewer than today.
2) Art becomes another thing lots of people in the office are expected to do simply as a part of their non-artist job, like a whole bunch of other things that used to be specialized roles but become a little part of everyone's job thanks to computers. It'll be like being semi-OK at using Excel.
I expect a mix of both to happen. It's not gonna be a good thing for artists, in general.
I would argue because most AI imagery right now is made for fun and not monetary gains, so it is actually a purer form of art.
The number of people required to publish a magazine, or to create an ad, went down significantly with digital tools.
"It'll massively suck for you, but don't worry, it'll be better for everyone else" is little comfort for most of us
Ideally we'd see something opt-in that decides exactly how much you have to give back, and how much you have to constrain your own downstream users. And in fact we do see that: we have copyleft licenses for tons of code and media released to the public (e.g. GPL, CC BY-SA, CC BY-NC). They let you define how someone can use your stuff without talking to you, and lay out the parameters for exactly how and whether you have to give back.
Photos will periodically be added to the collection - not that I expect anyone whatsoever to ever be interested in following a collection of photos that is meant to be boring and uninspired. However - feel free to use this collection of photos as a counterargument to the argument that "art requires some effort". I promise that I will put far less thought and effort into the photos of this collection than I have in any writing of prompts for AI generated art that I've done.
Art is little more than a statement and sometimes a small statement can carry a large message.
Tomorrow I will work on setting up a domain and gallery for the images - to facilitate easier discussion and sharing. Is the real artistic statement the story behind the collection and not the collection itself? How can the two be separated? Can one exist without the other?
But if you say "humans are programmed" in a metaphorical sense, then yeah sure that's an interesting thought experiment. But it's still a thought experiment.
This is a moment where individual humans substantially increase their ability to effect change in the world. I'm watching these tools quickly become commoditized. I'm seeing low-income, first-generation Americans who speak broken English using ChatGPT to translate their messages into "upper-middle-class business professional" and land contracts that were off limits before. I'm seeing individuals rapidly iterate and explore visual spaces on the scale of hundreds to thousands of designs using Stable Diffusion - a process that was financially infeasible even for well-funded corps this time last year, due to the cost of human labor. These aren't fanciful dreams of how this tech is going to change society; I've observed these outcomes in real life.
I’m empathetic that the entire world is moving out from under all of our feet. But the direction it’s moving is unbelievably exciting. AI isn’t going to replace humans, humans using AI are going to replace humans who don’t.
Be the human that helps other humans wield AI.
Could the bot not curate its own output? It has been shown that feeding output back into the model results in improvement; I got the impression that better results come from increments. The AI overlords (model owners) will make sure they learn from all the curating you might do, too, making your job even less skilled. Read: you are more replaceable.
Please prove me wrong! I hope I am just anxious. History has proven that increases in productivity tend to go to capital owners, unless workers have bargaining power. Mine workers were paid relatively well here, back in the day. Complete villages and cities thrived around this business. When those workers were no longer needed the government had to implement social programs to prevent a societal collapse there.
Look around: Musk wants you to work 10 hours per day already. Don't expect an early retirement or a more relaxed job.
Progress is cool if you're on the side of the wheel that's going up. It's the worst fucking thing in the world if you're on the side that's going down and are about to get smashed into the mud.
I think the wheels are turning. It's just a resultant movement from thousands of small movements, but nobody is controlling it. If you take a look not even wars dent the steady progress of science and technology.
A belief system that centers on human well-being sounds more reasonable than *unbounded* capitalism. We know this; we just don't know what to do about it.
That's a huge ethical issue whether or not it's explicitly addressed in copyright/ip law.
I'd just say the scale is different. Old school automation just required one expert to guide the development of an automation. AI art requires the expertise of thousands.
The fundamental design of transformer architecture isn't capable of what you think it is.
There are still radical, fundamental breakthroughs needed. It's not a matter of incremental improvement over time.
When was making living through art guaranteed? Society has mocked artists for centuries.
Let them make art and give them a UBI.
AI will replace programmers too. If a user can ask a future AI to organize their machine's state into an arbitrary video game or photorealistic movie, or to generate reports from sources a, b, and c, weighted for x, y, and z, on a bar chart, then only the AI's codebase (whatever bootstrapping and runtime it needs) remains necessary.
Why would I ask an AI that can produce the end result to produce code? Code is just minimized ideal machine state.
You’re correct; there is a lot of empathy lacking in our culture, but it’s not just when it comes to art.
The thing is, empathy doesn't really do anything. Pandora's Box is open and there's no effective way of shutting it that is more than a hopeful dream. Stopping technology is like every doomed effort that has existed to stop capitalism.
Probably has something to do with years of artists trash talking engineers.
"Observing humans under capitalism and concluding it's only in our nature to be greedy is like observing humans under water and concluding it's only in our nature to drown."
Whether 95% or 99.9% correct, when there is a serious bug, you're still going to need people that can fix the gap between almost correct and actually correct.
If I can just ask for a certain arbitrary machine state (with some yet-unrealized future version of AI), who needs programmers?
We’ll need to vet AI output so there will still be knowledge work; we’re not going to let the AI decide to launch nukes, or inject whatever level of morphine it wants.
Data-entry work at a computer (programming being specialized data entry; code is a data model of primitives for a compiler/interpreter) is not long for this world, but analysis will be.
Not if the worker is an engineer or similar. Some engineers built tools that improved the building of tools.
And this started even earlier than the industrial revolution. Think, for example, of Johannes Gutenberg. His truly important invention was not the printing press (that already existed), and not even movable type, but a process by which a printer could mold his own set of identical movable types.
I see a certain analogy between what Gutenberg's invention meant for scribes then and what Stable Diffusion means for artists today.
Another thought: in engineering we do not have extremely long-lasting copyright, but much shorter protection periods via patents. I have never understood why software has to be protected for such long copyright periods and not for much shorter patent-like periods. Perhaps we should look for something similar for AI and artists: an artist keeps copyright as usual for close reproductions, but 20 years after publication the work may be used without her or his consent for training AI models.
That’s really what we’re protecting here?
I'd rather live in a future where automation does practically everything, not for the benefit of some billionaire born into wealth but for everyone, the way automation is supposed to work. Similar to the economy in Factorio.
Then people can derive meaning from themselves rather than from whatever this dystopian nightmare we're currently living in provides.
It's absurdly depressing that some people want to stifle this progress only because it's going to remove the god-awful and completely made-up idea that work is freedom, or that work is what life is about.
The future of work will not be decided by today's 60+ year olds in another 10-15 years; Millennials and Gen Z are not growing conservative as they age into and through their 30s the way Gen X and Boomers did. Generational churn is a huge wildcard.
I think you need to see there are 2 types of people:
- those who want to generate results ("get the job done, quickly"), and
- those who enjoy programming because of it.
The first group are the ones who can't see what is getting lost. They see programming as an obstacle. Strangely, some of them believe on the one hand that many more people will produce much more software because of AI, and simultaneously expect to stay in demand themselves.
They might think your job is producing pictures, which is just a burden.
I am from the second group. I never chose this profession for the money, or out of dreams about some big business I could create. I dread pasting generated code all over the place. The only one happy about that would be the owner of the software. And the AI model overlord, of course.
I hope that technical and artistic skill will gain appreciation again, and that you will have a happy life doing what you like most.
Another place to look is the financially independent. What are they doing with their time?
The whole point of art is human expression. The idea that artists can be "automated away" is just sad and disgusting and the amount of people who want art but don't want to pay the artist is astounding.
Why are we so eager to rid ourselves of what makes us human to save a buck? This isn't innovation, its self destruction.
We've just made "learning style" easier, so a thing that was always a risk is now happening.
I'd reframe this to: making a living from your art is far more difficult than making money from programming.
> also be able to do the dumb, simple things most programmers do for their jobs?
I'm all for AI automating all the boring shit for me. Just like frameworks have. Just like libraries have. Just like DevOps has. Take all the plumbing and make it automated! I'm all for it!
But. At some point. Someone needs to take business speak and turn it into input for this machine. And wouldn't ya know it, I'm already getting paid for that!
The real answer is AI are not people, and it is ok to have different rules for them, and that is where the fight would need to be.
Capitalism is particularly good at weaponizing our own ideas against us. See large corporations co-opting anti-capitalist movements for sales and PR.
PepsiCo was probably mad that they couldn't co-opt "defund the police", "fuck 12", and "ACAB" the way they could "black lives matter".
Anything near and dear to us will be manipulated into a scientific formula to make a profit, and anything that can't be is rejected by any kind of mainstream media.
See: Capitalist Realism and Manufacturing Consent (for how advertising affects freedom of speech on any media platform).
You're projecting your own fears on everyone else. I'm a programmer, too, among other things. I write code in order to get other things done. (Don't you?) It's fucking awesome if this thing can do that part of my job. It means I can spend my time doing something even more interesting.
What we call "programming" isn't defined as "writing code," as you seem to think. It's defined as "getting a machine to do what we (or our bosses/customers) want." That part will never change. But if you expect the tools and methodologies to remain the same, it's time to start thinking about a third career, because this one was never a good fit for you.
This argument has come up many times in history, and your perspective has never come out on top. Not once. What do you expect to be different this time?
No they won't. If AI art was just as good as it is today, but didn't use copyrighted images in the training set, people would absolutely still be finding some other thing to complain about.
Artists just don't want the tech to exist entirely.
For someone seeking sound/imagery/etc. resulting from human expression (i.e., art), it makes sense that it can't be automated away.
For someone seeking sound/imagery/etc. without caring whether it's the result of human expression (e.g., AI artifacts that aren't art), it can be automated away.
When AlphaGo adds one of its own self-vs-self games to its training database, it is adding a genuine game. The rules are followed. One side wins. The winning side did something right.
Perhaps the standard of play is low. One side makes some bad moves, the other side makes a fatal blunder, the first side pounces and wins. I was surprised that they got training through self play to work; in the earlier stages the player who wins is only playing a little better than the player who loses and it is hard to work out what to learn. But the truth of Go is present in the games and not diluted beyond recovery.
But an LLM is playing a post-modern game of intertextuality. It doesn't know that there is a world beyond language to which language sometimes refers. Is what an LLM writes true or false? It is unaware of either possibility. If its own output is added to the training data, that creates a fascinating dynamic. But where does it go? Without AlphaGo's crutch of the "truth" of which player won the game according to the hard-coded rules, I think the dynamics have no anchorage in reality and would drift, first into surrealism and then psychosis.
One sees that AlphaGo is copying the moves that it was trained on, and an LLM is also copying the moves that it was trained on, and that these two things are not the same.
I think this situation says a lot about the nature of human desire, not just the fact that a few people were ingenious enough to come up with the idea of diffusion models. A lot of ingenious inventions turn out to be relatively boring when exposed to the broader populace, and don't hit on such an appealing latent desire.
What will this say about the limitless yet-to-be-invented ideas that humanity is just raring to give itself, if only someone would hit on the correct chain of breakthroughs? Would even a single person today be interested in building a backyard nuclear warhead in an afternoon, and would they attempt it if the barrier of difficulty were removed?
AI powered surveillance and the ongoing destruction of public institutions will make it hard to stand up for the collective interest.
We are not in hell, but the road to it has not been closed.
Static 2D images that usually serve a commercial purpose, e.g. logos, clip art, game sprites, web page design and the like.
And the second is pure art whose purpose is more for the enjoyment of the creator or the viewer.
Business wants to fully automate the first case, and most people view it as having nothing to do with the essence of humanity. It's simply dollars for products -- but it's also one of the very few ways that artists can actually have paying careers for their skills.
The second will still exist, although almost nobody in the world can pay bills off of it. And I wouldn't be shocked if ML models start encroaching there as well.
So a lot of what's being referred to is more like textile workers. Anyone who can type a few sentences can now make "art", significantly lowering barriers to entry. Maybe a designer comes in and touches it up.
The short-sighted part is people thinking that this will somehow stay specific to art and that their own cherished field is immune.
Programming will soon follow. Any PM "soon enough" will be able to write text to generate a fully working app. And maybe a coder comes in to touch it up.
Sure you can't stick an entire project in there, but if you know the problem is in class Baz, just toss in the relevant code and it does a pretty damn good job.
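To make that concrete, here is a minimal sketch of the kind of exchange meant above; the Baz class and its bug are hypothetical, invented purely for illustration:

```python
# Hypothetical snippet you might paste into the chat, along with the
# prompt: "Baz.page() keeps dropping the last item of every page -- why?"

class Baz:
    def __init__(self, items, page_size):
        self.items = items
        self.page_size = page_size

    def page(self, n):
        start = n * self.page_size
        end = start + self.page_size - 1  # bug: slice ends are exclusive,
        return self.items[start:end]      # so this drops one item per page;
                                          # fix: end = start + self.page_size
```

A bug this localized is exactly the case where pasting "the relevant code" works well: no wider context is needed to spot it.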
Every tool makes some of the 'decisions' about how the artwork turns out, by adding constraints and unexpected results. If anything, I'd argue that AI art allows for more direct human expression: going from mental image to a sharable manifestation has the potential to be less lossy with AI art than with paint.
This feels like a bunch of misplaced Luddism. We need to implement a UBI because 99.9% of human labor is going to be valued below the cost of survival in the next 50-100 years. Always fun to see people thumbing their noses at Disney, though.
Art-as-human-expression isn't going anywhere because it's intrinsically motivated. It's what people do because they love doing it. Just like people still do woodworking even though it's cheaper to buy a chair from Walmart, people will still paint and draw.
What is going to go away is design work for low-end advertising agencies or for publishers of cheap novels or any of the other dozens of jobs that were never bastions of human creativity to begin with.
But in my case, I don't happen to find drawing or painting enjoyable. I simply don't, for nature- or nurture-based reasons. I also don't believe that everyone can become a trained manual artist, because not everyone is interested in doing so, even if they still (rightly or wrongly) cling to the idea of having instant creative output and gratification.
I think this lack of interest is what makes me and many other people a prime target for addiction to AI-generated art. Due to my interest in programming I can tweak the experience using my skills without worrying about the baggage people of three years ago had to deal with if they wanted a similar result.
So without any sort of generation, how does one solve the problem of not wanting to draw, but still wanting one's own high-quality visual product to enjoy? I guess it would be learning to be interested in something one is not. And that probably requires virtuosity and integrity, a willingness to move past mistakes, and a positive mindset. The sorts of things that have little to do with the specific mechanics of writing code in an IDE to provoke a dopamine response. Also, the ability to stop focusing so hard on the end result, a detriment to creativity that so many (manual) art classes have pointed out for decades.
I sometimes feel I lack some of those kinds of qualities, and yet I can somehow still generate interesting results with Stable Diffusion. It feels like a contradiction, or an invalidation of a set of ideas many people have held as sacred for so long, a path to the advancement of one's own inner being.
I will relish the day when an AI is capable of convincing me that drawing with my own two hands is more interesting than using its own ability to generate a finished piece in seconds.
So I agree that, on a bigger scale beyond the improvement of automated art, this line of thinking will do more harm to humanity than good. An AI can take the fall for people who can't or don't want to fight the difficult battles needed to grow into better people, and that in turn validates that kind of mindset. It gives even the people who detest the artistic process a way to have the end result, and a decent one at that.
I think this is part of the reason why the anti-AI-art movement has pushed back so loudly. AI art teaches us the wrong lessons of what it means to be human. People could become convinced to not want to go outside and walk amongst the trees and experience the world if an AI can hallucinate a convincing replacement from the comfort of their own rooms.
Let's not forget the very impressive population explosion of the past century. Every 'job' is a skill that has been streamlined so that the needs of the population are satisfied by the skills of the population and resources are distributed evenly.
Art is no longer a need, and there are simply way too many artists in proportion to the population.
Further, a lot of ‘art’ taught is technique. It’s not creativity. Can creativity be taught? I don’t think so.
Culture played a part in preserving artists and honoring their skills. But as ‘culture’ becomes global, mainstream is adopted more as it’s more accessible. And mainstream is subject to the vagaries of market as well as vulnerable to market manipulation.
Contrary to popular notions, our world is very homogeneous. Somehow the promotion of diversity has ended up in the tyranny of conformity. How did this happen? This is the biggest puzzle of the past few decades.
TBH, given how derivative humans tend to be, with such a deeper "Human Learning" model and years and years of experience, I'm kind of shocked ML is even capable of appearing non-derivative. Throw a child in a room, starve it of any interaction, somehow (lol) only feed it select images, and then ask it to draw something... I'd expect it to perform similarly. A contrived example, but I'm illustrating the depth of our experiences when compared to ML.
I half expect that the "next generation" of ML will be fed a dataset many orders of magnitude larger, more closely matching our own: a video feed of years' worth of data, simulating the complex inputs that Human Learning gets to benefit from. If/when that day comes, I can't imagine we will seem that much more unique than ML.
I should be clear though; I am in no way defending how companies are using these products. I just don't agree that we're so unique in how we think, how we create, or that we're truly unique in any way, shape, or fashion. (Code, Input) => Output is all I think we are, I guess.
I think it's more a matter of enlarging the scope of what one person can manage. Think of moving from the pure manual-labor era, limited by how much weight a human body could move from point A to point B, to the steam-engine era. Railroads totally wrecked the industry of people moving things on their backs or in mule trains, and that wasn't a bad thing.
> Don't expect an early retirement or a more relaxed job..
That's kinda my point, I don't think this is going to make less work, it'll turbocharge productivity. When has an industry ever found a way to increase productivity and just said cool, now we'll keep the status quo with our output and work less?
The lack you find depressing is natural defensiveness in the face of hostility, rooted in fear and, in most cases, broad ignorance of both the legal and technical context and operation of these systems.
We might look at this and say, "there should have been a roll out with education and appropriate framing, they should have managed this better."
This may be true but of course, there is no "they"; so here we are.
I understand the fear, but my own empathy is blocked by hostility in specific interactions.
Oh, life & death is different? Don't be so sure; there are good reasons to believe that livelihood (not to mention social credit) and life are closely related. And besides, the fundamental point doesn't depend on the specific example: you can't point to an orders-of-magnitude change and then claim we're dealing with a situation that's qualitatively like it has "always" been.
"Easier" doesn't begin to honestly represent what's happened here: we've crossed a threshold where we have technology for production by automated imitation at scale. And where that tech works primarily because of imitation, the work of those imitated has been a crucial part of that. Where that work has a reasonable claim of ownership, those who own it deserve to be recognized & compensated.
Marx makes the case in the Grundrisse https://thenewobjectivity.com/pdf/marx.pdf that the automation of work could improve the lives of workers -- "free everyone's time for their own development". Ruth Wilson Gilmore observes that capital's answer is to build complexes of mass incarceration & policing to deal with the workers rendered jobless by automation https://inquest.org/ruth-wilson-gilmore-the-problem-with-inn... -- that is, those who have too much "free" time. In such a world, Marx speculates, "Wealth is not command over surplus labour time" (real wealth) "but rather, disposable time outside that needed in direct production", but Gilmore reminds us that capital's apparent answer to date has been fascism.
I wonder if that could be a solution to this. Anything AI-generated is public domain; no one can own the IP to it. That would allow it to be used for research, education, and hobbyists, but hinder how large corporations could use it.
Maybe even make it like the GPL: anything built on AI-generated material must also be public domain.
Artists are poets, and they're railing against Trurl's electronic bard.
[https://electricliterature.com/wp-content/uploads/2017/11/Tr...]
Which is why e.g. Bethesda is not going to slap you for your Mr House or Pip-Boy fanart, but will slap the projects that recreate Fallout 3 in engine X.
The question is: do you like human beings? Because there is really no job that can't be replaced, if the technology goes far enough. And then the majority of the population, or all of the population, becomes dead weight. I'm a musician; how long before an AI can write better songs than I can in a few seconds?
This is fundamentally different than past instances of technology replacing human labor, because in the past, there was always something else that humans could do that the machines still could not. Now- that may not be the case.
There is only one choice: I think we should outlaw all machine learning software, worldwide.
Basically, the argument is that you should not have ever charged for your art, since its viewing and utility is increased when more people see it.
The lack of empathy comes from our love of open source. That's why. These engineers have been pirating books, movies, and games for a long time. Artists crying for copyright sound the same as the MPAA suing grandma 20 years ago.
You describe stuff that is harmful or boring. In another comment I touched upon this, as there seems to be a clear distinction between people who love programming and those who just want to get results. The former do not enjoy being manager of something larger per se if they lose what they love.
I can see a (short-term?) increase in demand for software, but it is not infinite. So when productivity increases and demand does not keep at least the same pace, you will see jobless people and you will face competition.
What no one has touched on yet is that the nature of programming might change too. We try to optimize for the dev experience now, but it is not unreasonable to expect that we will have to bend toward being AI-friendly. Maybe human-friendliness becomes less of a concern (enough desperate people out there), while AI-friendliness and performance become more important metrics to the owner.
We're all "doomed" if this is the case.
Industries have traditionally solved this with planned obsolescence. Maybe JavaScript might be our saviour here for a while. :)
There is also a natural plateau in how much choice we can handle. Of those 2000, only a few will be winners with reach. It might soon be that the AI model becomes more valuable than any of those apps. Case in point: try to make a profitable app on Android these days.
There are a lot of working commercial artists in between the fine art world and the "cheap novels and low-end advertising agencies" you dismiss, and there's no reason to think AI art won't eat a lot of their employment.
Why would software engineers who work on web apps, Kubernetes, and the internet in general need to understand interrupts? Not only will they never deal with any of that, they are not supposed to. All of it has been automated away so that what we call the Internet can be possible.
All of that stuff turned into specializations as the tech world progressed and the ecosystem grew. A software engineer specialized in hardware needs to know interrupts but not how to do DevOps; for the software engineer who works on Internet apps, it's the opposite.
Creating art is not that much harder than programming, creating good art is much harder than programming. That's the reason that a large majority of art isn't very good, and why a large majority of Artists don't make a living by creating art.
Just like the camera didn't kill the artist, neither will AI. For as long as art is about the ideas behind the piece as opposed to the technical skills required to make it (which I would argue has been true since the rise of impressionism) then AI doesn't change much. The good ideas are still required, AI only makes creating art (especially bad art) more accessible.
I think for most people the enjoyable and fulfilling part of life is feeling useful or having some expression and connection through their work. There's definitely some people who can create in a vacuum with no witness and be fulfilled, but I think there's a deep need for human appreciation for most people.
> As for your bleak future… will that even come to pass? I don’t know. Maybe it depends on your notion of “surpass”, and what that looks like.
I don't know either, maybe it will be fine. Maybe this will pass like the transition from traditional to digital. But something about this feels different...like it's actually stealing the creative process rather than just a paradigm shift.
It seems inevitable and I don't think we can stop it, but I just am kind of worried about the collective mental health of humanity. What does a world look like where people have no jobs and even creative outlets are dominated by AI? Are people really just happy only consuming? What even is the point of humanity existing at that point?
Basically the current argument of artists being out of a job but taken to its extreme.
Why would these robots get paid? They wouldn’t. They’d just mine, manufacture, and produce on request.
Imagine a world where chatgpt version 3000 is connected to that swarm of robots and you can type “produce a 7 inch phone with an OLED screen, removable battery, 5 physical buttons, a physical shutter, and removable storage” and X days later arrives that phone, delivered by automation, of course.
Same would work with food, where automation plants the seeds, waters the crops, removes pests, harvests the food, and delivers it to your home.
All of these are simply artists going out of a job, except it’s not artists it’s practically every job humans are forced to do today.
There'd be very little need to work for almost every human on earth. Then I could happily spend all day taking shitty photographs -- which AI can already replicate far better than I could photograph in real life -- and I wouldn't have to feel like a waste of life, because I'd be doing it for fun and not because I'm forced to in order to survive.
But you're of course right that the benefits are unevenly distributed, and for some it truly does suck.
The real battle there would be protocols; how everyone's custom apps communicate. Here, we can fall back to existing protocols such as email, ActivityPub, Matrix, etc.
There's nothing stopping anyone from coding for fun, but we get paid for delivering value, and the amount of value that you can create is hugely increased with these new tools. I think for a lot of people their job satisfaction comes from having autonomy and seeing their work make an impact, and these tools will actually provide them with even more autonomy and satisfaction from increased impact as they're able to take on bigger challenges than they were able to in the past.
I don't pay someone to run calculations for me, either, also a difficult and sometimes creative process. I use a computer. And when the computer can't, then I either employ my creativity, or hire a creative.
> I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.
All software has bugs, but it's usually far better than "95% right." Code that's only 95% right probably wouldn't pass half-assed testing or a couple of days of actual use.
1. Proprietary software is harmful and immoral in ways that proprietary books or movies are not.
2. The creative industry has historically used copyright as a tool to tell computer programmers to stop having fun.
So the lack of empathy is actually pretty predictable. Artists -- or at least, the people who claim to represent their economic interests -- have consistently used copyright as a cudgel to smack programmers about. If you've been marinating in Free Software culture and Cory Doctorow-grade ressentiment for half a century, you're going to be more interested in taking revenge against the people who have been telling you "No, shut up, that's communism" than in mere first-order self-preservation[1].
This isn't just "programmers don't have fucks to give", though. In fact, your actual statements about computer programmers are wrong, because there's already an active lawsuit against OpenAI and Microsoft over GitHub Copilot and its use of FOSS code.
You see, AI actually breaks the copyright and ethical norms of programmers, too. Most public code happens to be licensed under terms that permit reuse (we hate copyright), but only if derivatives and modifications are also shared in the same manner (because we really hate copyright). Artists are worried about being paid, but programmers are worried about keeping the commons open. The former is easy: OpenAI can offer a rev share for people whose images were in the training set. The latter is far harder, because OpenAI's business model is charging people for access to the AI. We don't want to be paid, we want OpenAI to not be paid.
Also, the assumption that "art is more difficult than computer programming" is itself hilariously devoid of empathy. For every junior programmer crudely duct-taping code together there is a person drawing MS Paint fanart on their DeviantART page. The two fields test different skills, and you cannot just say one is harder than the other. Furthermore, the consequences are different: if art is bad, it's bad[0] and people potentially lose money; but if code is bad, it gets hacked or kills people.
[0] I am intentionally not going to mention the concerns Stability AI has with people generating CSAM with AI art generators. That's an entirely different can of worms.
[1] Revenge can itself be thought of as a second-order self-preservation strategy (i.e. you hurt me, so I'd better hurt you so that you can't hurt me twice).
> There’d be very little need to work for almost every human on earth.
When mankind made a pact with the devil, the burden we got was that we had to earn our bread through sweat and hard labor. This story has survived millennia; there is something to it.
Why is the bottom layer of society not automated by robots? There's no need to if people are cheaper than robots. If you don't care about humans, you can get quite a lot of labor for a little bit of sugar. If you can work one job to pay your rent, you can possibly do two or three. If you don't have those social hobbies like universal healthcare and public education, people will remain competitive with robots for a very long time. If people are less valuable, they will be treated as such.
Hell is nearer than paradise.
Have you actually looked into CS deeply? Obviously not. (I'm not saying this cannot also be true for music, which I don't know.)
As far as money goes... in the long run artists will still make money fine, as people will value human-generated (artisanal) works. Just as people like hand-made stuff today, even though you can get machine-made stuff way cheaper. You may not have the generic jobs of cranking out stuff for advertisements (and such), but you'll still have artists.
Your diatribe about not caring about humans is ironic. I don’t know where you got all that from, but it certainly wasn’t my previous comment.
I also don’t know what pact you’re on about. The idea of working for survival is used to exploit people for their labor. I guess people with disabilities that aren’t able to work just aren’t human? Should we let them starve to death since they can’t work a 9-5 and work for their food?
It's not even clear you're correct by the apparent (if limited) support of your own argument. "Transmission" of some sort is certainly occurring when the work is given as input. It's probably even tenable to argue that a copy is created in the representation of the model.
You probably mean to argue something to the effect that dissemination by the model is the key threshold by which we'd recognize something like the current copyright law might fail to apply, the transformative nature of output being a key distinction. But some people have already shown that some outputs are much less transformative than others -- and even that's not the overall point, which is that this is a qualitative change much like those that gave birth to industrial-revolution copyright itself, and calls for a similar kind of renegotiation to protect the underlying ethics.
People should have a say in how the fruits of their labor are bargained for and used, including how the machines and the models that drive them are used. That's part of intentionally creating a society that's built for humans, including artists and poets.
Commercial art needs to be eye catching and on brand if it's going to be worth anything, and a random intern isn't going to be able to generate anything with an AI that matches the vision of stakeholders. Artists will still be needed in that middle zone to create things that are on brand, that match stakeholder expectations, and that stand out from every other AI generated piece. These artists will likely start using AI tools, but they're unlikely to be replaced completely any time soon.
That's why I only mentioned the bottom tier of commercial art as being in danger. The only jobs that can be replaced by AI with the technology that we're seeing right now are in the cases where it really doesn't matter exactly what the art looks like, there just has to be something.
The ones at risk (and complaining the most) are semipro online artists who sell one image at a time, like fanart commissions.
- generic expression: commercial/pop/entertainment; audience makes demands on the art
- autonomous expression: artist's vision is paramount; art makes demands on the audience
Obviously these are idealized antipodes. The question of whether it is the art making demands on the audience or the audience making demands on the art is especially insightful, in my opinion. Given this rubric, I'd say AI-generated art must necessarily belong to "generic expression", simply because its output has to meet fitness criteria.
We don't know. We just don't.
It's too difficult to predict what, say, software developers will do in a few years, or how demand or salary or competition will develop.
Look at this final video of the 2012 Deep Learning course by Hinton that I still remember from a long time ago: https://m.youtube.com/watch?v=FOqMeBM3EIE
What I do know however is this:
- Short term nothing special will happen.
- In the actually interesting projects that I worked on I always ran out of time. So much more could be imagined that could have been done but there was no time or budget to do it. Looking forward to AI making a dent in this a bit.
Creative professionals might take the first hit in professional services, but AI is going to come for engineers at a much faster and more furious pace. I would even go so far as to say that some (probably a small amount) of the people who have recently gotten laid off at big tech companies may never see a paycheck as high as they previously had.
The vast majority of software engineering hours that are actually paid are for maintenance, and this is where AI is likely to come in like a tornado. Once AI hits upgrade and migration tools it's going to eliminate entire teams permanently.
Mixed social-democratic economies are nice and better than plutocracies, but they have capitalism; they just have other economic forms alongside it.
(Needing to profit isn’t exclusive to capitalism either. Socialist societies also need productivity and profit, because they need to reinvest.)
The problem of a vengeful god who demands the slaughter of infidels lies not in his existence or nonexistence, but peoples' belief in such a god.
Similarly, it does not matter whether AI works or it doesn't. It's irrelevant how good it actually is. What matters is whether people "believe" in it.
AI is not a technology, it's an ideology.
Given time it will fulfil its own prophecy as "we who believe" steer the world toward that.
That's what's changing now. It's in the air.
The ruling classes (those who own capital and industry) are looking at this. The workers are looking too. Both of them see a new world approaching, and actually everyone is worried. What is under attack is not the jobs of the current generation, but the value of human skill itself, for all generations to come. And, yes, it's the tail of a trajectory we have been on for a long time.
It isn't the only way computers can be. There is IA instead of AI. But intelligence amplification goes against the principles of capital at this stage. Our trajectory has been to make people dumber in service of profit.
I don’t understand this. It reminds me of the Go player who announced he was giving up the game after AlphaGo’s success. To me that’s exactly the same as saying you’re going to give up running, hiking, or walking because horses or cars are faster. That has nothing to do with human meaning, and thinking it does is making a really obvious category error.
Nevertheless, having more engineers around actually causes you to be more valuable, not less. “Taking your job” isn’t a thing; the Fed chairman is the only thing in our economy that can do that.
As a crude analogy, there are a lot of great free or low-cost tools to create websites that didn't exist 15 years ago, and they can easily replace what would have been a much more expensive web-developer contract back then. And yet, in those 15 years, the "size of the web pot" has increased enough that I don't think many professional web developers are worried about site-builder tools threatening the entire industry. There seem to be a lot more web developers now than there were 15 years ago, and they seem to be paid as well as or better than they were then. And again, that doesn't mean that certain individuals or firms didn't on occasion experience financial hardship due to pressure from cheaper alternatives, and I don't want to minimize that. It just seems like the industry is still thriving.
To be clear, I really have no idea if this will turn out to be true. I also have no idea if this same thing might happen in other fields like art, music, writing, etc.
Do you have a source for that? It doesn't match my experience, unless your definition of maintenance is really broad.
Maybe you are better at CS than music and therefore perceive it as easy and the other one as hard.
To be perfectly honest, I absolutely love that particular attempt by artists, because it will likely force 'some' restrictions on how AI is used and maybe even limit the amount of 'black-boxiness' it entails (disclosure of the model, data set used, parameters -- I might be dreaming though).
I disagree with your statement in general. HN has empathy and not just because it could affect their future world. It is a relatively big shift in tech and we should weigh it carefully.
You're like half a step away from the realization that almost everything you do today is done better, if not by AI then by someone who can do it better than you, but you still do it because you enjoy it.
Now just flip those two, almost everything you do in the future will be done better by AI if not another human.
But that doesn’t remove the fact that you enjoy it.
For example, today I want to spend my day taking photographs and trying to do stupid graphic design in After Effects. I can promise you that there are thousands of humans and even AI that can do a far better job than me at both these things. Yet I have over a terabyte of photographs and failed After Effects experiments. Do I stop enjoying it because I can’t make money from these hobbies? Do I stop enjoying it because there’s some digital artist at corporation X that can take everything I have and do it better, faster, and get paid while doing it?
No. So why would this change things if instead of a human at corporation X, it’s an AI?
> Learning technical skills like draughtsmanship is harder than learning programming because you can't just log onto a free website and start getting instant & accurate feedback on your work.
Really? I sometimes wonder what people think programming really is. Not what you describe, obviously.
On an infinite timeline humans will no longer be needed in the generation of code (we hopefully will still study and appreciate it for leisure), but I doubt we're there yet.
That said, automation is coming for all of us. The problem is not "we need to stop these AIs/robots from replacing humans." It's "we need to figure out the rules for taking care of the humans when their work is automated."
I'm a bit surprised that it got a lookup wrong, but for any other domain, describing it as a "novice" is understating the situation a lot.
Now, was Aaron Swartz (whom I view as an ultimate example of this open-source idea you cite) naive? No. Maybe he knew in his heart the greater good would outweigh anything.
But I don't think we should judge too harshly merely for falling on one side of this issue or the other. Perhaps it comes down to a debate about what creation/truth/knowledge actually are. Maybe some creators (of which artists and computer scientists are examples) view creations as something they bring into the world, not something they reveal about the world.
The same is true, by the way, for writing. So? That doesn't mean writing well is easy.
Not sure I fully understand your second point: are you implying that I don't really know what programming is?
I can't copy your GPL code. I might be able to write my own code that does the same thing.
I'm going to defend this statement in advance. A lot of software developers white knight more than they strictly have to; they claim that learning from GPL code unavoidably results in infringing reproduction of that code.
Courts, however, apply a test [1], in an attempt to determine the degree to which the idea is separable from the expression of that idea. Copyright protects particular expression, not idea, and in the case that the idea cannot be separated from the expression, the expression cannot be copyrighted. So either I'm able to produce a non-infringing expression of the idea, or the expression cannot be copyrighted, and the GPL license is redundant.
[1] https://en.wikipedia.org/wiki/Abstraction-Filtration-Compari...
All these talking points about lack of empathy for poor suffering artists have already been made a million times in those other debates. They just don't pack much of a punch anymore.
Current SOTA: https://openai.com/blog/vpt/
This is especially true for complex pieces.
If an AI could produce a world-class, totally amazing illustration or even a book, I could afterwards easily view or read it.
On the other hand, real-world software systems consist of hundreds of thousands of lines in distributed services. How would a layman really judge whether they work?
Nevertheless I also expect AI to have a big impact, since fewer engineers will be able to do much more.
The more computers and machines and institutions take that over, the fewer opportunities there are to do that, and the more doing that kind of thing feels forced, or even like an indulgence of the person providing the "service" and an imposition on those served.
Vonnegut wrote quite a bit about this phenomenon in the arts—how recording, broadcast, and mechanical reproduction vastly diminished the social and even economic value of small-time artistic talent. Uncle Bob's storytelling can't compete with Walt Disney Corporation. Grandma's piano playing stopped mattering much when we began turning on the radio instead of having sing-alongs around the upright. Nobody wants your cousin's quite good (but not excellent) sketches of them, or of any other subject—you're doing him a favor if you sit for him, and when you pretend to give a shit about the results. Aunt Gertrude's quilt-making is still kinda cool and you don't mind receiving a quilt from her, but you always feel kinda bad that she spent dozens of hours making something when you could have had a functional equivalent for perhaps $20. It's a nice gesture, and you may appreciate it, but she needed to give it more than you needed to receive it.
Meanwhile, social shifts shrink the set of people for whom any of this might even apply, for most of us. I dunno, maybe online spaces partially replace that, but most of them, especially the creative spaces, seem full of fake-feeling positivity and obligatory engagement. Not the same thing at all as meeting the actual needs or desires of another person you know.
That's the kind of thing I mean.
The areas where this isn't true are mostly ones that machines and markets are having trouble automating, so they're still expensive relative to the effort to do it yourself. Cooking's a notable one. The last part of our pre-industrial social animal to go extinct may well be meal-focused major holidays.
My point was about skill level, not specialization. Specialization is great... we can build bigger and bigger things without having to engineer/understand everything beneath us. We stand on the shoulders of giants, as they say.
And I agree, there is no one job specialization that's more valuable than another. It's contextual. If you have a legal problem, a specialized lawyer is more valuable than a specialized doctor. So yeah, I agree that if you have a cloud problem, you want a cloud engineer and not a firmware engineer. Although I should add that things like interrupts/events/synchronization and I/O operations are fairly universal computing concepts, even in the cloud world. If you're a cloud programmer and you don't know how long an operation takes / its big-O complexity, how much storage it uses / its persistence, etc., you're probably going to have some explaining to do when your company gets next month's AWS bill.
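As a minimal sketch of that last point (the function names and scenario are invented for illustration), here are two functionally identical ways to de-duplicate records whose costs diverge wildly at scale:

```python
def dedupe_quadratic(records):
    seen, out = [], []
    for r in records:        # n iterations...
        if r not in seen:    # ...each scanning a list: O(n^2) overall
            seen.append(r)
            out.append(r)
    return out

def dedupe_linear(records):
    seen, out = set(), []
    for r in records:        # n iterations...
        if r not in seen:    # ...each an O(1) set lookup: O(n) overall
            seen.add(r)
            out.append(r)
    return out
```

At a million mostly-unique records, the first version does on the order of 10^12 comparisons; run that in a pay-per-compute environment and the difference is exactly the kind that shows up on the bill.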
And yes, plumbing is useful! Someone has to hook up the stuff that needs hooking up. But which task requires more skill: designing a good water-flow valve, or hooking one up? I'd argue the person designing the valve needs to be more skilled (they certainly need more schooling). The average plumber can't design a good flow valve, while the average non-plumber can fix a leaky sink.
AI is eating unskilled / low-skill work. In the 80's production line workers were afraid of robots. Well, here we are. No more pools of typists, automated call centers handling huge volumes of people, dark factories.
It's a terrible time to be an artist if AI can clipart-compose images of the same quality much faster than you can draw by hand.
Back to the original comment: I'm merely suggesting that some programming jobs require a lot more skill than others. If software plumbing is easy, then it can and will be automated. If that were the only skill I possessed, I'd be worried about my job.
Like fusion, I just don't see general purpose AI being a thing in my lifetime. For highly skilled programmers, it's going to be a lot longer before they're replaced.
Welcome to our digital future. It's very stressful for the average skilled human.
It's obvious how an expert at regurgitating StackOverflow would be able to correct an NPE or an off-by-one error when given the exact line of code that error is on. Going any deeper, and actually being able to find a bug, requires understanding of the codebase as a whole and the ability to map the code to what the code actually does in real life. GPT has shown none of this.
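To illustrate the distinction with a hypothetical example: the first kind of fix is purely local, while the second requires context the model never sees.

```python
# The traceback points at the return line with
# "AttributeError: 'NoneType' object has no attribute 'name'".
# The mechanical repair -- Python's version of the NPE fix -- needs
# no knowledge of the rest of the system:

def display_name(user):
    if user is None:              # added guard: the "local" fix
        return "anonymous"
    return user.name.title()

# What the guard does NOT answer: why was user None in the first place?
# Answering that means tracing how the surrounding codebase creates and
# passes users around -- the whole-system understanding at issue here.
```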
"But it will get better over time" arguments fail for this because the thing that's needed is a fundamentally new ability, not just "the same but better." Understanding a codebase is a different thing from regurgitating StackOverflow. It's the same thing as saying in 1980, "We have bipedal robots that can hobble, so if we just improve on that enough we'll eventually have bipedal robots that beat humans at football."
If art for you is primarily centered on fidelity of implementation (i.e. "craft") then you will be very threatened by AI, particularly if you've made it your livelihood. However, if your art is more about communication/concepts, then you might even feel empowered by having such a toolset and not having to slog through a bunch of rote implementation when developing your ideas/projects. Not to mention that a single person will be able to achieve much much more.
I feel like it's possibly a good thing for art/humanity overall to stop conflating craft with art, because new ideas will rise above all of the AI-generated images. i.e. splashiness alone will no longer be rewarded.
In an ideal future when we all live in the Star Trek universe, none of it will matter and whoever loves crafting stuff can do it all day long. Until then of course, it's tragic and lots of people will be out of jobs.
AI Mickey Mouse is a possible copyright as well as trademark violation which would likely be enforced in the exact same way if you were to hand draw it. This type of violation is not AI specific.
The main threat that AI poses is not that it outputs copyrighted characters, but that it outputs brand new works that are either totally new (the idea has never been drawn before, but the style is derived) or different enough from a known character to be considered derived works.
Another way to put it: artists' current job is not to draw Mickey. It is to draw new works, which is the part AI is threatening to replace. Sure, Disney may chase the AI companies to remove Mickey from the training set, and then we lose AI Mickey. That doesn't solve any problem, because there are no artist jobs that consist of drawing Mickey.
Even in the case of extreme success, where it becomes illegal to train on a copyrighted image without explicit consent, the AI problem doesn't go away. They'll just use public domain images. Or sneak in consent without you knowing it, as was the case with your "free and unlimited" Google Photos.
Finally, if there's any player interested in AI art, it has to be Disney. Imagine the insane productivity gains they can make. It's not reasonable to expect that they would fight AI art very hard. Maybe a little, for the optics.
No actually, that's not how that works. You're demonstrating the lack of empathy that the parent comment brings up as alarming.
Regarding programming, code that's only 95% right can just be run through code assist to fix everything.
It might in a few areas, though. I think film making is poised to get really weird, for instance, possibly in some interesting and not-terrible ways compared with what we're used to. That's mostly because automation might replace entire teams that had to spend thousands of hours before anyone could see the finished work or pay for it, not just a few hours of one or two artists' time on a more incremental basis. And even that's not quite a revolution -- we used to have very-small-crew films, including tons that were big hits, and films with credits lists like the average Summer blockbuster these days were unheard of. So that's more a return to how things were before computer graphics entered the picture. (Even 70s and 80s films, after the advent of the spectacle- and FX-heavy Summer blockbuster, had crews so small that it's almost hard to believe, when you're used to seeing the list of hundreds of people who work on, say, a Marvel film.)
In the sense that art is a 2D visual representation of something, or a marketing tool that evokes a biological response in the viewer, art is easy to automate away. This is no different than when the camera replaced portraitists. We've just invented a camera that shows us things that don't exist.
In the sense that art is human expression, nobody has even tried to automate that yet and I've seen no evidence that expressionary artists are threatened.
a) the panic is entirely misguided and based on two wrong assumptions. The first is that textual input and treating the model as a function (command in -> result out) are sufficient for anything. No, this is a fundamentally deficient way to give artistic directions, which is further handicapped by primitive models and weak compute. Text alone is a toy; the field will just become more and more complex and technically involved, just like 3D CGI did, because if you don't use every trick available, you're missing out. The second wrong assumption is that it's going to replace anyone, instead of making many people re-learn a new tool and produce what was previously unfeasible due to the amount of mechanistic work involved. This second assumption stems from the fundamental misunderstanding of the value artists provide, which is conceptualization, even in a seemingly routine job.
b) the panic is entirely blown out of proportion by social media. Most people have neither the time nor the desire to actually dive into this tech and find out what works and what doesn't. They just believe that a magical machine steals their work to replace them, because that's what everyone reposts endlessly on Twitter.
If the actual business costs are less than the price of a team of developers... welp, it was fun while it lasted.
I have almost the exact opposite opinion: greenfield is where AI is going to shine.
Maintenance is riddled with "gotcha's", business context, and legacy issues that were all handled and negotiated over outside of the development workflow.
By contrast, AI can pretty easily generate a new file based on some form of input.
Also kinda curious how you deal with people that have disabilities and can’t exactly fight to survive. Me, I’m practically blind without glasses/contacts, so I’ll not be taking life lessons from the local mountain lion, thanks.
I am wondering why you define being in terms of having. Is that a slip, or is that related to this:
> I want to just push that further and subjugate nature with automation that can feed us and manufacture worthless plastic and metal media consumption devices for us.
Because I can hear sadness in these words. I think we can feel thankful for having the opportunity to observe beauty and the universe and feel belonging to where we are and with who we are. Those free smartphones are not going to substitute that.
I do not mean we have to work because it is our fate or something like that.
> Your diatribe about not caring about humans is ironic.
A pity you feel that way. Maybe you interpreted "If you don't care about humans" as literally you, whereas I meant it as "If one doesn't care".
What I meant was the assumption you seem to make that when a few control plenty of production means without needing the other 'human resources' anymore, those few will spontaneously share their wealth with the world, so the others can have free smartphones and a life of consumption. Instead, those others will have to double down and start to compete with increasingly cheaper robots.
----
The pact in that old story I was talking about deals with the idea that we as humans know how to be evil. In the story, the consequence is that those first people had to leave paradise and from then on have to work for their survival.
I just mentioned it because of the fact that we exploit not only nature but other humans too, if we are evil enough. The people who end up controlling the largest amounts of wealth are usually the most ruthless. That's why we need rules.
----
> I guess people with disabilities that aren’t able to work just aren’t human? Should we let them starve to death since they can’t work a 9-5 and work for their food?
On the contrary, I think I have been misunderstood. :)
Do you people think art is relegated to digital images only? No video? No paintings, sculptures, mixed media, performance art, lighting, woodwork, etc., etc.? How is it that everyone seems to ignore the massive leaps still required in AI and robotics to match the technical ability of 99% of artists?
Wow, yes. This is exactly what I've been thinking but you summed it up more eloquently.
I like my ideal world a lot better.
It might take away the joy of programming, feeling of ownership and accomplishment.
People who today complain about having to program a bunch of API calls might be in for a rude awakening, tending and debugging the piles of chatbot output that got mashed together. Or do we expect that in the future we will suddenly value quality over speed or #features?
I love coaching juniors. These are humans, I can help them with their struggles and teach them. I try to understand them, we share experiences in life. We laugh. We find meaning by being with each other on this lonely, beautiful planet in the universe.
---
Please do not take offense: observe the language in which we are already conflating human beings with bots. If we do it already now, we will collectively do it in the future.
We are not prepared.
It has been fascinating to watch “copyright infringement is not theft” morph into “actually yes it’s stealing” over the last few years.
It used to be incredibly rare to find copyright maximalists on HackerNews, but with GitHub Co-pilot and StableDiffusion it seems to have created a new generation of them.
In the EU, UK, Japan and Singapore, it is explicitly legal to train AI on copyrighted work. I saw another comment say that AI companies train in those countries.
Something being currently legal and possible doesn’t mean being morally right.
Technology enables things and sometimes the change is qualitatively different.
There's been huge improvements in automating maintenance, and yet I've never once heard someone blame a layoff on e.g. clang-rename (which has probably made me 100x more productive at refactoring compared to doing it manually.)
I'd even say your conclusion is exactly backwards. The implicit assumption is that there's a fixed amount of engineering work to do, so any automation means fewer engineers. In reality there is no such constraint. Firms hire when the marginal benefit of an engineer is larger than the cost. Automation increases productivity, causing firms to hire more, not less.
If artists I employ want to incorporate this stuff into their workflow, that sounds great. They can get more done. There won't be fewer artists on payroll; just more and better art will be produced. I don't even think it is at the point of being incorporated into a workflow yet, though, so this really seems like a nothing burger to me.
At least GitHub Copilot is useful. This stuff is really not useful in a professional context, and the idea that it is going to take artists' jobs really doesn't make any sense to me. I mean, if there aren't any artists, then who exactly do I have using these AI tools to make new designs? If you think the answer is just some intern, then you really don't know what you're talking about.
Personally, I think "copyright infringement is not theft" but I also think that using artists' work without their permission for profit is never OK, and that's what's happening here.
I am in, but just wanted to let you know that many had this idea before. People in the past thought we would barely work at all by now. What they got wrong is that productivity gains didn't reach the common man: the gains were partly lost to mass consumption, fueled by advertising, and to wealth concentration. Instead, people at the bottom of the pyramid have to work harder.
> I like my ideal world a lot better.
Me too, without being consumption-oriented though. Nonetheless, people who turn a blind eye to the weaknesses of humankind often run into unpleasant surprises. It requires work, lots of work.
It's not possible for training an AI using data that was obtained legally to be copyright infringement. This is what I was talking about regarding transmission. Copyright provides a legal means for a rights holder to limit the creation of a copy of their image in order to be transmitted to me. If a rights holder has placed their image on the internet for me to view, then copyright does not provide them a means to restrict how I choose to consume that image.
The AI may or may not create outputs that can be considered derivative works, or contain characters protected by copyright.
You seem to be making an argument that we should be changing this somehow. I suppose I'll say "maybe". But it is apparent to me that many people don't know how intellectual property works.
Yes, artists can also utilize AI as a photoshop filter, and some artists have started using it to fill in backgrounds in drawings, etc. Inpainting can also be used to do unimportant textures for 3d models. But that doesn't mean that AI art is no threat to artists' livelihoods, especially for scenarios like "I need a dozen illustrations to go with these articles" where quality isn't so important to the commissioner that they are willing to spend an extra few hundred bucks instead of spending 15 minutes in midjourney or stable diffusion.
As long as these networks continue being trained on artists' work without permission or compensation, they will continue to improve in output quality and muscle the actual artists out of work.
The confusion is that “copyright infringement is not theft” really was about being against corporate abuse of individuals. It's still the same situation here.
In art these parts are often overlooked, but they are significant nonetheless. E.g. getting the proportions right is an objective metric, and it is really off-putting when it is wrong.
And in programming the "art" parts are often overlooked, which is precisely why I feel that most software today is horrible. It is just made to barely "work", getting the technical parts right up to spec, and that's it. Beyond that nobody cares about resource efficiency, performance, security, maintainability, let alone elegance.
This will closely mimic van Gogh's style but nobody cares because style cannot be copyrighted in itself. So it draws a robot owl, which for the sake of this example, is a new character.
Zero copyright violations.
My point remains that AI users aren't going to aim for output that directly looks like an existing character. These artists are now intentionally doing that for the sake of the protest but this is not how AI is used. It's used to create new works or far-derived works.
This isn't the only option though? You could restrict it to data where permission has been acquired, and many people would probably grant permission for free or for a small fee. Lots of stuff already exists in the public domain.
What ML people seem to want is the ability to just scoop up a billion images off the net with a spider and then feed it into their network, utilizing the unpaid labor of thousands-to-millions for free and turning it into profit. That is transparently unfair, I think. If you're going to enrich yourself, you should also enrich the people who made your success possible.
But you have to actually go through the trouble of getting permission or verifying public-domain status, and that's apparently Too Hard.
> A small amount of actual artists
It's extremely funny that you say this, because taking a look at the Trending on Artstation page tells a different story.
I think we are talking about a different job. I mentioned it somewhere else, but strapping together piles of bot generated code and having to debug that will feel more like a burden for most I fear.
If a programmer wanted to operate on a level where "value delivered" and "impact" are the most critical criteria for job satisfaction, one would be better off in a product management or even project management role. A good programmer will care a lot about her product, but she still might derive the most joy from building it mostly herself.
I think that most passionate programmers want to build something by themselves. If API mashups are already not fun enough for them, I doubt that herding a bunch of code generators will bring that spark of joy.
How is training AI on imagery from the internet without permission different than decades of film and game artists borrowing H. R. Giger's style for alien technology?[1]
How is it different from decades of professional and amateur artists using the characteristic big-eyed manga/anime look without getting permission from Osamu Tezuka?
Copyright law doesn't cover general "style". Try to imagine the minefield that would exist if it were changed to work that way.
[1] No, I don't mean Alien, or other works that actually involved Giger himself.
Because those things, while dumb and simple, are not continuous in the way that visual art is. Subtle perturbations to a piece of visual art stay subtle. There is room for error. By contrast, subtle changes to source code can have drastic implications for the output of a program. In some domains this might be tolerable, but in any domain where you're dealing with significant sums of money, it won't be.
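As a minimal sketch of that discontinuity (the snippet and its variable names are purely illustrative, not from any real project): one missing character turns a comparison into an assignment, and the program's behavior changes completely.

#include <stdio.h>

int main(void) {
    int retries = 0;

    if (retries == 0)  /* compares retries with 0: the branch runs */
        printf("no retries yet\n");

    if (retries = 0)   /* one '=' short: assigns 0, so the branch is dead */
        printf("this can never print\n");

    return 0;
}

A pixel-sized edit to a painting is still recognizably the same painting; a character-sized edit to this program silently deletes a code path.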
Will human artists be able to compete with artificial artists commercially? If not, is that bad or is it progress, like Photoshop or Autotune?
The political/social factors which apply to the life-and-death decisions made driving a car, don't apply to whether one of the websites I work on works perfectly.
I'm 35, and I've paid to write code for about 15 years. To be honest, ChatGPT probably writes better code than I did at my first paid internship. It's got a ways to go to catch up with even a junior developer in my opinion, but it's only a matter of time.
And how much time? The expectation in the US is that my career will last until I'm 65ish. That's 30 years from now. Tesla has only been around 19 years and now makes self-driving cars.
So yeah, I'm not immediately worried that I'm going to lose my job to ChatGPT in the next year, but I am quite confident that my role will either cease existing or drastically change because of AI before the end of my career. The idea that we won't see AI replacing professional coders in the next 30 years strains credulity.
Luckily for me, I've already considered some career changes I'd want to make even if I weren't forced to by AI. But if folks my age were planning to finish out their careers in this field, they should come up with an alternative plan. And people just starting in this field are already in direct competition to stay ahead of AI.
Are there any documented cases where copyright law didn't seem to offer sufficient protection against something that really did seem like copyright infringement but done using AI tooling? I started looking for some a few weeks ago because of this debate and still haven't seen anything conclusive.
Take video games, for example. They distracted many people from movies, but they also created a huge new field, hungry for talent. Or another example: quite a few genres calcified into distinctive, boring styles over the years (see anything related to manga/anime) simply because those styles require less mechanical work and are cheaper to produce. They could use a deep refresh. This tech will also lead to novel applications, created by those who embraced it and are willing to learn the increasingly complex toolset. That's what's been happening over the last several decades, which have seen several tech revolutions.
>As long as these networks continue being trained on artists' work
This misses the point. The real power of those things is not in the collection of styles baked into it. It's in the ability to learn new stuff. Finetuning and style transfer is what all the wizards do. Construct your own visual style by hand, make it produce more of that. And that's not just about static 2D images; neither do 2D illustrators represent all artists in the broad sense. Everyone who types "blah blah in the style of Ilya Kuvshinov" or is using img2img or whatever is just missing out, because the same stuff is going to be everywhere real soon.
I was the first photographer I knew of that combined astrophotography with wedding portraiture. That was new. Now lots of people do it - far better than me (I rarely get the chance)!
I’m a small fry so they almost assuredly didn’t get the idea from me, before anyone says I claim otherwise. There were probably a few photographers who thought to do it and now everybody has seen it and emulates it. The true artists put just a little spin on it, from which others will learn. So it goes.
1. This is theft and that's bad.
2. People who do this are getting gains without putting in the work, and that's bad. (And, per quite a few commenters I've seen, are talentless hacks.)
I have a lot of empathy for the first, and think it has merit, and have a much smaller amount of empathy for the second.
I ended up reading a lot of the quote tweets on this guy the other day: https://twitter.com/ammaar/status/1601284293363261441/retwee...
Here's just a few of thousands in the vein of number 2:
> No talent or passion whatsoever
> He thinks he created something
> Why don't you subscribe to writing and art classes?
> This so ugly and shows real disrespect for people who have made stuff by themselves for years.
> Men will literally sell AI trash and call it "art" instead of go to therapy
> Can’t write or draw but wants to do both
> This is nothing but a HUGE disrespect to all the writers and artists around the world, and all it does is belittle their REAL work and effort. > > This is not art. > Nothing to be proud of.
> I just spent 8 months illustrating a children’s book by hand—working, not “playing”—after a lifetime of training. > > FUCK OFF!
There are also plenty of people complaining about "theft", but honestly, re-reading through it now, they feel like a minority. If this were done using fully public-domain content, would any of the people I quoted above have been okay with it?
There's a clear disdain for "non-artists" creating art in a new way. I very much feel for the people who see their careers going away, and I can also empathize with people who spent a long time acquiring a creative skill that's now "unnecessary". Programming has this too—those darn kids programming in Python rather than Assembly, or doing bootcamps that don't teach big-O notation. This is a normal, human way to feel, and I feel it too from time to time. BUT, I also resist that feeling. I choose not to express disdain for newcomers using new technology, or skipping the old ways.
A large (or at least loud) part of the art community seen here is expressing absolute disdain for those of us who are "cheating", not because of "copyright infringement" but because we're using new technology that bypasses years of learning, and that's very much eating into my empathy for the community in general. I find it toxic in the programming community and I find it toxic in the art community. Right now, it's exploding in the art community in a way far beyond what I've witnessed in programming.
I fail to see what knowing about interrupts says about the skill level of someone working on the web, or what knowing about devops, integrations, or React says about a firmware engineer.
> Although I should add that things like interrupts/events/synchronization and I/O operations are fairly universal computing concepts even in the cloud world
Not really. I/O has nothing to do with the cloud, and likewise interrupts. Those remain buried way, way down in the hardware that runs the cloud, at a level that not even datacenter engineers reach.
> If you're a cloud programmer and you don't know how long an operation takes / its big-O complexity
That still has nothing to do with interrupts or hardware I/O.
Try telling one of the programmers to produce a work of art based on a review of all of the works that went into training the models and see how it works out.
Those engineers consented to creating the new tools so that's different
If people can't differentiate between computer and human generated art, wouldn't that be the definition of being replaceable?
And ironically, the overwhelming majority of the knowledge these models use to produce pictures that superficially look like their work (usually not at all) is not coming from any artworks at all. It's as simple as that. They are mostly trained on photos, which constitute the bulk of the models' knowledge about the real world. They are the main source of coherency. Artist names and keywords like "trending on artstation" are just easily discoverable and very rough handles for pieces of the memory of the models.
Several people, including myself, have been fighting AI IP theft for years. I filed the first BBB complaint, in fact...
But the fact that the human looked at a bunch of Mickey Mouse pictures and gained the ability to draw Mickey Mouse does not infringe copyright because that's just potential inside their brain.
I don't think the potential inside a learning model should infringe copyright either. It's a matter of how it's used.
No one WANTED to pay artists to begin with if they could bring their own ideas to life. Artists should realize they were just a necessary evil to many other people's creative endeavors.
Every single human being alive has the creative spark.
That same work=survival idea is what incentivizes competitiveness and of course, under that construct, some humans will put on their competitive goggles and exploit others.
There are a lot of human constructs that need to fade away before we can get to a fully automated world. But that’s okay. Humans aren’t the type to get stuck on a problem forever.
Roads are extremely regular, as things go, and as soon as you are off the beaten path, those AIs start having trouble too.
It seems that in general that the long tail will be problematic for a while yet.
Honestly, I think “learn to code” is mostly used sarcastically?
It may be a significant chunk of the butt-in-seat time under our archaic 40-hour/week paradigm, but it's not a significant chunk of the programmer's actual mental effort. You're not going to be able to get people to work 5x more intensely by automating the boring stuff; that was never the limiting factor.
It's almost like the real problem is asymmetry and abuse of power.
This is like saying that photoshop is going to put all the artists out of work because one artist can now do the work of a team of people drawing by hand. So far these AIs are just tools. Tools help humans to produce more and the economy keeps chugging ever upwards.
There is no upper limit of how much art we need. Marvel movies and videogames will just keep looking better and better as our artists increase their capabilities using AI tools to assist them.
Daz3d didn't put modelers and artists out of work, and what Daz and iClone can do is way, way more impressive (and useful in a professional setting) than AI art.
I actually think we will. People are starting to realise where slapping together crap that works 80% of the time gets us, and starting to have second thoughts. If and when we reach a world where leaking people's personal information costs serious money (and the EU in particular is lumbering towards that), the whole way we do programming will change.
Twitch pulls in multiple billions of dollars of revenue from video game streaming, which hasn’t been tested in court and may very well be copyright infringement. People regularly pirate games, movies, television shows, music, books, software, research papers, etc.
I believe that the culture benefits tremendously from this. My question is, why should we draw the line exactly here, at AI generated images, code, and writing?
Happiness needs loss, fulfillment, pain, hunger, boredom, and fear, and they need to be experiences backed up by both chemical feelings and memory, and they have to be true.
But here's the thing, already the damage is done beyond just some art. I don't mean to diminish art, but frankly, look at how hostile, ugly and inhuman the world outside is in any regular city. Literal death worlds in fantasy 40k settings look more homey, comfortable, fulfilling, and human.
The poor are economically better off than at almost any point in history; actual food poverty is almost unknown, objectively people are living in better houses than ever before, and so on. It just doesn't seem like any of that makes poor people any happier or poverty any less wretched, somehow.
Can SD create artistic renderings without actual art being incorporated? Just from photos alone? I don't believe so, unless someone shows me evidence to the contrary.
Hence, SD necessitates having artwork in its training corpus in order to emulate style, no matter how little it's represented in the training data.
I think people will not stop forming social hierarchies, and so competition remains a sticky trait.
> work=survival idea is what incentivizes competitiveness
True, the idea that you can do better than the Joneses through hard work is alluring. Having a job is now a requirement for being worthy, and the kind of job defines your social position. Compare with the days of nobility, though, where noblemen had everything but a job ("what is a weekend?").
Is it though? What if I were to look at your art style and replicate that style manually in my own works? I see no difference whether it's done by a machine or done by hand. The reality is that all art is a derivative of some other art. Interestingly, the music industry has been doing this for years. Ever since samplers became a thing, musicians have spliced and diced loops into their own tracks for donkey's years, creating an explosion of new genres and sounds. Hip-hop, techno, dark ambient, EDM, ..., all fall into the same category. Machine learning is just another new tool for creating something.
It's not that uncommon for professional programmers to be pro-level musical soloists on the side, or for retired programmers to play top-level music. The reverse is far less common. I do think that says something.
> Anything as competitive as an artistic field will always result in amounts of mastery needed at the top level that are barely noticeable to outside observers.
Sure. Top-level artistic fields are well into the diminishing returns level, whereas programming is still at the level where even a lot of professional programmers are not just bad, but obviously bad in a way that even non-programmers can understand.
Even in the easiest fields, you can always find something to compete on (e.g. the existence of serious competitive rubik's cube doesn't mean solving a rubik's cube is hard). A difficult field is one where the difference between the top and the middle is obvious to an outsider.
Get a grip, me old fruit. You've basically described "growing up". The world is a pretty wild place and you need to find your niche or not (rinse/repeat). You are not a failed artist at all. You probed at something, "had a dabble" if you like, and it didn't work out. Never mind. Move on and try something else, but keep your interest in mind.
There are loads of professions that I'd like to have done, but as it turns out I'm me and that's who I am. Personally speaking, I'm the MD of a little IT firm in the UK who can fiddle up a decent 3-2-1 concrete mix and do fairly decent first- and second-fix woodwork. I studied Civ Eng.
"The lack of empathy" - really?
If you fancy your chances as an artist then go for it. At worst you will fulfill your ambition and create some daubs. At best, you will traverse reality and be a wealthy living artist.
Just do it.
This is the first wave of half decent AI.
But more importantly, you are vastly underestimating the millions of small jobs out there that artists use as a stepping stone.
Think of the millions of managers who would happily be presented with a choice of 10 artistic interpretations, and pick one for the sake of getting a quick job done.
No way on earth this isn't going to make a major impact. Empathy absolutely required.
Personally, I'm all for AI training and using human artwork. I think telling it not to prevents progress/innovation, and that innovation is going to happen somewhere.
If it happens somewhere, humans who live in that somewhere will just use those tools to launder the AI-generated artwork, and companies will hire those offshore humans and reap the benefits, all the while, the effect on local artists' wages is even more negative because now they don't have access to the tools to compete in this ar(tificial intelligence)ms race.
Most people do not understand the purpose of copyright. Copyright is a bargain between society and the creator. The creator receives limited protection of the work for a limited time. Why is this the deal?
The purpose of copyright is to advance the progress of science and the useful arts. It is to benefit humanity as a whole.
AI takes nothing more than an idea. It does not take a “creative expression fixed in a tangible media”.
Disclaimer: I am not an IP lawyer and have roughly no idea what I'm talking about.
And yes, it will transform art completely: initially by lowering the barrier to producing quality art, and then by raising the bar in terms of quality. It's coming for every artistic field: 3D, film, music, etc.
If you want a career in these fields, you will need to ride this AI wave from the get-go. But even that career will eventually succumb to automation; this is the inevitable end point. As an example, eventually you will be able to give a brief synopsis to an AI and it will flesh that out into a full movie with the actors you choose.
Style transfer combined with the overall coherency of pre-trained models is the real power of these. "Country house in the style of Picasso" is generally not how you use this at full power, because "Picasso" is a poor descriptor for particular memory coordinates. You type "Country house" (a generic descriptor it knows very well) and provide your own embedding or any kind of finetuned addon to precisely lean the result towards the desired style, whether constructed by you or anyone else.
So, if anyone believes that this thing would drive the artists out of their jobs, then removing their works from the training set will change very little as it will still be able to generate anything given a few examples, on a consumer GPU. And that's only the current generation of such models and tools. (which admittedly doesn't pass the quality/controllability threshold required for serious work, just yet)
As a firmware/app guy I'm not qualified on talking about relative skill sets between different areas of cloud development. I agree that interrupts/threads aren't important at all to the person writing a web interface, should have found a better example. I'm not here to argue, for sure there are talented people up and down the stack.
What I can tell you is that I'm amazed at the mistakes I see this new generation of junior programmers making, the kind of stuff indicating they have little understanding of how computers actually work.
As an example, I continue to run into young devs that don't have any idea of what numeric over/underflow is. We do a lot of IoT and edge computing, so ranges/limits/size of the data being passed around matters a lot. Attempting to explain the concept reveals that a great many of them have no mental concept of how a computer even holds a number (let alone different variable sizes, types, signed/unsigned etc). When you explain that variables are a fixed size and don't have unlimited range, it's a revelation to many of them.
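For readers following along, here's a minimal sketch of the concept (the types and values are my own illustrative assumptions, not from any particular codebase):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* uint8_t holds 0..255; unsigned arithmetic past 255 silently wraps */
    uint8_t battery_pct = 250;
    battery_pct += 10;                        /* 260 wraps around to 4 */
    printf("battery_pct = %u\n", (unsigned)battery_pct);

    /* signed overflow is worse: it's undefined behavior in C,
       so widen before the math and range-check instead */
    int16_t sensor = INT16_MAX;               /* 32767 */
    int32_t next = (int32_t)sensor + 1;       /* safe in 32 bits */
    if (next > INT16_MAX)
        next = INT16_MAX;                     /* clamp instead of wrapping */
    printf("sensor = %d\n", (int)next);

    return 0;
}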
Sometimes they'll argue that this stuff doesn't matter, even as you're showing them the error in their code. They feel the problem is that the other devs built it wrong, chose the wrong language or tool for the problem at hand, etc. We had a dev (who wrote test scripts) who would argue with his boss that everyone (including the app and firmware teams) should ditch their languages and write everything in Python, where mistakes can't be made. He was dead serious, and ended up quitting out of frustration. I'm sure that was a personality problem, but still, the lack of basic understanding astounded us, and the phrase "knows enough to be dangerous" comes to mind.
I find it strange that there is a new type of programmer that knows very little about how computers actually work. I find it stranger that they are even a bit productive in their careers, although I suspect it's because the problem domains they work in are much more tolerant of these kinds of errors. The CI/CD system is set up to catch/fix their problems, and hence the job positions can tolerate what used to be considered a below-average programmer. Efficient? No. Good enough? Sure.
I suspect some of these positions can be automated before the others can.
This is not intrinsic, though. It is a cultural imperative, so perhaps we need to revisit that?
We don’t need to “try to imagine”, we just need to wait a bit and watch Walt’s reanimated corpse and army of undead lawyers come out swinging for those “mice in the general style of Mickey Mouse”.
With AI art gradually improving, I think that line of reasoning will convince fewer and fewer people who would otherwise have second thoughts. They would spend a couple of hours on Midjourney and decide that's as far as they want to take their "art" hobby. The power of instant gratification will convince many faster than spending hundreds of hours honing a craft would.
I think in the future a lot of people's gut reaction to failing as a manual artist will be to retreat to Midjourney or similar to satisfy their remaining desire to have creative work they can call their own instead of trying again. I personally find the near-instant feedback loop very addicting, and I think it will have a similar effect to social platforms in normalizing a desire for quick results over the patience needed to hone a craft.
But as opposed to scrolling newsfeeds for hours, at least the user obtains a creative output through generative art, and it doesn't carry the same type of guilt for me. This kind of thing is unprecedented and I don't look forward to how it will polarize the various communities involved in the coming years.
You can make sure that the people whose jobs were taken by an AI are able to live off its proceeds. We all benefit and make progress.
Ok, so now many more people can generate cool-looking photos in an automatic fashion. So what? It just means we've raised the bar… for what can be considered cool.
Think of the 80/20 model: if it gets you 80% there (don't take that literally), then that's huge in and of itself. This tool is getting us closer to the example you mention, and that in and of itself is really cool.
I wonder if the nerds have shot themselves in the foot here with terminology? I suspect the nerd’s lawyers would have been much happier if the entire field was named “automated mechanical creativity” instead of “artificial intelligence”. It’d be kinda amusing to see the whole field of study lose in court because of their own persistent claims that what they’re doing is not just “creating in a mechanical fashion” but creating “intelligence” which can therefore be held to account for copyright infringement. Shades of Al Capone getting busted for taxes…
Also, should a human artist creating a pastiche count as copyright infringement as well?
Humans have my sympathy. We are literally at the brink of multiple major industries being wiped out. What was only theoretical for the last 10-15 years is starting to happen right now.
In a few short years, most humans will not be able to find any employment because machines will be more efficient and cheaper. Society will transform beyond any previous transformation in history. Most likely it's going to be very rough. But we just argue that of course our specific jobs are going to stay.
You're in for a rude awakening when you get laid off and replaced with a bot that creates garbage code that is slow and buggy but works, so the boss gets to save on your salary. "But it's slow, redundant, looks like it was made by someone who just copy-pasted endlessly from stackoverflow" but your boss won't care; he just needs to make a buck.
Now imagine that automation in food and expand it to everything. A table factory wouldn’t purchase wood from another company. There’s automation to extract wood from trees and the table factory just requests it and automation produces a table. With robots at every step of the process, there are no labor costs. There’s no shift manager, there’s no CEO with a CEO salary, there’s no table factory worker spending 12+ hours a day drilling a table leg to a table for $3 an hour in China.
That former factory worker in China is instead pursuing their passions in life.
Other architectures exist, but you can notice from the lack of people talking about them that they don't produce any output nearly as developed as the ChatGPT kind. They will get there eventually, but that's not what we are seeing here.
That is only because the vast majority of computer programming that is done is not very good.
Some artists just do the descriptive part though, right? The name I can think of is Sol LeWitt, but I'm sure there are others. A lot of it looks like it could be programmed, but might be tricky.
Essentially we are going to get away from market economy, money, private property. The problem is that once these things go personal freedom goes as well. So either accept the inevitable totalitarian society, or something else? But what?
My primary empathy is with end users, who could be empowered with AI-based tools to express their dreams and create graphics or software without need to pay professional artists or programmers.
One can see AI tools as progress here while also recognising that this is likely to have a huge impact on a lot of lives.
At the same time I recognise that this is a massive threat to artists, both low-visibility folks who throw out concepts and logos for companies, and people who may sell their art to the public. Because I can spend a couple of dollars and half an hour to come up with an image I’d be happy to put on my wall.
I’m not sure what the answer is here, but I don’t think a sort of “human origin art” Puritanism is going to hold back the flood, though it may secure a niche like handmade craft goods and organic food…
I have no idea how well it holds up to modern reading, but I found it interesting at the time.
He posits two outcomes - in the fictionalised US the ownership class owns more and more of everything, because automation and intelligence remove the need for workers and even most technicians over time. Everyone else is basically a prisoner given the minimum needed to maintain life.
Or we can become “socialist” in a sort of techno-utopian way, realising that the economy and our laws should work for us and that a post-labor society should be one in which humans are free from dependence on work rather than defined by it.
Does this latter one imply a total lack of freedom? It certainly implies dependence on the state, but for most people (more or less by definition) an equal share would be a better share than they can get now, and they would be free to pursue art or learning or just leisure.
Twenty years down the pike I've gotten pretty solid at programming, certainly not genius-level but competent.
I agree strongly that making art anyone cares about is massively harder than being a competent programmer. In both you need strong technical abilities to be effective, but intuition and a deep grasp of human psychology are really crucial in art - almost table stakes.
Because software dev is usually practical, a craft, you can get paid decently with far less brilliance and fire than will suffice to make an artist profitable.
...though perhaps the DNN code assist tools will change that soon.
As the price of a bit dropped, the quality of the comms dropped. It is inevitable that the price of the creation of (crappy) art will do the same thing, if only because it will drag down the average.
> As an example, I continue to run into young devs that don't have any idea of what numeric over/underflow is
That doesn't happen in web application development either. You don't write code at a level low enough to cause an overflow or underflow. There are a zillion layers between your code and anything that could cause an overflow.
> they have little understanding of how computers actually work.
'The computer' has been abstracted away at the level of the Internet. Not even the experts who attend to datacenters would ever pass near anything related to a numeric overflow. That stuff is hidden deep inside the hardware, or deep inside the software stack near the OS level in any given system. If anything causes an overflow in such a machine, they would replace that machine instead of debugging it. It's the hardware manufacturers' and OS developers' responsibility to do that. No company that does cloud or develops apps on the Internet would need to know about interrupts, numeric overflows, and whatnot.
> I find it stranger that they are even a bit productive in their careers, although I suspect it's because the problem domains they work in are much more tolerant to these kinds of errors
Interrupt errors don't happen in web development. You have no idea of the level of abstraction that was built between the layers where they could happen and modern Internet apps. We are even abstracting away servers and databases at this point.
You are applying a hardware perspective to the Internet. That's not applicable.
A lot used to escape market logic, and I hope we go back to some of that. Not everything has to be profitable / a market.
Example: commons infrastructure, common grazing place for cattle, the woods.
What I wish would be pulled off the market: schools, hospitals, energy infrastructure.
It’s not a fantasy idea. I grew up with it, and it’s still working.
It’s not out of a beautiful ideal either, but sheer pragmatism.
A country will always need those things and those are important things. We might as well invest in them for the long run.
Clearly those are not hip ideas anymore. Oh well.
For me at least, Stable Diffusion has been this great tool for personal expression in a medium that was previously inaccessible to me: images. Now I could communicate with people in this new, accessible way! I've learned more about art history and techniques in the last 3 months than in my entire life up to that point.
So I came up with a few ideas about making some paintings for my mother, and children's books for my nieces and nephew. The anger I received from my artistically inclined colleagues over this saddened me greatly, so I tried to talk to more people to see if this was an anomaly. There was more anger, and arguments for censorship! I have to admit I struggled to maintain any empathy after receiving that reception.
I'm personally really excited about a future where we don't have to suffer to create art, whether it's code, an image, or music. Isn't more art and less suffering in our lives a good thing? If there are economic structures we've set up that make that a bad thing, maybe it would be fruitful to take a critical look at those.
Presently I'm looking at creating a few small B2B products out of various fine-tuned public AI models. The first thing I realized is that I'd be addressing niches that were just not possible to tackle before (cost, scale, latency). The second thing I noticed is I'd need to hire designers, copywriters, etc. for their judgement -- at least as quality control. So at least in my limited scope of activity, the use of AI permits me to hire creative professionals, to tackle jobs that previously employed zero creative professionals (because previously they weren't done at all, or just done very poorly, e.g. English website copy for small business in non-English-speaking developing economies).
I do feel for people that have decided that they need to retool because they feel AI threatens their job. I do that every couple of years when some new thing threatens an old thing that I do, it's a chunk of work, and not always fun. To show better empathy, I think I'm going to reach out to more artists and show them what the current AI tools can and cannot do, to help them along this path. So thank you for your post, because it gave me the idea to take this approach!
...and on the weekends, I can still write code in hand-optimized assembly, because that's the brush I love painting with.
It amounts to saying that anything that benefits me is good and anything to my detriment is bad. Sure, there's a consistency to that. However, if that's the foundation of one's positions, it leads to all manner of other logical inconsistencies and hypocrisies.
Also I'm not sure most artist jobs are middle-upper class.
What is that?
Personally, I'm new in my career, and I'd like to not have the rug pulled out from under me. If I were a student again, I would have to consider whether the university debt was going to be worth it in the long term or if I should look at a more traditional field to be in.
However, these are individual reactions, not behaviours as a community/society. If you read comments around HN or some other liberal circles, you get the feeling that our human-ness is being threatened, one of our core defining traits. It seems like "artistic creativity" is being enshrined as a circular argument (also I'm wary of calling startup-landing-page illustrators "artists" – more like craftspeople, although this distinction might hurt the conversation).
My broader point is that ChatGPT is not "the beginning of the end", but another chapter in a history of automation and replacement that will pose serious challenges for humankind. Treating it as more critical than factory automation is demeaning to blue-collar workers, and also untrue. Everything we do is what defines us as people: cherry-picking some skills is a relic of the Enlightenment we should get rid of.
> Also I'm not sure most artist jobs are middle-upper class.
I do not have any data at hand, only my circle of friends and former colleagues (I was formerly a graphic designer). Few people endure being a "starving artist" without a little financial safety coming from above. Also, it is a profession that only provides status to a certain socio-economic milieu.
But of all the examples of cheap and convenient beating quality (photography, film, music, et al., the many industries that digital technology has disrupted), newspapers are more analogous than builders. Software companies are publishers, like newspapers. And newspapers had entire building floors occupied by highly skilled mechanical typesetters, who have long since been replaced. A handful of employees on a couple of computers could do the job faster, more easily, and at good-enough quality.
Software has already disrupted everything else, eventually it would disrupt the process of making software.
We are a long political fight away from people in industries affected by AI not feeling like their livelihoods are under attack. It would be better received, at least for me, if the AI guys would admit that under the system we have they're playing with a big heaping flamethrower in a vast ocean of gasoline.
I do wonder what happens as the market for the “old way” dries up, because it implies that there is no career path to lead to doing things better - any fool (I include myself) can be an AI jockey, but without people that need the skills of average designers, from what pool will the greats spring?
Ultimately those who are able to integrate it into their creative process will be the winners. There will always be small niche for those who oppose it out of principle.
In what sense did Copilot fizzle badly? It's a tool that you incorporated into your workflow and that you pay money for.
Does it solve all programming? No, of course not, and it's far from there. I think even if improves a lot it will not be close to replacing a programmer.
But a tool that lets you write code 10x, 100x faster is a big deal. I don't think we're far away from a world in which every programmer has to use AI to be somewhat proficient in their job.
Frustratingly, most people don't fully appreciate the art, and are quite happy for artists to put in only 20% of the effort. Heck, I'm old enough to remember people who regarded Quake as "photorealistic": some in a negative way, saying this made it a terrible threat to the minds of children who might see the violence it depicted, and others in a positive way, saying it was so good that Riven should've used that engine instead of being pre-rendered.
Bugs like this are easy to fix: `x = x – 4;` which should be `x = x - 4;`
Bugs like this, much harder:
#include <string.h>  /* for strlen */

#define TOBYTE(x) (x) & 255
#define SWAP(x,y) do { x^=y; y^=x; x^=y; } while (0)

static unsigned char A[256];
static int i=0, j=0;

void init(char *passphrase) {
    int passlen = strlen(passphrase);
    for (i=0; i<256; i++)
        A[i] = i;
    for (i=0; i<256; i++) {
        j = TOBYTE(j + A[TOBYTE(i)] + passphrase[j % passlen]);
        SWAP(A[TOBYTE(i)], A[j]);
    }
    i = 0; j = 0;
}

unsigned char encrypt_one_byte(unsigned char c) {
    int k;
    i = TOBYTE(i+1);
    j = TOBYTE(j + A[i]);
    SWAP(A[i], A[j]);
    k = TOBYTE(A[i] + A[j]);
    return c ^ A[k];
}

(Spoiler, for anyone who doesn't want to stare at it: the XOR-based SWAP zeroes an array entry whenever its two arguments alias the same slot, which happens whenever the two indices collide, and the key schedule indexes the passphrase by j where standard RC4 uses i. Neither jumps out on a quick read.)

This sounds mystical and mysterious; it would be a mistake to project one mode of production as being the brand all humans must live with until we go extinct.
I think the majority won't know what hit them when the time comes. My experience with ChatGPT has been highly positive, turning me from a skeptic into a believer. It takes a bit of skill to tune the prompts, but I got it to write frontend code, backend code, unit test cases, and automation test cases, and to generate test data flawlessly. I have seen and worked with much worse developers than this current iteration.
Indeed, you should not read it as an imperative. The other commentator was also put on the wrong foot by this.
Maybe I should not have assumed people would know Genesis, https://en.wikipedia.org/wiki/Book_of_Genesis. I should be more explicit: we are not some holy creatures. Don't assume that the few who are gonna reap the rewards will spontaneously share them with others. We are able to let others suffer to gain a personal advantage.
To get good output on larger scales we're going to need a model that is hierarchical with longer term self attention.
Intellectual property generally includes copyright, patents, trademark, and trade secrets, though there are broader claims such as likeness, celebrity rights, moral rights (e.g., droit d'auteur in French/EU law), and probably a few others since I began writing this comment (the scope seems to be increasing, generally).
I suspect you intended to distinguish trademark and copyright.
Imagine all the good things that aren't done because they just don't make any money. Instead we put resources towards things that make our lives worse because they're profitable.
What search algorithms have you developed?
What non-trivial, non-Flask/Django/React, non-plugin/non-API tool, or library, or frameworks, have you written?
What actual percentage of your work output comprises computationally hard problems?
If we're talking programming, that's real programming, the kind you should be comparing 'hard' art to.
The other patterns of AI that seem to be able to arrive at novel solutions basically use a brute-force approach of predicting every outcome if they have perfect information, or a brute-force process where they try everything until they find the thing that "works". Both of those approaches seem problematic in the "real world" (though I would find convincing the argument that the billions of people all trying things act as a de facto brute-force approach in practice).
For someone to be able to do a novel implementation in a field dominated by AI might be impossible, because core foundational skills can't get developed by humans anymore, so they can't achieve heights that the AI hasn't reached yet. We are now stuck; things can't really get "better", we just maybe get iterative improvements on how the AI implements the already-arrived-at solutions.
TLDR, lets sic the AI on making a new Javascript framework and see what happens :)
Anyone finding their own artistic voice with the tools, I respect that, those people are artists - but training with the aim to create derivative models, that should be called out.
A derivative work is a creative expression based on another work that receives its own copyright protection. It's very unlikely that AI weights would be considered a creative expression, and would thus not be considered a derivative work. At this point, you probably can't copyright your AI weights.
An AI might create work that could be considered derivative if it were the creative output of a human, but it's not a human, and thus the outputs are unlikely to be considered derivative works, though they may be infringing.
My first job was to write C code for industrial machines that replaced humans doing manual work. Sometimes I even had to go watch them work so I could fully understand what they were doing.
In my second job as a developer, I wrote a Django application that automated away a whole department in the company. I saw 100 people getting fired due to a script that I wrote.
That was all happening in the third-world country where I came from. These were real people getting fired, with families that depend on them. Most of them were already in poverty even before being fired.
These artists complaining sound like a very 1st world problem to me. I doubt that anyone actually "lost a job" because of this technology so far.
This was my reply: https://news.ycombinator.com/item?id=34005604
I also agree that artist employment isn't sacred, but after extensive use of the generation tools I don't see them replacing anything but the lowest end of the industry, where they just need something to fill a space. The tools can give you something that matches a prompt, but they're only really good if you don't have strong opinions about details, which most middle tier customers will.
My probably perverse takeaway is that Barbra Streisand might have been wrong: people who need people (to appreciate their work) may not be the luckiest people in the world. One can enjoy one's accomplishments without needing to have everyone else appreciate them. Or you can find other people with similar interests, and enjoy shared appreciation.
In the extreme, the need for external validation seems to lead to people like Trump and Musk. Perhaps a shift in how we view this would be beneficial for society?
I don't mean this in a "people love work, actually", hooray-capitalism sense (LOL, god no), but the sense that humans tend to be happier and more content when they're helpful to those around them. It used to be a lot easier to provide that kind of value through creative and self-expressive efforts, than it is now. Any true need for artists and creative work (and, for the most part, craftspeople) at the scale of friend & family circles or towns or whatever, is all but completely gone.
I still stand behind my main point, which is that some of these jobs will be automated before others. Apparently the skill-set differences between different kinds of programmers are even wider than I thought. So instead of talking about whether AI will or won't automate programming in general, it's more productive to discuss which kinds of programming AI will automate first.
So AI puts artists out of a job and in some utopian vision, one day puts programmers out of a job, and nobody has jobs and that's what we should want, right, so why are you complaining about your personal suffering on the inevitable march of progress?
There is little to no worthwhile discussion from those same people about if the Puritanical worldview of work-to-live will be addressed, or how billionaires/capitalists/holders-of-the-resources respond to a world where no one has jobs, an income stream, and thus money to buy their products. Because Capitalist Realism has permeated, and we can no longer imagine a plausibly possible future that isn't increasingly technofeudalist. Welcome back to Dune?
Both personal autonomy and private property are social constructs we agree are valuable. Stealing a car and raping a person are things we've identified as unacceptable and codified into law.
And in stark contrast, intellectual property is something we've identified as being valuable to extend limited protections to in order to incentivize creative and technological development. It is not a sacred right, it's a gambit.
It's us saying, "We identify that if we have no IP protection whatsoever, many people will have no incentive to create, and nobody will ever have an incentive to share. Therefore, we will create some protection in these specific ways in order to spur on creativity and development."
There's no (or very little) ethics to it. We've created a system not out of respect for people's connections to their creations, but in order to entice them to create so we can ultimately expropriate it for society as a whole. And that system affords protection in particular ways. Any usage that is permitted by the system is not only not unethical, it is the system working.
For people caught in that kind of situation, progress sucks.
If you can't support yourself for whatever reason, you rely on others to do that work on your behalf. Social animals, wolves for example, try to provide for their sick and handicapped, but that's only after their own needs are met first.
We have physical needs just like other members of the natural world - food, for example; if we can't provide food for ourselves, we'll starve to death just like an animal. Why bother judging this situation as good or bad when it's not something that can be changed?
A few people engaged in "hand-wringing", but not deep, regular discourse on the evolving nature of what we want "tech" and "programming" to be going forward.
Despite delivering transformative social shifts, even this last decade, where is the collective reflection?
If the original is a creative expression, then recording it using some different tech is still a creative expression. I don't see the qualitative difference between a bunch of numbers that constitutes weights in a neural net, and a bunch of numbers that constitute bytes in a compressed image file, if both can be used to recreate the original with minor deviations (like compression artifacts in the latter case).
Isn't that the case in every field of technology? Way back, engineers used to know how circuits worked. Now network engineers never deal with actual circuits themselves. Way back, programmers had to do a lot of things manually. Now the underlying stack automates much of that. On top of TCP/IP we laid the WWW, then we laid web apps, then we laid CMSes, and then we reached a point where CMSes like WordPress have their own plugins, and the very INDIVIDUAL plugins themselves became fields of expertise. When looking for someone to work on a Woocommerce store, people don't look for WordPress developers, or plugin developers. They look for 'Woocommerce developers'. WP became so big that every facet of it became a specialization in itself.
Same for everything else in tech: We create a technology, which enables people to build stuff on it, then people build so much stuff that each of those became individual worlds in themselves. Then people standardize that layer and then move on to building next level up. It goes infinitely upwards.
It doesn’t really matter to humanity if strong people can still win fights, but it might matter if artists and designers who do produce great, original work stop being produced. It probably even matters to the AI models because that forms part of their input.
Case in point: https://stackoverflow.com/help/gpt-policy
> This trust is broken when users copy and paste information into answers without validating that the answer provided by GPT is correct, ensuring that the sources used in the answer are properly cited (a service GPT does not provide), and verifying that the answer provided by GPT clearly and concisely answers the question asked.
My argument is just, and has always been, that this is a novel right that is not covered by existing legislation.
- If I draw an animation and post it to YouTube, and one of the characters happens to be Mickey Mouse, that will be legal. But I still can't name my channel "Mickey Mouse Official" or put the character's face in my channel profile, since that's source-identifying material.
- If I just flat-out reupload Steamboat Willie to YouTube, with the (possibly incorrect) title "Walt Disney's FIRST EVER CARTOON", that also will be legal - because the title is purely nominative and does not imply that I'm licensed by Disney.
- If I release STL or STEP files on Thingiverse for printing Mickey Mouse christmas ornaments, that will be legal - but I have to make sure that nobody thinks this is actually made by Disney.
- Mass-produced merchandise sold in stores will be very difficult to sell legally, since generally speaking the whole object is considered source-identifying when you put it on a store shelf. About the only thing you could do is sell figurine blind-bags with no indication that there's public-domain Disney stuff in there.
That last one is probably why Disney isn't trying to, say, push Mexican life+100 terms[1] on everyone. Mickey Mouse is more valuable as a branding and merchandising tool than as a creative work.
Copyright law itself also has a preemption clause[2] which prohibits making copyright-shaped claims under other laws. This is usually mentioned in the context of state right-of-publicity laws[3], but the text of the clause would also apply to trying to "trademark a copyright" to keep the mouse in his cage.
[0] This is part of "trademark fair use", which is an entirely different concept to the copyright fair use one.
[1] Oh, yeah, I forgot - in all those YouTube examples you need to convince YouTube to block your upload in Mexico, which they are unwilling to do. The stated reason is that pirates could be harder to catch if they geoblocked their uploads. However, this already causes problems for, say, people reviewing anime - which is actually illegal in Japan! So I suspect that YouTube might have to change their policies on this at some point as more large publishers' work hits the public domain in certain countries but not others.
[2] 17 USC 301
[3] https://www.sheppardmullin.com/media/article/753_CL%2025-4%2...
Most programmers are working in business-focused jobs. I don't think many of us, in grade school, said "I sure hope I can program business logic all day when I grow up." So I think the passion for 90% of people writing code is really about getting a paycheck. Then they use that paycheck to do what they're really passionate about in their personal life.
So I completely agree that people passionate about coding might want to write that code by hand, I just don't think that group accounts for most people writing code professionally.
Art is really not cheap. I think people think about how little artists generate in income and assume that means art is cheap, but non-mass-produced art is pretty much inaccessible for the vast majority of people.
There's a very real chance that adding these costs on top will drive development away from the sort that pays the people who lose out. For example, attempting to require licensing for images may simply push model training towards public domain materials. Then the models still work and the usable commercial art is still generated cheaply, but there are no living artists getting paid.
We should not blithely assume an ideal option that makes everyone happy is readily available or even at all. The core incentive of a lot of users is to spend less on commercial imagery. The core incentive of artists is to get paid at least as much as before. We should take seriously the possibility that there is not a medium in there that satisfies everyone.
It makes sense. My own experience, driving a non-Tesla car at the speed limit nearly always, is that other drivers will try to pressure you into doing dangerous stuff so they can get where they're going a few seconds faster. I sometimes give in to that pressure, but the AI doesn't feel that pressure at all. So if you're paying attention and see the AI not giving in to that pressure, the tendency is to take manual control so you can. But that's not safer--quite the opposite. That's an example of the AI driving better than the human.
On the opposite end of the social anxiety spectrum, there's a genre of pornography where people are having sex in the driver's seats of Teslas while the AI is driving. They certainly aren't intervening 3 times in 20 minutes, and so far I don't know of any of these people getting in car accidents.