The lack of empathy is incredibly depressing...
At the dawn of mechanization, these same arguments were being used by the Luddites. I'd recommend reading about them; it was quite an interesting situation, much like now.
The reality is that advances such as these can't be stopped. Even if you outlaw ML through legislation in the US, there are hundreds of other countries that won't care, the same as happens with piracy.
As long as the world is not entirely made of AI, there will always be some expertise to add, so instead of being afraid, you should just evolve with the times.
I completely agree. Take a contemporary pianist, for example: the amount of dedication to both theory and practice, posture, mastering the instrument and what not, networking skills, technology skills, video recording, music recording, social media management, etc.
It's up to us to distribute those gains back.
Because in my experience, the stakeholders in companies have never actually been decisive when scoping features.
Co-pilot is indeed the endgame for AI assisted programming. So I would say for art, someone mindful could train an AI on their own dataset and use that to accelerate their workflow. Imagine it drawing outlines instead of the full picture.
The job market will always keep on changing; you have to adapt to it to a certain degree.
Now we can talk about supporting art as a public good, and I am all for that, but I don't see how artists are owed a corporate job. Many of my current programming skills will be obsolete one day; that's part of the game.
* Can a Copilot-like generator be trained with the GPL code of RMS? What is the license of the output?
* Can a Copilot-like generator be trained with the leaked source code of MS Windows? What is the license of the output?
If an AI will take care of most of the finicky details for me and let me focus on defining what I want and how I want it to work, then that is nothing but an improvement for everyone.
That said Microsoft didn't allow their kernel developers to look at Linux code for a reason.
I think the two biggest differences between art AI and code AI are that (a) code that's only 95% right is just wrong, whereas art can be very wrong before a client even notices [0]; and (b) we've been expecting this for ages already, to the extent that many of us are cynical and jaded about what the newest AI can do.
[0] for example, I was recently in the Cambridge University Press Bookshop, and they sell gift maps of the city. The background of the poster advertising these is pixelated and has JPEG artefacts.
It's highly regarded, and the shop has existed since 1581, and yet they have what I think is an amateur-hour advert on their walls.
If it's not, why worry about it ?
What they were against, however, was companies using that technology to slash their wages while forcing them into significantly more dangerous jobs.
In less than a decade, textile work went from a safe job with respectable pay for artisans and craftsmen into one of the most dangerous jobs of the industrialised era with often less than a third of the pay and the workers primarily being children.
That's what the luddites were afraid of. And the government response was military/police intervention, breaking of any and all strikes, and harsh punishments such as execution for damaging company property.
What do you think cloud computing did? A lot of sysadmin, networking, backup, and ops work went the way of the dinosaurs. A lot of programmers have also fallen by the wayside, replaced by tech, and need to catch up.
Wallowing in pity is not going to help; we saw a glimpse of this with GitHub Copilot. Some people built the hardware and the software behind these AIs; others are constructing the models and applying them to distinct domains. There's work to be done for those who wish to find their place in the new world.
Artists will survive through innovation.
I know current AI is very different from an organic brain at many levels, but I don't know if any of those differences really matter.
Code should not need to be done by humans at all. There's no reason coding as it exists today should exist as a job in the future.
Any time I or a colleague are "debugging" something, I'm just sad we are so "dark ages" that the IDE isn't saying "THERE, humans, the bug is THERE!" in flashing red. The IDE has the potential to have perfect information, so "where is the bug" is solvable.
The job of coding today should continue to rise up the stack tomorrow to where modules and libraries and frameworks are just things machines generate in response to a dialog about “the job to be done”.
The primary problem space of software is in the business domain, today requiring people who speak barely abstracted machine language to implement -- still such painfully early days.
We're cavemen chipping at rocks to make fire still amazed at the trick. No empathy, just, self-awareness sufficient to provoke us into researching fusion.
It still needs a human to tell it what to paint, and the best outputs generally require hours of refinement and then possibly touch-up in photoshop. It's not generating art on its own.
Artists still have a job in deciding what to make and using their taste to make it look good, that hasn't changed. Maybe the fine-motor skills and hand-eye coordination are not as necessary as they were, but that's it.
I would personally be astonished if any of the distributed systems I've worked on in my career were even close to 95% correct, haha.
So I don’t think art is “harder”. It’s just harder for the average practitioner/professional to find “success” (however you like to define it).
I think artists feeling like shit in this situation is totally understandable. I'm just a dilettante painter and amateur hentai sketcher, but some of the real artists I know are practically in the middle of an existential crisis. Feeling empathy for them is not the same as thinking that we should make futile efforts to halt the progress of this technology.
Because of how capitalism works and people always try to corner markets, extract value from other people, etc. etc.?
> If it's not, why worry about it ?
Because we can choose different professions that are less susceptible to automation? Or we can study DL to implement our own AI.
It would be great if there was an AI that could be a liaison between developers and stakeholders, translating the languages of each side for mutual understanding.
If an AI were to make it impossible to make a living doing programming, would that be an improvement for most readers of this site?
It's still worth it on the whole but I have already gotten caught up on subtly wrong Copilot code a few times.
Art and programming are hard for different reasons.
The difference in the AI context is that a computer program has to do just about exactly what's asked of it to be useful, whereas a piece of art can go many ways and still be a piece of art. If you know what you want, it's quite hard to get DALL-E to produce that exactly (or it has been for me), but it still generates something that is very good looking.
Not disagreeing with your comment, but this is not the case with Midjourney. Very little is needed to produce stunning images. But afaik they modify/enhance the prompts behind the scenes.
The steady progress of the Industrial Revolution, which has made the average person unimaginably richer and healthier several times over, looks in the moment just like this:
"Oh no, entire industries of people are being made obsolete, and will have to beg on the streets now".
And yet, as jobs and industries are automated away, we keep getting richer and healthier.
It may be that one day AI will also make their creators obsolete. But at that point so many professions will be replaced by it already, that we will live in a massively changed society where talking about the "job" has no meaning anymore.
Edit: Typo
> Who can't contribute to Wine?
> Some people cannot contribute to Wine because of potential copyright violation. This would be anyone who has seen Microsoft Windows source code (stolen, under an NDA, disassembled, or otherwise). There are some exceptions for the source code of add-on components (ATL, MFC, msvcrt); see the next question.
I've seen a few MIT/BSD projects that ask people not to contribute if they have seen the equivalent GPL project. It's a problem because Copilot has seen "all" GPL projects.
Rendering was only ever a small part of the visual arts process anyway. And you can still manually add pixel perfect details to these images by hand that you wouldn't know how to create an AI prompt for. And further, you can mash together AI outputs in beautifully unique and highly controlled ways to produce original compositions that still take work to reproduce.
To me, these AI's are just a tool for increased speed, like copy and paste.
Of course software gets copied all the time, but we have jobs because so much bespoke software is needed. Looking at some of what AI can do now, I wouldn't be surprised if our floor gets raised a lot in the next few years as well.
Are artists really "doomed"? Or are they just worse at redistribution?
While technically both artists and developers make their living by producing copyrighted works, our relationship to copyright is very different; while artists rely on copyright and overwhelmingly support its enforcement as-is, many developers (including myself) would argue for a significant reduction of its length or scale.
For tech workers (tech company owners could have a different perspective) copyright is just an accidental fact of life, and since most of paid development work is done as work-for-hire for custom stuff needed by one company, that model would work just as well even if copyright didn't exist or didn't extend to software. While in many cases copyright benefits our profession, in many other cases it harms our profession, and while things like GPL rely on copyright, they are also in large part a reaction to copyright that wouldn't be needed if copyright for code didn't exist or was significantly restricted.
Art is more difficult than programming for people with talents in programming but not in art. Art is easier than programming for people with talents in art but not in programming. Granted, those two sentences are tautologies, but they are nonetheless a reminder that the difficulty of art and programming does not form a total order.
If you want to give programming work to an AI, give it the things where incorrect behaviour is going to be really obvious, so that it can be fixed. Don't give it the stuff where everyone will just naively trust the computer without thinking about it.
> code that's only 95% right is just wrong,
I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.
I don't think you were actually thinking of this in your comparison, but I think it's actually a great analogy - code, like art, can be 95% complete, and that's usually enough. (For art, looks good and is what I wanted is enough, for code, does what I want right now, nevermind edge cases is enough.)
ChatGPT isn't threatening programmers, but for other reasons. Firstly, its code isn't 95% good; it's more like 80% good.
Secondly, we do a lot more than write one-off pieces of code. We write much, much larger systems, and the connections between different pieces of code, even on a function-to-function level, are very complex.
You'll find out because you're now an enlightened immortal being, or you won't find out at all because the thermonuclear blast (or the engineered plague, or the terminators...) killed you and everybody else.
Does that mean there won't be some enterprising fellas who will hook up a chat prompt to some website thing? And that you can demo something like "Add a banner. More to the right. Blue button under it" and that works? Sure. And when it's time to fiddle with the details of how the bloody button doesn't do the right thing when clicked, it's back to hiring a professional that knows how to talk to the machine so it does what you want. Not a developer! No, of course not, no, no, we don't do development here, no. We do prompts.
What are those? It seems it's low-margin, physical work that's seeing the least AI progress. Like berry picking. Maybe also work that will be kept AI-free longer by regulators like being a judge?
But at least every job I've had so far also entailed understanding the entire system, the surrounding ecosystem, upstream and downstream dependencies and interactions, the overall goal being worked toward, and playing some role in coming up with the requirements in the first place.
ChatGPT can't even currently update its fixed-in-time knowledge state, which is entirely based on public information. That means it can't even write a conforming component of a software system that relies on any internal APIs! It won't know your codebase if it wasn't in its training set. You can include the API in the prompt, but then that is still a job for a human with some understanding of how software works, isn't it?
Ultimately, though, this isn't a technical problem but an economic one, about how we as a society decide to share our resources. AI grows the pie but removes the leverage some have to claim their slice. Automation is why we'll inevitably need UBI at some point.
It's fair to suppose (albeit based on a very small sample size, i.e., the last couple hundred, abnormal years of history) that all sorts of new jobs will arise as a result of these changes- but it seems to me unreasonable to suppose that these new jobs of the future will necessarily be more interesting or enjoyable than the ones they destroyed. I think it's easy to imagine a case in which the jobs are all much less pleasant (even supposing we all are wealthier, which also isn't necessarily going to be true)- imagine a future where the remaining jobs are either managerial/ownership based in nature or manual labor. To me at least, it's a bleak prospect.
Now imagine a future where AI can assist in law. Or should we not have that because lawyers pay so much for their education and work so hard? Should we do away with farm equipment as well? Should we destroy musical synths so that we can have more musicians?
It’s one thing to say we should have a government program to ease transitions in industry. It’s something else to say that we should hold back technological progress because jobs will be destroyed.
How do we develop a coherent moral framework to address this matter?
To me, code that is 95% correct will either fail catastrophically or give very wrong results. Imagine if the code you wrote was off by 5% for every number it was supposed to generate. Code that is 99.99% correct will introduce subtle bugs.
* No shade to ChatGPT, writing a function that calculates SHAP values is tough lol, I just wanted to see what it could do. I do think that, given time, it'll be able to write a day's worth of high quality code in a few seconds.
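To illustrate the "99.99% correct" problem, here's a hypothetical example of my own (not actual AI output): a function that reads fine at a glance but quietly drops data.

```python
def moving_average(values, window):
    """Return the moving average of `values` over `window`-sized slices."""
    averages = []
    # Subtle bug: should be range(len(values) - window + 1);
    # as written, the final window is silently skipped.
    for i in range(len(values) - window):
        averages.append(sum(values[i:i + window]) / window)
    return averages

# moving_average([1, 2, 3, 4], 2) returns [1.5, 2.5]
# instead of the correct [1.5, 2.5, 3.5] -- no crash, no error,
# just one missing data point that a quick review won't catch.
```

Code like this passes a casual read and even most happy-path tests, which is exactly why "almost correct" generated code can be worse than obviously broken code.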
The role that is possibly highly streamlined by a near-future ChatGPT/Copilot is the requirements-gathering business analyst, but developers from Staff level on up sit closer to requiring AGI for it to even become 30% good. We'll likely see a bifurcation/barbell: Moravec's Paradox on one end, AGI on the other.
An LLM that can transcribe a verbal discussion with a domain expert about a particular business process with high fidelity, give a précis of domain jargon to a developer in a sidebar, extract further jargon created by the conversation, summarize the discussion into documentation, capture the hows and whys like a judicious editor might at 80% fidelity, and then put out semi-working code at even 50% fidelity, one that works 24x7x365 and automatically incorporates everything from GitHub it created for you before and that your team polished into working code and final documentation?
I have clients who would pay for an initial deployment of that as an appliance/container head end, one that routes the processing through the vendor SaaS's GPU farm but holds the model data at rest within their network / cloud account boundary. Being able to condense weeks or even months of work by a team into several hours, plus some tightening and polishing by a handful of developers, would be an interesting new way to work.
I don't think that is absolutely something a society must guarantee. People are made obsolete all the time.
What needs to be done is to produce new needs that currently cannot be serviced by the new AI's. I'm sure it will come - as it has for the past hundred years when technology supplanted an existing corpus of workers. A society can make this transition smoother - such as a nice social safety-net, and low-cost/free education for retraining into a different field.
In fact, these things are all sorely needed today, without having the AIs' disruptions.
But that is giving AI too much credit. As advanced as modern AI models are, they are not AGIs comparable to human cognition. I don't get the impulse to elevate/equate the output of trained AI models to that of human beings.
This is the main reason I haven't actually incorporated any AI tools into my daily programming yet - I'm mindful that I might end up spending more time tracking down issues in the auto-generated code than I saved using it in the first place.
[0] You can see the results here https://twitter.com/NickFisherAU/status/1601838829882986496
I'm sure your employer would love that more than you. That's the issue here.
> That said while these tools are incredibly impressive, having messed with this for a few days to try to even do basic stuff, what am I missing here? It is a nice starting point and can be a productivity boost but the code produced is often wrong and it feels a long way away from automating my day to day work.
This is the first iteration of such a tool and it's already very competent. I'm not even sure I'm better at writing code than GPT; the only thing I can do that it can't is compile and test the code I produce. If you asked me to create a React app from a two-sentence prompt and didn't allow me to search the internet, compile, or test it, I'm sure I'd make more mistakes than GPT, to be honest.
But that is not "programming". That is glueing together bullshit until it works and the results of that "work" are "blessing" us everyday. The gift that keeps on giving. You FAANG people are indeed astronomically, immorally, overpaid and actively harm the world.
But, luckily, the world has more layers than that. Programming for Facebook is not the same as programming for a small chemical startup or programming in any resource-restricted environment where you can't just spin up 1000 AWS instances at your leisure and you actually have to know what you're doing with the metal.
Be entertaining. Be outrageous. Be endearing. An AI can't cut off their ear.
Clearly ChatGPT is going to improve, and AI development is moving at a breakneck pace and accelerating. Dinging it for totally fumbling 5% or 10% of written code is completely missing the forest for the trees.
I fully expect there will be zero reciprocation. There will, instead, be a strong expectation that that empathy turns into centering of fear and a resulting series of economic choices. AI systems are now threatening the ability of some artists to get paid and those artists would like that to stop.
I think we're seeing it right now. You shift effortlessly from talking about empathy to talking about the money. You consider the one the way to get the other, so you deplore the horrifying lack of empathy.
Let me put it another way. Would you be happy if you saw an outpouring of empathy, sympathy, and identification with artists coupled with exactly the same decisions about machine learning systems?
So any transformativity of the action should be attributed to the human and the same copyright laws would apply.
The question is perhaps not if we should have empathy for them. The question is what we should do with it once we have it. I have empathy for the cabbies with the Knowledge of London, but I don't think making any policy based on or around that empathy is wise.
This is tricky in practice. A surprising number of people regard prioritizing the internal emotional experience of empathy in policy as experiencing empathy.
Art can also be extremely wrong in a way everyone notices and still be highly successful. For example: Rob Liefeld.
In a program, you can't really afford that. A small mistake can have dramatic consequences. Now, maybe in the next few years you'll only need one human supervisor fixing AI bugs where you used to need 10 high-end developers, but you probably won't be able to make reliable programs just by typing a prompt, the way you can currently generate a cover for an e-book just by asking midjourney.
As for the political consequences of all of this, this is yet another issue.
It would be very easy to make training ML models on publicly available data illegal. I think that would be a very bad thing because it would legally enshrine a difference between human learning and machine learning in a broader sense, and I think machine learning has huge potential to improve everyone's lives.
Artists are in a similar position to grooms and farriers demanding the combustion engine be banned from the roads for spooking horses. They have a good point, but could easily screw everyone else over and halt technological progress for decades. I want to help them, but want to unblock ML progress more.
Now, I have empathy. I paused a moment before writing this comment to identify with artists, art students, and those who have been unable to reach their dreams for financial reasons. I emphatically empathize with them. I understand their emotional experiences and the pain of having their dreams crushed by cold and unfeeling machines and the engineers who ignore who they crush.
Yet I must confess I am uncertain how this is supposed to change things for me. I have no doubt that there used to be a lot of people who deeply enjoyed making carriages, too.
Programming is definitely easier to make a living from. I'm a very mediocre artist and developer and I'm never making enough off of art to live on, but I could get a programming job at a boring company and it would pay a living wage. In that sense, it's definitely 'easier'.
I would turn this around to you: if a braindead AI can do this astonishingly difficult art, maybe art was never difficult to begin with, and artists are merely finagling dumb, simple things into their work. Sounds annoying and condescending, right? If you disagree with what I said about art, maybe you ought to be more aware of your own lack of empathy.
I'll go so far as to say that in many cases, displaying empathy for the artists without also advocating for futile efforts to halt the progress of this technology will be regarded as a lack of empathy.
Art currently requires two skills - technical rendering ability, and creative vision/composition. AI tools have basically destroyed the former, but the latter is still necessary. Professional artists will have to adjust their skillset, much like they had to adjust their skillset when photography killed portrait painting as a profession.
It can't? I could've sworn I've seen (cherry-picked) examples of it doing exactly that, when prompted. It even explains what the bug is and why the fix works.
Extreme specialists are found everywhere. Mastering skateboarding at world level will eat your life too, but it's not "harder" than programming. At least, for any commonsensical interpretation of "harder".
All the rest, we do too. Except I don't record videos and I'm sure it is not childishly easy, but it will not eat my life.
Technologists acting like technocrats and expecting everyone to give them sympathy, empathy and identification is laughably rude and insulting.
At the end of the day though, I think I'm an oddball in this camp. I just don't think there's that much difference between ML and human learning (HL). I believe we are nearly infinitely more complex, but as time goes on I think the gulf between ML and HL complexity will shrink.
I recently saw some of MKBHD's critiques of ML, and my takeaway was that he believes ML cannot possibly be creative. That it's just inputs and outputs... and, well, isn't that what I am? Would the art I create (I am also trying to get into art) not be entirely influenced by my experiences in life, the memories I retain from them, etc.? Humans also unknowingly reproduce work all the time. "Inspiration" sits in the back of our minds and then we regurgitate it, thinking it original... but often it's not; it's derivative.
Given that all creative work is learned, though, the line between derivative and originality seems to just be about how close it is to pre-existing work. We mash together ideas, and try to distance it from other works. It doesn't matter what we take as inspiration, or so we claim, as long as the output doesn't overlap too much with pre-existing work.
ML is coming for many jobs and we need to spend a lot of time and effort thinking about how to adapt. Fighting it seems an uphill battle. One we will lose, eventually. The question is what will we do when that day comes? How will society function? Will we be able to pay rent?
What bothers me personally is just that companies get so much free rein in these scenarios. To me it isn't about ML vs HL. Rather, it's that companies get to use all our works for their profit.
Shoe's on the other foot now and they don't like it.
The solution isn't to halt technological progress to try to defend the few jobs that are actually available in that sector, the solution is to fight forward to a future where no one has to do dull and boring things just to put food on the table. Fight for future where people can pursue what they want regardless of whether it's profitable.
Most of that fight is social and political, but progress in ML is an important precursor. We can't free everyone from the dull and repetitive until we have automated all of it.
My empathy for artists is aligned with my concern for everyone else's future.
> I want to help them, but want to unblock ML progress more.
But progress towards what end? The ML future looks very bleak to me, the world of "The Machine Stops," with humans perhaps reduced to organic effectors for the few remaining tasks that the machine cannot perform economically on its own: carrying packages upstairs, fixing pipes, etc.
We used to imagine that machines would take up the burden our physical labor, freeing our minds for more creative and interesting pursuits: art, science, the study of history, the study of human society, etc. Now it seems the opposite will happen.
Comparing these is very "apples and oranges", but I think you'd better have a strong background in both if you're gonna try.
I have no idea if the AI that's getting code 80% right today will get it 95% right in two years, but given current progress, I wouldn't bet against it. I don't think there's any fundamental reason it can't produce better code than I can, at least not at the "write a function that does X" level.
Whole systems are a way harder problem that I wouldn't even think of making guesses about.
Most of coding is routine patterns that are only perceived as complex because of the presence of other coders and the need to "talk" with them, which creates a need for reference materials(common protocols, documentation, etc.)
Likewise, most of painting is routine patterns complicated by a mix of human intent(what's actually communicated) and the need for reference materials to make the image representational.
Advancements in Western painting between the Renaissance and the invention of photography track with developments in optics; the Hockney-Falco thesis is the "strong" version of this, asserting that specific elements in historical paintings had to have come through the use of optical projections, not through the artist's eyes. A weaker form of this would say that the optics were tools for study and development of the artist's eye, but not always the go-to tool, especially not early on when their quality was not good.
Coding has been around for a much shorter time, but mostly operates on the assumptions of bureaucracy: that which is information is information that can be modelled, sorted, searched. And the need for more code exists relative to having more categories of modelled data.
Art already faced its first crisis of purpose with the combination of photography and mass reproduction. Photos produced a high level of realism, and as it became cheaper to copy and print them, the artist moved from a necessary role towards a specialist one - an "illustrator" or "fine artist".
What an AI can do - given appropriate training, prompt interfaces and supplementary ability to test and validate its output - is produce a routine result in a fraction of the time. And this means that it can sidestep the bureaucratic mode entirely in many circumstances and be instructed "more of this, less of that" - which produces features like spam filters and engagement-based algorithms, but also means that entire protocols are reduced to output data if the AI is a sufficiently good compiler; if you can tell the AI what you want the layout to look like and it produces the necessary CSS, then CSS is more of a commodity. You can just draw a thing, possibly add some tagging structure, and use that as the compiler's input. Visual coding.
But that makes the role a specialized one; nobody needs a "code monkey" for such a task, they need a graphic designer...which is an arts job.
That is, the counterpoint to "structured, symbolic prompts generating visual data" is "visual prompts generating structured, symbolic data". ML can be structured in either direction, it just takes thoughtful engineering. And if the result is a slightly glitchy web site, it's an acceptable tradeoff.
Either way, we've got a pile of old careers on their way out and new careers replacing them.
Part of my job is something like that. I make custom programs for my department at the university. I don't care how long the copyright is. Anyway, I like to milk the work for a few years. There are some programs I made 5 or 10 years ago that we are still using, saving my coworkers time, and I like to use that leverage to get more freedom with my time. (How many 20% projects can I have?) Anyway, most of them need some updating because the requirements or the environment change, so it's not zero work on them.
There are very few projects that have long-term value. Games sell a lot of copies in a short time. MS Office gets an update every other year (Hello Clippy! Bye Clippy!), and the online version is eating it. I think it's very hard to think of programs that will have a lot of value in 50 years, but I'm still running some code in Classic VB6.
people are mad because job & portfolio sites are being flooded with aishit which is making them unusable for both artists and clients .
people are mad because their copyright is being scraped and resold for profit by third parties without their consent.
whether ai is the future is an utterly meaningless distraction until these concerns are addressed. as an aside, ai evangelists telling working professionals that they 'simply don't get' their field of expertise has been an incredibly poor tactic for generating goodwill towards this technology or the operations attempting to extract massive profit from its implementation.
Technological progress is not a linear, deterministic progression. We decide how to progress every step of the way. The problem is that we are making dogshit decisions for some reason.
Maybe we lack the creativity to envision alternative futures. How does a society become so uncreative, I wonder.
The value of work is not measured by its difficulty. There's a small number of people who make a living doing contract work that may be replaced by an AI, but these people were in a precarious position in the first place. The well-to-do artists are not threatened by AI art. The value of their work is derived from them having put their name on it.
If you assume that most programming work could be done by an AI "soon", then we really have to question what sort of dumb programming work people are doing today and whether it wouldn't have disappeared anyway once funding runs dry. Mindlessly assembling snippets from Stack Overflow may well be threatened by AI very soon, so if that's your job, consider the alternatives.
What about the horse-powered carrioles devastated by cars!!
That said, I don't think AIs ability to generate art is a major milestone in the progress of things, I think it's more of the same, automating low value-add processes.
I agree that AI is/will-be an incredibly disruptive technology. And that automation in general is putting more and more people out of jobs, and extrapolated forward you end up in a world where most humans don't have any practical work to do other than breed and consume resources at ever increasing rates.
As much as I'm impressed by AI art (it's gorgeous), at the end of the day it's mainly just copying/pasting/smoothing out objects it's seen before (the training set). We don't think of it as clipart, but that's essentially what it is underneath it all, just a new form of clipart. Amazing in its ability to reposition, adjust, and smooth images, with some sense of artistic placement, etc. It's lightyears beyond where clipart started (small vector and bitmap libraries). But at the end of the day it's just automating the creation of images using clipart. Rearranging images you've seen before is not going to make anyone big $$$. Ultimately the quality of the output is entirely subjective; just about anything reasonable will do.
This reminds me a lot of GPT-3... looks like it has substance but not really. GPT-3 is great at making low value clickbait articles of cut-and-paste information on your favorite band or celebrity. GPT-3 will never be able to do the job of a real journalist, pulling pieces together to identify and expose deeper truths, to say, uncover the Theranos fraud. It's just Eliza [1] on steroids.
The AI parlor tricks started with Eliza, and have gotten quite elaborate as of late. But they're still just parlor tricks.
Comparing it to the challenges of programming, well yes I agree AI will automate portions of it, but with major caveats.
A lot of what people call "programming" today is really just plumbing. I'm a career embedded real-time firmware engineer, and it continues to astonish me that there's an entire generation of young "programmers" who don't understand basic computing principles: stacks, interrupts, I/O operations. At the end of the day their knowledge base seems to consist of knowing which tool to use where in an orchestration, and how to plumb it all together. And if they don't know the answer they simply google it and Stack Overflow will tell them. Low code, no code, etc. (Python is perfect for quickly plumbing two systems together.) This skill set is very limited and wouldn't even have gotten you a junior dev position when I started out. I'm not surprised it's easy to automate, as it will generally produce the same quality of code (and make the same mistakes) as a human dev who simply copies/pastes Stack Overflow solutions.
This is in stark contrast to the types of problems that most programmers used to solve in the old days (and a smaller number still do). Stuff that needed an engineering degree and complex problem solving skills. But when I started out 30 years ago, "programmers" and "software engineers" were essentially the same thing. They aren't now, there is a world of difference between your average programmer and a true software engineer today.
Not saying plumbers aren't valuable.. they absolutely are, as more and more of the modern world is built on plumbing things together. Highly skilled software engineers are needed less and less, and that's a net-good thing for humanity: no one needs to write operating systems anymore, let's add value building on top of them. Those who remain are the people making the big $$$; their skillset is quite valuable. We're in the middle of a bifurcation of software engineering careers. More and more positions will only require limited skills, and fewer and fewer (as a percentage) will continue to be highly skilled.
So is AI going to come in and help automate the plumbing? Heck yes, and rightly so... They've automated call centers, warehouse logistics, click-bait article writing, carry-out order taking, the list goes on and on. I'd love to have an AI plumber I could trust to do most of the low-level work right (and in CI/CD world you can just push out a fix if you missed something).
I don't believe for a second that today's latest and greatest "cutting edge" AI will ever be able to solve the hard problems that keep highly skilled people employed. New breakthroughs are needed, but I'm extremely skeptical. Like fusion promises, general purpose AI always seems just a decade or two away. Skilled labor is safe, for now.. maybe for a while yet.
The real problem as I see it is that AI automation is on course to eliminate most low-skilled jobs in the next century, which puts it on a collision course with the fact that most humans aren't capable of performing highly skilled work (half are below average by definition). A single parent working the GM line in the '50s earned enough to afford an average family a decent life. Not so much where technology is going. At the end of the day the average human will have little to contribute to civilization, but will still expect to eat and breed.
Universal basic income has been touted as a solution to the coming crisis, but all that does is kick the can down the road. It leads to a world of too much idle time (and the devil will find work for idle hands) and ever growing resource consumption. A perfect storm.... at the end of the day what's the point of existing when all you do is consume everything around you and don't add any value? Maybe that's someone's idea of utopia, but not mine.
This has been coming for a long time, AI art is just a small step on the current journey, not a big breakthrough but a new application in automation.
/rant
The people who generated the training data should have a say in how their work is used. Opt-in, not opt-out.
How about we legally enshrine a difference between human learning and corporate product learning? If you want to use things others made for free, you should give back for free. Otherwise if you’re profiting off of it, you have to come to some agreement with the people whose work you’re profiting off of.
Developers will be fine because software engineering is an arms race - a rather unique position to be in as a professional. I saw this play out during the 2000s offshoring scare when many of us thought we'd get outsourced to India. Instead of getting outsourced, the industry exploded in size globally and everything that made engineers more productive also made them a bigger threat to competitors, forcing everyone to hire or die.
Businesses only need so much copy or graphic design, but the second a competitor gains a competitive advantage via software they have to respond in kind - even if it's a marginal advantage - because software costs so little to scale out. As the tech debt and the revenue that depends on it grow, the baseline number of staff required for maintenance and upkeep grows, because our job is to manage the complexity.
I think software is going to continue eating the world at an accelerated pace because AI opens up the uncanny valley: software that is too difficult to implement using human developers writing heuristics but not so difficult it requires artificial general intelligence. Unlike with artists, improvements in AI don’t threaten us, they instead open up entire classes of problems for us to tackle
You're comparing apples to oranges. Digging a trench by hand is also vastly more difficult than art or programming.
There's just as much AI hype around code generation, and some programmers are also complaining (https://www.theverge.com/2022/11/8/23446821/microsoft-openai...).
Overall though the sentiment is that AI tools are useful and are a sign of progress. The fact that they are stirring so much contention and controversy is just a sign of how revolutionary they are.
I'm nonplussed by ChatGPT because the hype around it is largely the same as was for Github Copilot and Copilot fizzled badly. (Full disclosure: I pay for Copilot because it is somewhat useful).
I feel a big part what makes it okay or not okay here is intention and capability. Early in an artistic journey things can be highly derivative but that's due to the student's capabilities. A beginner may not intend to be derivative but can't do better.
I see pages of applications of ML out there being derivative on purpose (Edit: seemingly trying to 'outperform' particular freelance artists with glee, in their own styles).
Obviously a lot of money will be lost for artists in a variety of commercial fields, but the ultimate "success of art" will be unapproachable by AI given its subjective nature.
Developers though will be struggling to compete from both a speed and technical point of view, and those hurdles can't be simply overcome with a shift in how someone feels. And you're right about the arms race, it just won't be happening with humans. It'll be computing power, AIs and the people capable of programming those AIs.
As a developer/manager i am not yet scared of AI because i have had to already correct multiple people this week who tried to use chatGPT to figure something out.
It’s actually pretty good but when it’s wrong it seems to be really wrong and when you don’t have the background to figure that out a ton of time is wasted. It’s just a better Stackoverflow at the end of the day imo.
People should stop giving work all this meaning and also they should study economics so they chill.
Learn and chill.
I don't think this is going to put developers out of work, however. Instead, lots of small businesses that couldn't afford to be small software companies suddenly will be able to. They'll build 'free puppies,' new applications that are easy to start building, but that require ongoing development and maintenance. As the cambrian explosion of new software happens we'll only end up with more work on our hands.
With no training, I, or even a 1 year old, could make something and call it art. I wouldn't claim it's very good but I think most people would accept it as art. The same cannot be said for programming.
I'm just as excited for myself as I am for artists. The current crop of these tools look like they could be powerful enablers for productivity and new creativity in their respective spaces.
I happen to also welcome being fully replaced, which is another conversation and isn't really where I see these current tools going, though it's hard to extrapolate.
That is absurd. Sure some basic AI tools have been helpful like co-pilot and it's sometimes really impressive how it can help me autofill some code instead of typing it out... but come on, there is no way we are anywhere close to AI replacing 99.99% of developers.
>making art is vastly more difficult than the huge majority of computer programming that is done
I don't know.. art is "easy" in the sense that we all know what art looks like. You want a picture of a man holding a cup with a baby raven in it? I can picture that in my head to some degree right away, and then it's just "doing the process" to draw it in some way using shapes we know.
How in the heck can you correlate that to 99% of business applications? Most of the time no one even knows exactly what they want out of a project.. so first there is the massive amount of constant change just from using the stuff. Then there is the actual way the code is created itself. Let's even say you could tell it "Make me an Angular website with two pages and live chat functionality" and it worked. Well, ok, great, it got you a starting template.. but first, maybe the code is so weird or unintuitive that it's almost impossible to really keep building upon - not helpful. Now let's say it is "decent enough"; well, fine.. then it's almost like an advanced co-pilot at this point. It helps with boring boilerplate.
But comparing this all to art is still just ridiculous. Again, everyone can look at a picture and say "this is what I wanted" or "this is not what I wanted at all". Development is so crazy intricate that it's nothing like art.. I could look at two websites (similar to art) and say "these look the same", but under the hood it could be a million times different in functionality, how it works, how well it's structured to evolve over time.. etc etc. But if I look at two pictures that look exactly the same, I don't care how it got there or how it was created- it's done and exactly the same. Not true of development for 99% of cases.
Both answers were orders of magnitude wrong, and vastly different from each other.
JS code suggested for a simple database connection had glaring SQL injection vulnerabilities.
I think it's an ok tool for discovering new libraries and getting oriented quickly to languages and coding domains you're unfamiliar with. But it's more like a forum post from a novice who read a tutorial and otherwise has little experience.
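The injection bug described above is easy to illustrate; a minimal sketch (the `users` table, the query, and the `$1` placeholder style of drivers like `pg` are assumptions for illustration, not the actual suggested code):

```javascript
// What the AI-suggested code roughly looked like: user input concatenated
// straight into the SQL string, so crafted input rewrites the query.
function findUserUnsafe(name) {
  return `SELECT * FROM users WHERE name = '${name}'`;
}

// Parameterized form: the SQL text is fixed and the value is bound
// separately by the driver, so the input can never become SQL.
function findUserSafe(name) {
  return { text: "SELECT * FROM users WHERE name = $1", values: [name] };
}

const hostile = "x' OR '1'='1";
console.log(findUserUnsafe(hostile)); // WHERE clause is now always true
console.log(findUserSafe(hostile));   // input stays an ordinary bound value
```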
We developers are hired because our coworkers can’t express what they really want. No one pays six figures to solve glorified advent of code prompts. The prompts are much more complex, ever changing as more information comes in, and in someone’s head to be coaxed out by another human and iterated on together. They are no more going to be prompt engineers than they were backend engineeers.
I say this as someone who used TabNine for over a year before CoPilot came out and now use ChatGPT for architectural explorations and code scaffolding/testing. I’m bullish on AI but I just don’t see the threat.
There's no fundamental reason it can't be the world expert at everything, but that's not a reason to assume we know how to get there from here.
Jobs have been automated since the industrial revolution, but this usually takes the form of someone inventing a widget that makes human labor unnecessary. From a worker's perspective, the automation is coming from "the outside". What's novel with AI models is that the workers' own work is used to create the thing that replaces them. It's one thing to be automated away, it's another to have your own work used against you like this, and I'm sure it feels extra-shitty as a result.
I see this as another step toward having a smaller and smaller space in which to find our own meaning or "point" to life, which is the only option left after the march of secularization. Recording and mass media / reproduction already curtailed that really badly on the "art" side of things. Work is staring at glowing rectangles and tapping clacky plastic boards—almost nobody finds it satisfying or fulfilling or engaging, which is why so many take pills to be able to tolerate it. Work, art... if this tech fulfills its promise and makes major cuts to the role for people in those areas, what's left?
The space in which to find human meaning seems to shrink by the day; the circle in which we can provide personal value and joy to others without it becoming a question of cold economics shrinks by the day, etc.
I don't think that's great for everyone's future. Though admittedly we've already done so much harm to that, that this may hardly matter in the scheme of things.
I'm not sure the direction we're going looks like success, even if it happens to also mean medicine gets really good or whatever.
Then again I'm a bit of a technological-determinist and almost nobody agrees with this take anyway, so it's not like there's anything to be done about it. If we don't do [bad but economically-advantageous-on-a-state-level thing], someone else will, then we'll also have to, because fucking Moloch. It'll turn out how it turns out, and no meaningful part in determining that direction is whether it'll put us somewhere good, except "good" as blind-ass Moloch judges it.
I hear these sorts of statements a lot, and always wonder how people come to the conclusion that "people who said A were the ones who were saying B". Barring survey data, how would you know that it isn't just the case that it seems that way?
The idea that people who would tell someone else to learn to code are now luddites seems super counter-intuitive to me. Wouldn't people opposing automation now likely be the same ones opposing it in the past? Why would you assume they're the same group without data showing it?
I know a bunch of artists personally and none of them seem to oppose blue-collar work
1) It'll no longer be possible to work as an artist without being incredibly productive. Output, output, output. The value of each individual thing will be so low that you have to be both excellent at what you do (which will largely be curating and tweaking AI-generated art) and extremely prolific. There will be a very few exceptions to this, but even fewer than today.
2) Art becomes another thing lots of people in the office are expected to do simply as a part of their non-artist job, like a whole bunch of other things that used to be specialized roles but become a little part of everyone's job thanks to computers. It'll be like being semi-OK at using Excel.
I expect a mix of both to happen. It's not gonna be a good thing for artists, in general.
I would argue because most AI imagery right now is made for fun and not monetary gains, so it is actually a purer form of art.
"It'll massively suck for you, but don't worry, it'll be better for everyone else" is little comfort for most of us
Ideally we’d see something opt-in to decide exactly how much you have to give back, and how much you have to constrain your own downstream users. And in fact we do see that. We have copyleft licenses for tons of code and media released to the public (e.g. GPL, CC-BY-SA NC, etc). It lets you define how someone can use your stuff without talking to you, and lays out the parameters for exactly how/whether you have to give back.
This is a moment where individual humans substantially increase their ability to effect change in the world. I’m watching as these tools quickly become commoditized. I’m seeing low-income, first-generation Americans who speak broken English using ChatGPT to translate their messages into “upper middle class business professional” and land contracts that were off limits before. I’m seeing individuals rapidly iterate and explore visual spaces on the scale of 100s to 1000s of designs using Stable Diffusion, a process that was financially infeasible even for well-funded corps due to the cost of human labor this time last year. These aren’t fanciful dreams of how this tech is going to change society - I’ve observed these outcomes in real life.
I’m empathetic that the entire world is moving out from under all of our feet. But the direction it’s moving is unbelievably exciting. AI isn’t going to replace humans, humans using AI are going to replace humans who don’t.
Be the human that helps other humans wield AI.
Could the bot not curate its own output? It has been shown that feeding output back into the model results in improvement. I got the idea that better results come from increments. The AI overlords (model owners) will make sure they learn from all that curating you might do too, making your job even less skilled. Read: you are more replaceable.
Please prove me wrong! I hope I am just anxious. History has proven that increases in productivity tend to go to capital owners, unless workers have bargaining power. Mine workers were paid relatively well here, back in the day. Complete villages and cities thrived around this business. When those workers were no longer needed the government had to implement social programs to prevent a societal collapse there.
Look around, Musk wants you to work 10 hours per day already. Don't expect an early retirement or a more relaxed job..
Progress is cool if you're on the side of the wheel that's going up. It's the worst fucking thing in the world if you're on the side that's going down and are about to get smashed into the mud.
I think the wheels are turning. It's just a resultant movement from thousands of small movements, but nobody is controlling it. If you take a look, not even wars dent the steady progress of science and technology.
A belief system that centers around human well being sounds more reasonable than *unbounded* capitalism. We know it, we don't know what to do with it.
That's a huge ethical issue whether or not it's explicitly addressed in copyright/ip law.
I'd just say the scale is different. Old school automation just required one expert to guide the development of an automation. AI art requires the expertise of thousands.
The fundamental design of transformer architecture isn't capable of what you think it is.
There are still radical, fundamental breakthroughs needed. It's not a matter of incremental improvement over time.
Some programmers are upset and already filing suit.
When was making a living through art ever guaranteed? Society has mocked artists for centuries.
Let them make art and give them a UBI.
AI will replace programmers too. If a user can ask a future AI to organize their machine's state into an arbitrary video game, a photorealistic movie, or reports generated from sources abc, weighted by xyz, on a bar chart, then only the AI code base (whatever bootstrapping and runtime it needs) remains necessary.
Why would I ask an AI that can produce the end result to produce code? Code is just minimized ideal machine state.
You’re correct; there is a lot of empathy lacking in our culture, but it’s not just when it comes to art.
The thing is, empathy doesn't really do anything. Pandora's Box is open and there's no effective way of shutting it that is more than a hopeful dream. Stopping technology is like every doomed effort that has existed to stop capitalism.
Probably has something to do with years of artists trash talking engineers.
"Observing humans under capitalism and concluding it's only in our nature to be greedy is like observing humans under water and concluding it's only in our nature to drown."
Whether 95% or 99.9% correct, when there is a serious bug, you're still going to need people that can fix the gap between almost correct and actually correct.
If I can just ask for a certain arbitrary machine state (with some yet unrealized future version of AI) who needs programmers?
We’ll need to vet AI output so there will still be knowledge work; we’re not going to let the AI decide to launch nukes, or inject whatever level of morphine it wants.
Data entry work at a computer (programming being specialized data entry; code is a data model of primitives for a compiler/interpreter) is not long for this world, but analysis will be around for a while yet.
Not if the worker is an engineer or similar. Some engineers built tools that improved the building of tools.
And this started even earlier than the industrial revolution. Think for example of Johannes Gutenberg. His real important invention was not the printing press (this already existed) and not even moveable types, but a process by which a printer could mold his own set of identical moveable types.
I see a certain analogy between what Gutenberg's invention meant for scribes then and what Stable Diffusion means for artists today.
Another thought: in engineering we do not have extremely long-lasting copyright, but much shorter protection periods via patents. I have never understood why software has to be protected for such long copyright periods and not for much shorter, patent-like periods. Perhaps we should look for something similar for AI and artists: an artist has copyright as usual for close reproductions, but 20 years after publication the work may be used without his or her consent for training AI models.
That’s really what we’re protecting here?
I’d rather live in the future where automation does practically everything, not for the benefit of some billionaire born into wealth, but simply because that's what the automation is supposed to do. Similar to the economy in Factorio.
Then people can derive meaning from themselves rather than from whatever dystopian nightmare we’re currently living in.
It’s absurdly depressing that some people want to stifle this progress only because it’s going to remove this god awful and completely made up idea that work is freedom or work is what life is about.
The future of work will not be decided by the now-60+ year olds in another 10-15 years; Millennials and Gen Z are not growing conservative as they age into and through their 30s the way Gen X and Boomers did. Generational churn is a huge wildcard.
I think you need to see there are 2 types of people:
- those who want to generate results ("get the job done, quickly"), and
- those who enjoy programming for its own sake.
The first group are the ones who can't see what is getting lost. They see programming as an obstacle. Strangely, some of them believe on the one hand that many more people will be able to produce lots more software because of AI, and simultaneously expect to remain in demand.
They might think your job is producing pictures, which is just a burden.
I am from the second group. I never chose this profession because of the money, or dreaming about some big business I could create. I dread pasting generated code all over the place. The only one happy about that would be the owner of the software. And the AI model overlord, of course.
I hope that technical and artistic skill will gain appreciation again and that you will have a happy life doing what you like the most.
The whole point of art is human expression. The idea that artists can be "automated away" is just sad and disgusting and the amount of people who want art but don't want to pay the artist is astounding.
Why are we so eager to rid ourselves of what makes us human to save a buck? This isn't innovation, it's self-destruction.
We've just made "learning style" easier, so a thing that was always a risk is now happening.
I'd reframe this to: making a living from your art is far more difficult than making money from programming.
> also be able to do the dumb, simple things most programmers do for their jobs?
I'm all for AI automating all the boring shit for me. Just like frameworks have. Just like libraries have. Just like DevOps has. Take all the plumbing and make it automated! I'm all for it!
But. At some point. Someone needs to take business speak and turn it into input for this machine. And wouldn't ya know it, I'm already getting paid for that!
The real answer is AI are not people, and it is ok to have different rules for them, and that is where the fight would need to be.
Capitalism is particularly good at weaponizing our own ideas against us. See large corporations co-opting anti-capitalist movements for sales and PR.
PepsiCo was probably mad that they couldn't co-opt "defund the police", "fuck 12", and "ACAB" like they could with "black lives matter".
Anything near and dear to us will be manipulated into a scientific formula to make a profit, and anything that cannot is rejected by any kind of mainstream media.
See: Capitalist Realism and Manufacturing Consent (for how advertising affects freedom of speech on any media platform).
You're projecting your own fears on everyone else. I'm a programmer, too, among other things. I write code in order to get other things done. (Don't you?) It's fucking awesome if this thing can do that part of my job. It means I can spend my time doing something even more interesting.
What we call "programming" isn't defined as "writing code," as you seem to think. It's defined as "getting a machine to do what we (or our bosses/customers) want." That part will never change. But if you expect the tools and methodologies to remain the same, it's time to start thinking about a third career, because this one was never a good fit for you.
This argument has come up many times in history, and your perspective has never come out on top. Not once. What do you expect to be different this time?
For someone seeking sound/imagery/etc. resulting from human expression (i.e., art), it makes sense that it can't be automated away.
For someone seeking sound/imagery/etc. without caring whether it's the result of human expression (e.g., AI artifacts that aren't art), it can be automated away.
AI powered surveillance and the ongoing destruction of public institutions will make it hard to stand up for the collective interest.
We are not in hell, but the road to it has not been closed.
Static 2D images that usually serve a commercial purpose, e.g. logos, clip art, game sprites, web page design and the like.
And the second is pure art whose purpose is more for the enjoyment of the creator or the viewer.
Business wants to fully automate the first case, and most people view it as having nothing to do with the essence of humanity. It's simply dollars for products - but it's also one of the very few ways that artists can actually have paying careers for their skills.
The second will still exist, although almost nobody in the world can pay bills off of it. And I wouldn't be shocked if ML models start encroaching there as well.
So a lot of what's being referred to is more like textile workers. And anyone who can type a few sentences can now make "art" significantly lowering barriers to entry. Maybe a designer comes and touches it up.
The short-sighted part is people thinking that this will somehow stay specific to art and that their own cherished field is immune.
Programming will soon follow. Any PM "soon enough" will be able to write text to generate a fully working app. And maybe a coder comes in to touch it up.
Sure you can't stick an entire project in there, but if you know the problem is in class Baz, just toss in the relevant code and it does a pretty damn good job.
Art-as-human-expression isn't going anywhere because it's intrinsically motivated. It's what people do because they love doing it. Just like people still do woodworking even though it's cheaper to buy a chair from Walmart, people will still paint and draw.
What is going to go away is design work for low-end advertising agencies or for publishers of cheap novels or any of the other dozens of jobs that were never bastions of human creativity to begin with.
TBH, given how derivative humans tend to be even with our much deeper "Human Learning" model and years and years of experiences.. i'm kinda shocked ML is even capable of appearing non-derivative. Throw a child in a room, starve it of any interaction and somehow (lol) only feed it select images, then ask it to draw something.. i'd expect it to perform similarly. A contrived example, but i'm illustrating the depth of our experiences when compared to ML.
I half expect that the "next generation" of ML will be fed a dataset many orders of magnitude larger, more closely matching our own: a video feed of years' worth of data, simulating the complex inputs that Human Learning gets to benefit from. If/when that day comes i can't imagine we will seem that much more unique than ML.
I should be clear though; I am in no way defending how companies are using these products. I just don't agree that we're so unique in how we think, how we create, or that we're truly unique in any way, shape, or form. (Code, Input) => Output is all I think we are, I guess.
I think it's more a matter of enlarging the scope of what one person can manage, like moving from the pure manual labor era, limited by how much weight a human body could move from point A to point B, to the steam engine era. Railroads totally wrecked the industry of people moving things on their backs or in mule trains, and that wasn't a bad thing.
> Don't expect an early retirement or a more relaxed job..
That's kinda my point, I don't think this is going to make less work, it'll turbocharge productivity. When has an industry ever found a way to increase productivity and just said cool, now we'll keep the status quo with our output and work less?
The lack of empathy you find depressing is natural defensiveness in the face of hostility rooted in fear and, in most cases, broad ignorance of both the legal and technical context and operation of these systems.
We might look at this and say, "there should have been a roll out with education and appropriate framing, they should have managed this better."
This may be true but of course, there is no "they"; so here we are.
I understand the fear, but my own empathy is blocked by hostility in specific interactions.
Oh, life & death is different? Don't be so sure; there are good reasons to believe that livelihood (not to mention social credit) and life are closely related. And the fundamental point doesn't depend on the specific example: you can't point to an orders-of-magnitude change and then claim we're dealing with a situation that's qualitatively like it's "always" been.
"Easier" doesn't begin to honestly represent what's happened here: we've crossed a threshold where we have technology for production by automated imitation at scale. And where that tech works primarily because of imitation, the work of those imitated has been a crucial part of that. Where that work has a reasonable claim of ownership, those who own it deserve to be recognized & compensated.
Artists are poets, and they're railing against Trurl's electronic bard.
[https://electricliterature.com/wp-content/uploads/2017/11/Tr...]
Basically, the argument is that you should not have ever charged for your art, since its viewing and utility is increased when more people see it.
The lack of empathy comes from our love of open source. That's why. These engineers have been pirating books, movies, and games for a long time. Artists crying for copyright sound the same as the MPAA suing grandma 20 years ago.
You describe stuff that is harmful or boring. In another comment I touched upon this: there seems to be a clear distinction between people who love programming and those who just want to get results. The former do not enjoy being manager of something larger per se if they lose what they love.
I can see a (short-term?) increase in demand for software, but it is not infinite. So when productivity increases and demand does not keep at least the same pace, you will see jobless people and you will face competition.
What no one has touched yet is that the nature of programming might change too. We try to optimize for the dev experience now, but it is not unreasonable to expect that we have to bend towards being AI-friendly. Maybe human friendly becomes less of a concern (enough desperate people out there), AI-friendly and performance might be more important metrics to the owner.
We're all "doomed" if this is the case.
Industries have traditionally solved this with planned obsolescence. Maybe JavaScript might be our saviour here for a while. :)
There is also a natural plateau of choice we can handle. Of those 2000, only a few will be winners and with reach. It might soon be that the AI model becomes more valuable than any of those apps. Case in point: try to make a profitable app on Android these days.
There are a lot of working commercial artists in between the fine art world and the "cheap novels and low-end advertising agencies" you dismiss, and there's no reason to think AI art won't eat a lot of their employment.
Why would software engineers who work on web apps, Kubernetes, and the internet in general need to understand interrupts? Not only will they never deal with any of that, they're not supposed to. All of that has been automated away so that what we call the Internet can be possible.
All of that stuff turned into specializations as the tech world progressed and the ecosystem grew. A software engineer specialized in hardware would need to know interrupts but wouldn't need to know how to do devops. For the software engineer who works on Internet apps, it's the opposite.
Creating art is not that much harder than programming; creating good art is much harder than programming. That's the reason a large majority of art isn't very good, and why a large majority of artists don't make a living by creating art.
Just like the camera didn't kill the artist, neither will AI. For as long as art is about the ideas behind the piece as opposed to the technical skills required to make it (which I would argue has been true since the rise of impressionism) then AI doesn't change much. The good ideas are still required, AI only makes creating art (especially bad art) more accessible.
Basically the current argument of artists being out of a job but taken to its extreme.
Why would these robots get paid? They wouldn’t. They’d just mine, manufacture, and produce on request.
Imagine a world where chatgpt version 3000 is connected to that swarm of robots and you can type “produce a 7 inch phone with an OLED screen, removable battery, 5 physical buttons, a physical shutter, and removable storage” and X days later arrives that phone, delivered by automation, of course.
Same would work with food, where automation plants the seeds, waters the crops, removes pests, harvests the food, and delivers it to your home.
All of these are simply artists going out of a job, except it’s not artists it’s practically every job humans are forced to do today.
There’d be very little need to work for almost every human on earth. Then I could happily spend all day taking shitty photographs that AI can easily replicate today far better than I could photograph in real life but I don’t have to feel like a waste of life because I enjoy doing it for fun and not because I’m forced to in order to survive.
But you're of course right that the benefits are unevenly distributed, and for some it truly does suck.
The real battle there would be protocols; how everyone's custom apps communicate. Here, we can fall back to existing protocols such as email, ActivityPub, Matrix, etc.
There's nothing stopping anyone from coding for fun, but we get paid for delivering value, and the amount of value that you can create is hugely increased with these new tools. I think for a lot of people their job satisfaction comes from having autonomy and seeing their work make an impact, and these tools will actually provide them with even more autonomy and satisfaction from increased impact as they're able to take on bigger challenges than they were able to in the past.
I don't pay someone to run calculations for me, either, also a difficult and sometimes creative process. I use a computer. And when the computer can't, then I either employ my creativity, or hire a creative.
> I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.
All software has bugs, but it's usually far better than "95% right." Code that's only 95% right probably wouldn't pass half-assed testing or a couple of days of actual use.
1. Proprietary software is harmful and immoral in ways that proprietary books or movies are not.
2. The creative industry has historically used copyright as a tool to tell computer programmers to stop having fun.
So the lack of empathy is actually pretty predictable. Artists - or at least, the people who claim to represent their economic interests - have consistently used copyright as a cudgel to smack programmers about. If you've been marinading in Free Software culture and Cory Doctorow-grade ressentiment for half a century, you're going to be more interested in taking revenge against the people who have been telling you "No, shut up, that's communism" than mere first-order self-preservation[1].
This isn't just "programmers don't have fucks to give", though. In fact, your actual statements about computer programmers are wrong, because there's already an active lawsuit against OpenAI and Microsoft over GitHub Copilot and its use of FOSS code.
You see, AI actually breaks the copyright and ethical norms of programmers, too. Most public code happens to be licensed under terms that permit reuse (we hate copyright), but only if derivatives and modifications are also shared in the same manner (because we really hate copyright). Artists are worried about being paid, but programmers are worried about keeping the commons open. The former is easy: OpenAI can offer a rev share for people whose images were in the training set. The latter is far harder, because OpenAI's business model is charging people for access to the AI. We don't want to be paid, we want OpenAI to not be paid.
Also, the assumption that "art is more difficult than computer programming" is itself hilariously devoid of empathy. For every junior programmer crudely duct-taping code together you have a person drawing MS Paint fanart on their DeviantART page. The two fields test different skills and you cannot just say one is harder than the other. Furthermore, the consequences are different here. If art is bad, it's bad[0] and people potentially lose money; but if code is bad it gets hacked or kills people.
[0] I am intentionally not going to mention the concerns Stability AI has with people generating CSAM with AI art generators. That's an entirely different can of worms.
[1] Revenge can itself be thought of as a second-order self-preservation strategy (i.e. you hurt me, so I'd better hurt you so that you can't hurt me twice).
> There’d be very little need to work for almost every human on earth.
When mankind made a pact with the devil, the burden we got was that we had to earn our bread through sweat and hard labor. This story has survived millennia; there is something to it.
Why is the bottom layer in society not automated by robots? No need to if they are cheaper than robots. If you don't care about humans, you can get quite some labor for a little bit of sugar. If you can work one job to pay your rent, you can possibly do two or three even. If you don't have those social hobbies like universal healthcare and public education, people will be competitive for a very long time with robots. If people are less valuable, they will be treated as such.
Hell is nearer than paradise.
Have you actually looked into CS deeply? Obviously not. (I'm not saying this cannot also be true for music, which I don't know.)
As far as money goes... in the long run artists will still make money fine, as people will value human-generated (artisanal) works. Just as people like hand-made stuff today, even though you can get machine-made stuff way cheaper. You may not have the generic jobs of cranking out stuff for advertisements (and such) but you'll still have artists.
Your diatribe about not caring about humans is ironic. I don’t know where you got all that from, but it certainly wasn’t my previous comment.
I also don’t know what pact you’re on about. The idea of working for survival is used to exploit people for their labor. I guess people with disabilities that aren’t able to work just aren’t human? Should we let them starve to death since they can’t work a 9-5 and work for their food?
It's not even clear you're correct by the apparent (if limited) support of your own argument. "Transmission" of some sort is certainly occurring when the work is given as input. It's probably even tenable to argue that a copy is created in the representation of the model.
You probably mean to argue something to the effect that dissemination by the model is the key threshold by which we'd recognize something like the current copyright law might fail to apply, the transformative nature of output being a key distinction. But some people have already shown that some outputs are much less transformative than others -- and even that's not the overall point, which is that this is a qualitative change much like those that gave birth to industrial-revolution copyright itself, and calls for a similar kind of renegotiation to protect the underlying ethics.
People should have a say in how the fruits of their labor are bargained for and used. Including into how machines and models that drive them are used. That's part of intentionally creating a society that's built for humans, including artists and poets.
Commercial art needs to be eye catching and on brand if it's going to be worth anything, and a random intern isn't going to be able to generate anything with an AI that matches the vision of stakeholders. Artists will still be needed in that middle zone to create things that are on brand, that match stakeholder expectations, and that stand out from every other AI generated piece. These artists will likely start using AI tools, but they're unlikely to be replaced completely any time soon.
That's why I only mentioned the bottom tier of commercial art as being in danger. The only jobs that can be replaced by AI with the technology that we're seeing right now are in the cases where it really doesn't matter exactly what the art looks like, there just has to be something.
The ones at risk (and complaining the most) are semipro online artists who sell one image at a time, like fanart commissions.
- generic expression: commercial/pop/entertainment; audience makes demands on the art
- autonomous expression: artist's vision is paramount; art makes demands on the audience
Obviously these are idealized antipodes. The question of whether it is the art making demands on the audience or the audience making demands on the art is especially insightful, in my opinion. Given this rubric, I'd say AI-generated art must necessarily belong to "generic expression" simply because its output has to meet fitness criteria.
Creative professionals might take the first hit in professional services, but AI is going to come for engineers at a much faster and more furious pace. I would even go so far as to say that some (probably a small amount) of the people who have recently gotten laid off at big tech companies may never see a paycheck as high as they previously had.
The vast majority of software engineering hours that are actually paid are for maintenance, and this is where AI is likely to come in like a tornado. Once AI hits upgrade and migration tools it's going to eliminate entire teams permanently.
Mixed social-democratic economies are nice and better than plutocracies, but they have capitalism; they just have other economic forms alongside it.
(Needing to profit isn’t exclusive to capitalism either. Socialist societies also need productivity and profit, because they need to reinvest.)
The problem of a vengeful god who demands the slaughter of infidels lies not in his existence or nonexistence, but in people's belief in such a god.
Similarly, it does not matter whether AI works or it doesn't. It's irrelevant how good it actually is. What matters is whether people "believe" in it.
AI is not a technology, it's an ideology.
Given time it will fulfil its own prophecy as "we who believe" steer the world toward that.
That's what's changing now. It's in the air.
The ruling classes (those who own capital and industry) are looking at this. The workers are looking too. Both of them see a new world approaching, and actually everyone is worried. What is under attack is not the jobs of the current generation, but the value of human skill itself, for all generations to come. And, yes, it's the tail of a trajectory we have been on for a long time.
It isn't the only way computers can be. There is IA instead of AI. But intelligence amplification goes against the principles of capital at this stage. Our trajectory has been to make people dumber in service of profit.
I don’t understand this. It reminds me of the Go player who announced he was giving up the game after AlphaGo’s success. To me that’s exactly the same as saying you’re going to give up running, hiking, or walking because horses or cars are faster. That has nothing to do with human meaning, and thinking it does is making a really obvious category error.
Nevertheless, having more engineers around actually causes you to be more valuable, not less. “Taking your job” isn’t a thing; the Fed chairman is the only thing in our economy that can do that.
As a crude analogy, there are a lot of great free or low-cost tools to create websites that didn't exist 15 years ago and can easily replace what would have been a much more expensive web developer contract back then. And yet, in those last 15 years, the "size of the web pot" has increased enough that I don't think many professional web developers are worried about site builder tools threatening the entire industry. There seem to be a lot more web developers now than there were 15 years ago, and they seem to be paid as well or better than they were 15 years ago. And again, that doesn't mean that certain individuals or firms didn't on occasion experience financial hardship due to pressure from cheaper alternatives, and I don't want to minimize that. It just seems like the industry is still thriving.
To be clear, I really have no idea if this will turn out to be true. I also have no idea if this same thing might happen in other fields like art, music, writing, etc.
Do you have a source for that? Doesn't match my experience unless your definition of maintenance is really broad
Maybe you are better at CS than music and therefore perceive it as easy and the other one as hard.
To be perfectly honest, I absolutely love that particular attempt by artists, because it will likely force 'some' restrictions on how AI is used and maybe even limit the amount of 'black-boxiness' it entails (disclosure of model, dataset used, parameters; I might be dreaming though).
I disagree with your statement in general. HN has empathy and not just because it could affect their future world. It is a relatively big shift in tech and we should weigh it carefully.
You're like half a step away from the realization that almost everything you do today is done better, if not by AI, then by someone who can do it better than you, yet you still do it because you enjoy it.
Now just flip those two, almost everything you do in the future will be done better by AI if not another human.
But that doesn’t remove the fact that you enjoy it.
For example, today I want to spend my day taking photographs and trying to do stupid graphic design in After Effects. I can promise you that there are thousands of humans and even AI that can do a far better job than me at both these things. Yet I have over a terabyte of photographs and failed After Effects experiments. Do I stop enjoying it because I can’t make money from these hobbies? Do I stop enjoying it because there’s some digital artist at corporation X that can take everything I have and do it better, faster, and get paid while doing it?
No. So why would this change things if instead of a human at corporation X, it’s an AI?
> Learning technical skills like draughtsmanship is harder than learning programming because you can't just log onto a free website and start getting instant & accurate feedback on your work.
Really? I sometimes wonder what people think programming really is. Not what you describe, obviously.
On an infinite timeline humans will no longer be needed in the generation of code (we hopefully will still study and appreciate it for leisure), but I doubt we're there yet.
I'm a bit surprised that it got a lookup wrong, but for any other domain, describing it as a "novice" is understating the situation a lot.
Now, was Aaron Swartz (whom I view as an ultimate example of this open-source idea you cite) naive? No. Maybe he knew in his heart the greater good would outweigh anything.
But I don't think we should judge too harshly merely falling on one side of this issue or the other. Perhaps it's down to a debate about what creation/truth/knowledge actually are. Maybe some creators (of which artists and computer scientists are examples) view creations as something they bring into the world, not reveal about the world.
The same is true, by the way, for writing. So? Doesn't mean writing well is easy.
Not sure I fully understand your second point: are you implying that I don't really know what programming is?
I can't copy your GPL code. I might be able to write my own code that does the same thing.
I'm going to defend this statement in advance. A lot of software developers white knight more than they strictly have to; they claim that learning from GPL code unavoidably results in infringing reproduction of that code.
Courts, however, apply a test [1], in an attempt to determine the degree to which the idea is separable from the expression of that idea. Copyright protects particular expression, not idea, and in the case that the idea cannot be separated from the expression, the expression cannot be copyrighted. So either I'm able to produce a non-infringing expression of the idea, or the expression cannot be copyrighted, and the GPL license is redundant.
[1] https://en.wikipedia.org/wiki/Abstraction-Filtration-Compari...
All these talking points about lack of empathy for poor suffering artists have already been made a million times in those other debates. They just don't pack much of a punch anymore.
Current SOTA: https://openai.com/blog/vpt/
This is especially true for complex pieces.
If an AI could produce a world-class totally amazing illustration or even a book I will afterwards easily see or read it.
On the other hand, real-world software systems consist of hundreds of thousands of lines in distributed services. How would a layman really judge whether they work?
Nevertheless, I also expect AI to have a big impact, since fewer engineers can do much more.
The more computers and machines and institutions take that over, the fewer opportunities there are to do that, and the more doing that kind of thing feels forced, or even like an indulgence of the person providing the "service" and an imposition on those served.
Vonnegut wrote quite a bit about this phenomenon in the arts—how recording, broadcast, and mechanical reproduction vastly diminished the social and even economic value of small-time artistic talent. Uncle Bob's storytelling can't compete with Walt Disney Corporation. Grandma's piano playing stopped mattering much when we began turning on the radio instead of having sing-alongs around the upright. Nobody wants your cousin's quite good (but not excellent) sketches of them, or of any other subject—you're doing him a favor if you sit for him, and when you pretend to give a shit about the results. Aunt Gertrude's quilt-making is still kinda cool and you don't mind receiving a quilt from her, but you always feel kinda bad that she spent dozens of hours making something when you could have had a functional equivalent for perhaps $20. It's a nice gesture, and you may appreciate it, but she needed to give it more than you needed to receive it.
Meanwhile, social shifts shrink the set of people for whom any of this might even apply, for most of us. I dunno, maybe online spaces partially replace that, but most of that, especially the creative spaces, seem full of fake-feeling positivity and obligatory engagement, not the same thing at all as meeting another person you know's actual needs or desires.
That's the kind of thing I mean.
The areas where this isn't true are mostly ones that machines and markets are having trouble automating, so they're still expensive relative to the effort to do it yourself. Cooking's a notable one. The last part of our pre-industrial social animal to go extinct may well be meal-focused major holidays.
My point was about skill level, not specialization. Specialization is great.. we can build bigger and bigger things not having to engineer/understand what's beneath everything. We stand on the shoulders of giants as they say.
And I agree, there is no one job specialization that's more valuable than another. It's contextual. If you have a legal problem, a specialized lawyer is more valuable than a specialized doctor. So yeah, I agree that if you have a cloud problem, you want a cloud engineer and not a firmware engineer. Although I should add that things like interrupts/events/synchronization and I/O operations are fairly universal computing concepts, even in the cloud world. If you're a cloud programmer and you don't know how long an operation takes / its big-O complexity, how much storage it uses / its persistence, etc., you're probably going to have some explaining to do when your company gets next month's AWS bill.
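The big-O point can be made concrete with a toy sketch (all names hypothetical, not from any actual codebase): the same membership check done two ways, with very different costs as inputs grow.

```python
# Hypothetical sketch: the cost of ignoring complexity.
# Membership in a list scans it end to end: O(n) per lookup.
# Membership in a set is a hash probe: O(1) on average.

def count_allowed_naive(requests, allowed_list):
    # O(len(requests) * len(allowed_list)): every lookup rescans the list.
    return sum(1 for r in requests if r in allowed_list)

def count_allowed(requests, allowed_list):
    allowed = set(allowed_list)  # built once: O(len(allowed_list))
    # O(len(requests)) total: each lookup is a constant-time hash probe.
    return sum(1 for r in requests if r in allowed)
```

Both return the same answer; only the bill differs as the inputs grow.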
And yes plumbing is useful! Someone has to hook stuff up that needs hooking up! But which task requires more skill; the person that designs a good water flow valve, or the person hooking one up? I'd argue the person designing the valve needs to be more skilled (they certainly need more schooling). The average plumber can't design a good flow valve, while the average non-plumber can fix a leaky sink.
AI is eating unskilled / low-skill work. In the 80's production line workers were afraid of robots. Well, here we are. No more pools of typists, automated call centers handling huge volumes of people, dark factories.
It's a terrible time to be an artist if AI can clipart compose images of the same quality much faster than you can draw by hand.
Back to the original comment: I'm merely suggesting that some programming jobs require a lot more skill than others. If software plumbing is easy, then it can and will be automated. If those were the only skills I possessed, I'd be worried about my job.
Like fusion, I just don't see general purpose AI being a thing in my lifetime. For highly skilled programmers, it's going to be a lot longer before they're replaced.
Welcome to our digital future. It's very stressful for the average skilled human.
It's obvious how an expert at regurgitating StackOverflow would be able to correct an NPE or an off-by-one error when given the exact line of code that error is on. Going any deeper, and actually being able to find a bug, requires understanding of the codebase as a whole and the ability to map the code to what the code actually does in real life. GPT has shown none of this.
"But it will get better over time" arguments fail for this because the thing that's needed is a fundamentally new ability, not just "the same but better." Understanding a codebase is a different thing from regurgitating StackOverflow. It's the same thing as saying in 1980, "We have bipedal robots that can hobble, so if we just improve on that enough we'll eventually have bipedal robots that beat humans at football."
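A contrived toy (hypothetical function, not from any real project) shows the gap: once you're pointed at the exact line, an off-by-one fix is rote; knowing that this function is where the bug lives is the part that requires understanding the system.

```python
# Contrived example of the "given the exact line" class of fix.
def last_item(items):
    # The buggy version read items[len(items)], one index past the end,
    # raising IndexError. Pointed at that exact line, the fix is mechanical:
    return items[len(items) - 1]
```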
No actually, that's not how that works. You're demonstrating the lack of empathy that the parent comment brings up as alarming.
Regarding programming, code that's only 95% right can just be run through code assist to fix everything.
It might in a few areas, though. I think film making is poised to get really weird, for instance, possibly in some interesting and not-terrible ways, compared with what we're used to. That's mostly because automation might replace entire teams that had to spend thousands of hours before anyone could see the finished work or pay for it, not just a few hours of one or two artists' time on a more incremental basis. And even that's not quite a revolution: we used to have very-small-crew films, including tons that were big hits, and films with credits lists like the average Summer blockbuster these days were unheard of. So that's more a return to how things were before computer graphics entered the picture (even 70s and 80s films, after the advent of the spectacle- and FX-heavy Summer blockbuster, had crews so small that it's almost hard to believe, when you're used to seeing the list of hundreds of people who work on, say, a Marvel film).
In the sense that art is a 2D visual representation of something, or a marketing tool that evokes a biological response in the viewer, art is easy to automate away. This is no different than when the camera replaced portraitists. We've just invented a camera that shows us things that don't exist.
In the sense that art is human expression, nobody has even tried to automate that yet and I've seen no evidence that expressionary artists are threatened.
a) the panic is entirely misguided and based on two wrong assumptions. The first is that textual input and treating the model as a function (command in -> result out) are sufficient for anything. No, this is a fundamentally deficient way to give artistic directions, which is further handicapped by primitive models and weak compute. Text alone is a toy; the field will just become more and more complex and technically involved, just like 3D CGI did, because if you don't use every trick available, you're missing out. The second wrong assumption is that it's going to replace anyone, instead of making many people re-learn a new tool and produce what was previously unfeasible due to the amount of mechanistic work involved. This second assumption stems from the fundamental misunderstanding of the value artists provide, which is conceptualization, even in a seemingly routine job.
b) the panic is entirely blown out of proportion by the social media. Most people have neither time nor desire to actually dive into this tech and find out what works and what doesn't. They just believe that a magical machine steals their works to replace them, because that's what everyone reposts on Twitter endlessly.
If the actual business costs are less than the price of a team of developers... welp, it was fun while it lasted.
I have almost the exact opposite opinion. Greenfield is where AI is going to shine.
Maintenance is riddled with "gotcha's", business context, and legacy issues that were all handled and negotiated over outside of the development workflow.
By contrast, AI can pretty easily generate a new file based on some form of input.
Also kinda curious how you deal with people that have disabilities and can’t exactly fight to survive. Me, I’m practically blind without glasses/contacts, so I’ll not be taking life lessons from the local mountain lion, thanks.
I am wondering why you define being in terms of having. Is that a slip, or is that related to this:
> I want to just push that further and subjugate nature with automation that can feed us and manufacture worthless plastic and metal media consumption devices for us.
Because I can hear sadness in these words. I think we can feel thankful for having the opportunity to observe beauty and the universe and feel belonging to where we are and with who we are. Those free smartphones are not going to substitute that.
I do not mean we have to work because it is our fate or something like that.
> Your diatribe about not caring about humans is ironic.
A pity you feel that way. Maybe you interpreted "If you don't care about humans" as literally you, whereas I meant it as "If one doesn't care".
What I meant was the assumption you seem to make that, when a few have plenty of production means without needing the other 'human resources' anymore, those few will spontaneously share their wealth with the world so the others can have free smartphones and a life of consumption. They will not; instead, those others will have to double down and start to compete with increasingly cheaper robots.
----
The pact in that old story I was talking about deals with the idea that we as humans know how to be evil. In the story, the consequence is that those first people had to leave paradise and from then on have to work for their survival.
I just mentioned it because of the fact that we exploit not only nature, but other humans too, if we are evil enough. People who end up controlling the largest amounts of wealth are usually the most ruthless. That's why we need rules.
----
> I guess people with disabilities that aren’t able to work just aren’t human? Should we let them starve to death since they can’t work a 9-5 and work for their food?
On the contrary, I think I have been misunderstood. :)
Do you people think art is limited to digital images only? No video? No paintings, sculptures, mixed media, performance art, lighting, woodwork, etc., etc.? How is it possible that everyone seems to ignore that we still need massive leaps in AI and robotics to match the technical ability of 99% of artists?
Wow, yes. This is exactly what I've been thinking but you summed it up more eloquently.
I like my ideal world a lot better.
It might take away the joy of programming, feeling of ownership and accomplishment.
People who today complain about having to program a bunch of API calls might be in for a rude awakening, tending and debugging the piles of chatbot output that got mashed together. Or do we expect that in the future we will suddenly value quality over speed or #features?
I love coaching juniors. These are humans, I can help them with their struggles and teach them. I try to understand them, we share experiences in life. We laugh. We find meaning by being with each other on this lonely, beautiful planet in the universe.
---
Please do not take offense: observe the language in which we are already conflating human beings with bots. If we do it already now, we will collectively do it in the future.
We are not prepared.
It has been fascinating to watch “copyright infringement is not theft” morph into “actually yes it’s stealing” over the last few years.
It used to be incredibly rare to find copyright maximalists on Hacker News, but GitHub Copilot and Stable Diffusion seem to have created a new generation of them.
Something being currently legal and possible doesn't mean it is morally right.
Technology enables things and sometimes the change is qualitatively different.
There's been huge improvements in automating maintenance, and yet I've never once heard someone blame a layoff on e.g. clang-rename (which has probably made me 100x more productive at refactoring compared to doing it manually.)
I'd even say your conclusion is exactly backwards. The implicit assumption is that there's a fixed amount of engineering work to do, so any automation means fewer engineers. In reality there is no such constraint. Firms hire when the marginal benefit of an engineer is larger than the cost. Automation increases productivity, causing firms to hire more, not fewer.
If artists I employ want to incorporate this stuff into their workflow, that sounds great. They can get more done. There won't be fewer artists on payroll; just more and better art will be produced. I don't even think it is at the point of incorporating it into a workflow yet though, so this really seems like a nothingburger to me.
At least github copilot is useful. This stuff is really not useful in a professional context, and the idea that it is going to take artists jobs really doesn't make any sense to me. I mean, if there aren't any artists then who exactly do I have that is using these AI tools to make new designs? If you think the answer to that is just some intern, then you really don't know what you're talking about.
Personally, I think "copyright infringement is not theft" but I also think that using artists' work without their permission for profit is never OK, and that's what's happening here.
I am in, but just wanted to let you know many had this idea before. People thought in the past we would barely work these days anymore. What they got wrong is that productivity gains didn't reach the common man. It was partly lost through mass consumption, fueled by advertising, and wealth concentration. Instead, people at the bottom of the pyramid have to work harder.
> I like my ideal world a lot better.
Me too, without being consumption oriented though. Nonetheless, people who turn a blind eye to the weaknesses of humankind often run into unpleasant surprises. It requires work, lots of work.
Training an AI on data that was obtained legally can't be copyright infringement. This is what I was talking about regarding transmission. Copyright provides a legal means for a rights holder to limit the creation of a copy of their image for the purpose of transmitting it to me. If a rights holder has placed their image on the internet for me to view, then copyright does not give them a means to restrict how I choose to consume that image.
The AI may or may not create outputs that can be considered derivative works, or contain characters protected by copyright.
You seem to be making an argument that we should be changing this somehow. I suppose I'll say "maybe". But it is apparent to me that many people don't know how intellectual property works.
Yes, artists can also utilize AI as a photoshop filter, and some artists have started using it to fill in backgrounds in drawings, etc. Inpainting can also be used to do unimportant textures for 3d models. But that doesn't mean that AI art is no threat to artists' livelihoods, especially for scenarios like "I need a dozen illustrations to go with these articles" where quality isn't so important to the commissioner that they are willing to spend an extra few hundred bucks instead of spending 15 minutes in midjourney or stable diffusion.
As long as these networks continue being trained on artists' work without permission or compensation, they will continue to improve in output quality and muscle the actual artists out of work.
The confusion is that “copyright infringement is not theft” really was about being against corporate abuse of individuals. It's still the same situation here.
In art these parts are often overlooked, but they are significant nonetheless. E.g. getting the proportions right is an objective metric, and it is really off-putting if it is wrong.
And in programming the "art" parts are often overlooked, which is precisely why I feel that most software today is horrible. It is just made to barely "work" and get the technical parts right up to spec, and that's it. Beyond that nobody cares about resource efficiency, performance, security, maintainability, let alone elegance.
This isn't the only option though? You could restrict it to data where permission has been acquired, and many people would probably grant permission for free or for a small fee. Lots of stuff already exists in the public domain.
What ML people seem to want is the ability to just scoop up a billion images off the net with a spider and then feed it into their network, utilizing the unpaid labor of thousands-to-millions for free and turning it into profit. That is transparently unfair, I think. If you're going to enrich yourself, you should also enrich the people who made your success possible.
Many people would probably happily allow use of their work for this if asked first, or would grant it for a small fee. Lots of stuff is in the public domain. But you have to actually go through the trouble of getting permission/verifying PD status, and that's apparently Too Hard.
> A small amount of actual artists
It's extremely funny that you say this, because taking a look at the Trending on Artstation page tells a different story.
I think we are talking about a different job. I mentioned it somewhere else, but strapping together piles of bot-generated code and having to debug that will feel more like a burden for most, I fear.
If a programmer wanted to operate on a level where "value delivering" and "impact" are the most critical criteria for job satisfaction, one would be better off in a product management or even project management role. A good programmer will care a lot about her product, but she still might derive the most joy out of having built it mostly by herself.
I think that most passionate programmers want to build something by themselves. If api mashups are already not fun enough for them, I doubt that herding a bunch of code generators will bring that spark of joy.
How is training AI on imagery from the internet without permission different than decades of film and game artists borrowing H. R. Giger's style for alien technology?[1]
How is it different from decades of professional and amateur artists using the characteristic big-eyed manga/anime look without getting permission from Osamu Tezuka?
Copyright law doesn't cover general "style". Try to imagine the minefield that would exist if it were changed to work that way.
[1] No, I don't mean Alien, or other works that actually involved Giger himself.
Because those things, while dumb and simple, are not continuous in the way that visual art is. Subtle perturbations to a piece of visual art stay subtle. There is room for error. By contrast, subtle changes to source code can have drastic implications for the output of a program. In some domains this might be tolerable, but in any domain where you’re dealing with significant sums of money it won’t be.
Will human artists be able to compete with artificial artists commercially? If not, is that bad or is it progress, like Photoshop or Autotune?
The political/social factors which apply to the life-and-death decisions made driving a car, don't apply to whether one of the websites I work on works perfectly.
I'm 35, and I've paid to write code for about 15 years. To be honest, ChatGPT probably writes better code than I did at my first paid internship. It's got a ways to go to catch up with even a junior developer in my opinion, but it's only a matter of time.
And how much time? The expectation in the US is that my career will last until I'm 65ish. That's 30 years from now. Tesla has only been around 19 years and now makes self-driving cars.
So yeah, I'm not immediately worried that I'm going to lose my job to ChatGPT in the next year, but I am quite confident that my role will either cease existing or drastically change because of AI before the end of my career. The idea that we won't see AI replacing professional coders in the next 30 years strains credulity.
Luckily for me, I already have considered some career changes I'd want to do even if I weren't forced to by AI. But if folks my age were planning to finish out their careers in this field, they should come up with an alternative plan. And people starting this field are already in direct competition to stay ahead of AI.
Are there any documented cases where copyright law didn't seem to offer sufficient protection against something that really did seem like copyright infringement but done using AI tooling? I started looking for some a few weeks ago because of this debate and still haven't seen anything conclusive.
Take for example video games. They distracted many people from movies, but also created a huge new field, hungry for talent. Or another example: quite a few genres calcified into distinctive boring styles over the years (see anything related to manga/anime) simply because those styles require less mechanical work and are cheaper to produce. They could use a deep refresh. This tech will also lead to novel applications, created by those who embraced it and are willing to learn the increasingly complex toolset. That's what's been happening over the last several decades, which have seen several tech revolutions.
>As long as these networks continue being trained on artists' work
This misses the point. The real power of those things is not in the collection of styles baked into it. It's in the ability to learn new stuff. Finetuning and style transfer is what all the wizards do. Construct your own visual style by hand, make it produce more of that. And that's not just about static 2D images; neither do 2D illustrators represent all artists in the broad sense. Everyone who types "blah blah in the style of Ilya Kuvshinov" or is using img2img or whatever is just missing out, because the same stuff is going to be everywhere real soon.
1. This is theft and that's bad.
2. People who do this are getting gains without putting in the work, and that's bad. (And, per quite a few commenters I've seen, are talentless hacks.)
I have a lot of empathy for the first, and think it has merit, and have a much smaller amount of empathy for the second.
I ended up reading a lot of the quote tweets on this guy the other day: https://twitter.com/ammaar/status/1601284293363261441/retwee...
Here's just a few of thousands in the vein of number 2:
> No talent or passion whatsoever
> He thinks he created something
> Why don't you subscribe to writing and art classes?
> This so ugly and shows real disrespect for people who have made stuff by themselves for years.
> Men will literally sell AI trash and call it "art" instead of go to therapy
> Can’t write or draw but wants to do both
> This is nothing but a HUGE disrespect to all the writers and artists around the world, and all it does is belittle their REAL work and effort. > > This is not art. > Nothing to be proud of.
> I just spent 8 months illustrating a children’s book by hand—working, not “playing”—after a lifetime of training. > > FUCK OFF!
There are also plenty of people complaining about "theft", but honestly, re-reading through it now, that feels like a minority. If this were done using fully public-domain content, does it sound like any of the people I quoted above would have been okay with it?
There's a clear disdain for "non-artists" creating art in a new way. I very much feel for the people who see their careers going away, and I can also empathize with people who spent a long time acquiring a creative skill that's now "unnecessary". Programming has this too—those darn kids programming in Python rather than Assembly, or doing bootcamps that don't teach big-O notation. This is a normal, human way to feel, and I feel that too from time to time. BUT, I also resist that feeling. I choose not to express disdain for newcomers using new technology, or skipping the old ways.
A large (or at least loud) part of the art community seen here is expressing absolute disdain for those of us who are "cheating" not because "copyright infringement" but because we're using new technology that bypasses years of learning and that's very much eating into my empathy for the community in general. I find it toxic in the programming community and I find it toxic in the art community. Right now, it's exploding in the art community in a way far beyond what I've witnessed in programming.
I fail to see why someone working on the web needs to know about interrupts. Or why a firmware engineer needs to know about devops, integrations, or React.
> Although I should add that things like interrupts/events/synchronization and I/O operations are fairly universal computing concepts even in the cloud world
Not really. I/O has nothing to do with the cloud, likewise interrupts. Those remain buried way, way down in the hardware that runs the cloud, at a place not even datacenter engineers reach.
> If you're a cloud programmer and you don't know how long an operation takes / its big-O complexity
That still has nothing to do with interrupts or hardware I/O.
Try telling one of the programmers to produce a work of art based on a review of all of the works that went into training the models and see how it works out.
Those engineers consented to creating the new tools so that's different
If people can't differentiate between computer and human generated art, wouldn't that be the definition of being replaceable?
And ironically, the overwhelming majority of knowledge used by these models to produce pictures that superficially look like their work (usually not at all), is not coming from any artworks at all. It's as simple as that. They are mostly trained on photos which constitute the bulk of models' knowledge about the real world. They are the main source of coherency. Artist names and keywords like "trending on artstation" are just easily discoverable and very rough handles for pieces of the memory of the models.
But the fact that the human looked at a bunch of Mickey Mouse pictures and gained the ability to draw Mickey Mouse does not infringe copyright because that's just potential inside their brain.
I don't think the potential inside a learning model should infringe copyright either. It's a matter of how it's used.
That same work=survival idea is what incentivizes competitiveness and of course, under that construct, some humans will put on their competitive goggles and exploit others.
There are a lot of human constructs that need to fade away before we can get to a fully automated world. But that’s okay. Humans aren’t the type to get stuck on a problem forever.
Roads are extremely regular, as things go, and as soon as you are off the beaten path, those AIs start having trouble too.
It seems that in general that the long tail will be problematic for a while yet.
Honestly, I think “learn to code” is mostly used sarcastically?
It may be a significant chunk of the butt-in-seat-time under our archaic 40hour/week paradigm, but it's not a significant chunk of the programmer's actual mental effort. You're not going to be able to get people to work 5x more intensely by automating the boring stuff, that was never the limiting factor.
It's almost like the real problem is asymmetry and abuse of power.
This is like saying that photoshop is going to put all the artists out of work because one artist can now do the work of a team of people drawing by hand. So far these AIs are just tools. Tools help humans to produce more and the economy keeps chugging ever upwards.
There is no upper limit of how much art we need. Marvel movies and videogames will just keep looking better and better as our artists increase their capabilities using AI tools to assist them.
Daz3d didn't put modelers and artists out of work, and what Daz and iClone can do is way way more impressive(and useful in a professional setting) than AI Art.
I actually think we will. People are starting to realise where slapping together crap that works 80% of the time gets us, and starting to have second thoughts. If and when we reach a world where leaking people's personal information costs serious money (and the EU in particular is lumbering towards that), the whole way we do programming will change.
Happiness needs loss, fulfillment, pain, hunger, boredom, fear, and they need to be backed by both chemical feelings and memory, and they have to be true experiences.
But here's the thing, already the damage is done beyond just some art. I don't mean to diminish art, but frankly, look at how hostile, ugly and inhuman the world outside is in any regular city. Literal death worlds in fantasy 40k settings look more homey, comfortable, fulfilling, and human.
The poor are economically better off than at almost any point in history; actual food poverty is almost unknown, objectively people are living in better houses than ever before, and so on. It just doesn't seem like any of that makes poor people any happier or poverty any less wretched, somehow.
Can SD create artistic renderings without actual art being incorporated? Just from photos alone? I don't believe so, unless someone shows me evidence to the contrary.
Hence, SD necessitates having artwork in its training corpus in order to emulate style, no matter how little it's represented in the training data.
I think people will not stop forming a social hierarchy, and so competition remains a sticky trait I think.
> work=survival idea is what incentivizes competitiveness
True, the idea that you can do better than the Joneses through hard work is alluring. Having a job is now a requirement for being worthy, and the kind of job defines your social position. Compare with the days of nobility, though, where noblemen had everything but a job ("what is a weekend?").
Is it though? What if I were to look at your art style and replicate that style manually in my own works? I see no difference whether it's done by a machine or done by hand. The reality is that all art is derivative of some other art. Interestingly, the music industry has been doing this for years. Ever since samplers became a thing, musicians have spliced and diced loops into their own tracks for donkey's years, and created an explosion of new genres and sounds. Hip-hop, techno, dark ambient, EDM, ..., all fall into the same category. Machine learning is just another new tool to create something.
It's not that uncommon for professional programmers to be pro-level musical soloists on the side, or for retired programmers to play top-level music. The reverse is far less common. I do think that says something.
> Anything as competitive as an artistic field will always result in amounts of mastery needed at the top level that are barely noticeable to outside observers.
Sure. Top-level artistic fields are well into the diminishing returns level, whereas programming is still at the level where even a lot of professional programmers are not just bad, but obviously bad in a way that even non-programmers can understand.
Even in the easiest fields, you can always find something to compete on (e.g. the existence of serious competitive rubik's cube doesn't mean solving a rubik's cube is hard). A difficult field is one where the difference between the top and the middle is obvious to an outsider.
Get a grip me old fruit. You've basically described "growing up". The world is a pretty wild place and you need to find your niche or not (rinse/repeat). You are not a failed artist at all. You probed at something, "had a dabble" if you like, and it didn't work out. Never mind. Move on and try something else but keep your interest in mind.
There are loads of professions that I'd like to have done but as it turns out I'm me and that's who I am. Personally speaking I'm a MD of a little IT firm in the UK that can fiddle up a decent 3-2-1 conc mix and do fairly decent first and second fix wood work. I studied Civ Eng.
"The lack of empathy" - really?
If you fancy your chances as an artist then go for it. At worst you will fulfill your ambition and create some daubs. At best, you will traverse reality and be a wealthy living artist.
Just do it.
This is the first wave of half decent AI.
But more importantly, you are vastly underestimating the millions of small jobs out there that artists use as a stepping stone.
Think of the millions of managers who would happily be presented with a choice of 10 artistic interpretations, and pick one for the sake of getting a quick job done.
No way on earth this isn't going to make a major impact. Empathy absolutely required.
Personally, I'm all for AI training and using human artwork. I think telling it not to prevents progress/innovation, and that innovation is going to happen somewhere.
If it happens somewhere, humans who live in that somewhere will just use those tools to launder the AI-generated artwork, and companies will hire those offshore humans and reap the benefits, all the while, the effect on local artists' wages is even more negative because now they don't have access to the tools to compete in this ar(tificial intelligence)ms race.
Most people do not understand the purpose of copyright. Copyright is a bargain between society and the creator. The creator receives limited protection of the work for a limited time. Why is this the deal?
The purpose of copyright is to advance the progress of science and the useful arts. It is to benefit humanity as a whole.
AI takes nothing more than an idea. It does not take a “creative expression fixed in a tangible media”.
And yes, it will transform art completely, initially by lowering the barrier for producing quality art, and then by raising the bar in terms of quality, it's coming for every artistic field, 3d, film, music etc
If you want a career in these fields, you will need to ride this AI wave from the get-go. But even that career will eventually succumb to automation; this is the inevitable end point. As an example, eventually you will be able to give a brief synopsis to an AI and it will be able to flesh that out and create a full movie of it with the actors you choose.
Style transfer combined with the overall coherency of pre-trained models is the real power of these. "Country house in the style of Picasso" is generally not how you use this at full power, because "Picasso" is a poor descriptor for particular memory coordinates. You type "Country house" (a generic descriptor it knows very well) and provide your own embedding or any kind of finetuned addon to precisely lean the result towards the desired style, whether constructed by you or anyone else.
So, if anyone believes that this thing would drive the artists out of their jobs, then removing their works from the training set will change very little as it will still be able to generate anything given a few examples, on a consumer GPU. And that's only the current generation of such models and tools. (which admittedly doesn't pass the quality/controllability threshold required for serious work, just yet)
As a firmware/app guy I'm not qualified to talk about relative skill sets between different areas of cloud development. I agree that interrupts/threads aren't important at all to the person writing a web interface; I should have found a better example. I'm not here to argue; for sure there are talented people up and down the stack.
What I can tell you is that I'm amazed at the mistakes I see this new generation of junior programmers making, the kind of stuff indicating they have little understanding of how computers actually work.
As an example, I continue to run into young devs that don't have any idea of what numeric over/underflow is. We do a lot of IoT and edge computing, so ranges/limits/size of the data being passed around matters a lot. Attempting to explain the concept reveals that a great many of them have no mental concept of how a computer even holds a number (let alone different variable sizes, types, signed/unsigned etc). When you explain that variables are a fixed size and don't have unlimited range, it's a revelation to many of them.
Sometimes they'll argue that this stuff doesn't matter, even as you're showing them the error in their code. They feel the problem is that the other devs built it wrong, chose the wrong language or tool for the problem at hand, etc. We had a dev (who wrote test scripts) who would argue with his boss that everyone (including the app and firmware teams) should ditch their languages and write everything in Python, where mistakes can't be made. He was dead serious, and ended up quitting out of frustration. I'm sure that was a personality problem, but still, the lack of basic understanding astounded us, and the phrase "knows enough to be dangerous" comes to mind.
I find it strange that there is a new type of programmer that knows very little about how computers actually work. I find it stranger that they are even a bit productive in their careers, although I suspect it's because the problem domains they work in are much more tolerant of these kinds of errors. The CI/CD system is set up to catch/fix their problems, and hence the job positions can tolerate what used to be considered a below-average programmer. Efficient? No. Good enough? Sure.
I suspect some of these positions can be automated before the others can.
This is not intrinsic, though. It is a cultural imperative, so perhaps we need to revisit that?
We don’t need to “try to imagine”, we just need to wait a bit and watch Walt’s reanimated corpse and army of undead lawyers come out swinging for those “mice in the general style of Mickey Mouse”.
With AI art gradually improving, I think that line of reasoning will convince fewer and fewer people who would otherwise have second thoughts. They would spend a couple of hours on Midjourney and decide that's as far as they want to take their "art" hobby. The power of instant gratification will convince many faster than spending hundreds of hours honing a craft would.
I think in the future a lot of people's gut reaction to failing as a manual artist will be to retreat to Midjourney or similar to satisfy their remaining desire to have creative work they can call their own instead of trying again. I personally find the near-instant feedback loop very addicting, and I think it will have a similar effect to social platforms in normalizing a desire for quick results over the patience needed to hone a craft.
But as opposed to scrolling newsfeeds for hours, at least the user obtains a creative output through generative art, and it doesn't carry the same type of guilt for me. This kind of thing is unprecedented and I don't look forward to how it will polarize the various communities involved in the coming years.
You can make sure the people whose jobs were taken by an AI are able to live from its proceeds. We all benefit and make progress.
Ok so now many more people can generate cool-looking photos in an automatic fashion. So what? It just means we’ve raised the bar… for what can be considered cool.
Think of the 80/20 model: if it gets you 80% there (don't take that literally), then that's huge in and of itself. This tool is getting us closer to the example you mention, and that in and of itself is really cool.
I wonder if the nerds have shot themselves in the foot here with terminology? I suspect the nerd’s lawyers would have been much happier if the entire field was named “automated mechanical creativity” instead of “artificial intelligence”. It’d be kinda amusing to see the whole field of study lose in court because of their own persistent claims that what they’re doing is not just “creating in a mechanical fashion” but creating “intelligence” which can therefore be held to account for copyright infringement. Shades of Al Capone getting busted for taxes…
Also, should a human artist creating a pastiche count as copyright infringement as well?
Humans have my sympathy. We are literally at the brink of multiple major industries being wiped out. What was only theoretical for the last 10-15 years has started to happen right now.
In a few short years most humans will not be able to find any employment because machines will be more efficient and cheaper. Society will transform beyond any previous transformation in history. Most likely it's going to be very rough. But we just argue that of course our specific jobs are going to stay.
You're in for a rude awakening when you get laid off and replaced with a bot that creates garbage code that is slow and buggy but works, and so the boss gets to save on your salary. "But it's slow, redundant, looks like it was made by someone who just copy-and-pasted endlessly from Stack Overflow" — but your boss won't care; he just needs to make a buck.
Now imagine that automation in food and expand it to everything. A table factory wouldn’t purchase wood from another company. There’s automation to extract wood from trees and the table factory just requests it and automation produces a table. With robots at every step of the process, there are no labor costs. There’s no shift manager, there’s no CEO with a CEO salary, there’s no table factory worker spending 12+ hours a day drilling a table leg to a table for $3 an hour in China.
That former factory worker in China is instead pursuing their passions in life.
Other architectures exist, but you can notice from the lack of people talking about them that they don't produce any output nearly as developed as the chatGPT kind. They will get there eventually, but that's not what we are seeing here.
That is only because the vast majority of computer programming that is done is not very good.
Some artists just do the descriptive part though, right? The name I can think of is Sol LeWitt, but I'm sure there are others. A lot of it looks like it could be programmed, but might be tricky.
Essentially we are going to get away from the market economy, money, private property. The problem is that once these things go, personal freedom goes as well. So either accept the inevitable totalitarian society, or something else? But what?
My primary empathy is with end users, who could be empowered with AI-based tools to express their dreams and create graphics or software without needing to pay professional artists or programmers.
One can see AI tools as progress here while also recognising that this is likely to have a huge impact on a lot of lives.
At the same time I recognise that this is a massive threat to artists, both low-visibility folks who throw out concepts and logos for companies, and people who may sell their art to the public. Because I can spend a couple of dollars and half an hour to come up with an image I’d be happy to put on my wall.
I’m not sure what the answer is here, but I don’t think a sort of “human origin art” Puritanism is going to hold back the flood, though it may secure a niche like handmade craft goods and organic food…
I have no idea how well it holds up to modern reading, but I found it interesting at the time.
He posits two outcomes - in the fictionalised US the ownership class owns more and more of everything, because automation and intelligence remove the need for workers and even most technicians over time. Everyone else is basically a prisoner given the minimum needed to maintain life.
Or we can become “socialist” in a sort of techno-utopian way, realising that the economy and our laws should work for us and that a post-labor society should be one in which humans are free from dependence on work rather than defined by it.
Does this latter one imply a total lack of freedom? It certainly implies dependence on the state, but for most people (more or less by definition) an equal share would be a better share than they can get now, and they would be free to pursue art or learning or just leisure.
Twenty years down the pike I've gotten pretty solid at programming, certainly not genius-level but competent.
I agree strongly that making art anyone cares about is massively harder than being a competent programmer. In both you need strong technical abilities to be effective, but in art, intuition and a deep grasp of human psychology are really crucial - almost table stakes.
Because software dev is usually practical, a craft, you can get paid decently with far less brilliance and fire than it takes to make an artist profitable.
...though perhaps the DNN code assist tools will change that soon.
As the price of a bit dropped the quality of the comms dropped. It is inevitable that the price of the creation of (crappy) art will do the same thing if only because it will drag down the average.
> As an example, I continue to run into young devs that don't have any idea of what numeric over/underflow is
That doesn't happen in web application development either. You don't write code at a low enough level to cause an overflow or underflow. There are a zillion layers between your code and anything that could cause one.
> they have little understanding of how computers actually work.
'The computer' has been abstracted away at the level of the Internet. Not even the experts who attend to datacenters would ever pass near anything related to a numeric overflow. That stuff is hidden deep inside hardware, or deep in the software stack near the OS level of any given system. If anything in such a machine causes an overflow, what they do is replace the machine instead of debugging it; it's the hardware manufacturers' and OS developers' responsibility to handle that. No company that does cloud work or develops apps on the Internet needs to know about interrupts, numeric overflows, and whatnot.
> I find it stranger that they are even a bit productive in their careers, although I suspect it's because the problem domains they work in are much more tolerant to these kinds of errors
Interrupt errors don't happen in web development. You have no idea how much abstraction has been built between the layers where they could happen and modern Internet apps. We are even abstracting away servers and databases at this point.
You are applying a hardware perspective to the Internet. That's not applicable.
A lot used to escape the market logic. And I hope we go back to some of that. Not everything has to be profitable / a market.
Example: commons infrastructure, common grazing place for cattle, the woods.
What I wish would be pulled off the market: schools, hospitals, energy infrastructure
It’s not a fantasy idea. I grew up there, and it’s still working.
It’s not out of a beautiful ideal either, but sheer pragmatism.
A country will always need those things and those are important things. We might as well invest in them for the long run.
Clearly those are not hip ideas anymore. Oh well.
For me at least, Stable Diffusion has been this great tool for personal expression in a medium that was previously inaccessible to me: images. Now I could communicate with people in this new, accessible way! I've learned more about art history and techniques in the last 3 months than in my entire life up to that point.
So I came up with a few ideas about making some paintings for my mother, and children's books for my nieces and nephew. The anger I received from my artistically inclined colleagues over this saddened me greatly, so I tried to talk to more people to see if this was an anomaly. There was more anger, and argument for censorship! I have to admit I struggled to maintain any empathy after receiving that reception.
I'm personally really excited about a future where we don't have to suffer to create art, whether it's code, an image, or music. Isn't more art and less suffering in our lives a good thing? If there are economic structures we've set up that make that a bad thing, maybe it would be fruitful to take a critical look at those.
Presently I'm looking at creating a few small B2B products out of various fine-tuned public AI models. The first thing I realized is that I'd be addressing niches that were just not possible to tackle before (cost, scale, latency). The second thing I noticed is I'd need to hire designers, copywriters, etc. for their judgement -- at least as quality control. So at least in my limited scope of activity, the use of AI permits me to hire creative professionals, to tackle jobs that previously employed zero creative professionals (because previously they weren't done at all, or just done very poorly, e.g. English website copy for small business in non-English-speaking developing economies).
I do feel for people that have decided that they need to retool because they feel AI threatens their job. I do that every couple of years when some new thing threatens an old thing that I do, it's a chunk of work, and not always fun. To show better empathy, I think I'm going to reach out to more artists and show them what the current AI tools can and cannot do, to help them along this path. So thank you for your post, because it gave me the idea to take this approach!
...and on the weekends, I can still write code in hand-optimized assembly, because that's the brush I love painting with.
It amounts to saying that anything that benefits me is good and anything to my detriment is bad. Sure, there's a consistency to that. However, if that's the foundation of one's positions, it leads to all manner of other logical inconsistencies and hypocrisies.
What is that?
But of all the examples of cheap and convenient beating quality: photography, film, music, et al, the many industries that digital technology has disrupted, newspapers are more analogous than builders. Software companies are publishers, like newspapers. And newspapers had entire building floors occupied by highly skilled mechanical typesetters, who have long been replaced. A handful of employees on a couple computers could do the job faster, more easily, and of good enough quality.
Software has already disrupted everything else, eventually it would disrupt the process of making software.
I do wonder what happens as the market for the “old way” dries up, because it implies that there is no career path to lead to doing things better - any fool (I include myself) can be an AI jockey, but without people that need the skills of average designers, from what pool will the greats spring?
Ultimately those who are able to integrate it into their creative process will be the winners. There will always be small niche for those who oppose it out of principle.
In what sense did Copilot fizzle badly? It's a tool that you incorporated into your workflow and that you pay money for.
Does it solve all programming? No, of course not, and it's far from there. I think even if improves a lot it will not be close to replacing a programmer.
But a tool that lets you write code 10x,100x faster is a big deal. I don't think we're far away from a world in which every programmer has to use AI to be somewhat proficient in their job.
Frustratingly, most people don't fully appreciate the art, and are quite happy for artists to put in only 20% of the effort. Heck, I'm old enough to remember people who regarded Quake as "photorealistic": some in a negative way, saying this made it a terrible threat to the minds of children who might see the violence it depicted, and others in a positive way, saying it was so good that Riven should've used that engine instead of being pre-rendered.
Bugs like this are easy to fix: `x = x – 4;` which should be `x = x - 4;`
Bugs like this, much harder:
    #include <string.h>

    #define TOBYTE(x) (x) & 255
    #define SWAP(x,y) do { x^=y; y^=x; x^=y; } while (0)

    static unsigned char A[256];
    static int i=0, j=0;

    void init(char *passphrase) {
        int passlen = strlen(passphrase);
        for (i=0; i<256; i++)
            A[i] = i;
        for (i=0; i<256; i++) {
            j = TOBYTE(j + A[TOBYTE(i)] + passphrase[j % passlen]);
            SWAP(A[TOBYTE(i)], A[j]);
        }
        i = 0; j = 0;
    }

    unsigned char encrypt_one_byte(unsigned char c) {
        int k;
        i = TOBYTE(i+1);
        j = TOBYTE(j + A[i]);
        SWAP(A[i], A[j]);
        k = TOBYTE(A[i] + A[j]);
        return c ^ A[k];
    }

This sounds mystical and mysterious; it would be a mistake to project one mode of production as being the brand all humans must live with until we go extinct.
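For reference, a sketch of a corrected version of the RC4-style snippet above, assuming standard RC4 was the intent. Two things are off in the original: the key schedule indexes the passphrase with `j` where RC4 uses `i`, and the XOR-swap macro silently zeroes an array entry whenever `i == j`, so a plain temp-variable swap is safer. The `_fixed` names are ours:

```c
#include <string.h>

#define TOBYTE(x) ((x) & 255)

static unsigned char A[256];
static int i = 0, j = 0;

/* Plain swap: correct even when both pointers alias the same byte,
   unlike the XOR-swap trick, which would zero it. */
static void swap_bytes(unsigned char *a, unsigned char *b) {
    unsigned char t = *a;
    *a = *b;
    *b = t;
}

void init_fixed(const char *passphrase) {
    int passlen = (int)strlen(passphrase);
    for (i = 0; i < 256; i++)
        A[i] = (unsigned char)i;
    for (i = 0; i < 256; i++) {
        /* Standard RC4 key schedule uses passphrase[i % passlen], not j. */
        j = TOBYTE(j + A[i] + passphrase[i % passlen]);
        swap_bytes(&A[i], &A[j]);
    }
    i = 0;
    j = 0;
}

unsigned char encrypt_one_byte_fixed(unsigned char c) {
    int k;
    i = TOBYTE(i + 1);
    j = TOBYTE(j + A[i]);
    swap_bytes(&A[i], &A[j]);
    k = TOBYTE(A[i] + A[j]);
    return c ^ A[k];
}
```

With these fixes, the well-known RC4 test vector should hold: key "Key" encrypts "Plaintext" to the ciphertext bytes BB F3 16 E8 D9 40 AF 0A D3.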
I think the majority wouldn't know what hit them when the time comes. My experience with ChatGPT has been highly positive, changing me from a skeptic to a believer. It takes a bit of skill to tune the prompts, but I got it to write frontend code, backend code, unit tests, automation tests, and generated test data flawlessly. I have seen and worked with much worse developers than what this current iteration is.
Indeed, you should not read it as an imperative. The other commenter was also put on the wrong foot by this.
Maybe I should not have assumed people would know Genesis, https://en.wikipedia.org/wiki/Book_of_Genesis. I should be more explicit: we are not some holy creatures. Don't assume that the few who are gonna reap the rewards will spontaneously share them with others. We are able to let others suffer to gain a personal advantage.
To get good output on larger scales we're going to need a model that is hierarchical with longer term self attention.
Intellectual property generally includes copyright, patents, trademark, and trade secrets, though there are broader claims such as likeness, celebrity rights, moral rights (e.g., droit d'auteur in French/EU law), and probably a few others since I began writing this comment (the scope seems to be increasing, generally).
I suspect you intended to distinguish trademark and copyright.
Imagine all the good things that aren't done because they just don't make any money. Instead we put resources towards things that make our lives worse because they're profitable.
What search algorithms have you developed?
What non-trivial, non-Flask/Django/React, non-plugin/non-API tool, or library, or frameworks, have you written?
What actual percentage of your work output comprises computationally hard problems?
If we're talking programming, that's real programming, the kind you should be comparing 'hard' art to.
The other patterns of AI that seem able to arrive at novel solutions basically use a brute-force approach of predicting every outcome when they have perfect information, or a brute-force process of trying everything until something "works". Both of those approaches seem problematic in the "real world" (though I would find convincing the argument that billions of people all trying things act as a de facto brute-force approach in practice).
For someone to be able to do a novel implementation in a field dominated by AI might be impossible, because the core foundational skills can't be developed by humans anymore for them to reach heights the AI hasn't. We'd be stuck: things can't really get "better", we'd just get iterative improvements in how the AI implements the solutions it has already arrived at.
TLDR, lets sic the AI on making a new Javascript framework and see what happens :)
Anyone finding their own artistic voice with the tools, I respect that, those people are artists - but training with the aim to create derivative models, that should be called out.
A derivative work is a creative expression based on another work that receives its own copyright protection. It's very unlikely that AI weights would be considered a creative expression, and would thus not be considered a derivative work. At this point, you probably can't copyright your AI weights.
An AI might create work that could be considered derivative if it were the creative output of a human, but it's not a human, and thus the outputs are unlikely to be considered derivative works, though they may be infringing.
This was my reply: https://news.ycombinator.com/item?id=34005604
I also agree that artist employment isn't sacred, but after extensive use of the generation tools I don't see them replacing anything but the lowest end of the industry, where they just need something to fill a space. The tools can give you something that matches a prompt, but they're only really good if you don't have strong opinions about details, which most middle tier customers will.
My probably perverse takeaway is that Barbra Streisand might have been wrong: people who need people (to appreciate their work) may not be the luckiest people in the world. One can enjoy one’s accomplishments without needing to have everyone else appreciate them. Or you can find other people with similar interests, and enjoy shared appreciation.
In the extreme, the need for external validation seems to lead to people like Trump and Musk. Perhaps a shift in how we view this would be beneficial for society?
I don't mean this in a "people love work, actually", hooray-capitalism sense (LOL, god no), but the sense that humans tend to be happier and more content when they're helpful to those around them. It used to be a lot easier to provide that kind of value through creative and self-expressive efforts, than it is now. Any true need for artists and creative work (and, for the most part, craftspeople) at the scale of friend & family circles or towns or whatever, is all but completely gone.
I still stand behind my main point, which is that some of these jobs will be automated before others. Apparently the skill-set differences between different kinds of programmers are even wider than I thought. So instead of talking about whether AI will or won't automate programming in general, it's more productive to discuss which kinds of programming AI will automate first.
So AI puts artists out of a job and in some utopian vision, one day puts programmers out of a job, and nobody has jobs and that's what we should want, right, so why are you complaining about your personal suffering on the inevitable march of progress?
There is little to no worthwhile discussion from those same people about if the Puritanical worldview of work-to-live will be addressed, or how billionaires/capitalists/holders-of-the-resources respond to a world where no one has jobs, an income stream, and thus money to buy their products. Because Capitalist Realism has permeated, and we can no longer imagine a plausibly possible future that isn't increasingly technofeudalist. Welcome back to Dune?
Both personal autonomy and private property are social constructs we agree are valuable. Stealing a car and raping a person are things we've identified as unacceptable and codified into law.
And in stark contrast, intellectual property is something we've identified as being valuable to extend limited protections to in order to incentivize creative and technological development. It is not a sacred right, it's a gambit.
It's us saying, "We identify that if we have no IP protection whatsoever, many people will have no incentive to create, and nobody will ever have an incentive to share. Therefore, we will create some protection in these specific ways in order to spur on creativity and development."
There's no (or very little) ethics to it. We've created a system not out of respect for people's connections to their creations, but in order to entice them to create so we can ultimately expropriate it for society as a whole. And that system affords protection in particular ways. Any usage that is permitted by the system is not only not unethical, it is the system working.
For people caught in that kind of situation, progress sucks.
If you can't support yourself for whatever reason, you rely on others to do that work on your behalf. Social animals, wolves for example, try to provide for their sick and handicapped, but that's only after their own needs are met first.
We have physical needs just like other members of the natural world - food, for example; if we can't provide food for ourselves, we'll starve to death just like any animal. Why bother judging this situation as good or bad when it's not something that can be changed?
A few people engaged in “hand-wringing”, but not deep, regular discourse on the evolving nature of what we want “tech” and “programming” to be going forward.
Despite delivering transformative social shifts, even this last decade, where is the collective reflection?
If the original is a creative expression, then recording it using some different tech is still a creative expression. I don't see the qualitative difference between a bunch of numbers that constitutes weights in a neural net, and a bunch of numbers that constitute bytes in a compressed image file, if both can be used to recreate the original with minor deviations (like compression artifacts in the latter case).
Isn't that the case in every field of technology? Way back, engineers used to know how circuits worked; now network engineers never deal with actual circuits themselves. Way back, programmers had to do a lot of things manually; now the underlying stack automates much of that. On top of TCP/IP we laid the WWW, then web apps, then CMSes, and then we reached a point where CMSes like WordPress have their own plugins, and the individual plugins themselves became fields of expertise. When looking for someone to work on a WooCommerce store, people don't look for WordPress developers or plugin developers; they look for 'WooCommerce developers'. WP became so big that every facet of it became a specialization in itself.
Same for everything else in tech: We create a technology, which enables people to build stuff on it, then people build so much stuff that each of those became individual worlds in themselves. Then people standardize that layer and then move on to building next level up. It goes infinitely upwards.
It doesn’t really matter to humanity if strong people can still win fights, but it might matter if artists and designers who do produce great, original work stop being produced. It probably even matters to the AI models because that forms part of their input.
Case in point: https://stackoverflow.com/help/gpt-policy
> This trust is broken when users copy and paste information into answers without validating that the answer provided by GPT is correct, ensuring that the sources used in the answer are properly cited (a service GPT does not provide), and verifying that the answer provided by GPT clearly and concisely answers the question asked.
Most programmers are working in business-focused jobs. I don't think many of us, in grade school, said "I sure hope I can program business logic all day when I grow up." So I think the passion for 90% of people writing code is really about getting a paycheck. Then they use that paycheck to do what they're really passionate about in their personal life.
So I completely agree that people passionate about coding might want to write that code by hand, I just don't think that group accounts for most people writing code professionally.
Art is really not cheap. I think people think about how little artists generate in income and assume that means art is cheap, but non-mass-produced art is pretty much inaccessible for the vast majority of people.
There's a very real chance that adding these costs on top will drive development away from the sort that pays the people who lose out. For example, attempting to require licensing for images may simply push model training towards public domain materials. Then the models still work and the usable commercial art is still generated cheaply, but there are no living artists getting paid.
We should not blithely assume an ideal option that makes everyone happy is readily available or even at all. The core incentive of a lot of users is to spend less on commercial imagery. The core incentive of artists is to get paid at least as much as before. We should take seriously the possibility that there is not a medium in there that satisfies everyone.
It makes sense. My own experience, nearly always driving a non-Tesla car at the speed limit, is that other drivers will try to pressure you into doing dangerous stuff so they can get where they're going a few seconds faster. I sometimes give in to that pressure, but the AI doesn't feel that pressure at all. So if you're paying attention and see the AI not giving in to that pressure, the tendency is to take manual control so you can. But that's not safer--quite the opposite. That's an example of the AI driving better than the human.
On the opposite end of the social anxiety spectrum, there's a genre of pornography where people are having sex in the driver's seats of Teslas while the AI is driving. They certainly aren't intervening 3 times in 20 minutes, and so far I don't know of any of these people getting in car accidents.