zlacker

[return to "Who knew the first AI battles would be fought by artists?"]
1. meebob+kc 2022-12-15 13:03:10
>>dredmo+(OP)
The strangest part of discussions around art AI among technical people, to me, is the complete lack of identification or empathy: most computer programmers should be just as afraid as artists in the face of technology like this! I am a failed artist (read: I studied painting in school, tried to make a go of it as a commercial artist in animation, and couldn't make the cut), so I decided to do something easier and became a computer programmer, working for FAANG and other large companies and making absurd (to me!) amounts of cash. In my humble estimation, making art is vastly more difficult than the huge majority of computer programming that gets done. Art AI is terrifying if you want to make art for a living; and if AI can do these astonishingly difficult things, why shouldn't it, with some finagling, also be able to do the dumb, simple things most programmers do for their jobs?

The lack of empathy is incredibly depressing...

2. ben_w+Dg 2022-12-15 13:24:52
>>meebob+kc
I'm mostly seeing software developers looking at the textual equivalent, GPT-3, and giving a spectrum of responses from "This is fantastic! Take my money so I can use it to help me with my work!" to "Meh, buggy code, worse than dealing with a junior dev."

I think the two biggest differences between art AI and code AI are that (a) code that's only 95% right is just wrong, whereas art can be very wrong before a client even notices [0]; and (b) we've been expecting this for ages already, to the extent that many of us are cynical and jaded about what the newest AI can do.

[0] For example, I was recently in the Cambridge University Press Bookshop, which sells gift maps of the city. The background of the poster advertising these is pixelated and full of JPEG artefacts.

The shop is highly regarded and has existed since 1581, and yet it has what I think is an amateur-hour advert on its walls.

3. edanm+Ao 2022-12-15 14:05:58
>>ben_w+Dg
> code that's only 95% right is just wrong,

I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.

I don't think this is what you actually meant by the comparison, but it makes a great analogy: code, like art, can be 95% complete, and that's usually enough. (For art, "looks good and is what I wanted" is enough; for code, "does what I want right now, never mind the edge cases" is enough.)

4. Curiou+2F 2022-12-15 15:09:45
>>edanm+Ao
Two issues. First, when a human gets something 5% wrong, it's more likely to be a corner case or a similar "right most of the time" scenario, whereas when an AI gets something 5% wrong, the result is likely to look almost right while never producing correct output. Second, when humans write something wrong, they're familiar with the code and can more easily find and fix the problem, whereas fixing AI-written code (whether by a human or by the AI) is more likely to be fraught.
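
As a toy illustration of the first point (a made-up example in Python, not output from any actual model): the binary search below reads as plausible and even passes a quick smoke test, but the marked line makes it loop forever on inputs like ([1, 2, 3], 3).

    # Made-up toy example of "looks almost right":
    def binary_search(xs, target):
        """Return the index of target in sorted list xs, or -1."""
        lo, hi = 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if xs[mid] < target:
                lo = mid  # bug: should be mid + 1, so lo can get stuck
            elif xs[mid] > target:
                hi = mid - 1
            else:
                return mid
        return -1

A human who wrote this usually knows which invariant they were aiming for and where to look; with generated code, the reviewer has to reconstruct that invariant from scratch, which is the second point.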
5. edanm+YM 2022-12-15 15:37:02
>>Curiou+2F
You (and everyone else) seem to be making the classic "mistake" of looking at an early version and not appreciating how fast things improve. Ten years ago, AI-generated art was at 50%. Two years ago, 80%. Now it's at 95% and winning competitions.

I have no idea if the AI that's getting code 80% right today will get it 95% right in two years, but given current progress, I wouldn't bet against it. I don't think there's any fundamental reason it can't produce better code than I can, at least not at the "write a function that does X" level.

Whole systems are a way harder problem, one I wouldn't even try to make guesses about.

6. yamtad+ah1 2022-12-15 17:44:07
>>edanm+YM
To be fair to those assumptions, there have been a lot of cases of machine learning (among other tech) looking very promising and advancing so quickly that a huge revolution seemed imminent, then stalling out at a local maximum for a really long time.