zlacker

[return to "Who knew the first AI battles would be fought by artists?"]
1. meebob+kc[view] [source] 2022-12-15 13:03:10
>>dredmo+(OP)
I've been finding that the strangest part of discussions around art AI among technical people is the complete lack of identification or empathy: it seems to me that most computer programmers should be just as afraid as artists in the face of technology like this!!! I am a failed artist (read: I studied painting in school, tried to make a go of being a commercial artist in animation, and couldn't make the cut), so I decided to do something easier and became a computer programmer, working for FAANG and other large companies and making absurd (to me!!) amounts of cash. In my humble estimation, making art is vastly more difficult than the huge majority of computer programming that gets done. Art AI is terrifying if you want to make art for a living; and if AI is able to do these astonishingly difficult things, why shouldn't it, with some finagling, also be able to do the dumb, simple things most programmers do for their jobs?

The lack of empathy is incredibly depressing...

◧◩
2. ben_w+Dg[view] [source] 2022-12-15 13:24:52
>>meebob+kc
I'm mostly seeing software developers looking at the textual equivalent, GPT-3, and giving a spectrum of responses from "This is fantastic! Take my money so I can use it to help me with my work!" to "Meh, buggy code, worse than dealing with a junior dev."

I think the two biggest differences between art AI and code AI are that (a) code that's only 95% right is just wrong, whereas art can be very wrong before a client even notices [0]; and (b) we've been expecting this for ages already, to the extent that many of us are cynical and jaded about what the newest AI can do.

[0] for example, I was recently in the Cambridge University Press Bookshop, and they sell gift maps of the city. The background of the poster advertising these is pixelated and has JPEG artefacts.

It's highly regarded, and the shop has existed since 1581, and yet they have what I think is an amateur-hour advert on their walls.

◧◩◪
3. edanm+Ao[view] [source] 2022-12-15 14:05:58
>>ben_w+Dg
> code that's only 95% right is just wrong,

I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.

I don't think you were actually thinking of this in your comparison, but I think it's actually a great analogy: code, like art, can be 95% complete, and that's usually enough. (For art, "looks good and is what I wanted" is enough; for code, "does what I want right now, never mind edge cases" is enough.)

◧◩◪◨
4. Goblin+zx[view] [source] 2022-12-15 14:42:23
>>edanm+Ao
And GPT can't fix a bug; it can only generate new text that will have a different collection of bugs. The catch is that programming isn't text generation. But AI should be able to make good, actually intelligent fuzzers; that seems realistic and useful.
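For anyone unfamiliar with the term: a fuzzer just hammers a target with generated inputs and records which ones crash it. Here's a minimal sketch in Python; the `parse` target and all the names are made up for illustration, and a genuinely "intelligent" fuzzer would pick inputs that explore new code paths rather than purely random strings:

```python
import random
import string

def naive_fuzz(target, trials=1000, max_len=50, seed=0):
    """Feed random printable strings to `target`; collect inputs that raise.

    This is a plain random fuzzer. An "intelligent" (e.g. coverage-guided
    or model-driven) fuzzer would bias input generation toward unexplored
    branches instead of sampling uniformly.
    """
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    failures = []
    for _ in range(trials):
        s = "".join(rng.choice(string.printable)
                    for _ in range(rng.randrange(max_len)))
        try:
            target(s)
        except Exception as exc:
            failures.append((s, exc))
    return failures

# Toy target with a bug: raises ValueError whenever the input contains
# a ':' but the part before it isn't a valid integer.
def parse(s):
    if ":" in s:
        key, _val = s.split(":", 1)
        return int(key)
    return None

bad_inputs = naive_fuzz(parse)
```

Even this dumb version surfaces the crash quickly, which is the point: finding bugs by generation is a much better fit for a text generator than repairing them in place.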
◧◩◪◨⬒
5. Ajedi3+HC[view] [source] 2022-12-15 15:00:20
>>Goblin+zx
> GPT can't fix a bug

It can't? I could've sworn I've seen (cherry-picked) examples of it doing exactly that, when prompted. It even explains what the bug is and why the fix works.
