zlacker

[return to "Who knew the first AI battles would be fought by artists?"]
1. meebob+kc[view] [source] 2022-12-15 13:03:10
>>dredmo+(OP)
I've been finding that the strangest part of discussions around art AI among technical people is the complete lack of identification or empathy: it seems to me that most computer programmers should be just as afraid as artists in the face of technology like this!!! I am a failed artist (read: I studied painting in school, tried to make a go of being a commercial artist in animation, and couldn't make the cut), so I decided to do something easier and became a computer programmer, working for FAANG and other large companies and making absurd (to me!!) amounts of cash. In my humble estimation, making art is vastly more difficult than the huge majority of computer programming that is done. Art AI is terrifying if you want to make art for a living, and if AI is able to do these astonishingly difficult things, why shouldn't it, with some finagling, also be able to do the dumb, simple things most programmers do for their jobs?

The lack of empathy is incredibly depressing...

◧◩
2. ben_w+Dg[view] [source] 2022-12-15 13:24:52
>>meebob+kc
I'm mostly seeing software developers looking at the textual equivalent, GPT-3, and giving a spectrum of responses from "This is fantastic! Take my money so I can use it to help me with my work!" to "Meh, buggy code, worse than dealing with a junior dev."

I think the two biggest differences between art AI and code AI are that (a) code that's only 95% right is just wrong, whereas art can be very wrong before a client even notices [0]; and (b) we've been expecting this for ages already, to the extent that many of us are cynical and jaded about what the newest AI can do.

[0] for example, I was recently in the Cambridge University Press Bookshop, and they sell gift maps of the city. The background of the poster advertising these is pixelated and has JPEG artefacts.

It's highly regarded, and the shop has existed since 1581, and yet they have what I think is an amateur-hour advert on their walls.

◧◩◪
3. edanm+Ao[view] [source] 2022-12-15 14:05:58
>>ben_w+Dg
> code that's only 95% right is just wrong,

I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.

I don't think you were actually thinking of this in your comparison, but I think it's actually a great analogy - code, like art, can be 95% complete, and that's usually enough. (For art, looks good and is what I wanted is enough, for code, does what I want right now, nevermind edge cases is enough.)

◧◩◪◨
4. CapmCr+ur[view] [source] 2022-12-15 14:17:53
>>edanm+Ao
This depends entirely on _how_ the code is wrong. The other day I asked chatGPT to write me Python code that would calculate SHAP values when given a sklearn model. It returned code that ran, and that even _looked_ like it did the right thing at a cursory glance. But I've written a SHAP package before, and there were several manipulations it got wrong. I mean completely wrong. You would never have known the code was wrong unless you knew how to write it in the first place.
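To give a sense of the manipulations involved (not the commenter's code, and not ChatGPT's output — just an illustrative sketch): exact Shapley values average a feature's marginal contribution over every subset of the other features, with missing features imputed from a background point. For a linear model there is a closed form, phi_i = w_i * (x_i - mean_i), so a brute-force enumeration can be checked against it. The function name and mean-imputation choice here are assumptions for the sketch.

```python
# Brute-force exact Shapley values for a small model (illustrative sketch).
# Missing features are imputed from a background vector (mean imputation).
from itertools import combinations
from math import comb

def shapley_values(predict, x, background):
    """Exact Shapley values for `predict` at point `x`.

    Features absent from a coalition are replaced by their
    `background` value. Exponential in len(x); fine for a demo.
    """
    n = len(x)
    feats = range(n)
    phi = [0.0] * n
    for i in feats:
        others = [j for j in feats if j != i]
        for k in range(n):
            # Standard Shapley weight for coalitions of size k:
            # k! * (n - k - 1)! / n!  ==  1 / (n * C(n - 1, k))
            weight = 1.0 / (n * comb(n - 1, k))
            for S in combinations(others, k):
                with_i = [x[j] if (j in S or j == i) else background[j] for j in feats]
                without = [x[j] if j in S else background[j] for j in feats]
                phi[i] += weight * (predict(with_i) - predict(without))
    return phi

# Check against the linear-model closed form phi_i = w_i * (x_i - mean_i).
w = [2.0, -1.0, 0.5]
mean = [0.1, 0.2, 0.3]
f = lambda v: sum(wi * vi for wi, vi in zip(w, v))
x = [1.0, 2.0, 3.0]

phi = shapley_values(f, x, mean)
closed_form = [wi * (xi - mi) for wi, xi, mi in zip(w, x, mean)]
```

The subtlety is exactly the kind of thing that "runs and looks right" while being wrong: botch the coalition weight or the imputation and the numbers still come out plausible, but they no longer sum to f(x) - f(background) as Shapley values must.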

To me, code that is 95% correct will either fail catastrophically or give very wrong results. Imagine if the code you wrote was off by 5% for every number it was supposed to generate. Even code that is 99.99% correct will introduce subtle bugs.

* No shade to chatGPT, writing a function that calculates SHAP values is tough lol, I just wanted to see what it could do. I do think that, given time, it'll be able to write a day's worth of high-quality code in a few seconds.

◧◩◪◨⬒
5. Workac+Pu[view] [source] 2022-12-15 14:32:19
>>CapmCr+ur
The thing about ChatGPT is that it's a warning shot. And all these people I see are talking about it, laughing about how the shooter missed them.

Clearly ChatGPT is going to improve, and AI development is moving at a breakneck pace and accelerating. Dinging it for totally fumbling 5% or 10% of written code is completely missing the forest for the trees.

◧◩◪◨⬒⬓
6. woeiru+j91[view] [source] 2022-12-15 17:08:15
>>Workac+Pu
Yeah, but people were also saying this about self-driving cars, and guess what: that long tail is super long, and it's also far fatter than we expected. 10 years ago people were saying AI was coming for taxi drivers, and as far as I can tell we're still 10 years away.

I'm unimpressed by ChatGPT because the hype around it is largely the same as it was for GitHub Copilot, and Copilot fizzled badly. (Full disclosure: I pay for Copilot because it is somewhat useful.)