zlacker

[parent] [thread] 77 comments
1. ben_w+(OP)[view] [source] 2022-12-15 13:24:52
I'm mostly seeing software developers looking at the textual equivalent, GPT-3, and giving a spectrum of responses from "This is fantastic! Take my money so I can use it to help me with my work!" to "Meh, buggy code, worse than dealing with a junior dev."

I think the two biggest differences between art AI and code AI are that (a) code that's only 95% right is just wrong, whereas art can be very wrong before a client even notices [0]; and (b) we've been expecting this for ages already, to the extent that many of us are cynical and jaded about what the newest AI can do.

[0] for example, I was recently in the Cambridge University Press Bookshop, and they sell gift maps of the city. The background of the poster advertising these is pixelated and has JPEG artefacts.

It's highly regarded, and the shop has existed since 1581, and yet they have what I think is an amateur-hour advert on their walls.

replies(8): >>meebob+L1 >>jimmas+K2 >>edanm+X7 >>edanm+f8 >>Kye+yg >>itroni+aP1 >>asdf12+GQ1 >>tooman+tR1
2. meebob+L1[view] [source] 2022-12-15 13:33:44
>>ben_w+(OP)
I do appreciate that the way in which a piece of code "works" and the way in which a piece of art "works" are in some ways totally different - but I also think that in many cases, notably automated systems that create reports or dashboards, they aren't so far apart. In the end, the result just has to seem right. Even in computer programming, amateur-hour levels of correctness aren't so uncommon, I would say.

I would personally be astonished if any of the distributed systems I've worked on in my career were even close to 95% correct, haha.

replies(2): >>lycopo+f5 >>azorna+Q7
3. jimmas+K2[view] [source] 2022-12-15 13:40:13
>>ben_w+(OP)
> code that's only 95% right is just wrong

It's still worth it on the whole but I have already gotten caught up on subtly wrong Copilot code a few times.

◧◩
4. lycopo+f5[view] [source] [discussion] 2022-12-15 13:53:23
>>meebob+L1
Understanding what you are plotting and displaying in the dashboard is the complicated part, not writing the dashboard. Programmers are not very afraid of AI because it is still just a glorified frontend to stackoverflow, and SO has not destroyed the demand for programmers so far. Also, understanding the subtle logical bugs and errors introduced by such boilerplate AI tools requires no less expertise than knowing how to write the code upfront. Debugging is not a very popular activity among programmers for a reason.

It may be that one day AI will also make their creators obsolete. But at that point so many professions will be replaced by it already, that we will live in a massively changed society where talking about the "job" has no meaning anymore.

◧◩
5. azorna+Q7[view] [source] [discussion] 2022-12-15 14:05:22
>>meebob+L1
A misleading dashboard is really, really bad. This is absolutely not something I would be happy to give to an AI to do just because "no one will notice". The fact that no one will notice errors until it's too late is exactly why dashboards need extra effort by their author to actually test the thing.

If you want to give programming work to an AI, give it the things where incorrect behaviour is going to be really obvious, so that it can be fixed. Don't give it the stuff where everyone will just naively trust the computer without thinking about it.

6. edanm+X7[view] [source] 2022-12-15 14:05:58
>>ben_w+(OP)
> code that's only 95% right is just wrong,

I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.

I don't think you were actually thinking of this in your comparison, but I think it's actually a great analogy - code, like art, can be 95% complete, and that's usually enough. (For art, looks good and is what I wanted is enough, for code, does what I want right now, nevermind edge cases is enough.)

replies(8): >>CapmCr+Ra >>mejuto+9b >>Goblin+Wg >>Curiou+po >>snicke+8c1 >>tables+pA1 >>mr_toa+hL1 >>scotty+Qz3
7. edanm+f8[view] [source] 2022-12-15 14:07:15
>>ben_w+(OP)
EDIT: I posted this comment twice by accident! This comment has more details but the other has more answers, so please check the other one!

> code that's only 95% right is just wrong,

I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.

I don't think you were actually thinking of this in your comparison, but I think it's actually a great analogy - code, like art, can be 95% complete, and that's usually enough. (For art, looks good and is what I wanted is enough, for code, does what I want right now, nevermind edge cases is enough.)

ChatGPT isn't threatening programmers, but for other reasons. Firstly, its code isn't 95% good, it's more like 80% good.

Secondly, we do a lot more than write one-off pieces of code. We write much, much larger systems, and the connections between different pieces of code, even on a function-to-function level, are very complex.

replies(1): >>yourap+ic
◧◩
8. CapmCr+Ra[view] [source] [discussion] 2022-12-15 14:17:53
>>edanm+X7
This depends entirely on _how_ the code is wrong. I asked chatGPT to write me code in Python that would calculate SHAP values when given a sklearn model the other day. It returned code that ran, and even _looked_ like it did the right thing at a cursory glance. But I've written a SHAP package before, and there were several manipulations it got wrong. I mean completely wrong. You would never have known the code was wrong unless you knew how to write it in the first place.
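
For context, the "right thing" here is only a few lines when you lean on the standard shap library. A minimal sketch (assuming a tree-based sklearn model; variable names are hypothetical, and this is not the code chatGPT produced):

    import shap
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    # Fit any tree-based sklearn model on some data.
    X, y = make_regression(n_samples=200, n_features=5, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # one row per sample, one column per feature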

To me, code that is 95% correct will either fail catastrophically or give very wrong results. Imagine if the code you wrote was off 5% for every number it was supposed to generate. Code that is 99.99% correct will introduce subtle bugs.

* No shade to chatGPT, writing a function that calculates SHAP values is tough lol, I just wanted to see what it could do. I do think that, given time, it'll be able to write a day's worth of high quality code in a few seconds.

replies(4): >>nmfish+Qc >>Workac+ce >>KIFulg+n01 >>maland+0e1
◧◩
9. mejuto+9b[view] [source] [discussion] 2022-12-15 14:19:32
>>edanm+X7
I agree with you. Even software that had no bugs today (if that is possible) could start having bugs tomorrow, as the environment changes (new law, new hardware, etc.)
◧◩
10. yourap+ic[view] [source] [discussion] 2022-12-15 14:24:37
>>edanm+f8
> ChatGPT isn't threatening programmers, but for other reasons. Firstly, its code isn't 95% good, it's more like 80% good.

The roles most likely to be highly streamlined by a near-future ChatGPT/Copilot are requirements-gathering business analysts, but developers at Staff level and up sit closer to requiring AGI for it to even become 30% good. We'll likely see a bifurcation/barbell: Moravec's Paradox on one end, AGI on the other.

An LLM that can transcribe a verbal discussion with a domain expert about a particular business process with high fidelity, give a precis of domain jargon to a developer in a sidebar, extract further jargon created by the conversation, summarize the discussion into documentation, and capture the how's and why's like a judicious editor might at 80% fidelity, then put out semi-working code at even 50% fidelity, all while working 24x7x365 and automatically incorporating everything from GitHub it created for you before and that your team polished into working code and final documentation?

I have clients who would pay for an initial deployment of that as an appliance/container head end which transits the processing through the vendor SaaS' GPU farm but holds the model data at rest within their network / cloud account boundary. Being able to condense weeks or even months of work by a team into several hours, plus say a handful of developers to tighten and polish it up, would be an interesting new way to work.

◧◩◪
11. nmfish+Qc[view] [source] [discussion] 2022-12-15 14:27:33
>>CapmCr+Ra
Over the weekend I tried to tease out a sed command that would fix an uber simple compiler error from ChatGPT [0]. I gave up after 4 or 5 tries - while it got the root cause correct ("." instead of "->" because the property was a pointer), it just couldn't figure out the right sed command. That's such a simple task, its failure doesn't inspire confidence in getting more complicated things correct.
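
(For the curious: the substitution itself is tiny, which is the point. A sketch of the same edit using Python's re module, with a hypothetical identifier, since the sed one-liner is what ChatGPT kept fumbling:)

    import re

    # The compiler error: `node` is a pointer, so member access needs `->`, not `.`.
    src = "total += node.value;"
    fixed = re.sub(r"\bnode\.", "node->", src)
    print(fixed)  # prints: total += node->value;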

This is the main reason I haven't actually incorporated any AI tools into my daily programming yet - I'm mindful that I might end up spending more time tracking down issues in the auto-generated code than I saved using it in the first place.

[0] You can see the results here https://twitter.com/NickFisherAU/status/1601838829882986496

◧◩◪
12. Workac+ce[view] [source] [discussion] 2022-12-15 14:32:19
>>CapmCr+Ra
The thing about ChatGPT is that it's a warning shot. And all these people I see talking about it are laughing about how the shooter missed them.

Clearly ChatGPT is going to improve, and AI development is moving at a breakneck pace and accelerating. Dinging it for totally fumbling 5% or 10% of written code is completely missing the forest for the trees.

replies(6): >>jhbadg+Ho >>throwa+vz >>woeiru+GS >>tarran+ha1 >>idontp+vb1 >>allisd+yE3
13. Kye+yg[view] [source] 2022-12-15 14:41:17
>>ben_w+(OP)
>> "I think the two biggest differences between art AI and code AI are that (a) code that's only 95% right is just wrong, whereas art can be very wrong before a client even notices [0];"

Art can also be extremely wrong in a way everyone notices and still be highly successful. For example: Rob Liefeld.

replies(1): >>jhbadg+ws
◧◩
14. Goblin+Wg[view] [source] [discussion] 2022-12-15 14:42:23
>>edanm+X7
And GPT can't fix a bug, it can only generate new text that will have a different collection of bugs. The catch is that programming isn't text generation. But AI should be able to make good, actually intelligent fuzzers; that seems realistic and useful.
replies(5): >>mlboss+Pj >>Ajedi3+4m >>tintor+Un >>alar44+4k1 >>Unposs+702
◧◩◪
15. mlboss+Pj[view] [source] [discussion] 2022-12-15 14:52:26
>>Goblin+Wg
It is only a matter of time. It can understand an error stacktrace and suggest a fix. Somebody just has to plug it into an IDE, and then it will start converting requirements to code.
◧◩◪
16. Ajedi3+4m[view] [source] [discussion] 2022-12-15 15:00:20
>>Goblin+Wg
> GPT can't fix a bug

It can't? I could've sworn I've seen (cherry-picked) examples of it doing exactly that, when prompted. It even explains what the bug is and why the fix works.

replies(2): >>ipaddr+Z71 >>soerxp+VN1
◧◩◪
17. tintor+Un[view] [source] [discussion] 2022-12-15 15:07:46
>>Goblin+Wg
It can, in some cases. Have you tried it?
◧◩
18. Curiou+po[view] [source] [discussion] 2022-12-15 15:09:45
>>edanm+X7
Two issues. First, when a human gets something 5% wrong, it's more likely to be a corner case or similar "right most of the time" scenario, whereas when AI gets something 5% wrong, it's likely to look almost right but never produce correct output. Second, when a human writes something wrong they have familiarity with the code and can more easily identify the problem and fix it, whereas fixing AI code (either via human or AI) is more likely to be fraught.
replies(1): >>edanm+lw
◧◩◪◨
19. jhbadg+Ho[view] [source] [discussion] 2022-12-15 15:10:54
>>Workac+ce
Sure, it will improve, but I think a lot of people think "Hey, it almost looks human quality now! Just a bit more tweaking and it will be human quality or better!". But a more likely case is that the relatively simple statistical modeling tools that chatGPT uses (which are very different from how our brains work, not that we fully understand how our brains work) have a limit to how well they work, and they will hit a plateau (and are probably near it now). I'm not one of those people who believe strong AI is impossible, but I have a feeling that strong AI will take more than just manipulating a text corpus.
replies(1): >>ben_w+qQ
◧◩
20. jhbadg+ws[view] [source] [discussion] 2022-12-15 15:24:37
>>Kye+yg
And in the same way as Liefeld has a problem drawing hands! Maybe he was actually ahead of us all and had an AI art tool before the rest of us.
◧◩◪
21. edanm+lw[view] [source] [discussion] 2022-12-15 15:37:02
>>Curiou+po
You (and everyone else) seem to be making the classic "mistake" of looking at an early version and not appreciating that things improve. Ten years ago, AI-generated art was at 50%. 2 years ago, 80%. Now it's at 95% and winning competitions.

I have no idea if the AI that's getting code 80% right today will get it 95% right in two years, but given current progress, I wouldn't bet against it. I don't think there's any fundamental reason it can't produce better code than I can, at least not at the "write a function that does X" level.

Whole systems are a way harder problem that I wouldn't even think of making guesses about.

replies(4): >>yamtad+x01 >>ben_w+P01 >>marcos+kp2 >>muttle+bx4
◧◩◪◨
22. throwa+vz[view] [source] [discussion] 2022-12-15 15:48:56
>>Workac+ce
Anyone who has doubts has to look at the price. It's free for now, and will be cheap enough when OpenAI starts monetizing. Price wins over quality. That's been demonstrated time and time again.
replies(1): >>ben_w+aR
◧◩◪◨⬒
23. ben_w+qQ[view] [source] [discussion] 2022-12-15 16:59:21
>>jhbadg+Ho
I'd be surprised if it did only take text (or even language in general), but if it does only need that, then given how few parameters even big GPT-3 models have compared to humans, it will strongly imply that PETA was right all along.
◧◩◪◨⬒
24. ben_w+aR[view] [source] [discussion] 2022-12-15 17:02:03
>>throwa+vz
Depends on the details. Skip all the boring health and safety steps, you can make very cheap skyscrapers. They might fall down in a strong wind, but they'll be cheap.
replies(2): >>pixl97+j41 >>throwa+dv3
◧◩◪◨
25. woeiru+GS[view] [source] [discussion] 2022-12-15 17:08:15
>>Workac+ce
Yeah, but people were also saying this about self-driving cars, and guess what: that long tail is super long, and it's also far fatter than we expected. 10 years ago people were saying AI was coming for taxi drivers, and as far as I can tell we're still 10 years away.

I'm nonplussed by ChatGPT because the hype around it is largely the same as was for Github Copilot and Copilot fizzled badly. (Full disclosure: I pay for Copilot because it is somewhat useful).

replies(3): >>pleb_n+P71 >>kerkes+v22 >>edanm+pB3
◧◩◪
26. KIFulg+n01[view] [source] [discussion] 2022-12-15 17:43:11
>>CapmCr+Ra
I experienced ChatGPT confidently giving incorrect answers about the Schwarzschild radius of the black hole at the center of our galaxy, Sagittarius A*. Both when asked about "the Schwarzschild radius of a black hole with 4 million solar masses" (a calculation) and "the Schwarzschild radius of Sagittarius A*" (a simple lookup).

Both answers were orders of magnitude wrong, and vastly different from each other.
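
For reference, the calculation it kept fumbling is one line of arithmetic: the Schwarzschild radius is r_s = 2GM/c^2. A rough sketch with constants to four significant figures:

    # Schwarzschild radius of a 4-million-solar-mass black hole.
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_sun = 1.989e30   # solar mass, kg

    r_s = 2 * G * (4e6 * M_sun) / c**2
    print(f"{r_s:.2e} m")  # about 1.2e10 m, roughly 0.08 AU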

JS code suggested for a simple database connection had glaring SQL injection vulnerabilities.
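
That is the classic pitfall of interpolating user input straight into the query string. The suggestion was JS, but here is the same mistake and its fix sketched in Python (table and input are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    user_input = "alice'; DROP TABLE users; --"

    # Vulnerable: the input becomes part of the SQL text.
    # conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

    # Safe: the driver binds the value as data, not SQL.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()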

I think it's an ok tool for discovering new libraries and getting oriented quickly to languages and coding domains you're unfamiliar with. But it's more like a forum post from a novice who read a tutorial and otherwise has little experience.

replies(1): >>mcguir+9M1
◧◩◪◨
27. yamtad+x01[view] [source] [discussion] 2022-12-15 17:44:07
>>edanm+lw
To be fair to those assumptions, there've been a lot of cases of machine-learning (among other tech) looking very promising, and advancing so quickly that a huge revolution seems imminent—then stalling out at a local maximum for a really long time.
◧◩◪◨
28. ben_w+P01[view] [source] [discussion] 2022-12-15 17:45:52
>>edanm+lw
It might improve like Go AI and shock everyone by beating the world expert at everything, or it might improve like Tesla FSD which is annoyingly harder than "make creative artwork".

There's no fundamental reason it can't be the world expert at everything, but that's not a reason to assume we know how to get there from here.

replies(2): >>namele+St1 >>rafael+V42
◧◩◪◨⬒⬓
29. pixl97+j41[view] [source] [discussion] 2022-12-15 18:03:08
>>ben_w+aR
After watching lots of videos from third-world countries where skyscrapers are built and then torn down a few years later, I think I know exactly how this is going to go.
◧◩◪◨⬒
30. pleb_n+P71[view] [source] [discussion] 2022-12-15 18:19:22
>>woeiru+GS
I wonder if some of this is the 80/20 rule. We're seeing the easy 80 percent of the solution, which took 20% of the time. We still have the hard 80% of the effort (or most of it) to go for some of these new techs.
replies(2): >>rightb+sz1 >>lostms+oM1
◧◩◪◨
31. ipaddr+Z71[view] [source] [discussion] 2022-12-15 18:20:06
>>Ajedi3+4m
Which examples, the ones where it was right or the ones where it was wrong? It goes back to trusting the source not to introduce new, ever-evolving bugs.
◧◩◪◨
32. tarran+ha1[view] [source] [discussion] 2022-12-15 18:31:24
>>Workac+ce
The thing is though, it's trained on human text. And most humans are, per definition, very fallible. Unless someone makes it so that it can never get trained on subtly wrong code, how will it ever improve? Imho AI can be great for suggesting which method to use (Visual Studio has this, and I think there is an extension for Visual Studio Code for a couple of languages). I think fine-grained things like this are very useful, but code snippets are just too coarse to actually be helpful.
replies(1): >>tintor+PG1
◧◩◪◨
33. idontp+vb1[view] [source] [discussion] 2022-12-15 18:38:02
>>Workac+ce
This is magical thinking, no different than a cult.

The fundamental design of transformer architecture isn't capable of what you think it is.

There are still radical, fundamental breakthroughs needed. It's not a matter of incremental improvement over time.

◧◩
34. snicke+8c1[view] [source] [discussion] 2022-12-15 18:41:30
>>edanm+X7
Maybe for certain domains it's okay to fail 5% of the time but a lot of code really does need to be perfect. You wouldn't be able to work with a filesystem that loses 5% of your files.
replies(1): >>mecsre+Kt1
◧◩◪
35. maland+0e1[view] [source] [discussion] 2022-12-15 18:48:47
>>CapmCr+Ra
Who is going to debug this code when it is wrong?

Whether 95% or 99.9% correct, when there is a serious bug, you're still going to need people that can fix the gap between almost correct and actually correct.

replies(1): >>cool_d+6G1
◧◩◪
36. alar44+4k1[view] [source] [discussion] 2022-12-15 19:18:03
>>Goblin+Wg
Yes it can, I've been using it for exactly that. "This code is supposed to do X but does Y or has Z error; fix the code."

Sure you can't stick an entire project in there, but if you know the problem is in class Baz, just toss in the relevant code and it does a pretty damn good job.

◧◩◪
37. mecsre+Kt1[view] [source] [discussion] 2022-12-15 20:00:10
>>snicke+8c1
Or a filesystem that loses all of your files 5% of the time.
replies(1): >>scarmi+m02
◧◩◪◨⬒
38. namele+St1[view] [source] [discussion] 2022-12-15 20:00:22
>>ben_w+P01
What scares me is a death-of-progress situation. Maybe it can't be an expert, but it can be good enough, and now the supply pipeline of people who could become experts basically gets shut off, because to become an expert you needed to do the work and gain the experience that is now completely owned by AI.
replies(3): >>tintor+xH1 >>nonran+0I1 >>int_19+5k2
◧◩◪◨⬒⬓
39. rightb+sz1[view] [source] [discussion] 2022-12-15 20:24:37
>>pleb_n+P71
Replacing 80% of a truck driver's skill would suck but replacing 80% of our skill would be an OK programmer.
◧◩
40. tables+pA1[view] [source] [discussion] 2022-12-15 20:28:17
>>edanm+X7
>> code that's only 95% right is just wrong,

> I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.

All software has bugs, but it's usually far better than "95% right." Code that's only 95% right probably wouldn't pass half-assed testing or a couple of days of actual use.

◧◩◪◨
41. cool_d+6G1[view] [source] [discussion] 2022-12-15 20:53:34
>>maland+0e1
Sure, but how much of the total work time in software development is writing relatively straightforward, boilerplate type code that could reasonably be copied from the top answer from stackoverflow with variable names changed? Now maybe instead of 5 FTE equivalents doing that work, you just need the 1 guy to debug the AI's shot at it. Now 4 people are out of work, or applying to be the 1 guy at some other company.
replies(3): >>mcguir+CM1 >>woah+fW1 >>lmm+kc2
◧◩◪◨⬒
42. tintor+PG1[view] [source] [discussion] 2022-12-15 20:56:51
>>tarran+ha1
Improve itself through experimentation with reinforcement learning. This is how humans improve too. AlphaZero does it.
replies(1): >>lostms+QM1
◧◩◪◨⬒⬓
43. tintor+xH1[view] [source] [discussion] 2022-12-15 20:59:54
>>namele+St1
But it could also make it easier to train experts, by acting as a coach and teacher.
◧◩◪◨⬒⬓
44. nonran+0I1[view] [source] [discussion] 2022-12-15 21:01:47
>>namele+St1
Exactly this.

The problem of a vengeful god who demands the slaughter of infidels lies not in his existence or nonexistence, but peoples' belief in such a god.

Similarly, it does not matter whether AI works or it doesn't. It's irrelevant how good it actually is. What matters is whether people "believe" in it.

AI is not a technology, it's an ideology.

Given time it will fulfil its own prophecy as "we who believe" steer the world toward that.

That's what's changing now. It's in the air.

The ruling classes (those who own capital and industry) are looking at this. The workers are looking too. Both of them see a new world approaching, and actually everyone is worried. What is under attack is not the jobs of the current generation, but the value of human skill itself, for all generations to come. And, yes, it's the tail of a trajectory we have been on for a long time.

It isn't the only way computers can be. There is IA instead of AI. But intelligence amplification goes against the principles of capital at this stage. Our trajectory has been to make people dumber in service of profit.

replies(3): >>Cadmiu+kU1 >>melago+wj2 >>int_19+tk2
◧◩
45. mr_toa+hL1[view] [source] [discussion] 2022-12-15 21:19:50
>>edanm+X7
When AI can debug its own code I’ll start looking for another career.
replies(1): >>ben_w+zz3
◧◩◪◨
46. mcguir+9M1[view] [source] [discussion] 2022-12-15 21:24:18
>>KIFulg+n01
My understanding is that ChatGPT (and similar things) are purely language models; they do not have any kind of "understanding" of anything like reality. Basically, they have a complex statistical model of how words are related.
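
A toy illustration of what "a statistical model of how words are related" means, in the simplest possible form: a bigram counter that predicts the next word from co-occurrence counts alone, with nothing behind the words.

    from collections import Counter, defaultdict

    corpus = "the radius of the black hole the radius of the sun".split()
    next_word = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        next_word[a][b] += 1

    # Most likely continuation of "the", purely from counts.
    print(next_word["the"].most_common(1))  # [('radius', 2)]

ChatGPT's model is vastly larger and attends over long contexts, but the point stands: it is statistics over text, not a model of reality.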

I'm a bit surprised that it got a lookup wrong, but for any other domain, describing it as a "novice" is understating the situation a lot.

◧◩◪◨⬒⬓
47. lostms+oM1[view] [source] [discussion] 2022-12-15 21:25:16
>>pleb_n+P71
Considering the deep conv nets that melted the last AI winter happened in 2012, you are basically giving it 40 years till 100%.
◧◩◪◨⬒
48. mcguir+CM1[view] [source] [discussion] 2022-12-15 21:26:21
>>cool_d+6G1
Does anyone remember the old maxim, "Don't write code as cleverly as you can because it's harder to debug than it is to write and you won't be clever enough"?
◧◩◪◨⬒⬓
49. lostms+QM1[view] [source] [discussion] 2022-12-15 21:27:42
>>tintor+PG1
The amount of work in that area of research is substantial. You will see world shattering results in a few years.

Current SOTA: https://openai.com/blog/vpt/

◧◩◪◨
50. soerxp+VN1[view] [source] [discussion] 2022-12-15 21:33:56
>>Ajedi3+4m
Those are cherry picked, and most importantly, all of the examples where it can fix a bug are examples where it's working with a stack trace, or with an extremely small section of code (<200 lines). At what point will it be able to fix a bug in a 20,000 line codebase, with only "When the user does X, Y unintended consequence happens" to go off of?

It's obvious how an expert at regurgitating StackOverflow would be able to correct an NPE or an off-by-one error when given the exact line of code that error is on. Going any deeper, and actually being able to find a bug, requires understanding of the codebase as a whole and the ability to map the code to what the code actually does in real life. GPT has shown none of this.

"But it will get better over time" arguments fail for this because the thing that's needed is a fundamentally new ability, not just "the same but better." Understanding a codebase is a different thing from regurgitating StackOverflow. It's the same thing as saying in 1980, "We have bipedal robots that can hobble, so if we just improve on that enough we'll eventually have bipedal robots that beat humans at football."

51. itroni+aP1[view] [source] 2022-12-15 21:40:39
>>ben_w+(OP)
>> whereas art can be very wrong before a client even notices

No actually, that's not how that works. You're demonstrating the lack of empathy that the parent comment brings up as alarming.

Regarding programming, code that's only 95% right can just be run through code assist to fix everything.

replies(1): >>ben_w+jC3
52. asdf12+GQ1[view] [source] 2022-12-15 21:47:46
>>ben_w+(OP)
A lot depends what the business costs are of that wrong %5.

If the actual business costs are less than the price of a team of developers... welp, it was fun while it lasted.

53. tooman+tR1[view] [source] 2022-12-15 21:51:27
>>ben_w+(OP)
The other day I copied a question from leetcode and asked GPT to solve it. The solution had the correct structure to be interpreted by leetcode (Solution class, with the correct method name and signature, and with the same implementation of a linked list that leetcode would use). It made me feel like GPT was not really implementing a solution at all, just copying and pasting some code it had read on the internet.
◧◩◪◨⬒⬓⬔
54. Cadmiu+kU1[view] [source] [discussion] 2022-12-15 22:05:49
>>nonran+0I1
> What is under attack is not the jobs of the current generation, but the value of human skill itself, for all generations to come. And, yes, it's the tail of a trajectory we have been on for a long time.

Wow, yes. This is exactly what I've been thinking but you summed it up more eloquently.

◧◩◪◨⬒
55. woah+fW1[view] [source] [discussion] 2022-12-15 22:17:38
>>cool_d+6G1
Or the company just delivers features when they are estimated to be done, instead of it taking 5 times longer than expected
◧◩◪
56. Unposs+702[view] [source] [discussion] 2022-12-15 22:41:53
>>Goblin+Wg
Sure, but now you only need testers and one coder to fix bugs, where you used to need testers and 20 coders. AI code generators are force multipliers, maybe not strict replacements. And the level of creativity needed to fix a bug versus programming something wholly original is worlds apart.
◧◩◪◨
57. scarmi+m02[view] [source] [discussion] 2022-12-15 22:44:14
>>mecsre+Kt1
No need to rag on btrfs.
◧◩◪◨⬒
58. kerkes+v22[view] [source] [discussion] 2022-12-15 22:56:13
>>woeiru+GS
Tesla makes self-driving cars that drive better than humans. The reason you have to touch the steering wheel periodically is political/social, not technical. An acquaintance of mine reads books while he commutes 90 minutes from Chattanooga to work in Atlanta once or twice a week. He's sitting in the driver's seat but he's certainly not driving.

The political/social factors which apply to the life-and-death decisions made driving a car, don't apply to whether one of the websites I work on works perfectly.

I'm 35, and I've paid to write code for about 15 years. To be honest, ChatGPT probably writes better code than I did at my first paid internship. It's got a ways to go to catch up with even a junior developer in my opinion, but it's only a matter of time.

And how much time? The expectation in the US is that my career will last until I'm 65ish. That's 30 years from now. Tesla has only been around 19 years and now makes self-driving cars.

So yeah, I'm not immediately worried that I'm going to lose my job to ChatGPT in the next year, but I am quite confident that my role will either cease existing or drastically change because of AI before the end of my career. The idea that we won't see AI replacing professional coders in the next 30 years strains credulity.

Luckily for me, I already have considered some career changes I'd want to do even if I weren't forced to by AI. But if folks my age were planning to finish out their careers in this field, they should come up with an alternative plan. And people starting this field are already in direct competition to stay ahead of AI.

replies(2): >>Panzer+qb2 >>prioms+ez2
◧◩◪◨⬒
59. rafael+V42[view] [source] [discussion] 2022-12-15 23:11:21
>>ben_w+P01
Tesla is limited by the processing power contained in the chip of each car. That's not the case for language models; they can get arbitrarily large without much problem with latency. If Tesla could train just one huge model in a data center and deliver it by API to every car I bet self driving cars would have already been a reality.
◧◩◪◨⬒⬓
60. Panzer+qb2[view] [source] [discussion] 2022-12-15 23:51:05
>>kerkes+v22
I'm doubtful - there's a pretty big difference between writing a basic function or even a small program and building a real system, and the former is all I've seen out of these kinds of AIs thus far. It still gets those wrong regularly because it doesn't really understand what it's doing - just mixing and matching its training set.

Roads are extremely regular, as things go, and as soon as you are off the beaten path those AIs start having trouble too.

It seems that in general that the long tail will be problematic for a while yet.

◧◩◪◨⬒
61. lmm+kc2[view] [source] [discussion] 2022-12-15 23:57:05
>>cool_d+6G1
> Sure, but how much of the total work time in software development is writing relatively straightforward, boilerplate type code that could reasonably be copied from the top answer from stackoverflow with variable names changed?

It may be a significant chunk of the butt-in-seat time under our archaic 40-hour/week paradigm, but it's not a significant chunk of the programmer's actual mental effort. You're not going to be able to get people to work 5x more intensely by automating the boring stuff; that was never the limiting factor.

◧◩◪◨⬒⬓⬔
62. melago+wj2[view] [source] [discussion] 2022-12-16 00:52:20
>>nonran+0I1
Can't agree more! If anyone starts to believe in it, it will work in some terrible way, even if there is only one algorithm in the black box.
◧◩◪◨⬒⬓
63. int_19+5k2[view] [source] [discussion] 2022-12-16 00:57:28
>>namele+St1
https://en.wikipedia.org/wiki/Profession_(novella)
◧◩◪◨⬒⬓⬔
64. int_19+tk2[view] [source] [discussion] 2022-12-16 01:00:53
>>nonran+0I1
What's under attack is the notion that humans are special - that there's some kind of magic to them that is fundamentally impossible to replicate. No wonder there's a full-blown moral panic about this.
replies(2): >>nonran+9C3 >>namele+U44
◧◩◪◨
65. marcos+kp2[view] [source] [discussion] 2022-12-16 01:34:34
>>edanm+lw
The architecture behind chatGPT and the other AIs that are making the news won't ever improve to the point where it can correctly write non-trivial code. There is a fundamental reason for that.

Other architectures exist, but you can notice from the lack of people talking about them that they don't produce any output nearly as developed as the chatGPT kind. They will get there eventually, but that's not what we are seeing here.

replies(1): >>edanm+dt3
◧◩◪◨⬒⬓
66. prioms+ez2[view] [source] [discussion] 2022-12-16 02:39:16
>>kerkes+v22
I was of the impression that Tesla's self driving is still not fully reliable yet. For example a recent video shows a famous youtuber having to take manual control 3 times in a 20 min drive to work [0]. He mentioned how stressful it was compared to normal driving as well.

[0] https://www.youtube.com/watch?v=9nF0K2nJ7N8

replies(1): >>kerkes+ljI
◧◩◪◨⬒
67. edanm+dt3[view] [source] [discussion] 2022-12-16 09:13:16
>>marcos+kp2
> The architecture behind chatGPT and the other AIs that are making the news won't ever improve to the point where it can correctly write non-trivial code. There is a fundamental reason for that.

What is that?

replies(1): >>Curiou+1W3
◧◩◪◨⬒⬓
68. throwa+dv3[view] [source] [discussion] 2022-12-16 09:32:45
>>ben_w+aR
It does depend on the details. In special fields, like medical software, regulation might alter the market—although code even there is often revealed to be of poor quality.

But of all the examples of cheap and convenient beating quality: photography, film, music, et al, the many industries that digital technology has disrupted, newspapers are more analogous than builders. Software companies are publishers, like newspapers. And newspapers had entire building floors occupied by highly skilled mechanical typesetters, who have long been replaced. A handful of employees on a couple computers could do the job faster, more easily, and of good enough quality.

Software has already disrupted everything else, eventually it would disrupt the process of making software.

◧◩◪
69. ben_w+zz3[view] [source] [discussion] 2022-12-16 10:22:35
>>mr_toa+hL1
When it can do that, it's already too late.
◧◩
70. scotty+Qz3[view] [source] [discussion] 2022-12-16 10:25:09
>>edanm+X7
Fixing the last 5% requires that you understand 100% of it. And understanding is the main value added by a programmer, not typing characters into a text editor.
◧◩◪◨⬒
71. edanm+pB3[view] [source] [discussion] 2022-12-16 10:41:24
>>woeiru+GS
> [...] Copilot fizzled badly. (Full disclosure: I pay for Copilot because it is somewhat useful).

In what sense did Copilot fizzle badly? It's a tool that you incorporated into your workflow and that you pay money for.

Does it solve all programming? No, of course not, and it's far from there. I think even if it improves a lot it will not be close to replacing a programmer.

But a tool that lets you write code 10x, 100x faster is a big deal. I don't think we're far away from a world in which every programmer has to use AI to be somewhat proficient in their job.

◧◩◪◨⬒⬓⬔⧯
72. nonran+9C3[view] [source] [discussion] 2022-12-16 10:48:28
>>int_19+tk2
Agreed, but that train left the station in the late 1800s, driven by Darwin and Nietzsche. The intervening one and a half centuries haven't dislodged the "human spirit" in its secular form. We thought we'd overcome "gods". Now, out of discontent and self-loathing we're going to do what Freud warned against, and find a new external something to subjugate ourselves to. We simply refuse to shoulder the burden of being free.
◧◩
73. ben_w+jC3[view] [source] [discussion] 2022-12-16 10:50:52
>>itroni+aP1
Artists are, necessarily, perfectionists about their work — it's the only way to get better than the crude line drawings and wildly wrong anatomy that most people can do.

Frustratingly, most people don't fully appreciate the art, and are quite happy for artists to put in only 20% of the effort. Heck, I'm old enough to remember people who regarded Quake as "photorealistic", some in a negative way saying this made it a terrible threat to the minds of children who might see the violence it depicted, and others in a positive way saying it was so good that Riven should've used that engine instead of being pre-rendered.

Bugs like this are easy to fix: `x = x – 4;` which should be `x = x - 4;`

Bugs like this, much harder:

    #define TOBYTE(x) (x) & 255
    #define SWAP(x,y) do { x^=y; y^=x; x^=y; } while (0)

    static unsigned char A[256];
    static int i=0, j=0;

    void init(char *passphrase) {
        int passlen = strlen(passphrase);
        for (i=0; i<256; i++)
            A[i] = i;
        for (i=0; i<256; i++) {
            j = TOBYTE(j + A[TOBYTE(i)] + passphrase[j % passlen]);
            SWAP(A[TOBYTE(i)], A[j]);
        }
        i = 0; j = 0;
    }

    unsigned char encrypt_one_byte(unsigned char c) {
        int k;
        i = TOBYTE(i+1);
        j = TOBYTE(j + A[i]);
        SWAP(A[i], A[j]);
        k = TOBYTE(A[i] + A[j]);
        return c ^ A[k];
    }
◧◩◪◨
74. allisd+yE3[view] [source] [discussion] 2022-12-16 11:08:39
>>Workac+ce
Excellent summation. The majority of software developers work on CRUD-based frontend or backend development. When this thing's attention goes beyond the 4k tokens it is limited to, far fewer developers will be needed in general. In the same way, fewer artists or illustrators will be needed for making run-of-the-mill marketing brochures.

I think the majority won't know what hit them when the time comes. My experience with chatgpt has been highly positive, changing me from a skeptic to a believer. It takes a bit of skill to tune the prompts, but I got it to write frontend, backend, unit test cases, automation test cases, and generate test data flawlessly. I have seen and worked with much worse developers than what this current iteration is.

◧◩◪◨⬒⬓
75. Curiou+1W3[view] [source] [discussion] 2022-12-16 13:27:26
>>edanm+dt3
Probably because it doesn't maintain long term cohesion. Transformer models are great at producing things that look right over short distances, but as the output length increases it often becomes contradictory or nonsensical.

To get good output on larger scales we're going to need a model that is hierarchical with longer term self attention.

◧◩◪◨⬒⬓⬔⧯
76. namele+U44[view] [source] [discussion] 2022-12-16 14:29:09
>>int_19+tk2
Maybe AI can replicate everything humans can do. But this technology isn't that. It just mass-reads and replicates what humans have already done, and actual novel implementations seem out of its grasp (for now). The art scene is freaking out because a lot of art is basically derivative already, but everyone pretended it was not. Coders already knew and admitted they stole all the time.

The other kinds of AI that seem to be able to arrive at novel solutions basically use a brute-force approach of predicting every outcome when they have perfect information, or a brute-force process where they try everything until they find the thing that "works". Both of those approaches seem problematic in the "real world". (Though I would find convincing the argument that billions of people all trying things act as a de facto brute-force approach in practice.)

For someone to be able to do a novel implementation in a field dominated by AI might be impossible, because the core foundational skills can't get developed anymore by humans for them to reach heights the AI hasn't reached yet. We are now stuck; things can't really get "better", we just get iterative improvements on how the AI implements the already-arrived-at solutions.

TLDR, let's sic the AI on making a new Javascript framework and see what happens :)

◧◩◪◨
77. muttle+bx4[view] [source] [discussion] 2022-12-16 16:35:33
>>edanm+lw
Whole systems from a single prompt are probably a ways away, but I was able to get further than I expected by asking it what classes would make up the task I was trying to do and then having it write those classes.
◧◩◪◨⬒⬓⬔
78. kerkes+ljI[view] [source] [discussion] 2022-12-29 04:14:33
>>prioms+ez2
If you watch the video you linked, he admits he's taking manual control not because it's unsafe, but because he's embarrassed. It's hard to tell from the video, but it seems like the choices he makes out of embarrassment are actually more risky than what the Tesla was going to do.

It makes sense. My own experience, nearly always driving a non-Tesla car at the speed limit, is that other drivers will try to pressure you to do dangerous stuff so they can get where they're going a few seconds faster. I sometimes give in to that pressure, but the AI doesn't feel that pressure at all. So if you're paying attention and see the AI not giving in to that pressure, the tendency is to take manual control so you can. But that's not safer--quite the opposite. That's an example of the AI driving better than the human.

On the opposite end of the social anxiety spectrum, there's a genre of pornography where people are having sex in the driver's seats of Teslas while the AI is driving. They certainly aren't intervening 3 times in 20 minutes, and so far I don't know of any of these people getting in car accidents.

[go to top]