zlacker

[parent] [thread] 7 comments
1. Goblin+(OP)[view] [source] 2022-12-15 14:42:23
And GPT can't fix a bug; it can only generate new text that will have a different collection of bugs. The catch is that programming isn't text generation. But AI should be able to make good, actually intelligent fuzzers; that would be realistic and useful.
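To be concrete about what I mean by a fuzzer, here is a minimal sketch (the target, the mutation strategy, and all names are made up for illustration; the "intelligent" part would be replacing the blind random mutation with model-guided input generation):

    import random

    def mutate(data: bytes) -> bytes:
        # Randomly flip a bit in, insert a byte into, or delete a byte from the input.
        buf = bytearray(data)
        op = random.choice(("flip", "insert", "delete"))
        if op == "flip" and buf:
            i = random.randrange(len(buf))
            buf[i] ^= 1 << random.randrange(8)
        elif op == "insert":
            buf.insert(random.randrange(len(buf) + 1), random.randrange(256))
        elif buf:
            del buf[random.randrange(len(buf))]
        return bytes(buf)

    def fuzz(target, seed: bytes, iterations: int = 10_000):
        # Throw mutated inputs at `target` and collect the ones that raise.
        corpus = [seed]
        crashes = []
        for _ in range(iterations):
            candidate = mutate(random.choice(corpus))
            try:
                target(candidate)
                corpus.append(candidate)  # surviving inputs become new mutation seeds
            except Exception as exc:
                crashes.append((candidate, exc))
        return crashes

    # e.g. fuzz(lambda b: b.decode("utf-8"), b"hello") quickly finds invalid UTF-8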
replies(5): >>mlboss+T2 >>Ajedi3+85 >>tintor+Y6 >>alar44+831 >>Unposs+bJ1
2. mlboss+T2[view] [source] 2022-12-15 14:52:26
>>Goblin+(OP)
It is only a matter of time. It can already understand an error stack trace and suggest a fix. Somebody just has to plug it into an IDE, and then it will start converting requirements to code.
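The glue code is not the hard part; something like this sketch would do (ask_model is a placeholder for whatever completion API gets wired in, not a real library call):

    import subprocess

    def ask_model(prompt: str) -> str:
        # Placeholder: wire this up to whatever model API the IDE integration uses.
        raise NotImplementedError

    def suggest_fix(source_path: str, test_cmd: list[str]) -> str:
        # Run the tests; if they fail, send the code plus stack trace to the model.
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return "tests pass; nothing to fix"
        with open(source_path) as f:
            source = f.read()
        prompt = (
            "This code raises an error.\n\n"
            "Code:\n" + source + "\n\n"
            "Stack trace:\n" + result.stderr + "\n\n"
            "Explain the bug and suggest a corrected version."
        )
        return ask_model(prompt)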
3. Ajedi3+85[view] [source] 2022-12-15 15:00:20
>>Goblin+(OP)
> GPT can't fix a bug

It can't? I could've sworn I've seen (cherry-picked) examples of it doing exactly that, when prompted. It even explains what the bug is and why the fix works.

replies(2): >>ipaddr+3R >>soerxp+Zw1
4. tintor+Y6[view] [source] 2022-12-15 15:07:46
>>Goblin+(OP)
It can, in some cases. Have you tried it?
5. ipaddr+3R[view] [source] [discussion] 2022-12-15 18:20:06
>>Ajedi3+85
Which examples, the ones where it was right or the ones where it was wrong? It goes back to trusting the source not to introduce new, ever-evolving bugs.
6. alar44+831[view] [source] 2022-12-15 19:18:03
>>Goblin+(OP)
Yes it can; I've been using it for exactly that: "This code is supposed to do X but does Y or has Z error; fix the code."

Sure, you can't stick an entire project in there, but if you know the problem is in class Baz, just toss in the relevant code and it does a pretty damn good job.
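For instance, the kind of self-contained snippet this works well on (a hypothetical example, not from an actual session):

    # Prompt: "This code is supposed to return the mean of the list but
    # crashes on an empty list; fix the code."
    def average(values):
        return sum(values) / len(values)  # ZeroDivisionError when values == []

    # A typical suggested fix: guard the empty case (and it will usually
    # explain why the original crashed).
    def average_fixed(values):
        return sum(values) / len(values) if values else 0.0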

7. soerxp+Zw1[view] [source] [discussion] 2022-12-15 21:33:56
>>Ajedi3+85
Those are cherry-picked, and most importantly, all of the examples where it can fix a bug are ones where it's working with a stack trace, or with an extremely small section of code (<200 lines). At what point will it be able to fix a bug in a 20,000-line codebase, with only "When the user does X, Y unintended consequence happens" to go off of?

It's obvious how an expert at regurgitating StackOverflow would be able to correct an NPE or an off-by-one error when given the exact line of code the error is on. Going any deeper and actually being able to find a bug requires understanding the codebase as a whole and the ability to map the code to what it actually does in real life. GPT has shown none of this.

"But it will get better over time" arguments fail for this because the thing that's needed is a fundamentally new ability, not just "the same but better." Understanding a codebase is a different thing from regurgitating StackOverflow. It's the same thing as saying in 1980, "We have bipedal robots that can hobble, so if we just improve on that enough we'll eventually have bipedal robots that beat humans at football."

8. Unposs+bJ1[view] [source] 2022-12-15 22:41:53
>>Goblin+(OP)
Sure, but now you only need testers and one coder to fix bugs, where you used to need testers and 20 coders. AI code generators are force multipliers, maybe not strict replacements. And the level of creativity needed to fix a bug, relative to programming something wholly original, is worlds apart.