zlacker

[parent] [thread] 10 comments
1. anothe+(OP)[view] [source] 2024-02-13 20:47:50
Short answer: Rather than writing code in full, GPT-4 Turbo often inserts comments like "... finish implementing function here ...". I built a benchmark, based on refactoring tasks, that provokes and quantifies that behavior.

Longer answer:

I found that I could provoke lazy coding by giving GPT-4 Turbo refactoring tasks in which I ask it to refactor a large method out of a large class. I analyzed 9 popular open source Python repos, found 89 such methods that were conceptually easy to refactor, and built them into a benchmark [0].

GPT succeeds at this task if it removes the method from its original class and adds it to the top level of the file without a significant change in the size of the abstract syntax tree (AST). By checking that the AST size hasn't changed much, we can infer that GPT didn't replace a bunch of code with a comment like "... insert original method here ...". The benchmark also gathers other laziness metrics, like counting the number of new comments that contain "...". These metrics correlate well with the AST size test.
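
The core check is conceptually simple. Here's a stripped-down sketch of the idea (illustrative only, not the actual benchmark code; the function names and the 10% tolerance are made up for the example):

    import ast

    def ast_size(source: str) -> int:
        # Rough measure of "how much code" a file contains: number of AST nodes.
        return sum(1 for _ in ast.walk(ast.parse(source)))

    def laziness_signals(original: str, refactored: str, tolerance: float = 0.10):
        # Signal 1: the AST shrank a lot, i.e. code was probably replaced with a
        # placeholder comment rather than actually moved to the top level.
        shrank = ast_size(refactored) < (1 - tolerance) * ast_size(original)
        # Signal 2: new comments containing "..." (comments never appear in the
        # AST, so scan the raw source lines for them).
        ellipsis_comments = sum(
            1 for line in refactored.splitlines()
            if line.lstrip().startswith("#") and "..." in line
        )
        return shrank, ellipsis_comments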

[0] https://github.com/paul-gauthier/refactor-benchmark

replies(2): >>Taylor+5a >>Shamel+wE
2. Taylor+5a[view] [source] 2024-02-13 21:43:46
>>anothe+(OP)
I have a bunch of code I need to refactor, and also write tests for. (I guess I should write the tests before the refactor.) How do you do a refactor with GPT-4? Do you just dump the file into the chat window? I also pay for GitHub Copilot, but not GPT-4. Can I use Copilot for this?

Any advice appreciated!

replies(1): >>rkuyke+Mc
3. rkuyke+Mc[view] [source] [discussion] 2024-02-13 21:58:20
>>Taylor+5a
> Do you just dump the file in to the chat window?

Yes, along with what you want it to do.

> I also pay for github copilot, but not GPT-4. Can I use copilot for this?

Not that I know of. Copilot is good at generating new code but can't change existing code.

replies(2): >>redbla+be >>jjwise+xe
4. redbla+be[view] [source] [discussion] 2024-02-13 22:06:15
>>rkuyke+Mc
Copilot will change existing code (though I find it's often not very good at it). I frequently highlight a section of code that has an issue, press Ctrl-I, and type something like "/fix SomeError: You did it wrong".
5. jjwise+xe[view] [source] [discussion] 2024-02-13 22:09:27
>>rkuyke+Mc
GitHub Copilot Chat (which is part of Copilot) can change existing code. The UI is that you select some code, then tell it what you want. It returns a diff that you can accept or reject. https://docs.github.com/en/copilot/github-copilot-chat/about...
6. Shamel+wE[view] [source] 2024-02-14 01:13:09
>>anothe+(OP)
I use gpt4-turbo through the api many times a day for coding. I have encountered this behavior maybe once or twice, period. When it did happen, it made sense as the model essentially summarizing and/or assuming some shared knowledge (which was indeed known to me).

This, and people generally saying that ChatGPT has been intentionally degraded, are just super strange to me. I believe it's happening, but it's making me question my sanity. What am I doing to get decent outputs? Am I simply not as picky? I treat every conversation as though it needs to be vetted, because it does, regardless of how good the model is. I only trust the model's output on topics where I am a subject matter expert or in a closely adjacent field. Otherwise I treat it much like an internet comment - useful for surfacing curiosities, but requiring vetting.

replies(3): >>hacker+Od1 >>xdesha+8H2 >>xionpl+fi7
7. hacker+Od1[view] [source] [discussion] 2024-02-14 07:11:47
>>Shamel+wE
> I use gpt4-turbo through the api many times a day for coding.

Why this instead of GPT-4 through the web app? And how do you actually use it for coding? Do you copy and paste your question into a Python script, which then calls the OpenAI API and spits out the response?

replies(2): >>neongr+4i1 >>Shamel+zy1
8. neongr+4i1[view] [source] [discussion] 2024-02-14 08:01:08
>>hacker+Od1
Not the OP, but I also use it through the API (specifically MacGPT). My initial justification was that I would save by only paying for what I use instead of a flat $20/mo, but now it looks like I'm not even saving much.
9. Shamel+zy1[view] [source] [discussion] 2024-02-14 11:18:12
>>hacker+Od1
I use it fairly similarly, via a Discord bot I've written. This lets me share usage with some friends (although it has some limitations compared to the OpenAI ChatGPT app).
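
Under the hood it's mostly a thin wrapper around the chat completions endpoint. The core is something like this (a simplified sketch, not my actual bot; the model name and system prompt are just example values):

    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    def ask_gpt(question: str) -> str:
        # One-shot coding question; the bot wires this up to a Discord command.
        response = client.chat.completions.create(
            model="gpt-4-turbo-preview",
            messages=[
                {"role": "system", "content": "You are a helpful coding assistant."},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content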
10. xdesha+8H2[view] [source] [discussion] 2024-02-14 18:03:46
>>Shamel+wE
Whenever ChatGPT gets lazy with the coding, for example leaving a comment like "// make sure to implement search function ...", I feed its own comments and code back to it as the prompt: "you make sure to implement the search function", and so on. That has been working for me.
11. xionpl+fi7[view] [source] [discussion] 2024-02-15 23:17:45
>>Shamel+wE
IMO it is because there is a huge stochastic element to all this.

If we were all flipping coins, there would be people claiming that coins only come up tails. There would be nothing they were doing, though, to make the coin come up tails. That is just the streak they had.

Some days I get lucky with ChatGPT-4 and some days I don't.

It is also ridiculous how we talk about this as if every subject and context presented to ChatGPT-4 will produce uniform output. A one-word difference in your own prompt might change things completely while you're trying to accomplish exactly the same thing. Now scale that to all the people talking about ChatGPT, with everyone using it for something different.
