zlacker

[parent] [thread] 14 comments
1. puika+(OP)[view] [source] 2025-05-06 15:51:07
I have the same issue, plus unnecessary refactorings (that break functionality). It doesn't matter if I write a whole paragraph in the chat or the prompt explaining that I don't want it to change anything apart from what is required to fulfill my very specific request. It will just go rogue and massacre the entire file.
replies(4): >>fkyour+Q >>dherik+V >>mgw+Y >>buggle+e3
2. fkyour+Q[view] [source] 2025-05-06 15:54:35
>>puika+(OP)
Where/how do you use it? I've only tried this model through GitHub Copilot in VS Code and I haven't experienced much changing of random things.
replies(1): >>diggan+86
3. dherik+V[view] [source] 2025-05-06 15:54:53
>>puika+(OP)
I have exactly the same issue using it with Aider.
4. mgw+Y[view] [source] 2025-05-06 15:55:01
>>puika+(OP)
This has also been my biggest gripe with Gemini 2.5 Pro. While it is fantastic at one-shotting major new features, when I want to make smaller iterative changes, it always does big refactors at the same time. I haven't found a way to change that behavior through changes in my prompts.

Claude 3.7 Sonnet is much more restrained and does smaller changes.

replies(4): >>crypto+F3 >>nolist+a9 >>fwip+Gr >>polyan+wc2
5. buggle+e3[view] [source] 2025-05-06 16:05:31
>>puika+(OP)
This is generally controllable with prompting. I usually include something like "be excessively cautious and conservative in refactoring, only implementing the desired changes" to avoid it.
6. crypto+F3[view] [source] [discussion] 2025-05-06 16:07:24
>>mgw+Y
This exact problem is something I’m hoping to fix with a tool that parses the source to AST and then has the LLM write code to modify the AST (which you then run to get your changes) rather than output code directly.

I’ve started in a narrow niche of python/flask webapps and constrained to that stack for now, but if you’re interested I’ve just opened it for signups: https://codeplusequalsai.com

Would love feedback! Especially if you see promising results in not getting huge refactors out of small change requests!

(Edit: I also blogged about how the AST idea works in case you're just that curious: https://codeplusequalsai.com/static/blog/prompting_llms_to_m...)
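A minimal sketch of the parse-then-transform idea using Python's built-in `ast` module. The `RenameGreet` transformer and the rename itself are hypothetical stand-ins for what the LLM would emit; the point is that untouched nodes round-trip unchanged, so unrelated code can't get "refactored":

```python
import ast

source = """
def greet(name):
    print('hello', name)

def untouched():
    return 42
"""

tree = ast.parse(source)

# Hypothetical LLM output: instead of rewriting the whole file, the model
# emits a small NodeTransformer that makes ONLY the requested change
# (here: rename greet -> greet_user).
class RenameGreet(ast.NodeTransformer):
    def visit_FunctionDef(self, node):
        if node.name == "greet":
            node.name = "greet_user"
        return self.generic_visit(node)

new_tree = ast.fix_missing_locations(RenameGreet().visit(tree))
print(ast.unparse(new_tree))  # everything except the rename is preserved
```

Running the transform instead of trusting freeform model output means the blast radius is exactly the set of nodes the script visits and mutates.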

replies(3): >>jtwale+fd >>HenriN+4i >>tough+pi
7. diggan+86[view] [source] [discussion] 2025-05-06 16:21:48
>>fkyour+Q
I've used it via Google's own AI Studio, via my own library/program using the API, and finally via Aider. All of them lead to the same outcome: large chunks of changes to a lot of unrelated things ("helpful" refactors that I didn't ask for) and tons of unnecessary comments everywhere (like those comments you ask junior devs to stop making). No amount of prompting seems to address either problem.
8. nolist+a9[view] [source] [discussion] 2025-05-06 16:40:35
>>mgw+Y
Can't you just commit the relevant parts? The git index is made for this sort of thing.
replies(1): >>tasuki+Lk
9. jtwale+fd[view] [source] [discussion] 2025-05-06 17:02:47
>>crypto+F3
Having the LLM modify the AST seems like a great idea. Constraining an LLM to only generate valid code would be super interesting too. Hope this works out!
10. HenriN+4i[view] [source] [discussion] 2025-05-06 17:32:51
>>crypto+F3
Interesting idea. But LLMs are trained on a vast amount of "code as text" and only a tiny fraction of "code as AST"; wouldn't that significantly hurt the result quality?
replies(1): >>crypto+0m
11. tough+pi[view] [source] [discussion] 2025-05-06 17:34:05
>>crypto+F3
Interesting, I started playing with ts-morph and neo4j to parse TypeScript codebases.

simonw has symbex, which could be useful to you for Python.

12. tasuki+Lk[view] [source] [discussion] 2025-05-06 17:46:51
>>nolist+a9
It's not always trivial to find the relevant 5 line change in a diff of 200 lines...
13. crypto+0m[view] [source] [discussion] 2025-05-06 17:55:28
>>HenriN+4i
Thanks, and yeah, that is a concern; however, I have been getting quite good results from this AST approach, at least for building medium-complexity webapps. That wasn't always true, though: the only OpenAI models that really work well are the o3 series. Older models do write AST code but fail to do a good job, I suspect because of the exact issue you mention!
14. fwip+Gr[view] [source] [discussion] 2025-05-06 18:33:29
>>mgw+Y
Really? I haven't tried Gemini 2.5 yet, but my main complaint with Claude 3.7 is this exact behavior - creating 200+ line diffs when I asked it to fix one function.
15. polyan+wc2[view] [source] [discussion] 2025-05-07 12:48:05
>>mgw+Y
Asking it explicitly once (not necessarily in every new prompt in the context) to keep the output minimal and to do nothing more than it is told works for me.