To me this still feels like the wrong way to interact with a coding agent. Does this actually lead people to success? I've never seen it not go off the rails in some way unless you set clear boundaries around the scope of the expected change. It'll write code before you even want it to, it'll write the tests first or the logic first, whichever one you didn't want, it'll be far too verbose or far too hacky, etc.
> gh-address-comments address comments
Inspiring stuff. I would love to be the one writing GH comments here. /s
But maybe there's a complementary gh-leave-comments to have it review PRs for you too.
First phase: plan. Completing it is mandatory, and so is getting AI feedback on it from a separate context or model. Iterate until the plan is complete.
Only then move on to the second phase: make the edits.
Better planning == Better execution
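A minimal sketch of what that two-phase loop could look like, assuming the OpenAI Python SDK and placeholder model names and prompts (swap in whatever agent or API you actually use):

```python
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    # Single-turn helper; each call is deliberately a fresh context.
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def plan_then_edit(issue: str, planner: str = "gpt-4o", reviewer: str = "o3-mini",
                   max_rounds: int = 3) -> str:
    # Phase 1: plan only, with feedback from a *separate* model/context.
    plan = ask(planner, "Write a step-by-step implementation plan. "
                        f"Do NOT write any code yet.\n\nIssue:\n{issue}")
    for _ in range(max_rounds):
        feedback = ask(reviewer, "Review this plan for gaps, scope creep and missing tests. "
                                 f"Reply APPROVED if it is complete.\n\nPlan:\n{plan}")
        if "APPROVED" in feedback:
            break
        plan = ask(planner, "Revise the plan to address the feedback. Still no code.\n\n"
                            f"Plan:\n{plan}\n\nFeedback:\n{feedback}")
    # Phase 2: only now make edits, constrained to the agreed plan.
    return ask(planner, "Implement the approved plan. Make only the edits it describes.\n\n"
                        f"Plan:\n{plan}")
```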
Weaker models give you that experience, and so does working in a 100% LLM-written codebase; I think it can end up in a hall of mirrors.
Now I have an idea to try: a second LLM processing pass that normalizes the vibe-code to a personal style and standard, to break it out of the Stack Overflow snippet maze it can get itself into.
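A rough sketch of that second pass, again assuming the OpenAI Python SDK, a placeholder model name, and a hypothetical STYLE_GUIDE of your own; the point is just a behavior-preserving rewrite toward one house style:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical personal style/standard the normalizer should enforce.
STYLE_GUIDE = """\
- snake_case names, no single-letter variables
- early returns instead of deeply nested ifs
- standard library before extra dependencies
"""

def normalize(code: str, model: str = "gpt-4o") -> str:
    # Second pass: rewrite vibe-code to the house style without changing behavior.
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": "Rewrite this code to follow the style guide exactly. "
                       "Preserve behavior; do not add features.\n\n"
                       f"Style guide:\n{STYLE_GUIDE}\n\nCode:\n{code}",
        }],
    )
    return resp.choices[0].message.content
```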
With Codex, I can increasingly skip the plan step, and it just toils along until it has finished the issue. It can be "lazier" at times, stopping to ask before going ahead more often, but usually within a reasonable scope (and sometimes at points where I think other services would have charged ahead on a wrong tangent and burnt more of their more limited token allowance).
I wouldn't be surprised if, within the next 1-2 model iterations, a plan step isn't worth the effort anymore, given a well-written initial issue.