> This seems like it's fixing the symptom rather than the underlying issue?
This is also my experience when you haven't set up a proper system prompt to address this for everything an LLM does. The funniest PRs are the ones that "resolve" test failures by removing or commenting out the test cases, or by changing the assertions. Google's and Microsoft's models seem more likely to do this than OpenAI's and Anthropic's; I wonder if there is some difference in their internal processes that is leaking through here?
The same PR quoted above continues with three more messages before the human reviewer seemingly gives up:
> please take a look
> Your new tests aren't being run because the new file wasn't added to the csproj
> Your added tests are failing.
I can't imagine how the people who have to deal with this are feeling. It's like having a junior developer, except they don't even read what you're telling them and have zero agency to understand what they're actually doing.
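For anyone unfamiliar with that repo's setup: the "wasn't added to the csproj" complaint exists because many dotnet/runtime test projects list their source files explicitly instead of relying on default globbing, so a new test file that isn't referenced never gets compiled or run. A rough sketch of what that looks like (file names and target framework are illustrative, not taken from the actual PR):

```xml
<!-- Sketch of a test project that disables default globbing and lists sources explicitly.
     A newly added test file must be referenced here, or it silently never compiles. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net9.0</TargetFramework>
    <EnableDefaultCompileItems>false</EnableDefaultCompileItems>
  </PropertyGroup>
  <ItemGroup>
    <Compile Include="ExistingTests.cs" />
    <Compile Include="NewFeatureTests.cs" /> <!-- the entry the bot forgot to add -->
  </ItemGroup>
</Project>
```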
Another PR: https://github.com/dotnet/runtime/pull/115732/files
How are people reviewing that? 90% of the page height is taken up by "Check failure" annotations; you can hardly see the code/diff at all. And as a cherry on top, the unit test has a comment that says "Test expressions mentioned in the issue". This whole thing would be fucking hilarious if I didn't feel so bad for the humans on the other side of it.
Typically, you wouldn't bother manually reviewing something until the automated checks have passed.
I'd rather hop in and get them on the right path than let them flounder alone, particularly if they're struggling.
If it's another senior developer, though, I'd happily leave them to get the unit tests all passing before I take a proper look at their work.
But as a general principle, please at least get a PR through formatting checks before assigning it to a person.
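For most .NET repos a local check along these lines (assuming the standard SDK tooling; dotnet/runtime also has its own extra formatting jobs) would catch that before the PR is even opened:

```sh
# Exits non-zero if any file is mis-formatted, without rewriting anything.
dotnet format --verify-no-changes
```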