Maybe I'm doing it wrong, but I seem to have settled on the following general algorithm:
* ask the agent to green-field a new major feature.
* watch the agent spin until it is satisfied with its work.
* run the feature. Find that it does not work, or at least has major deficiencies [1].
* cycle through multiple independent iterations with the agent, doing something resembling "code review", fixing deficiencies one at a time [2].
* eventually get to a point where I have to re-write major pieces of the code to extract the agent from some major ditch it has driven into, leading to a failure to make forward progress.
Repeat.
It's not that these things are useless or "a fad" -- they're clearly very useful. But the people who are claiming that programmers are going to be put out of business by bots are either (a) talking their book, or (b) extrapolating wildly into the unknown future. And while I am open to the argument that (b) might be true, what I am observing in practice is that the rate of improvement is slowing rapidly, and/or the remaining problems are getting much harder to solve.
[1] I will freely grant that at least some of these major deficiencies typically result from my inability/unwillingness to write a detailed enough spec for the robot to follow, or to anticipate every possible problem with the spec I did bother to write. 'Twas ever thus...
[2] This problem is fractal. However, it's at least fun, in that I get to yell at the robot in a way that I never could with a real junior engineer. One Weird Fact about working with today's agents is that if you threaten them, they seem to do better work.