At that stage, the real value will lie in the remaining 10%—the part that requires human judgment, creativity, or architectural thinking. The rest will be seen as routine: simple instructions, redundant CRUD operations, boilerplate, and glue code.
If we focus only on the end result, humans will inevitably write less code overall. And writing less code means fewer programming jobs.
Call me naive, but you'd think these companies in particular would want to demonstrate how well their product works, making an effort to distinguish PRs that are largely the work of their own agents. Yet I am not seeing that.
I have no doubt that people find use in some aspects of these tools, though I personally subscribe more to the interactive rubber-duck mode of using them. But from where I am standing, 90% seems a very, very long way off.
People don't like working for free, either by themselves or with an AI agent.
2) Did you stop reading after that sentence? Because there is a whole lot more that follows, specifically:
> If I need to target it even more directly, why am I not seeing hints of this being applied on code agent repositories? Call me naive, but you'd think these companies in particular would want to demonstrate how well their product works, making an effort to distinguish PRs that are largely the work of their own agents. Yet I am not seeing that.
As I already said, I see a distinct lack of such labeled activity on open-source AI code tools.
You'd think that the projects building agentic tooling would want to show how effective it is. In fact, I would expect the people behind such projects to be all over threads like this, pointing to tangible PRs, commits, and other tasks these agents can apparently handle so well.
Yet all I am getting as pushback is vague, handwaving "trust me, I am seeing it" claims. Even the blog post itself is nothing more than that.