> The AI has suggested a solution, but the added code is arguably useless or wrong. There is a huge decision space to consider, but the AI tool has picked one set of decisions, without any rationale for this decision.
> [...]
> Programming is about lots of decisions, large and small. Architecture decisions. Data validation decisions. Button color decisions.
> Some decisions are inconsequential and can be safely outsourced. There is indeed a ton of boilerplate involved in software development, and writing boilerplate-heavy code involves near zero decisions.
> But other decisions do matter.
(from https://lukasatkinson.de/2025/net-negative-cursor/)
Proponents of AI coding often talk about boilerplate as if that's what we spend most of our time on, but boilerplate is a cinch. You copy/paste, change a few fields, and maybe run a macro on it. Or you abstract it away entirely. As for the "agent" thing, typing git fetch, git commit, git rebase takes up even less of my time than boilerplate.
Most of what we write is not highly creative, but it is load-bearing, and it's full of choices. Most of our time is spent making those choices, not typing out the words. The problem isn't hallucination, it's the plain bad code that I'm going to have to rewrite. Why not just write it right myself the first time? People say "it's like a junior developer," but do they have any idea how much time I've spent trying to coax junior developers into doing things the right way rather than just doing them myself? I don't want to waste time mentoring my tools.
Coming at this from a computer-science or PLT perspective, this idea of an "abstract, repeatable meta-boilerplate" is exactly the payoff we expect from language features like strong type systems. Part of the point of rigorous languages is to create these kinds of patterns. You had total expressiveness back in assembly language! Repeatable rigor is most of the point of modern languages.
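To make that concrete, here's a minimal TypeScript sketch (names invented for illustration) of what "repeatable rigor" buys you: the pattern is defined once as a generic type, and the compiler enforces it at every use site.

```typescript
// A success/failure pattern written once as a generic type; the
// compiler then enforces handling at every call site.
type Result<T, E> =
  | { ok: true; value: T }
  | { ok: false; error: E };

function parsePort(raw: string): Result<number, string> {
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    return { ok: false, error: `invalid port: ${raw}` };
  }
  return { ok: true, value: n };
}

const port = parsePort("8080");
if (port.ok) {
  console.log(port.value);   // narrowed to number here
} else {
  console.error(port.error); // narrowed to string here
}
```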
Not everyone is just cranking out hacked-together MVPs for startups.
Do you not realize there are many, many other fields and domains of programming?
Not everyone has the same use case as you.
That's what libraries and frameworks are here for. And that's why no experienced engineer considers those an issue. What's truly important is the business logic: you find a set of libraries that solves the common use cases and write the rest yourself. Sometimes you're in some novel space that doesn't have libraries (a new programming language, say), but even then you have specs and reference implementations to help you out.
The actual boilerplate is when you have to write code twice because the language ecosystem doesn't have good macros à la Lisp that would let you invent some metastuff for the problem at hand (think writing routers for express.js).
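A sketch of the express.js case (the store shapes and names are invented for illustration, not from the comment): without macros, the usual workaround is a runtime helper that folds the repeated route definitions into one place.

```typescript
import express from "express";

const app = express();

// Hypothetical in-memory stores, just to make the sketch runnable.
const users = new Map([["1", { id: "1", name: "Ada" }]]);
const orders = new Map([["1", { id: "1", total: 42 }]]);

// You can't generate these routes at compile time without macros,
// but a runtime helper folds the repetition into one definition:
function mountResource<T>(path: string, store: Map<string, T>) {
  app.get(`/${path}`, (_req, res) => {
    res.json([...store.values()]);
  });
  app.get(`/${path}/:id`, (req, res) => {
    const item = store.get(req.params.id);
    if (item) res.json(item);
    else res.status(404).end();
  });
}

mountResource("users", users);
mountResource("orders", orders);

app.listen(3000);
```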
There are obviously still things it can’t do. But the gap between “I haven’t been able to get a tool to work” and “you’re wrong about the tool being useful” is large.
The idea that you can't specify the load-bearing pillars of your structure to the AI, or that it couldn't figure them out given the right requirements/constraints, will not age well.
Now here’s the fun part: In a really restrictive enterprise environment where you’ve got unit tests with 85% code coverage requirements, linters and static typing, these AI programming assistants actually perform even better than they do when given a more “greenfield” MVP-ish assignment with lots of room for misinterpretation. The constant “slamming into guardrails” keeps them from hallucinating and causes them to correct themselves when they do.
The more annoying boxes your job makes you tick, the more parts of the process that make you go “ugh, right, that”, the more AI programming assistants can help you.
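One concrete form those guardrails take, assuming a TypeScript-plus-Jest setup (the comment doesn't name its stack, so this is purely illustrative): the test run fails whenever coverage dips below the mandated line.

```typescript
// jest.config.ts -- illustrative; the enterprise stack above is not specified.
import type { Config } from "jest";

const config: Config = {
  preset: "ts-jest", // assumes ts-jest is installed
  collectCoverage: true,
  coverageThreshold: {
    // Jest fails the run if global coverage drops below these numbers.
    global: { lines: 85, branches: 85, functions: 85, statements: 85 },
  },
};

export default config;
```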
I bit the bullet last week and tried to force myself to use a solution built end to end by AI. By the time I’d finished asking it to make changes (about 25 in total), I would’ve had a much nicer time doing it myself.
The thing in question was admittedly only partially specified. It was a YAML-based testing tool for running some scenarios involving load tests before and after injecting some faults into the application. I gave it the YAML schema up front, and it did a sensible job as a first pass. But then I was in the position of reading what it wrote, spotting implicit requirements I hadn't specified, and asking for those.
Had I written it myself from the start, those implicit requirements would've surfaced naturally as I iterated on the tool. But in this workflow I just couldn't get into a flow state; the process felt very unnatural, not unlike asking a junior to do it and then sitting through 25 rounds of code review. That has always been a miserable task, difficult to force oneself to stay engaged with. By the end I was much happier making manual tweaks, and I wished I'd written it myself from the start.
I use R a little more than I should, given the simplicity of my work. Claude writes better R, quicker, than I can. I double-check what it's doing. But it's easier to double-check that it used twang correctly than to spend five minutes trying to remember how to use the weird package that does propensity scoring [1].
I'm sure data analysis will still sort of be a thing. But for most commercial applications at sub-enterprise scale, it's just not as useful anymore in the form of a human being.
[1] https://cran.r-project.org/web/packages/twang/index.html
If one copy-pastes a routine to make a modified version (that's actually used), code coverage goes UP. Sounds like a win-win for many…
Later, someone consolidates the two near-identical routines during a proper refactoring. They can even add unit tests. Guess what? Code coverage goes DOWN!
Sure, having untested, unexecuted code is a truly horrible thing. But focusing on coverage can be worse…
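To put numbers on that (invented purely for illustration): say the codebase is 1,000 lines with 800 covered, i.e. 80%. Copy-paste a 50-line routine that the existing tests happen to execute and you're at 850/1,050 ≈ 81%. Consolidate the duplicates later and you're back to 800/1,000 = 80%. The metric rewarded the duplication and penalized the cleanup.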
There's always some people that will resist to the bitter end, but I expect them to be few and far between.
> (from https://lukasatkinson.de/2025/net-negative-cursor/)
looks inside
complaining about Rust code
I work in finance, and have for almost 20 years now. There are things in finance you do once every 5 years, such as setting up a data source like Bloomberg in a new programming language. You know from the last time you did it that it's a pain: you need to use a very low-level API, handling all the tiny messages yourself, building up the response as it arrives from the source in unordered packets. It's asynchronous, there's a message queue, and what I specialize in is maths.
Now I could spend hours reading documents, putting crap together, and finally come up with some half-baked code that ignores most possible error points.
Or I could use ChatGPT and leverage the fact that hundreds of implementations of the same module exist out there. And make something that just works.
That is the first ever coding question I asked an LLM and it literally saved me days of trial and error for something where my added value is next to zero.
Similarly, I use LLMs a lot for small tasks that are in fact fairly difficult, and that don't add any value to the solution. Things like converting data structures in an efficient way using Python idioms, or JavaScript 2023 features, which there is no way I can keep up with.
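For instance (data invented, and assuming an ES2023-capable runtime or `lib` setting), this is the sort of small conversion task meant here, using two of those 2023 additions:

```typescript
// Requires "lib": ["es2023"] (or newer) in tsconfig.
const readings = [
  { t: 1, ok: true },
  { t: 2, ok: false },
  { t: 3, ok: true },
];

// toSorted() (ES2023) returns a sorted copy instead of mutating in place.
const byTimeDesc = readings.toSorted((a, b) => b.t - a.t);

// findLast() (ES2023) scans from the end; no reverse-then-find dance.
const lastGood = readings.findLast((r) => r.ok);

console.log(byTimeDesc[0].t); // 3
console.log(lastGood?.t);     // 3
```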
And if we accept that inevitability, it becomes a self-fulfilling prophecy. The fact that some people _want_ us to give in is a reason to keep resisting.
“Isomorphic” describes an invertible mapping (or transformation) that preserves the properties we believe to be important; if you can't map back without losing anything, it isn't an isomorphism.
The word you’re looking for is probably “similar” not “isomorphic”. It sure as hell doesn’t sound as fancy though.
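For the record, the textbook definition makes the invertibility explicit (standard phrasing, not from the thread):

```latex
% f : A \to B is an isomorphism iff there exists g : B \to A with
g \circ f = \mathrm{id}_A \qquad \text{and} \qquad f \circ g = \mathrm{id}_B
```

A prompt that merely produces similar-looking outputs comes with no such inverse, which is why "similar" is the honest word.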
What is it with this new paradigm where we act like everything is easily measurable and every measure is perfectly aligned with what we want to measure? We know neither of those things is true, and it doesn't take much thought to verify it. Do you believe you are smart enough to test for every possible issue? No one is. If anyone were, there'd be no CVEs, and we'd have solved all of physics centuries ago.
Like everything else about the "GenAI" fad, it boils down to extractively exploiting goodwill and despoiling the commons in order to convert VC dollars into penny-shavings.
Bizarrely though, it seems to be limited to grep for the moment; it doesn't work with LSP yet.
Assuming something like "a REST endpoint which takes a few request parameters, makes a DB query, and returns the response" fits what you're describing, you can absolutely copy/paste a similar endpoint, change the parameters and the database query, and rename a couple variables—all of which takes a matter of moments.
Naturally code that is being copy-pasted wholesale with few changes is ripe to be abstracted away, but patterns are still going to show up no matter what.
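A minimal sketch of the endpoint shape being described, assuming express and node-postgres (the table, columns, and route are all invented for illustration):

```typescript
import express from "express";
import { Pool } from "pg";

const app = express();
const db = new Pool(); // connection details come from the PG* env vars

// The shape in question: read a couple of request parameters,
// run one query, return the rows.
app.get("/api/orders", async (req, res) => {
  const { customerId, status } = req.query;
  const { rows } = await db.query(
    "SELECT id, total, status FROM orders WHERE customer_id = $1 AND status = $2",
    [customerId, status]
  );
  res.json(rows);
});

app.listen(3000);

// The copy/pasted "similar endpoint" differs only in the path, the
// parameters, and the SQL -- which is exactly the point above.
```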
The comment on the right says it'll help the user with protocol versioning. This is not how you do that...
Issues like that are simple, but they create debt. Sure, it "works" now, but who writes code without knowing that we're going to change things next week or next month? That's the whole reason we use objects and functions in the first place!
If the AI agent future is so inevitable, then why do people waste so much oxygen insisting upon its inevitability? Just wait for it in silence. It certainly isn't here yet.
But English is a subjective and fuzzy language, and the AI typically can't intuit the more subtle points of what you need. In my experience a model's output always needs further prompting. If only there were a formal, rigorous language to express business logic in! Some sort of "programming language."
It'll even write basic unit tests for your CRUD API while it's at it.
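Something like this, presumably: a supertest/Jest sketch against a hypothetical exported app (both the setup and the route are assumptions, not from the comment).

```typescript
import request from "supertest";
import { app } from "./app"; // hypothetical module exporting the express app

// "Basic unit tests": status codes and response shape, nothing subtle.
describe("GET /api/orders", () => {
  it("returns 200 and a JSON array for a valid query", async () => {
    const res = await request(app)
      .get("/api/orders")
      .query({ customerId: "1", status: "open" })
      .expect(200);
    expect(Array.isArray(res.body)).toBe(true);
  });
});
```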
I guess in this case the morphism is the similar (or same) prompt used to generate f, g, h, and j.
And the less instantly I can write it, the more petty nuances there are to deal with—things like non-trivial validation, a new database query function, a header that I need to access—the more ways an LLM will get it subtly wrong.
If I treat it as more than a fancy autocomplete, I have to spend all my time cleaning up after it. And if I do treat it as fancy autocomplete, it doesn't save that much time over judicious copy/pasting.
These aren't Rust-specific syntax foibles. It's not a borrow-checker mistake or anything. These are basic CS fundamentals that it's thoughtlessly fumbling.
I disagree on the "can't". LLMs seem no better or worse than humans at making assumptions when given a description of needs, which shouldn't be surprising since they infer such things from examples of humans doing the same thing. In principle, there's nothing preventing a targeted programming system from asking clarifying questions.
> In my experience a model's output always needs further prompting.
Yes, and the early days of all tooling were crude. Don't underestimate the march of progress.
That's not what I see the parent comment saying. They're not saying that LLMs can't use frameworks, they're saying that if you have rote solutions that you are being forced to write over and over and over again, you shouldn't be using an LLM to automate it, you should use a framework and get that code out of your project.
And at that point, you won't have a ton of boilerplate to write.
The two sides to this I see online are between the people who think we need a way to automate boilerplate and setup code, and the people who want to eliminate boilerplate (not just the copy-paste kind, but also the "ugh, I've got to do this thing again that I've done 20 times" kind).
Ideally:
> a common set of rote solutions to isomorphic problems
Should not be a thing you have to write very often (or if it is, you should have tools that make it as quick to implement as it would be to type a prompt into an LLM). If that kind of rote repetitive problem solving is a huge part of your job, then to borrow your phrasing: the language or the tools you're using have let you down.