Coming at this from a computer-science or PLT perspective, this idea of an "abstract, repeatable meta-boilerplate" is exactly the payoff we expect from language features like strong type systems. Part of the point of rigorous languages is to create these kinds of patterns. You had total expressiveness back in assembly language! Repeatable rigor is most of the point of modern languages.
Not everyone is just cranking out hacked-together MVPs for startups.
Do you not realize there are many, many other fields and domains of programming?
Not everyone has the same use case as you.
That's what libraries and frameworks are here for, and it's why no experienced engineer considers this an issue. What's truly important is the business logic: you find a set of libraries that covers the common use cases, and you write the rest. Sometimes you're in a novel space that doesn't have libraries (a new programming language, say), but even then you have specs and a reference implementation to help you out.
The actual boilerplate is when you have to write code twice because the language's ecosystem doesn't have good macros à la Lisp, so you can't invent some metastuff for the problem at hand (think writing routers for Express.js).
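To make that concrete, here's a rough sketch of the Express.js repetition I mean, and the table-driven workaround you end up writing in place of a real macro (the routes and handlers below are invented purely for illustration):

```typescript
import express from "express";

const app = express();

// Without macros, every route repeats the same wrap-and-respond boilerplate:
//   app.get("/users", wrap(listUsers));
//   app.get("/orders", wrap(listOrders));
//   ...and so on, once per resource.

// So you fake the "metastuff" with a table and a loop instead.
// The handlers here are placeholders, not a real API.
const routes: Array<[string, (req: express.Request) => Promise<unknown>]> = [
  ["/users", async () => ({ users: [] })],
  ["/orders", async () => ({ orders: [] })],
  ["/invoices", async () => ({ invoices: [] })],
];

for (const [path, handler] of routes) {
  app.get(path, async (req, res, next) => {
    try {
      res.json(await handler(req)); // shared response/error handling, written once
    } catch (err) {
      next(err);
    }
  });
}
```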
There are obviously still things it can’t do. But the gap between “I haven’t been able to get a tool to work” and “you’re wrong about the tool being useful” is large.
Now here’s the fun part: In a really restrictive enterprise environment where you’ve got unit tests with 85% code coverage requirements, linters and static typing, these AI programming assistants actually perform even better than they do when given a more “greenfield” MVP-ish assignment with lots of room for misinterpretation. The constant “slamming into guardrails” keeps them from hallucinating and causes them to correct themselves when they do.
The more annoying boxes your job makes you tick, the more parts of the process that make you go “ugh, right, that”, the more AI programming assistants can help you.
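For reference, one concrete version of those guardrails is a hard coverage gate in CI. A minimal sketch, assuming Jest and reusing the 85% figure from above:

```typescript
// jest.config.ts - fail the build when coverage dips below the required threshold
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    // Mirrors the 85% requirement mentioned above; the numbers are illustrative.
    global: { lines: 85, statements: 85, branches: 85, functions: 85 },
  },
};

export default config;
```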
I bit the bullet last week and tried to force myself to use a solution built end to end by AI. By the time I’d finished asking it to make changes (about 25 in total), I would’ve had a much nicer time doing it myself.
The thing in question was admittedly only partially specified. It was a YAML-based testing tool for running some scenarios involving load tests before and after injecting faults into the application. I gave it the YAML schema up front, and it did a sensible job as a first pass. But then I was in the position of reading what it wrote, spotting implicit requirements I hadn't specified, and asking for those.
Had I written it myself from the start, those implicit requirements would’ve been more natural to think about while iterating on the tool. But in this workflow I just couldn’t get into a flow state - the process felt very unnatural, not unlike asking a junior to do it and then going through 25 rounds of code review. That has always been a miserable task, one it’s difficult to force oneself to stay engaged with. By the end I was much happier making manual tweaks, and I wished I’d written it myself from the start.
If one copy-pastes a routine to make a modified version (that’s actually used), code coverage goes UP. Sounds like a win-win for many…
Later, someone consolidates the two near identical routines during a proper refactoring. They can even add unit tests. Guess what? Code coverage goes DOWN!
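With some made-up numbers (not from the thread), the arithmetic behind that looks like this:

```typescript
// Hypothetical codebase: 1000 lines, 850 of them exercised by tests.
const coverage = (covered: number, total: number) =>
  `${((100 * covered) / total).toFixed(1)}%`;

console.log(coverage(850, 1000)); // "85.0%" - baseline
console.log(coverage(900, 1050)); // "85.7%" - after copy-pasting a fully exercised 50-line routine
console.log(coverage(850, 1000)); // "85.0%" - after the refactor deletes the duplicate: the number "drops"
```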
Sure, having untested un-executed code is a truly horrible thing. But focusing on coverage can be worse…
There will always be some people who resist to the bitter end, but I expect them to be few and far between.
And if we accept that inevitability, it becomes a self-fulfilling prophecy. The fact that some people _want_ us to give in is a reason to keep resisting.
“Isomorphic” is a word that describes a mapping (or a transformation) that preserves the structure we believe to be important, and that can be reversed without losing it.
The word you’re looking for is probably “similar” not “isomorphic”. It sure as hell doesn’t sound as fancy though.
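For what it’s worth, here’s a toy sketch of what “isomorphic” actually commits you to (the types and conversions are invented purely for illustration):

```typescript
// Two representations of the same information.
type Celsius = { celsius: number };
type Fahrenheit = { fahrenheit: number };

// An isomorphism is a pair of maps whose round trips are the identity,
// so nothing we care about is lost going in either direction.
const toF = (c: Celsius): Fahrenheit => ({ fahrenheit: c.celsius * 9 / 5 + 32 });
const toC = (f: Fahrenheit): Celsius => ({ celsius: (f.fahrenheit - 32) * 5 / 9 });

console.log(toC(toF({ celsius: 20 }))); // { celsius: 20 }
```

Merely “similar” problems make no such round-trip promise.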
What is with this new paradigm where we act like everything is easily measurable and every measure is perfectly aligned with what we want to measure? We know these things aren't possible, and it doesn't take much thought to verify it. Do you believe you're smart enough to test for every possible issue? No one is. If anyone could, there'd be no CVEs, and we'd have solved all of physics centuries ago.
Assuming something like "a REST endpoint which takes a few request parameters, makes a DB query, and returns the response" fits what you're describing, you can absolutely copy/paste a similar endpoint, change the parameters and the database query, and rename a couple variables—all of which takes a matter of moments.
Naturally code that is being copy-pasted wholesale with few changes is ripe to be abstracted away, but patterns are still going to show up no matter what.
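As a rough sketch (the endpoint names, queries, and pg/Express plumbing here are my own assumptions, not from the thread), that copy-tweak-rename move looks something like this:

```typescript
import express from "express";
import { Pool } from "pg";

const app = express();
const db = new Pool();

// The endpoint you copy from...
app.get("/orders", async (req, res) => {
  const { customerId } = req.query;
  const { rows } = await db.query(
    "SELECT id, total FROM orders WHERE customer_id = $1",
    [customerId]
  );
  res.json(rows);
});

// ...and the near-identical copy: change the parameters, swap the query,
// rename a couple of variables, and you're done in a matter of moments.
app.get("/invoices", async (req, res) => {
  const { customerId, status } = req.query;
  const { rows } = await db.query(
    "SELECT id, amount FROM invoices WHERE customer_id = $1 AND status = $2",
    [customerId, status]
  );
  res.json(rows);
});
```

Once a third or fourth copy shows up, that's usually the cue to pull the shared shape into a helper, i.e. the "ripe to be abstracted away" part.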
If the AI agent future is so inevitable, then why do people waste so much oxygen insisting upon its inevitability? Just wait for it in silence. It certainly isn't here yet.
It'll even write basic unit tests for your CRUD API while it's at it.
I guess in this case the morphism is the similar (or identical) prompt used to generate f, g, h, and j.
And the less instantly I can write it, the more petty nuances there are to deal with—things like non-trivial validation, a new database query function, a header that I need to access—the more ways an LLM will get it subtly wrong.
If I treat it as more than a fancy autocomplete, I have to spend all my time cleaning up after it. And if I do treat it as fancy autocomplete, it doesn't save that much time over judicious copy/pasting.
That's not what I see the parent comment saying. They're not saying that LLMs can't use frameworks, they're saying that if you have rote solutions that you are being forced to write over and over and over again, you shouldn't be using an LLM to automate it, you should use a framework and get that code out of your project.
And at that point, you won't have a ton of boilerplate to write.
The two sides I see online are the people who think we need a way to automate boilerplate and setup code, and the people who want to eliminate boilerplate altogether (not just the copy-paste kind, but also the "ugh, I've got to do this thing again that I've done 20 times" kind).
Ideally:
> a common set of rote solutions to isomorphic problems
Should not be a thing you have to write very often (or if it is, you should have tools that make it as quick to implement as it would be to type a prompt into an LLM). If that kind of rote repetitive problem solving is a huge part of your job, then to borrow your phrasing: the language or the tools you're using have let you down.