The million dollar (perhaps literally) question is – could @kentonv have written this library quicker by himself without any AI help?
I *think* the answer to this is clearly no: or at least, given what we can accomplish today with the tools we have now, and that we are still collectively learning how to use them effectively, there's no way it won't be faster (with effective use) to fully code new solutions with AI in another 3-6 months. I think it requires a lot of work: well-documented, well-structured codebases with fast built-in feedback loops (good linting, unit tests, etc.), but we're heading there now.
My problem is that (in my experience anyways) this is slower than me just writing the code myself. That's why AI is not a useful tool right now. They only get it right sometimes so it winds up being easier to just do it yourself in the first place. As the saying goes: bad help is worse than no help at all, and AI is bad help right now.
If a robot assembles cars at lightning speed... but occasionally misaligns a bolt, and your only safeguard is a visual inspection afterward, some defects will roll off the assembly line. Human coders prevent many bugs by thinking during assembly.
But what if you only need 2 kentonv's instead of 20 at the end? Do you assume we'll find enough new tasks that will occupy the other 18? I think that's the question.
And the author is implementing a fairly technical project in this case. How about routine LoB app development?
Nobody is claiming that humans won't have jobs simply because "we have accomplished everything there is to do". It's that humans will offer zero economic value compared to AI because AI gets so good and so cheap.
If there is some magic $10k AI that can fully replace a $200k software engineer then I'd love to see it. Until that happens this entire discussion is science fiction.
This is likely where all this will end up. I have doubts that AI will replace all engineers, but I have no doubt in my mind that we'll certainly need a lot fewer engineers.
A not so dissimilar thing happened in the sysadmin world (my career) when everything transitioned from ClickOps to the cloud & Infrastructure as Code. Infrastructure that needed 10 sysadmins to manage now only needed 1 or 2 infrastructure folks.
The role still exists, but the quantity needed is drastically reduced. The work that I do now by myself would have needed an entire team before AWS/Ansible/Terraform, etc.
The theory of enshittification says that the "business person pressing a few buttons" approach will be pursued, even if it lowers quality, to save costs, at least until that approach undermines quality so much that it undermines the business model. However, nobody knows how much quality-tradeoff tolerance there is to mine.
Assuming you want a strong mental model of what the code does and how it works (which you'd use in conversations with stakeholders and architecture discussions for example), writing the code manually, with perhaps minor completion-like AI assistance, may be the optimal approach.
IMHO more rigorous test automation (including fuzzing and related techniques) is needed. Actually that holds whether AI is involved or not, but probably more so if it is.
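A minimal sketch of the kind of check I mean, in plain TypeScript with no particular test framework assumed (the URI round-trip property is only an illustration, not anything from the library under discussion): hammer the code with generated inputs and assert an invariant, instead of trusting a handful of hand-written cases.

```typescript
// Minimal fuzz/property-style test: generate random strings and check a
// round-trip invariant, rather than a few hand-picked examples.
function randomString(maxLen: number): string {
  const len = Math.floor(Math.random() * maxLen);
  let s = "";
  for (let i = 0; i < len; i++) {
    // Random code points below the surrogate range, so every string is valid.
    const cp = Math.floor(Math.random() * 0xd800);
    s += String.fromCodePoint(cp);
  }
  return s;
}

for (let i = 0; i < 10_000; i++) {
  const input = randomString(64);
  // Invariant: encoding then decoding must return the original input.
  const roundTripped = decodeURIComponent(encodeURIComponent(input));
  if (roundTripped !== input) {
    throw new Error(`Round-trip failed for input: ${JSON.stringify(input)}`);
  }
}
console.log("10,000 random inputs round-tripped cleanly");
```

The same pattern scales up to the interesting cases (parsers, token handling, state machines), which is exactly where AI-generated code most needs a safety net.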
How much experience do you have writing code vs how much experience do you have prompting using AI though? You have to factor in that these tools are new and everybody is still figuring out how to use them effectively.
You acting like those two scenarios are the same is disingenuous. Fuck that.
And where are we supposed to get experienced engineers if we replace all Jr Devs with AI? There is a ton of benefit from the drudgery of writing classes, even if it seems like grunt work at the time.
It may not manifest as job loss yet, but the market response to changes is a whole other thing. For one, it's likely to first manifest as slowing down hiring relative to amount of projects being started and then released. Software is a growing market after all.
I estimate it would have taken a few weeks, maybe months to write by hand.
That said, this is a pretty ideal use case: implementing a well-known standard on a well-known platform with a clear API spec.
In my attempts to make changes to the Workers Runtime itself using AI, I've generally not felt like it saved much time. Though, people who don't know the codebase as well as I do have reported it helped them a lot.
I have found AI incredibly useful when I jump into other people's complex codebases, that I'm not familiar with. I now feel like I'm comfortable doing that, since AI can help me find my way around very quickly, whereas previously I generally shied away from jumping in and would instead try to get someone on the team to make whatever change I needed.
My problem, I guess, is that maybe this is just Dunning-Kruger-esque. When you don't know what you don't know, you get the impression it's smart. When you do, you think it's rubbish.
Like when you see a media report on a subject you know about and you see it's inaccurate but then somehow still trust the media on a subject you're a non-expert on.
Before the end of zero interest rate policy, all the sysadmins I knew who made the transition to devops were never stuck looking for a job for long.
I'm far from an AI true believer but come on -- human coders write bugs, tons and tons of bugs. According to Peopleware, software has "an average defect density of one to three defects per hundred lines of code"!
But if the time it takes an engineer to build any one thing goes down, now there are a lot more things that are cost effective.
Consider niche use cases. Every company tends to have custom processes and workflows. Think about being an accountant at one company vs. another -- while a lot of the job is the same, there will always be parts that are significantly different. Those bespoke processes often involve manual labor because off-the-shelf accounting software cannot add custom features for every company.
But what if it could? What if an engineer working with AI could knock out customer-specific features 10x as fast as they could in the past. Now it actually makes sense to build those features, to improve the productivity of each company's accounting department.
It's hard to say if demand for engineers will go down or up. I'm not pretending to know for sure. But I can see a possibility that we actually have way more developers in coming years!
With AI, there's no real expertise involved in saying "well, it was very stupid 5 years ago, now it's starting to seem smart, if we extrapolate it's going to be smarter than me in 5 years." But no one really knows what level of effort is required to make it smarter than me. No one is an expert in something that doesn't exist yet.
(Though I think it's true of engineering too. We all have our own weird team-specific processes for code reviews and CI and deployments which could probably use better automation.)
But even where lots of customization exists today (such as in engineering!), more is always possible. It's always just a question of whether the automation saves as much time as it took to build. If the automations can be built faster, then it makes sense to build more of them.
Why? Inevitably, I changed positions / jobs / platforms, and all that effort was lost / inapplicable, and I had to relearn to use the stock settings anyway.
Now, I understand that some companies have different setups, but it might just make more sense to change the company's accounting procedures (if possible) to conform to most accounting software defaults, rather than invest heavily in modifying the setup, unless you're a huge conglomerate and can keep people on staff. Why? Because someone, somewhere will have to maintain those changes. Sure, you can then hire someone else to update those changes - but guess what? Most likely, unless they open-source their changes, no LLM will have seen those changes, and even if they are allowed to fine-tune on it, they'll have seen exactly ONE instance of these changes. Odds they'll get everything right, AND the person using the LLM will recognize when it doesn't go right? Oh right, they invested in hundreds of unit tests to ensure everything works as expected even with changes, and I'm the tooth fairy..
That's definitely an interesting area, but I think we'll actually see (maybe) individual employees solving some of these problems on their own without involving IT/the dev team.
We kind of see it already - a lot of these problem spaces are being solved with complex Excel workflows, crappy Access databases, etc. because the team needed their problem solved now, and resources couldn't be given to them.
Maybe AI is the answer to that so that instead of building a house of cards on Excel, these non-tech teams can have something a little more robust.
It's interesting you mentioned accounting, because that's the one department/area I see taking off and running with it the most. They are already the department that's effectively programming already with Excel workflows & DSLs in whatever ERP du jour.
So it doesn't necessarily open up more dev jobs, but maybe it fulfills the old mantra of "everyone will become a programmer," and we see more advanced computing become a commodity thanks to AI - much like everyone can click their way through an office suite with little experience or training, everyone will be able to use AI to automate large chunks of their job or departmental processes.
I don't actually think this is going to take the form of LLMs implementing custom patches to off-the-shelf software. I think instead it's going to look like LLMs writing code that uses APIs offered by off-the-shelf software to script specific workflows.
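For illustration only, here's roughly the shape of glue code I mean, sketched in TypeScript: a small script against a vendor's REST API that automates one company-specific rule. Every URL, field name, and threshold below is hypothetical, not any real product's API.

```typescript
// Hypothetical glue script: pull unpaid invoices from an off-the-shelf
// accounting product's REST API and flag the ones that breach a
// company-specific 45-day payment policy. All endpoints and fields are made up.
interface Invoice {
  id: string;
  customer: string;
  issuedAt: string; // ISO date
  paid: boolean;
}

const BASE_URL = "https://accounting.example.com/api/v1"; // hypothetical vendor API
const API_TOKEN = process.env.ACCOUNTING_API_TOKEN ?? "";

async function fetchUnpaidInvoices(): Promise<Invoice[]> {
  const res = await fetch(`${BASE_URL}/invoices?status=unpaid`, {
    headers: { Authorization: `Bearer ${API_TOKEN}` },
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return (await res.json()) as Invoice[];
}

async function main() {
  const invoices = await fetchUnpaidInvoices();
  const now = Date.now();
  const overdue = invoices.filter((inv) => {
    const ageDays = (now - Date.parse(inv.issuedAt)) / 86_400_000;
    return ageDays > 45; // company-specific policy, not a product feature
  });
  for (const inv of overdue) {
    console.log(`Invoice ${inv.id} for ${inv.customer} is over 45 days old`);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The off-the-shelf product stays untouched; the customization lives in a few dozen lines of script that an LLM can plausibly write and rewrite on demand.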
I agree, but in my book, those employees are now developers. And so by that definition, there will be a lot more developers.
Will we see more or fewer people whose primary job is software development? That's harder to answer. I do think we'll see a lot more consultant-type roles, with experienced software developers helping other people write their own personal automations.
I’m going to take a very close look at your code base :)
[0] https://github.com/colibri-hq/colibri/blob/next/packages/oau...
Why keep legacy structures, with luxuries like POs or PMs, if AI becomes as powerful as you say - it'll just be 'one-man startups', for better or worse.
Any empire-building VP should probably fear the wishful AI future they're praying for!
I don't think this is a fair assessment, given that the summary of the commit history https://pastebin.com/bG0j2ube shows your work started on 2025-02-27 and started trailing off at 2025-03-20 as others joined in. Minor changes continue to the present.
> That said, this is a pretty ideal use case: implementing a well-known standard on a well-known platform with a clear API spec.
Still, this allowed you to complete in a month what may have taken two. That's a remarkable feat considering the time and value of someone of your caliber.
What's open source for if not allowing 2 developers to achieve projects that previously would have taken 20?
Gell-Mann Amnesia https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect
Would someone of the author's caliber even be working on a trivial slog item like an OAuth2 implementation, if not for the novel development approach he wanted to attempt here?
For the kind of regular jobs an engineer is typically expected to do, would it give a 100% productivity jump?
Will AI be able to translate all that into Rust?
It’s an exaggeration I know, but you get the point.
This OAuth library is a core component of the Workers Remote MCP framework, which we managed to ship the day before the Remote MCP standard dropped.
And because we were there and ready for customers right at the beginning, a whole lot of people ended up building their MCP servers on us, including some big names:
https://blog.cloudflare.com/mcp-demo-day/
(Also if I had spent a month on this instead of a few days, that would be a month I wasn't spending on other things, and I have kind of a lot to do...)
What other tools could do that?
This library is not the only thing I was working on, nor even the main thing. As the lead engineer of Cloudflare Workers I have quite a few other things demanding my time.
When you are not introducing a new pattern in the code structure, it's mostly copy-paste and then edit.
But it's also extremely rare, so it's a pretty high bar to be able to benefit from tools like AI.
There is a middle ground: software engineers being kicked out because now some business person can hand over the task of building the entire OAuth infrastructure to a single inexperienced developer with a Claude account.
In my experience, the only times LLMs slow down your task are when you don't use them effectively. For example, if you provide barely any context or feedback and you prompt an LLM to write you the world, of course it will output unusable results, primarily because it will be forced to interpolate and extrapolate through the missing context.
If you take the time to learn how to gently prompt an LLM into doing what you need, you'll find it makes you far more productive.
LLMs don't change that. If a business does not have the budget for a software engineer, LLMs won't make up budget headroom for it either. What LLMs do is allow engineers to iterate faster, and work on more tasks. This means fewer jobs.
Software is often not the bottleneck. If instead of 10 engineers you just need the one, the company will shed the headcount it doesn't need. This might mean, for example, that a team of 10 developers and one software testing engineer changes shape, perhaps adding testers while letting go of half the developers.
Type systems, LSPs, tests, formatters, Rust’s borrow checker, logs and traces, source control are examples of things that make experts go faster. This space is hardly neglected (but could always be better).
It is really nice to see LLMs helping on all skill levels.
I think that the skills required are highly overblown.
The user should be aware of what each model excels at, its context size, temperature, and other parameters; how to communicate well, set system prompts and phrase tasks in a clear, succinct yet informative way; how to refocus the session when it veers off track; keep up to date with the latest (<~6mo) concepts and tooling, and so on.
All of this is trivial for a competent software engineer. The idea that it requires some specialized training that couldn't be attained by experimentation and reading a blog post is absurd. "Prompt engineering" just isn't a thing.
Your analysis is far too superficial to extract anything meaningful. I know for a fact that I have small projects that took me only a couple of days to get done which have a commit history spanning a few months. Also, software is never done. There's always room to refactor, and LLMs turn that into trivial problems. Lastly, is your project still under development if your commits are README updates, linter runs, and renaming variables?
There is a reason why commit history is not used to track productivity.
I see your point. Indeed there are two completely different points of view regarding the output of LLMs:
* Hey, I managed to vibecode my way into a fully working web service with a React SPA after a couple of prompts, and a full automated test suite to boot.
* This project is nowhere as clean as I would have written it, and doesn't even follow my pet coding conventions.
One side lauds LLMs, the other complains they output mainly crap.
The truth of the matter is that the vast majority of software engineers write crap code, as the definition of "crap code" is "something I would have done differently". Opinionated engineers look at the output of LLMs and accuse it of being crap code. Eppur si muove.
In software at least; but if you're involved in hardware, the good thing is that AI can't just replace you outright.
Code I know nothing about? AI is very helpful there
There was another article posted somewhere that made a parallel between the AI hype and no-code, outsourcing and other waves that have come.
I think these discussions need to start from another point. The techniques changed radically, and so did the way problems are tackled. It's not that a software engineer is/was unable to deliver a project with/without LLMs. That's a red herring. The key aspects are things like the overall quality of the work being delivered vs how much time it took to reach that level of quality.
For example, one of the primary ways an LLM is used is not to write code at all: it's to explain to you what you are looking at. Whether it's used as a Google substitute or a rubber duck, developers are able to reason about existing projects and even explore approaches and strategies to tackle problems like they were never able to do before. You no longer need to book meetings with a principal engineer to ask questions: you just drop a line in Copilot Chat and ask away.
Another critical aspect is that LLMs help you explore options faster, and iterate over them. This allows you to figure out what approach works best for your scenario and adapt to emerging requirements without having to even chat with anyone. This means that, within the timeframe you would deliver the first iteration of a MVP, you can very easily deliver a much more stable project.
We assume quite a bit about the challenge when we say it's getting a feature out.
It’s sort of like saying we can sprint faster with these tools, when the race is a marathon.
Or a better example is Coke vs Pepsi.
How do LLMs impact long-term project, firm, and process viability?
Not a problem. The industry has evolved to tolerate buggy code that barely works. In fact, in some circles that's what's already expected as the baseline. LLMs change nothing in this regard. In fact, they arguably improve upon this problem as it becomes trivial to implement extensive automated test suites.
> What if subtle bugs are introduced that the inexperienced developer didn't catch until it went out into production?
That's what is happening in the real world without LLMs entering the picture.
Nevertheless, I don't think they are trying to frame it that way, either. The point is that making software development easier can actually increase the demand for software engineers in some cases (where projects that were previously not considered due to budget constraints are now feasible).
I've seen firsthand what happens to large software projects that collapse under their own weight of tech debt. The software literally could not function as intended - customers were lost, the product went under. Low quality being "expected" (which isn't true in my experience, either) is irrelevant when the software doesn't work at all.
The chances of all of that happening are a lot higher with a lone inexperienced engineer at the wheel. You still need experienced engineers to maintain your software, period.
> That's what is happening in the real world without LLMs entering the picture.
The difference is that most firms have experienced software engineers to fix those defects.
I think you have multiple offers of that very AI dangling in front of you, but you might be refusing to acknowledge them. One of the problems is the way you opt to frame the issue. Does "replacing" mean firing the guy and hoping to replace him with a Slack webhook? Or does it mean your team decides they don't need the same headcount of mid-level/senior engineers because a team of junior engineers mentored by someone focusing on quality ends up being more productive?
You might seek comfort in your conspiracy theories, but back in the real world the likes of me were already quite capable of creating complete and fully working projects from scratch using yesterday's LLMs.
We are talking about afternoons where you grab your coffee, saying to yourself "let's see what this vibecode thing is all about", and challenging yourself to create projects from scratch using nothing but a definition of done, LLM prompts, and a free-tier LLM configured to run in agent mode.
What, then?
You can then proceed to nitpick about code quality and bugs, but I can also say the same thing about your work, which takes you far longer to deliver.
You did. You explicitly asserted the following.
> If a business has the budget for 1 or 2 engineers though, they might be able to task them with work that previously required 5-10 engineers (...).
In your own words, a project that would take 5-10 engineers is now feasible to be tackled with 1 or 2. Your own words.
> (...) The point is that making software development easier can actually increase the demand of software engineers in some cases (...)
I think that's somewhere between unrealistic and wishful thinking. Even in your problem statement, "making software development easier" lowers demand. Even if you argue that some positions might open where none existed before, the truth of the matter is that at the core of your scenario lies a drop in demand for software engineers. Shops who currently employ engineers won't need to retain as many to maintain their current level of productivity.
This makes sense. Are there codebases where you find this doesn't work as well, either because of the minimum context the codebase requires or because its code patterns aren't in the training data?
That statement != lower demand for software engineers.
If a firm needs to perform project X that previously cost 10 engineers to do, but they only have the budget for 2, they will not tackle that project. Engineers used = 0.
However, if due to productivity enhancements with AI, the project can now be done with just 2 engineers, the company can now afford to tackle the project. Engineers used = 2.
That is the point that the person you were originally replying to was making.
> Even in your problem statement, "making software development easier" lowers demand.
Incorrect, as shown above.
> Even if you argue that some positions might open where none existed before, the truth of the matter is that at the core of your scenario lies a drop in demand for software engineers.
I see what you are trying to say, but it's not that clear cut. The fact is, no one knows what will actually happen to software engineering demand in the long run. Some scenarios will increase demand for engineers, others will decrease it. No one knows what the net demand will be, everyone is only guessing at this point.
Why would a human review the code in a few years when AI is far better than the average senior developer? Wouldn't that be as stupid as a human reviewing stockfish's moves in Chess?
Yep, fully agree. We're going through this ourselves at $CURRENT_JOB, where the instability of the platform and product as a whole due to the immensely bad decisions made in the project's past is leading to massive churn from every single customer other than the smallest ones that make us no money anyway.
And it's not just the customers; the devs are feeling it too. There are constant fires and breakages all over the place because management doesn't care to give us any time to focus on quality, and people (myself included) are getting tired of having to read through some 10kLOC monstrosity that not even God Himself could understand, and it's made worse by the clueless management saying "Have you tried having AI find the bugs for you?" like a bunch of brainless sheep being injected with that sweet ol' VC hype machine.
Sure, people will put up with some bugs from time to time, and I'm not even saying I could've or do make perfect choices as well. But there's only so many times people will put up with a broken experience before they cut ties and quit, and in this vibe-coded hallucination world we're entering, are people really going to be okay with the products they use day-in, day-out changing behavior drastically every single day based on whatever the AI decided to hallucinate this time around to "fix" that 1 persistent bug that can't seem to die?
But what has happened instead is that we are now building many more buildings, and much more complex ones, than we ever would have even conceived of back then. The Three Gorges dam required the work of thousands or even tens of thousands of people when it was built, and it would have required the work of millions in the year 1000. But it didn't actually generate millions of jobs in the year 1000: it was in fact never even conceived of as a possibility, much less attempted.
Of course, the opposite can also happen. The number of carpenters has dwindled to almost nothing, when it used to be a major profession, and there are many other professions that have entirely disappeared.
Zero on that project, but those 2 engineers will still be used on a different project that needs just 2 engineers.
BUT a company that sees that project as a critical part of the business and MUST tackle it will only need the 2 engineers on the payroll. Or hire just 2 instead of 10.
Engineers not hired = 8
Or... maybe they don't really need that project that requires 10 engineers. They are OK as they are today, but they realize that with AI they don't need those 2 engineers anymore to produce the same output; it can probably be handled by just one with AI assistance.
Engineers fired = 1
The million dollar question is, what are the unintended, unpredicted consequences of developing this way?
If AI allows me to write code 10x faster, I might end up with 10x more code. Has our ability to review it gotten equally fast? Will the number of bugs multiply? Will there be new classes of bugs? Will we now hire 1 person where we hired 5 before? If that happens, will the 1 person leaving the company become a disaster? How will hiring work (cuz we have such a stellar track record at that...)? Will the changing economics of creating software now make SaaS no longer viable? Or will it make traditional commercial software companies no longer viable? Will the entire global economy change, the way it did with the rise of the first tech industry? Are we seeing a rebirth?
We won't know for sure what the consequences are for a while. But there will be consequences.
> Another critical aspect is that LLMs help you explore options faster, and iterate over them. This allows you to figure out what approach works best for your scenario and adapt to emerging requirements without having to even chat with anyone. This means that, within the timeframe you would deliver the first iteration of a MVP, you can very easily deliver a much more stable project
This is certainly a part of it, but I do wonder: even if an LLM "learned" the conventions and preferences of an engineer and spit out "perfectly styled" code, would it be treated as such? I'd wager (a small amount) that it wouldn't, because part of enjoying the code - for me - is _knowing_ the code. "I wrote it this way because I tried X, then Y, then saw I could do Z, and now I'm familiar with the code in a way that's more intimate." Unfamiliar code rarely looks _really good_, in my opinion.
Not necessarily. The reality is, whatever some people can do individually, if they team up, they can do more together. The teams and small startups will remain for now, and so will big companies.
I do imagine however that the internal structure will change. As the AI gets better and able to do more independently, people will shift from pair programming to more of a PM role (this is happening now), and this I imagine will quickly collapse further.
Even today, LLMs seem more suited for project management than for doing the actual coding - it's just the space in between that's the problem. I.e. LLMs can code great in the small, and can break down work very well, but keeping the changes consistent and following the plan is where they still struggle. As that gap closes, I'm not really sure what the team composition would look like. But I don't doubt there'd still be teams.
It took time as different firms adapted to adopt computer technologies in their various business needs and workflows. It's hard to precisely predict how labor roles will change with each revolutionary technology.
https://github.com/jdbohrman-tech/hermetic-mls https://github.com/jdbohrman-tech/roselite
I think it's funny that Roselite caused a huge meltdown among the Veilid team simply because they are weirdly adamant about not using AI assistance. They even called it "plagiarism".
Maybe it's because (and I'm quoting that article) "it is still lacking" in what it should have that you managed to accomplish this task in "a few days" instead of "a few weeks, maybe months".
Maybe the bottleneck was not your typing speed, but the [specific knowledge] needed to build that system. If you know something well enough, you can build it much faster, like rebuilding something from scratch: you're quicker because you already know the paths. In which case, my question would be: would you not have written this just as fast, or at least more securely and more soundly, if you had acquired complete knowledge of the system first?
Because contrary to LLMs, humans actually improve and learn when they do things, and they don't when they don't do things. Is not knowing the code to its full extent worth the time "gained" by using the LLM to write it?
I think it's very hard to estimate those other aspects of the thing.