So far in my experience watching small-to-medium-sized companies try to use it for real work, it has been occasionally useful for exploring APIs, odd bits of knowledge, etc., but overall it has wasted more time than it has saved. I see very few signs of progress.
The time has come for LLM users to put up or shut up: if it's so great, stop telling us and show us the code it generated on its own, and actually use it.
What else are you looking for?
If you’re selling shovels to gold miners, you don’t need to demonstrate the shovel - you just need decent marketing to convince people there’s gold in them thar hills.
What's nuts is watching all these people shill for something that we all have used to mediocre results. Obviously Fly.io benefits if people start hosting tons of slopped-together AI projects on their platform.
It's kinda sad to watch what I thought was a good company shill for AI, even if they are not directly getting money from some PR contract.
We must not be prompting hard enough....
LLMs are very useful. I use them as a better way to search the web, generate some code that I know I can debug but don’t want to write and as a way to conversationally interact with data.
The problem is the hype machine has set expectations so high and refused criticism to the point where LLMs can’t possibly measure up. This creates the divide we see here.
There's still a significant barrier to entry to get involved with blockchain and most people don't even know what it is.
LLMs, on the other hand, have a very low barrier to at least trying them: one can just go to Google, ChatGPT, etc., use them, and see their effectiveness. There's a reason why, in the last year, a significant portion of school students have started using LLMs to cheat. Blockchains still don't have that kind of utilization.
I’m open to that happening. I mean them showing me. I’m less open to the Nth “aww shucks, the very few doubters that are left at this point are about to get a rude awakening” FOMO concern trolling. I mean I guess it’s nice for me that you are so concerned about my well-being, soon to be suffering-being?
Now, AI can do a lot of things. Don’t get me wrong. It has probably written a million variations on the above sentiment.
Honestly, I think that makes the argument that it's unfortunate they jumped on even stronger.
But compared to using Kagi, I've found LLMs end up wasting more of my time by returning a superficial survey with frequent oversights and mistakes. At the final tally, I've still found it faster to just do it myself.
I will say I do love LLMs for getting a better idea of what to search for, and for picking details out of larger blocks.
There are LOADS of people who need "a program" but aren't equipped to write code or hire an SWE, and they are empowered by this. An example: last week, I saw a PM vibe-code several different applications to demo what might get built after it gets prioritized by SWEs.
In many romance languages, eulogy doesn't have the funeral connotation, only the high praise one - so the GP may be a native speaker of a romance language who didn't realize this meaning is less common in English.
I don't think this follows. Anyone can see that a 10-ton excavator is hundreds or even thousands of times more efficient than a man with a shovel. That doesn't mean you can start up a company staffed only with excavators. Firstly, you obviously need people operating the excavator. Secondly, the excavator is incredibly efficient at moving lots of dirt around, but no crew could perform any non-trivial job without all the tasks that the excavator is not good at - planning, loading/unloading, prepping the site, fine work (shovelling dirt around pipes and wires), etc.
AI is a tool. It will mean companies can run much leaner. This doesn't imply they can do everything a company needs to do.
I ask this because it reads like you have a specific challenge in mind when it comes to generative AI and it sounds like anything short of "proof of the unlimited powers" will fall short of being deemed "useful".
Here's the deal: Reasonable people aren't claiming this stuff is a silver bullet or a panacea. They're not even suggesting it should be used without supervision. It's useful when used by people who understand its limitations and leverage its strengths.
If you want to see how it's been used by someone who was happy with the results, and is willing to share their results, you can scroll down a few stories on the front-page and check the commit history of this project:
https://github.com/cloudflare/workers-oauth-provider/commits...
Now here's the deal: These people aren't trying to prove anything to you. They're just sharing the results of an experiment where a very talented developer used these tools to build something useful.
So let me ask you this: Can we at least agree that these tools can be of some use to talented developers?
I think what's happening is two groups using "productivity" to mean completely different things: "I can implement 5x more code changes" vs "I generate 5x more business value." Both experiences are real, but they're not the same thing.
https://peoplesgrocers.com/en/writing/ai-productivity-parado...
With no disrespect meant, if you’re unable to find utility in these tools, then you aren’t using them correctly.
It’d be like insisting LLMs will replace authors of novels. In some sense they could, but there are serious shortcomings, and things like agents etc. just don’t fix them.
Having something else write a lot of the boring code that you'll need and then you finish up the final touches, that's amazing and a huge accelerator (so they claim).
The claim is not "AI will replace us all", the claim of the parent article is "AI is a big deal and will change how we work, the same way IDEs/copy-paste/autocomplete/online documentation have radically changed our work."
With that as a metric, 1 Senior + 4 juniors cannot build the company with the scope you are describing.
A 50-eng company might have 1 CTO, 5 staff, 15 Seniors, and 29 juniors. So the proposition is you could cut the company in ~half but would still require the most-expensive aspects of running a company.
Are there any examples of businesses deploying production-ready, nontrivial code changes without a human spending a comparable (or much greater) amount of time as they’d have needed to with the existing SOTA dev tooling outside of LLMs?
That’s my interpretation of the question at hand. In my experience, LLMs have been very useful for developers who don’t know where to start on a particular task, or need to generate some trivial boilerplate code. But on nearly every occasion of the former, the code/scripts need to be heavily audited and revised by an experienced engineer before it’s ready to deploy for real.
this sort of post is the start of the next phase in the battle for mindshare
the tools are at the very best mediocre replacements for google, and the people with a vested interest in promoting them know this, so they switch to attacking critics of the approach
> It's kinda sad to watch what I thought was a good company shill for AI.
yeah, I was sad too, then I scrolled up and saw the author. double sadness.
This is true, LLMs can speed up development (some asterisks are required here, but that is generally true).
That said, I've seen, mainly here on HN, so many people hyping it up way beyond this. I've got into arguments here with people claiming it codes at "junior level". Which is an absurd level of bullshit.
He spent a large tranche of the article specifically hanging a lantern on how mediocre the output is.
> by creating an AI only company
He specifically says that you need to review the code over and over and over.
This does not counter what GP said. Using an LLM as a code assistant is not the same as "I don't need to hire developers because LLMs code in their place".
Sure, they might help you onboard into a complex codebase, but that's about it.
They help in breadth, not depth, really. And to be clear, to me that's extremely helpful, cause working on "depth" is fun and invigorating, while working on "breadth" is more often than not a slog, which I'm happy to have Claude Code write up a draft for in 15 minutes, review, do a bunch of tweaks, and be done with.
Honestly, I think part of the decline of Google Search is because it's trying to increase the amount of AI in search.
On the flip side, it has allowed me to accomplish many lower-complexity backlog projects that I just wouldn’t have even attempted before. It expands productivity on the low end.
I’ve also used it many times to take on quality-of-life tasks that just would have been skipped before (like wrapping utility scripts in a helpful, documented command-line tool).
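To give a sense of the level of thing I mean, these wrappers are usually nothing fancier than this (the script name and flags here are made up, just to show the shape):

```python
#!/usr/bin/env python3
"""Thin, documented CLI wrapper around an existing cleanup script."""
import argparse
import subprocess


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Run the nightly cleanup against a chosen environment."
    )
    parser.add_argument("environment", choices=["staging", "prod"],
                        help="which environment to clean up")
    parser.add_argument("--dry-run", action="store_true",
                        help="print the command instead of running it")
    args = parser.parse_args()

    cmd = ["./cleanup.sh", args.environment]  # hypothetical underlying script
    if args.dry_run:
        print("would run:", " ".join(cmd))
    else:
        subprocess.run(cmd, check=True)


if __name__ == "__main__":
    main()
```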
What I’m interested in really is just case studies with prompts and code - that’s a lot more interesting for hackers IMO than hype.
This is such an outlandish claim, to the point where I call it plain bullshit.
LLMs are useful in a completely different way than a junior developer is. It is an apples-and-oranges comparison.
An LLM does things in ways that help me beyond what a junior would. It is also completely useless at performing many tasks that a junior developer can.
If capabilities don’t improve, it’s not replacing anyone; if they do improve and it can write good code, people can learn from reading that.
I don’t see a pathway to improvement though given how these models work.
The 10x engineer is really good at deducing what the next most important thing to do is, and doing it quickly. This involves quickly moving past hundreds of design decisions in a week to deliver something. It requires you to think partly like a product manager and partly like a senior engineer, but that's the game, and LLMs are zero help there.
Most engineering productivity is probably locked up in this. So yes, LLMs probably help a lot, just not in the way that would show on some Jira board?
*One could claim that doing this slow work gives the brain a break to then be good at strategizing the higher order more important work. Not sure.
I recently used Claude Code to develop & merge an optimization that will save about $4,000 a month. It was relatively simple but tedious, so I probably wouldn't have done it on my own. I don't even expect most of my coworkers to notice.
(If it is then damn, I've been leaving a ton of money on the table.)
You set up a strawman (AI-only companies, agents doing everything on their own) which is irrelevant to the point the article is making. One excerpt:
> Almost nothing it spits out for me merges without edits. I’m sure there’s a skill to getting a SOTA model to one-shot a feature-plus-merge! But I don’t care. I like moving the code around and chuckling to myself while I delete all the stupid comments. I have to read the code line-by-line anyways.
I think this article is very on point, I relate with basically every paragraph. It's not a panacea, it's not a 10x improvement by any means, but it's a very meaningful improvement to both productivity (less than 2x I'd say, which would already be a ton) and fun for me. As I've mentioned in the past here
> I feel like there’s also a meaningful split of software engineers into those who primarily enjoy the process of crafting code itself, and those that primarily enjoy building stuff, treating the code more as a means to an end (even if they enjoy the process of writing code!). The former will likely not have fun with AI, and will likely be increasingly less happy with how all of this evolves over time. The latter I expect are and will mostly be elated.
which is a point the article makes too (tables), in a slightly different way.
Also, to be clear, I agree that 90% of the marketing around AI is overblown BS. But that's again beside the point, and the article is making no outlandish claims of that kind.
Overall, I hope this article (as intended) will make more people lose their dismissiveness and wake up their curiosity, as I expect the future of those is akin to that of people today saying they're "not really good at computers". It's a paradigm-shift, and it takes getting used to and productive in, as some imo smart people are mentioning even in this thread[0].
[0]: >>44164039
Also often it takes a senior dev _more_ time to _explain_ to a junior what needs to be done than it takes to do it himself. What LLMs give us is the ability to generate a feature about as fast as we can type up the instructions we would have, pre-AI, given to a junior dev.
I implemented the OAuth 2.0 protocol in 3 different languages without a 3rd-party library - the entire spec implemented by hand. This was around 2015, when many of the libraries that exist today didn't exist yet. I did this as a junior developer for multiple enterprise applications. At the end of the day it's not really that impressive.
Nobody is saying it's "unlimited powers", that's your exaggeration.
And what you're proposing about an "AI only company" seems to be based on your misunderstanding.
What this article is saying is, you need the same number of senior developers, but now each one is essentially assisted by a few junior developers virtually for free.
That's huge. But saying you want to see an "AI only company" as "proof" has nothing to do with that.
And what you're describing -- "occasionally useful for exploring apis, odd bits of knowledge etc, but overall wasted more time than it has saved" -- is exactly what the author explicitly addresses at the top:
> If you were trying and failing to use an LLM for code 6 months ago, you’re not doing what most serious LLM-assisted coders are doing. People coding with LLMs today use agents...
The entire article is about how to use LLMs effectively. What kind of "proof" do you really want, when the article explains it all awfully clearly?
I don't understand why you think "the code needs to be audited and revised" is a failure.
Nothing in the OP relies on it being possible for LLMs to build and deploy software unsupervised. It really seems like a non sequitur to me, to ask for proof of this.
Generative AI is too much of a blank canvas at the moment, and one that is always shifting. It's up to the user to find all the use cases, and even then in my experience it's just as likely to send me on a wild goose chase as it is to instantly solve my problem.
I use AI to chew through tedious work all the time. In fact, I let an agent do some work just before I checked HN to read your claim that it can't do that. Everyone at my job does the same, perhaps modulo checking HN. But there's no 'unlimited power' to show you - we're just about 30% faster than we used to be.
Well, in this case they’re busy writing articles trying to convince us, instead of proving stuff to us.
https://github.com/Atlas-Authority/mpac-ui-improved
https://moduscreate.com/blog/forum-monitoring-is-essential-b...
(Pardon how marketing keyword-stuffed the final post.)
And here's the difference between someone like me and an LLM: I can learn and retain information. If you don't understand this, you don't have a correct understanding of LLMs.
I don't know if that's what fly.io is going for here, but their competitors are explicitly leaning into that angle so it's not that implausible. Vercel is even vertically integrating the slop-to-prod pipeline with v0.
The next few paragraphs basically say "the tool runs arbitrary programs on your machine, pulls in arbitrary files, and uses that to run more arbitrary commands" and then blame you for thinking that is a bad sequence of events.
The best possible light in which I (an AI-neutral reader) can paint this rant on a hosting-company blog (why publish this?) is that 1) allowing random textbots to execute programs on your work computer is good (disagree), and 2) those chatbots do, in fact, occasionally say enough correct-ish things that they are probably worth your company paying $20+/month for your access (agree).
_People_ are getting outsized value from AI in the ways they apply it. Photographs come from the photographer, not the camera.
It is us, the users of the LLMs, that need to learn from those mistakes.
If you prompt an LLM and it makes a mistake, you have to learn not to prompt it in the same way in the future.
It takes a lot of time and experimentation to find the prompting patterns that work.
My current favorite tactic is to dump sizable amounts of example code into the models every time I use them. I find this works extremely well. I will take code that I wrote previously that accomplishes a similar task, drop that in and describe what I want it to build next.
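To make that concrete: the pattern is literally "here is working code of mine that does something similar, now do the next thing." A rough sketch of one of those prompts using the OpenAI Python client (the file path, task description, and model name are placeholders; any chat-style client works the same way):

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Code I wrote earlier that solves a problem close to the one I want solved next.
example = Path("scripts/export_users_csv.py").read_text()

prompt = (
    "Below is an existing script of mine that exports users to a CSV file.\n\n"
    + example
    + "\n\nWrite a new script in the same style and structure that exports "
      "orders instead, writing one file per calendar month."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```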
TFA makes this argument too, then later says:
> All this is to say: I write some Rust. I like it fine. If LLMs and Rust aren’t working for you, I feel you. But if that’s your whole thing, we’re not having the same argument
So reasonable people admit that the utility depends on the use case.. then at the same time say you must be an idiot if you aren’t using the tools. But.. this isn’t actually a reasonable position.
Part of the issue here may be that so many programmers have no idea what programmers do outside of their niche, and how diverse programming actually is.
The typical rebuttals of how "not everyone is doing cliche CRUD web dev" are just the beginning. The author mentions kernel dev, but then probably extrapolates to C dev in general. But that would be insane; just think about the training sets for Linux kernel dev vs. everything else.
It’s dumb to have everyone double down on polarizing simplistic pro/con camps, and it’s rare to see people even asking “what kind of work are you trying to do” before the same old pro/con arguments start flying again.
This has been my experience as well - AI coding tools are like a very persistent junior that loves reading specs and documentation. The problem for AI companies is that "automated burndown of your low-complexity backlog items" isn't a moneymaker, even though that's what we have. So they have to sell a dream that may be realized, or may not.
The benchmark project in the article is the perfect candidate for AI: well-defined requirements with precise technical terms (RFCs), little room for undefined behavior, and tons of reference implementations. This is an atypical project. I am confident an AI agent can write an HTTP/2 server, but it will also repeatedly fail to write sensible tests for human/business processes that a junior would excel at.
This article and its vocal supporters are not being reasonable at all; they make a not-so-between-the-lines separation between skeptics (who are nuts) and supporters ("My smartest friends are blowing it off," delivered in a smug "I'm smarter than my smarter friends" tone).
I mean, come on.
I’m happy to have read this, which is reason enough to publish it - but also it’s clearly generating debate so it seems like a very good thing to have published.
In a single Saturday the LLM delivered the feature to my spec, passing my initial test cases, adding more tests, etc…
I went to bed that night feeling viscerally in my bones I was pairing with and guiding a senior engineer not a junior. The feature was delivered in one day and would have taken me a week to do myself.
I think stories like the Cloudflare story are happening all over right now. Staff level engineers are testing hypotheses and being surprised at the results.
OAuth 2.0 doesn’t really matter. If you can guide the model and clearly express requirements, boundaries, and context, then it’s likely to be very useful and valuable in its current form.
Imagine a senior IC staffed with 4 juniors, and they spend 2 hours with each every day. Then the junior is left with 6 hours to think through what they were taught/told. This is very similar to LLM development except instead of context switching 3 times each day, the senior can skip over the 6 hours of independent time the junior required to absorb the changes. But it still takes the same amount of time to deliver the 4 separate projects.
I find the existence of LLM development deeply troubling for a long list of reasons. But refuting the claim that an LLM is similar in many ways to a junior dev is unsubstantiated
>It is also completely useless at performing many tasks that a junior developer can.
And there are many things one junior could be helpful with that a different junior would be useless at.
Are you saying the CEO of Anthropic isn't reasonable? or Klarna?
There's certainly a lot of code that needs to be written in companies that is simple and straightforward and where LLMs are absolutely capable of generating code as good as your average junior/intermediate developer would have written.
And of course there are higher complexity tasks where the LLM will completely face plant.
So the smart company chooses carefully where to apply the LLM and possibly does get 5x more code that is "better" in the sense that 5x more straightforward tickets get closed/shipped, which is better than having fewer tickets closed/shipped.
However, the expansion in scope that senior developers can tackle now will take away work that would ordinarily be given to juniors.
Why would we do this? Wouldn’t it be better to do this silently and reap the benefits?
Maybe you just have that dream job where you only have to think hard thoughts. But that's just not the norm, even at a bleeding edge startup.
Surely you can see how insanely biased all of their statements would be. They are literally selling the shovels in this gold rush.
Anything they say will be in service of promoting AI, even the bad/cautionary stuff because they know there's an audience who will take it the other way (or will choose to jump in to not be left behind), and also news is news, it keeps people talking about AI.
b) Even if “everyone is using it” it doesn’t mean it is useful. The usage could be adequately explained by e.g. marketing, being forced on them by management/policy, etc. Not everything with high usage is useful. I can e.g. quickly think of chewing gum (which is also used by a lot of developers), or the ANSI standard keyboard (as opposed to the ISO standard keyboard).
A big part of my skepticism is this offloading of responsibility: you can use an AI tool to write large quantities of shitty code and make yourself look superficially productive at the cost of the reviewer. I don't want to review 13 PRs, all of which are secretly AI but pretend to be junior dev output, none of which solve any of the most pressing business problems because they're just pointless noise from the bowels of our backlog, and have that be my day's work.
Such gatekeeping is a distraction from my actual job, which is to turn vague problem descriptions into an actionable spec by wrangling with the business and doing research, and then fix them. The wrangling sees a 0% boost from AI, the research is only sped up slightly, and yeah, maybe the "fixing problems" part of the job will be faster! That's only a fraction of the average day for me, though. If an LLM makes the code I need to review worse, or if it makes people spend time on the kind of busywork that ended up 500 items down in our backlog instead of looking for more impactful tasks, then it's a net negative.
I think what you're missing is the risk, real or imagined, of AI generating 5x more code changes that have overall negative business value. Code's a liability. Changes to it are a risk.
Thus, I find LLMs quite useful when trying to find info on niches that are close to a very popular topic, but different in some key way that's hard to express in search terms that won't get ignored.
In decades of programming I’ve written very little tedious code, but that’s as much about the projects I’ve worked on as the approach I use.
There are zero "safe" tools where you don't control the inputs.
"Python, create an xarray with two dimensions from a pandas df"
It gave me a few lines of example code which was enough for me to figure out where I had messed up the syntax in my own code.
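In case it helps anyone else, the fix boils down to something like this (column names made up; the part I had wrong was setting the two-dimension index before converting):

```python
import pandas as pd

# A flat DataFrame with two coordinate columns and one value column.
df = pd.DataFrame({
    "time": ["2024-01", "2024-01", "2024-02", "2024-02"],
    "station": ["A", "B", "A", "B"],
    "temp": [3.1, 4.2, 2.8, 5.0],
})

# Index by the two dimensions first, then convert: pandas builds a 2-D
# xarray.DataArray with dims ("time", "station") from the MultiIndex.
da = df.set_index(["time", "station"])["temp"].to_xarray()
print(da.dims)  # ('time', 'station')
```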
I have seen one of my junior coworkers copy+paste entire chunks of code from chatbot conversations, and to be honest, what he has produced is underwhelming. The code is poorly structured and difficult to reason about. I have low confidence he understands what the bot has produced (and why it did things the way it did), and I don't have high confidence we'd be able to trust the accuracy of the figures this code was outputting.
> The excitement and enthusiasm of Gold Washing still continues—increases. (1848)
This is not how Juniors work. I don't know what else to say. It is just not true.
I don't give juniors a prompt and let them implement code for a few hours. They work like any other developer, just generally on features and/or tickets of more limited scope, at least initially. This is not what LLMs do.
> But refuting the claim that an LLM is similar in many ways to a junior dev is unsubstantiated
I sometimes get the feeling I talk to people who never worked in a real professional setting.
An LLM can do things that juniors can't: when I bounce around ideas for implementing a certain feature, when I explore libraries or frameworks I am unfamiliar with, when I ask it to review pieces of code looking for improvements, when I get it to generate boring glue code, scaffolding, unit tests. All those things are helpful, and make LLMs an excellent code assistant in a way that juniors are not.
But it is completely unable to properly do things without me giving very precise instructions on what it needs to code. The less precise I am, the worse its output. It is very happy to generate completely bullshit code that kinda looks like it does what I need, but not really. I constantly need to tweak what it generates, and although it saves me time because it outputs a lot of code in little time, the results are too unreliable for it to act with any sort of independence.
> And there are many things one junior could be helpful with that a different junior would be useless at
Which completely fails to address the point I am making.
That may be true, and would be an interesting topic to discuss if people actually spoke in such a way.
"Developers are now more productive in a way that means many projects may need fewer developers to keep up productivity levels" is not that catchy for generating hype, however.
It's like I can't just switch our whole 1-million-line codebase on a dime
These articles act like everyone is just cranking out shitty new webapps, as if every software job is the same as the author's
My intuition is the tail of low value "changes/edits" will skew fairly code size neutral.
A concrete example from this week: "adding robust error handling" in TypeScript.
I ask the LLM to look at these files: see how there is a big try/catch, and now that I have the code working, there are two pretty different failure domains inside. Can you split up the try/catch (which means hoisting some variable declarations outside the block scope)?
This is a Cursor rule for me (`@split-failure-domain.mdc`) because of how often this comes up (make some RPCs, then validate the desired state transition).
Then I update the placeholder comment with my prediction of the failure rate.
I "changed" the code, but the diff is +9/-6.
When I'm working on the higher complexity problems I tend to be closer to the edge of my understanding. Once I get a solution, very often I can simplify the code. There are many many ways to write the same exact program. Fewer make the essential complexity obvious. And when you shift things around in exactly the kind of mechanical transformation way that LLMs can speed up... then your diff is not that big. Might be negative.
Vim and bash solved that for me a long time ago in a more reliable and efficient way (and they're certainly not the only tools capable of that).
> the same way IDEs/copy-paste/autocomplete/online documentation have radically changed our work
I was there before and got into the autocomplete/LSP thing pretty late (because Vim didn't have good LSP support for a long time, and Vim without it still made me more efficient than any other IDE with it). Those things didn't radically change our work as you claim; they just made us a bit more productive.
A - normal, conventional senior dev workflow
B - A non-traditional but plausible senior dev workflow
C - Senior dev with LLM
I'm not claiming A = B = C, just that the step from one to another is relatively small when compared to something like a linter or other tool that accelerates development in some way.
> Which completely fails to address the point I am making.
If an LLM can do 20% more things than a junior dev can, but also cannot do a different 20% of the things a junior dev can, then the LLM is similar to a junior. And when comparing two juniors within a given field, it's entirely likely the above logic could apply. E.g. one junior may not be able to write SQL queries while the other does not understand the DOM. An LLM, on the other hand, is "kinda OK" at everything, but cannot engage in 'real' conversations about system architecture or reason through user experiences reliably. So it can do 20% more, and also 20% less. Just like a junior dev.
No one is claiming A = C, so don't keep punching the wind. They are claiming A ~= C.
If I wasn't experienced in computer science this would all fall apart. However, I do have to fix almost all the code, but spending 10 minutes fixing something is better than 3 days figuring it out in the first place (again, this might be more unique to my coding and learning style).
This is not how it works. I am only punching wind because your arguments are as solid as air.
Comparing an LLM to a Junior Dev is like comparing a Secretary to an Answering Machine - both can technically answer calls, so they must be somewhat similar? What a load of hot, steaming bullshit.
> No one is claiming A = C, so don't keep punching the wind. They are claiming A ~= C.
And I am claiming that A != C. I was not even arguing about whether they are equal; I am arguing against them being similar in any way.
I maintain what I said before: I sincerely doubt the competence and real-life professional experience of anyone claiming LLMs are in any way remotely similar to junior devs.
I honestly found the article to be an insufferably glib and swaggering piece that was written to maximize engagement rather than to engage the subject seriously.
The author clearly values maximizing perceived value with the least amount of effort.
Frankly, I’m tired of reading articles by people who can’t be bothered to honestly present the arguments of the people they’re disagreeing with, and I just gave up halfway through reading it because it was so grating.
But that said, let me reiterate a couple important points from my post:
> With no disrespect meant
I’m not calling anybody an idiot because they aren’t using an LLM. I’m sharing my honest opinion that they’re not using it correctly, but that’s very different than calling them an idiot.
> if you’re unable to find utility in these tools
This is a bit lawyerly, but note my carefully generic wording here: “find utility”. If you’re a Rust developer who doesn’t like the Rust output from your LLM, sure - but that’s not 100% of the job.
You’ll also touch bash scripts, makefiles, YAML, JSON or TOML config, write bug reports/feature requests, discuss architectural ideas and coding patterns, look through stack traces/dumps/error logs, or whatever else.
My point is that it is exceedingly unlikely that there is nothing an LLM can do to help your work, even if it’s not good at writing code in your domain.
Hence the statement that if you cannot find utility, you’re not using it correctly. It takes time to learn how to use these tools effectively, even in domains they excel in.
The number of secretaries declined after answering machines became more prevalent.
You can keep throwing ad hominem attacks if you think it's helping change reality. I wish we weren't headed this way, I really do, but we are. So we might as well confront reality. Catch you in 3 years when whatever happens - happens.
Citation needed
> Catch you in 3 years when whatever happens - happens.
I wish I had 1 euro for every time crypto shills told me those exact words regarding the inevitability of cryptocurrency replacing actual currency.
Note that I said 1 euro and not 1 of some shitcoin.
Look at stock market courses, for instance. They are endlessly prevalent, an evergreen scam. People spend thousands to lose even more money all the time. The sunk cost fallacy is very hard for a lot of people to overcome. Scammers count on it. There are literally millions to be made in these scams if you have zero moral fiber and zero shame.
We are in a golden age of such scams. Not my quote but one article I read said something like business students right now are putting Adam Neumann's picture on their dorm walls to aspire to be like him...
That's the crux.
copious tests - that don't work, but no one cares.
documentation - that no one has read or ever will read, and that is hilariously inaccurate.
There is a lot of pre-AI software that was churned out because some manager wanted exactly what they wanted, but it had no purpose or need. I expect that to explode in the coming years for sure. I'm not afraid of AI; it's merely OK, another tool is all.
It will allow companies to dig themselves very deep holes. Devs wise to the game will be able to charge astronomical fees to drain the pools of AI sewage they have been filled with.
tptacek has always come across as arrogant, juvenile, opinionated, and difficult to work with.
Some other threads of conversation get intertwined here with concerns about delusional management making decisions to cut staff and reduce hiring for junior positions, on the strength of the promises by AI vendors and their paid/voluntary shills
For many like me who have encouraged sharp young people to learn computers, we are watching their spirits crushed by this narrative and have a strong urge to push back: we still need new humans to learn how computer systems actually work, and if nobody is willing to pay them for work because an LLM outperforms them on those menial, "rite-of-passage" types of software construction, we will find ourselves in a bad place.
Do I think those rise to "case studies"? No. But to another commenter's point, detailed and rigorous case studies have always been hard to come by for any productivity process or technology.
I also think that article is hype, but it's not true that it's vague.