So what happens is a corporation ends up spending a lot of money on a square tool that they have to hammer into a round hole. They do it because the alternative is worse.
AI coding does not yet let you build anything even mildly complex with no programmers. But it does reduce, by an order of magnitude, the amount of money you need to spend on programming a solution that would work better.
Another thing AI enables is significantly lower switching costs. A friend of mine owned an in-person and online retailer that was early to the game, having come online in the late 90s. I remember asking him, sometime around 2010, when his store had become very difficult to use, why he didn’t switch to a more modern selling platform. The answer was that it would have taken him years to move his inventory from one system to another. Modern AI probably could’ve done almost all of that work for him.
I can’t even imagine what would happen if somebody like Ford wanted to get off of their SAP or Oracle solution. A lot of these products don’t withhold access to your data, but they also won’t provide it in any format that could be used without a ton of work that, until recently, would’ve required a large number of man-hours.
No way. We're not talking about a standalone AI-created program for a single end user, but an entire integrated e-commerce enterprise system that needs to work at scale and volume. Way harder.
Could you share any data on this? Are there any case studies you could reference, or at least personal experience? One order of magnitude is a 10x reduction in cost, right?
Their initial answer/efforts seem to be a qualified, very qualified, "Possibly" (hah).
They talked of pattern matching and recognition being a very strong point, but yeah, the edge cases tripping things up, whether corrupt data or something very obscure.
Somewhat like the study of MRIs and CTs of people who had no cancer diagnosis but would later go on to develop cancer (i.e. they were sick enough that imaging and testing was being ordered but there were no/insufficient markers for a radiologist/oncologist to make the diagnosis, but in short order they did develop those markers). AI was very good at analyzing the data set and with high accuracy saying "this person likely went on to have cancer", but couldn't tell you why or what it found.
We are currently sunsetting our use of Webflow for content management and hosting, and are replacing it with our own solution which Cursor & Claude Opus helped us build in around 10 days:
Same story with data models: say you have the same data (customer contact details) in slightly different formats in 5 different data models. Which one is correct? Why are the others different?
Ultimately someone has to solve this mystery and that often means pulling people together from different parts of the business, so they can eventually reach consensus on how to move forward.
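The detection half of that mystery is mechanical, even if the consensus half isn't. A minimal sketch of the idea (the systems, fields, and normalization rule here are invented for illustration): collect one customer's record from each system and flag every field where the systems disagree, so a human can decide which source is authoritative.

```python
# Hypothetical example: the same customer lives in several systems,
# each with slightly different contact details. Flag the fields where
# the systems disagree; a human still has to pick the correct one.
from collections import defaultdict

def find_conflicts(records: dict) -> dict:
    """records maps system name -> field dict for one customer.
    Returns field -> set of distinct values where systems disagree."""
    values = defaultdict(set)
    for fields in records.values():
        for field, value in fields.items():
            # normalise trivial case/whitespace differences before comparing
            values[field].add(str(value).strip().lower())
    return {f: v for f, v in values.items() if len(v) > 1}

sources = {
    "crm":     {"email": "jo@example.com", "phone": "555-0101"},
    "billing": {"email": "jo@example.com", "phone": "(555) 0101"},
    "erp":     {"email": "jo@example.org", "phone": "555-0101"},
}
print(find_conflicts(sources))
```

Note that the phone numbers get flagged even though they are probably the same number in two formats; deciding that is exactly the judgment call that still needs the people in the room.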
So, basically you made a replacement for webflow for your use case in 10 days, right?
There is only one program that offers this ability, but you need to pay for the entire software suite, and the process is painfully convoluted anyway. We went from doing maybe 2-3 files a day to doing 2-3 files an hour.
I have repeated ad nauseam that the magic of LLMs is the ability to build the exact tool you need for the exact job you are doing. No need for the expensive and complex 750k-LOC full-tool-shed software suite.
When an AI can email/message all the key people who have the institutional knowledge, ask them the right discovery questions (probably over a few rounds, working out which bits are human "hallucinations" that don't make sense), collect that information, and use it to create a solution, then human jobs are in real trouble.
Until then, AI is just a productivity boost for us.
is this like a meta-joke?
> I have a prime example of this where my company was able to save $250/usr/mo for 3 users by having Claude build a custom tool for updating ancient (80's-era) proprietary manufacturing files to modern ones.
The funny thing about examples like this is that they mostly show how dumb and inefficient the market is with many things. This has been possible for a long time with, you know, people, just a little more expensive than a Claude subscription, but would have paid for itself many times over through the years.
Now with Claude, it's easy to make a quick and dirty tool to do this without derailing other efforts, so it gets done.
How is an AI supposed to create documentation, except the most useless box-ticking kind? It only sees the existing implementation, so the best it can do is describe what you can already see (maybe with some stupid guesses added in).
IMHO, if you're going to use AI to "write documentation," that's disposable text and not for distribution. Let the next guy generate his own, and he'll be under no illusions about where the text he's reading came from.
If you're going to write documentation to distribute, you had better type out words from your own damn mind based on your own damn understanding with your own damn hands. Sure, use an LLM to help understand something, but if you personally don't understand, you're in no position to document anything.
However possible it was to do this work in the past, it is now much easier to do it. When something is easier it happens more often.
No one is arguing it was impossible to do before. There's a lot of complexity, management attention, testing, and programmer cost involved in building something in house, such that you need a very obvious ROI before you attempt it, especially since in-house efforts can fail.
I wonder how much of the benefit of AI is just companies permitting it to bypass their process overhead. (And how many will soon be discovering why that process overhead was there)
One thing that's interesting is that their original Salesforce implementations were so badly done that I could imagine them being done with an LLM. The evergreen stream of work that requires human precision (so far, anyway) is all of the integration work that comes afterwards.
You are assuming that corporations have the capability to design the software they need.
There are many benefits to SaaS software, and some significant costs (e.g. integration).
One major benefit of SaaS is domain knowledge, and most people underestimate the complexity of even well-known domains (e.g. accounting).
Companies also underestimate the difficulty of aligning diverging political needs within the business, and they underestimate the expense of distraction on a non-core area that there is no business advantage to becoming competent at. As a vendor sometimes our job was simply to be the least worst solution.
At least that's what I saw.
Experience shows that that's the case at least 50% of the time
There are plenty of workers who refuse to answer questions from a human until it’s escalated far enough up the chain to affect their paycheck/reputation. I’m sure that the intelligence being artificial will only multiply the disdain/noncompliance.
But then maybe there will be strategies for masking from where requests are coming, like a system that anonymizes all requests for information. Even so, I feel like there would still be a way that people would ping / walk up to their colleague in meatspace and say “hey that request came from me, thanks!”
The paid program can do it because it can accept these files as input, and then you can use the general toolset to work towards the same goal. But the program is clunky and convoluted as hell.
To give an example, imagine you had tens of thousands of pictures of people posing, and you needed to change everyone's eye color based on the shirt color they were wearing.
You can do this in Photoshop, but it's a tedious process and you don't need all $250/mo of Photoshop to do it.
Instead make a program that auto grabs the shirt color, auto zooms in on the pupils, shows a side window of where the object detection is registering, and tees up the human worker to quickly shade in the pupils.
Dramatically faster, dramatically cheaper, tuned exactly for the specific task you need to do.
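The shape of such a purpose-built tool can be sketched in a few lines. Everything below is invented for illustration: the colour mapping, the fixed coordinates, and the toy pixel representation. A real version would use an image library and face/object detection (e.g. OpenCV) rather than hard-coded regions, but the structure is the same: classify the shirt, look up the target eye colour, recolour the pupil region.

```python
# Toy sketch of the batch-retouch idea. The mapping and regions are
# hypothetical; a real tool would detect shirts and pupils automatically.

# shirt colour -> desired eye colour, per the hypothetical brief
EYE_FOR_SHIRT = {"red": (60, 120, 40), "blue": (110, 70, 30)}

def classify_shirt(pixel):
    r, g, b = pixel
    return "red" if r > b else "blue"

def retouch(image, shirt_xy, pupil_box):
    """image: dict mapping (x, y) -> (r, g, b). Recolours pupils in place."""
    target = EYE_FOR_SHIRT[classify_shirt(image[shirt_xy])]
    (x0, y0), (x1, y1) = pupil_box
    for x in range(x0, x1):
        for y in range(y0, y1):
            image[(x, y)] = target
    return image

# 4x4 toy "photo": a red shirt pixel at (0, 3), pupils in a small box
img = {(x, y): (200, 50, 50) for x in range(4) for y in range(4)}
retouch(img, shirt_xy=(0, 3), pupil_box=((1, 1), (3, 3)))
print(img[(1, 1)])  # pupil pixel now holds the eye colour mapped from "red"
```

Wrap that in a loop over tens of thousands of files, add the side window showing what the detector registered, and you have the cheap, task-specific tool described above.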
I know we are in a bubble here, but AI has definitely made its way out of silicon valley.
That's a task that I could automate as a developer, but other than LLM "vibe coding", I don't know that there's a good way for a lay person to automate it.
So, sure, some products will go the way of the dodo and some will not.
And the big advantage for us is twofold: our content marketers now have a "Cursor-light" experience when creating landing pages, as this is a "text-to-landing-page" LLM-powered tool with a chat interface from their point of view; no fumbling around in the Webflow WYSIWYG interface anymore.
And from the software engineering department's point of view, the results of the work done by the content marketers are simply changes/PRs in a git repository, which we can work on in the IDE of our choice — again, no fumbling around in the Webflow WYSIWYG interface anymore.
1. Simple CRUD apps
2. Long-tail / low-TAM apps
Because neither of these makes economic sense for commercial companies to develop targeted products for. Consequently, you got "bundled" generalized apps that sort of did what you wanted (GP's example) or fly-by-night one-off solutions that haven't been updated in decades.
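To make "simple CRUD app" concrete, here is roughly the entire scope of such a tool: one table, four operations, no framework. The table and column names are illustrative, not from any comment above.

```python
# Minimal sketch of a "simple CRUD app": one sqlite table, four
# operations. This is the whole category being discussed.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE parts (id INTEGER PRIMARY KEY, name TEXT, qty INTEGER)")

def create(name, qty):
    cur = db.execute("INSERT INTO parts (name, qty) VALUES (?, ?)", (name, qty))
    db.commit()
    return cur.lastrowid

def read(part_id):
    return db.execute("SELECT name, qty FROM parts WHERE id = ?", (part_id,)).fetchone()

def update(part_id, qty):
    db.execute("UPDATE parts SET qty = ? WHERE id = ?", (qty, part_id))
    db.commit()

def delete(part_id):
    db.execute("DELETE FROM parts WHERE id = ?", (part_id,))
    db.commit()

pid = create("widget", 10)
update(pid, 7)
print(read(pid))  # ('widget', 7)
delete(pid)
```

Nobody builds a commercial product for a one-customer version of this, which is exactly why the long tail went unserved.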
The more interesting questions are (a) who is going to develop these new solutions and (b) who is going to maintain these new solutions? In-house dev/SRE or newly more-efficient (even cheaper) outsourced? I'd bet on in-housing, as requirements discovery / business problem debugging is going to quickly dominate delivery/update time. It already did and that was before we boosted simple app productivity.
Contracting cost is the difference in cost between a contract across companies and a purely internal project. This could involve the lawyers on both sides, the time taken to negotiate which party is responsible for what deliverable/risk, the cost to enforce the contract, the time taken for negotiations/iterations, etc.
One efficient company doing it internally is obviously efficient. Two inefficient companies negotiating a contract is obviously inefficient. The interesting questions are the other 2 quadrants, where the answer may change between the LLM case and non-LLM case.
And now there’s an example in the codebase of what not to do, and other AI sessions will see it, and follow that pattern blindly, and… well, we all know where this goes.
I’m working with a company now that thinks that AI is great until you need to deploy to Prod. Probably true in some cases, especially for tools built with Prod environments as targets.
But I’m using Claude Code for a tool that doesn’t absolutely require that sort of environment. It helps a company map data (insurance risk exposure data) to a predefined intermediate layout and column schema.
I know that I’ll run into resistance once I say “this could be deployed to Prod” but I think AI is a major win for Prod-like things.
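The core of a data-mapping tool like that is small. A hedged sketch of the idea, with invented column names (not the actual insurance schema): rename a source's columns to a predefined intermediate schema and surface anything unmapped for a human to review.

```python
# Hypothetical column mapping to an intermediate schema. The schema
# and source names here are invented for illustration.
TARGET_COLUMNS = ["policy_id", "insured_value", "country"]
SOURCE_TO_TARGET = {
    "PolNum": "policy_id",
    "TIV": "insured_value",
    "Cntry": "country",
}

def map_row(row):
    """Return (mapped row in target schema, list of unmapped source columns)."""
    mapped, unmapped = {}, []
    for col, value in row.items():
        if col in SOURCE_TO_TARGET:
            mapped[SOURCE_TO_TARGET[col]] = value
        else:
            unmapped.append(col)  # surface for a human to review
    return mapped, unmapped

row = {"PolNum": "P-001", "TIV": "1200000", "Cntry": "US", "Notes": "n/a"}
print(map_row(row))
```

The hard part in practice is building and validating the `SOURCE_TO_TARGET` dictionary per client, which is where an LLM assistant plus a human reviewer earns its keep.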
My professional world largely lives in spreadsheets and relational databases. Neither going anywhere anytime soon. And spreadsheets are the currency of the business and industry in so many ways. They are very prod-like in my opinion.
Agreed absolutely, but that's also what I'm talking about. It's very clear it was a bad tradeoff. Not only $250/month x three seats, but also apparently whatever the opportunity cost just of personnel tied up doing "2-3 files a day" when they could have been doing "2-3 files an hour".
Even if we take at face value that there are no "programmers" at this company (with an employee commenting on hacker news, someone using Claude to iterate on a GUI frontend for this converter, and apparently enough confidence in Claude's output to move their production system to it), there are a million people you could have hired over the last decade to throw together a file conversion utility.
And this happens all the time in companies where they don't realize which side of https://xkcd.com/1205/ they're on.
It's great if, like personal projects people never get started on, AI shoves them over the edge and gets them to do it, but we can also be honest that they were being pretty dumb for continually spending that money in the first place.
I mean, I'm absolutely familiar with how company decision-making and inertia can lead to these things happening, it happens constantly, and the best time to plant a tree is today and all that. But the ex post facto rationalizations ring pretty hollow when the solution was apparently vibecoded with no programmers at the company, immediately saved them $750 a month, and improved their throughput by 8x.
Clearly it was a very bad call not to have someone spend a couple of days looking into the feasibility of this 10 years ago.
See, I actually read and monitor the outputs. I check them against my own internal knowledge. I trial the results with real troubleshooting and real bug fixes/feature requests.
When it's wrong, I fix it. When it's right, great, we now have documentation where none existed before.
Dogfood the documentation and you'll know if it's worth using or not.
The AI is there to do the easy part: scan a giant spaghetti bowl and label each noodle. The human's job is to attach descriptions to those noodles.
Sometimes I forget that people on this site simply assume the worst in any given situation.
I wouldn't.
Knowing what to build is the part that many businesses struggle with.
As much as consultants are lambasted, my experience of companies is that they struggle to develop or maintain anything in-house, even where it should theoretically make economic sense.
AI is incapable of capturing the human context that 99.999% of the time exists in people's brains, not in code or docs. This is why it is crucial that humans write for humans, not an LLM that puts out docs that merely have the aesthetics of looking acceptable.
I imagine "vibe coder" will eventually coalesce with the business requirements analyst into a sort of "LOB developer-lite", i.e. every low-code product's undelivered citizen-developer dream.
You need someone in the technical details and with some developer background (thinking through edge cases is a hard skill requirement), and you need someone with the process analysis and documentation skills (as well as the ability to push back / simplify requirements where it makes sense).
External developers/consultants are typically terrible at the requirements discovery and specification stage, because they're not embedded day-to-day with the business. Ergo, you get stupid feature decisions because someone left a sentence off a doc.
From your other comment, I think you're thinking about more complex / core / feature-rich solutions than I am. I agree those may remain SaaS / outsourced.
But there's no way in hell dirt-simple CRUD and "I am the only person in the world who has this need" solutions stay out of house.