I'll add a personal anecdote: two years ago, I wrote a SwiftUI app by myself (bear with me, I'm mostly an infrastructure/backend guy with some front-end expertise; I get the general stuff, but never really made anything big out of it other than LAMP stuff back in the 2000s), and it took me a few weeks to get it to do what I wanted, with a bare minimum of features. As I was playtesting my app, I kept writing a wishlist of features for myself, and later, when I put it on the App Store, people around the world would email me asking for other features. But life, work, etc. would get in the way, and I would have no time to actually build them, as some of the features would take me days or weeks.
Fast forward to two weeks ago: at this point I'm very familiar with Claude Code, how to steer multiple agents at a time, quickly review their outputs, stitch things together in my head, and ask for the right things. I've completed almost all of the features, rewritten the app, and it's already been submitted to the App Store. The code isn't perfect, but it's also not that bad. Honestly, it's probably better than what I would've written myself. It's an app that can be memory intensive in some parts, and it's been holding up well in my testing. On top of that, since I've been actively steering 2-3 agents myself, I have the entire codebase in my head. I also have an overwhelming number of notes on what I would do better, and so on.
My point is, if you have enough expertise and experience, you'll be able to "stitch things together" more cleanly than someone with no expertise. This also means user acquisition, marketing, and data will be more valuable than the product itself, since it'll be easier to develop competing products. Finding users for your product will be the hard part. Which kinda sucks, if I'm honest, but it is what it is.
I've had the same experience as you. I've applied it to old projects which I have some frame of reference for and it's like a 200x speed boost. Just absolutely insane - that sort of speed can overcome a lot of other shortcomings.
I don’t see how we get there, though, at least in the short term. We’re still living in the heavily-corporate-subsidized AI world, with usage-based pricing shenanigans abound. Even if frontier model providers find a path to profitability (which is a big “if”), there’s no way the price is gonna go anywhere but up. It’s MoviePass on steroids.
Consumer hardware capable of running open models that compete with frontier models is still a long ways away.
Plus, and maybe it’s just my personal cynicism showing, but when did tech ever reduce pricing while maintaining quality on a provided service in the long run? In an industry laser focused on profit, I just don’t see how something so many believe to be a revolutionary force in the market will be given away for less than it is today.
Billions are being invested with the expectation that it will fetch much more revenue than it’s generating today.
Since we have version control, you can restart anywhere if you think it's a good place to fork from. I like greenfield development, but I suspect that there are going to be a lot more forks from now on, much like the game modding scene.
We're also seeing significant price reductions every year for LLMs. Not for frontier models, but you can get the equivalent of last year's model for cheaper. Hard to tell from the outside, but I don't think it's all subsidized?
I think maybe people over-updated on Bitcoin mining. Most tech is not inherently expensive.
If training of new models ceased, and hardware was just dedicated to inference, what would that do to prices and speed? It's not clear to me how much inference is actually being subsidized over the actual cost to run the hardware to do it. If there's good data on that I'd love to learn more though.
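One way to think about the raw serving side: if you knew the hourly cost of the hardware and its sustained throughput, the hardware cost per token would fall out directly. A toy sketch in Python, where every number is a made-up placeholder rather than a real provider figure:

```python
# Toy estimate of raw inference cost per million output tokens.
# Every input below is a hypothetical placeholder, not a real provider figure.

def cost_per_million_tokens(node_hour_usd: float, tokens_per_second: float) -> float:
    """Hardware cost per 1M tokens, ignoring power, staff, margins, and training."""
    tokens_per_hour = tokens_per_second * 3600
    return node_hour_usd / tokens_per_hour * 1_000_000

# e.g. a node rented at $20/hour sustaining 1,000 tokens/sec across its batch:
print(f"${cost_per_million_tokens(20.0, 1000.0):.2f} per 1M tokens")  # ~$5.56
```

Comparing that kind of number against posted API prices is how you'd start to judge how much is subsidy; the problem is that real batch throughput and hardware costs aren't public.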
That's an old world we experienced in the 2000s, and maybe the early 2010s, when we cared about the quality of a service in the long run. For anything general web-app related, that's long gone, as everyone (read: mostly everyone) has a very short attention span, and what matters is whether the thing they desire can be done right now. In the long run? Who cares. I keep seeing this in everyday life, at work, and in discussions with previous clients.
Once again, I wish it weren't true, but nothing points to it being otherwise.
I'm a full-stack dev, and solo, so I write the data schema, backend, and frontend at the same time, usually flipping between them to test parts of new features. As far as AI use goes, I'm really just at the level of using a single Claude agent in an IDE, and only occasionally, because it writes a lot of nonsense. So maybe I'm missing out on the benefits of multiple agents. But where I currently see value is in writing (a) boilerplate and (b) sugar, where it has full access to a large and stable codebase. Where I think it fails is in writing overarching logical structures, especially early in a project. It isn't good at writing elegant code with a clear view of how data, backend, and frontend should work together. When I've tried to start projects from scratch with Claude, it feels like I'm fighting against its micro-view of each piece of code, where it's unable to gain a macro-view of how to orchestrate the whole system.
So like, maybe a bottomless wallet and a dozen agents would help with that, but there isn't so much room for errors or bugs in my work code as there is in my fun/play/casual game code. As a result I'm not really seeing that much value in it for paid work.
Or, if it does _now_, how long will it be before it works well using downloadable models that run on, say, a new car's worth of Mac Studios with a bunch of RAM, allowing a small fleet of 70B and 120B models (or larger) to run locally? Perhaps even specialized models for each of the roles this uses?
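As a rough sanity check on the memory side (assuming ~4-bit quantized weights and ~20% overhead for KV cache and activations, which are my own ballpark assumptions, not vendor figures):

```python
# Back-of-envelope memory footprint for a quantized local model.
# Assumptions (mine, not from any vendor): ~0.5 bytes/parameter at 4-bit,
# plus ~20% overhead for KV cache and activations.

def approx_memory_gb(params_billion: float, bytes_per_param: float = 0.5,
                     overhead: float = 1.2) -> float:
    """Rough GB of unified memory to hold one quantized model and serve it."""
    return params_billion * bytes_per_param * overhead

for size in (70, 120):
    print(f"~{approx_memory_gb(size):.0f} GB for a {size}B model")
# ~42 GB and ~72 GB respectively, under these assumptions.
```

So it's roughly tens of GB of unified memory per model under those assumptions, which is why maxed-out Mac Studios come up in these conversations.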
There's little evidence this is true. Even OpenAI, which is spending more than anyone, is only losing money because of the free version of ChatGPT. Anthropic says it will be profitable next year.
> Plus, and maybe it’s just my personal cynicism showing, but when did tech ever reduce pricing while maintaining quality on a provided service in the long run? In an industry laser focused on profit, I just don’t see how something so many believe to be a revolutionary force in the market will be given away for less than it is today.
Really?
I mean, I guess I'm showing my age, but the idea that I can get a VM for a couple of dollars a month and expect it to be reliable makes me love the world I live in. But I guess when I started working there was no cloud, and getting root on a server meant investing thousands of dollars.
Companies with money-making businesses are gonna find themselves in an interesting spot when the "vibe juniors" are the vast majority of the people they can find to hire. New ways will be needed to reduce the risk.
...go to jail?
But how many of those providers are also subsidizing their offerings with investment capital? I don't know offhand of anyone in this space running at or close to breakeven.
It feels very much like the early days of streaming when you could watch everything with a single Netflix account. Those days are long gone and never coming back.
According to Ed Zitron, Anthropic spent more than its total revenue in the first nine months of 2025 on AWS alone: $2.66 billion on AWS compute against an estimated $2.55 billion in revenue. That's just AWS: not payroll, not other software or hardware spend. He regularly reports concrete numbers that look horrible for the industry, while hyperscalers and foundation-model companies keep making general statements and refuse to get specific or release real revenue figures. If you only listen to what the CEOs are saying, then sure, it sounds great.
Anthropic also said that AI would be writing 95% of code in 3 months or something, however many months ago that was.
If your end goal is to produce a usable product, then the implementation details matter less. Does it work? Yes? OK, then maybe don't wrestle with the agent over specific libraries or coding patterns.
Yes, but it's unclear how much of that is training costs vs operational costs. They are very different things.