zlacker

[return to "Welcome to Gas Town"]
1. tokioy+2Ac 2026-01-06 00:41:28
>>gmays+(OP)
Everyone keeps getting angry at me when I mention that the way things are going, future development will just be based on "did something wrong while writing code? all good, throw everything out and rewrite, keep pulling the lever of the slot machine and eventually it'll work". It's a fair tactic, and it might work if we make the coding agents cheap enough.

I'll add a personal anecdote - 2 years ago, I wrote a SwiftUI app by myself (bear with me, I'm mostly an infrastructure/backend guy with some expertise in front end, where I get the general stuff, but never really made anything big out of it other than stuff on LAMP back in the 2000s) and it took me a few weeks to get it to do what I wanted it to do, with a bare minimum of features. As I was playtesting my app, I kept writing a wishlist of features for myself, and later when I put it on the App Store, people around the world would email me asking for other features. But life, work, etc. would get in the way, and I would have no time to actually build them, as some of the features would take me days or weeks.

Fast forward to 2 weeks ago: at this point I'm very familiar with Claude Code - how to steer multiple agents at a time, quickly review their outputs, stitch things together in my head, and ask for the right things. I've completed almost all of the features, rewritten the app, and it's already been submitted to the App Store. The code isn't perfect, but it's also not that bad. Honestly, it's probably better than what I would've written myself. It's an app that can be memory intensive in some parts, and it's been doing well in my testing. On top of that, since I've been steering 2-3 agents actively myself, I have the entire codebase in my mind. I also have an overwhelming number of notes on what I would do better next time.

My point is, if you have enough expertise and experience, you'll be able to "stitch things together" more cleanly than others with no expertise. This also means user acquisition, marketing, and data will be more valuable than the product itself, since it'll be easier to develop competing products. Finding users for your product will be the hard part. Which kinda sucks, if I'm honest, but it is what it is.

2. Coding+5Hc 2026-01-06 01:41:22
>>tokioy+2Ac
> It's a fair tactic, and it might work if we make the coding agents cheap enough.

I don’t see how we get there, though, at least in the short term. We’re still living in the heavily-corporate-subsidized AI world, with usage-based pricing shenanigans abounding. Even if frontier model providers find a path to profitability (which is a big “if”), there’s no way the price is gonna go anywhere but up. It’s MoviePass on steroids.

Consumer hardware capable of running open models that compete with frontier models is still a long ways away.

Plus, and maybe it’s just my personal cynicism showing, but when did tech ever reduce pricing while maintaining quality on a provided service in the long run? In an industry laser-focused on profit, I just don’t see how something so many believe to be a revolutionary force in the market will be given away for less than it costs today.

Billions are being invested with the expectation that it will fetch much more revenue than it’s generating today.

3. margal+9Mc 2026-01-06 02:32:42
>>Coding+5Hc
Many of the billions being invested are going to the power bill for training new models, not to mention the hardware needed to do so. Any hardware training a new model isn't being used for inference.

If training of new models ceased, and the hardware was dedicated entirely to inference, what would that do to prices and speed? It's not clear to me how much inference is actually being subsidized relative to the real cost of running the hardware. If there's good data on that, I'd love to learn more.
