zlacker

Tell HN: I write and ship code ~20–50x faster than I did 5 years ago

submitted by EGreg+(OP) on 2025-12-30 19:19:55 | 64 points 103 comments

I’ve been meaning to write this up because it’s been surprisingly repeatable, and I wish someone had described it to me earlier.

Over the last year or so, my development speed relative to my own baseline from ~2019 is easily 20x, sometimes more. Not because I type faster, or because I cut corners, but because I changed how I use AI.

The short version: I don’t use AI inside my editor. I use two AIs in parallel, in the browser, with full context.

Here’s the setup.

I keep two tabs open:

One AI that acts as a “builder”. It gets a lot of context and does the heavy lifting.

One AI that acts as a "reviewer". It only sees diffs and tries to find mistakes.

That’s it. No plugins, no special tooling. Just browser tabs and a terminal.

The important part is context. Instead of asking for snippets, I paste entire files or modules and explain the goal. I ask the AI to explain the approach first, including tradeoffs, before it writes code. That forces me to stay in control of architecture instead of accepting a blob I don’t understand.

A typical flow looks like this:

1. Paste several related files (often across languages); a small bundling sketch follows this list.

2. Describe the change I want and ask for an explanation of the options; have it read and summarize relevant concepts, Wikipedia articles, etc.

3. Pick an approach. Have extensive conversations about trade-offs, underlying concepts, adversarial security, etc., and find ways to do what I need within what the OS allows.

4. Let the AI implement it across all files.

5. Copy the diff into the second AI and ask it to look for regressions, missing arguments, or subtle breakage.

6. Fix whatever it finds.

7. Ship.
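Step 1 is plain copy-paste for me, but it is easy to script if you prefer. A minimal sketch in Python (the file paths, goal, and prompt wording are placeholders, not part of my actual setup):

    #!/usr/bin/env python3
    """Bundle related files plus a goal statement into one prompt for the "builder" tab.
    The file list and prompt wording below are placeholders; adapt them to your project."""
    from pathlib import Path

    FILES = [
        "app/Db/Adapter/MySql.php",      # placeholder paths spanning the stack
        "app/Db/Adapter/Postgres.php",
        "web/js/db-client.js",
    ]

    GOAL = "Add an SQLite adapter that mirrors the MySQL adapter's public API."

    def build_prompt(paths, goal):
        parts = [
            f"Goal: {goal}",
            "Explain the approach and trade-offs first. Do not write code yet.",
        ]
        for p in paths:
            parts.append(f"--- {p} ---\n{Path(p).read_text()}")
        return "\n\n".join(parts)

    if __name__ == "__main__":
        print(build_prompt(FILES, GOAL))

Pipe the output to your clipboard (pbcopy on macOS, xclip -selection clipboard on Linux) and paste it into the builder tab.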

The second AI catches a lot of things I would otherwise miss when moving fast. Things like “you changed this call signature but didn’t update one caller” or “this default value subtly changed behavior”.

What surprised me is how much faster cross-stack work gets. Stuff that used to stall because it crossed boundaries (Swift → Obj-C → JS, or backend → frontend) becomes straightforward because the AI can reason across all of it at once.

I’m intentionally strict about “surgical edits”. I don’t let the AI rewrite files unless that’s explicitly the task. I ask for exact lines to add or change. That keeps diffs small and reviewable.

This is very different from autocomplete-style tools. Those are great for local edits, but they still keep you as the integrator across files. This approach flips that: you stay the architect and reviewer, the AI does the integration work, and a second AI sanity-checks it.

Costs me about $40/month total. The real cost is discipline: always providing context, always reviewing diffs, and never pasting code you don’t understand.

I’m sharing this because it’s been a genuine step-change for me, not a gimmick. Happy to answer questions about limits, failure modes, or where this breaks down.

Here is a wiki-type overview I put together for the developers on our team: https://community.intercoin.app/t/ai-assisted-development-playbook-how-we-ship-faster-without-breaking-things/2950


2. EGreg+tg[view] [source] [discussion] 2025-12-30 20:56:43
>>chrisj+76
That could be said about compiling higher-level languages instead of rolling your own assembly and garbage collector. It's just working on a higher level. You're a lot more productive with, say, PHP than you are writing assembly.

I architect all of it and go through many iterations. The machine makes mistakes; when I test, I have to come back and work through the issues. I often correct the machine about things it doesn't know or missed due to its training.

And ultimately I'm responsible for the code quality; I'm still in the loop all the time. But rather than writing everything by hand, following documentation and making mistakes along the way, I have the machine do the code generation and edits for a lot of the code. There are still mistakes that need to be corrected until everything works, but the loop is a lot faster.

For example, I was able to port our MySQL adapter to Postgres AND SQLite, something I had been putting off for years, in about 3-5 hours total, including testing, bugfixes, and massive refactoring. And it's still not in the main branch because there is more testing I want done before it's merged: https://github.com/Qbix/Platform/tree/refactor/DbQuery/platf...

Here is my first speedrun: https://www.youtube.com/watch?v=Yg6UFyIPYNY

9. ryandv+6Oj[view] [source] 2026-01-06 14:54:29
>>EGreg+(OP)
I don't know. You may as well say that after reading Uncle Bob's Clean Code and adding 50 layers of indirection, you are now writing at "enterprise scale." Perhaps you even hired an Agile SCUM consultant, and now look at your velocity (at least they're measuring something)!

Use my abstract factory factories and inversion of control containers. With Haskell your entire solution is just a 20-line mapreduce in a monad transformer stack over IO. In J, it's 20 characters.

I don't see how AI differs. Rather, the last study of significance found that devs were gaslighting themselves into believing they were more productive, when the data actually bore out the opposite conclusion [0].

[0] >>44522772

17. ryandv+0Qj[view] [source] [discussion] 2026-01-06 15:03:28
>>63stac+6Pj
Yes... I've asked for the same: show us the goods with a Destroy All Software-style screencast; otherwise the default position is that this entire HN post is just more AI-generated hallucination.

Nobody's taken me up on this offer yet. [0]

[0] >>46325469

23. EGreg+z5k[view] [source] [discussion] 2026-01-06 16:05:23
>>63stac+6Pj
Sure. Here is my latest 4-hour speedrun: https://www.youtube.com/watch?v=Yg6UFyIPYNY
24. EGreg+D5k[view] [source] [discussion] 2026-01-06 16:05:49
>>german+vNj
I've already built 2 companies and am building a third.

https://linkedin.com/in/magarshak

26. EGreg+Y5k[view] [source] [discussion] 2026-01-06 16:07:18
>>hu3+fNj
Here is what it looks like, if I were to livestream it for 4 hours: https://www.youtube.com/watch?v=Yg6UFyIPYNY
28. EGreg+V6k[view] [source] [discussion] 2026-01-06 16:11:14
>>eloisi+QMj
Yes, you can follow my code from 5 and 10 years ago here:

https://github.com/Qbix/Platform-History-v1

https://github.com/Qbix/Platform-History-v2

And you can see the latest code here:

https://github.com/Qbix

Documentation can be created a lot faster, including for normies:

https://community.qbix.com/t/membership-plans-and-discounts/...

My favorite part of AI is red-teaming and finding bugs. Just copy-paste diffs and ask it for regressions. Press it over and over until it can't find any.
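Something like this produces the payload to paste (the prompt wording is illustrative, and it assumes a git checkout with a main branch):

    import subprocess

    # Illustrative prompt; tune it to the regressions you care about.
    REVIEW_PROMPT = (
        "Review this diff for regressions: changed call signatures with stale callers, "
        "missing arguments, changed defaults, and subtle behavior changes. "
        "List each suspected problem with the file and line."
    )

    def diff_for_review(base="main"):
        """Return the review prompt followed by the working tree's diff against `base`."""
        diff = subprocess.run(
            ["git", "diff", base],
            capture_output=True, text=True, check=True,
        ).stdout
        return REVIEW_PROMPT + "\n\n" + diff

    if __name__ == "__main__":
        print(diff_for_review())

Paste the output, apply the fixes, regenerate the diff, and repeat until it stops finding real problems.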

Here is a speedrun from a few days ago:

https://www.youtube.com/watch?v=Yg6UFyIPYNY

30. EGreg+U7k[view] [source] [discussion] 2026-01-06 16:15:07
>>voidUp+gPj
Cherry-picking the most tedious parts, like boilerplate to get up and running, or porting code to other adapters (making SQLite and Postgres adapters, for instance).

This was done in about 3 hours for instance: https://github.com/Qbix/Platform/tree/refactor/DbQuery/platf...

You can see the speed for yourself. Here is my first speedrun livestreamed: https://www.youtube.com/watch?v=Yg6UFyIPYNY

43. corysa+CPk[view] [source] 2026-01-06 19:05:54
>>EGreg+(OP)
This is similar to the approach touted by Steve Yegge in this interview: https://www.youtube.com/watch?v=zuJyJP517Uw

I really appreciate that he is up-front about "Yes. Vibe coding has lots of dangerous problems that you must learn to control if you are to go whole-hog like this."

Has anyone read his Vibe Coding book? The Amazon reviews make it sound like it's heavy on inspiration but light on techniques.

44. ninju+KPk[view] [source] [discussion] 2026-01-06 19:06:31
>>HellDu+Gcj
If you have the changes as a GitHub PR, you can add .diff to the end of the URL to get a single page with all the changes.

Citation: https://stackoverflow.com/a/6188624
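For example, fetching it programmatically (the owner, repo, and PR number here are made up):

    from urllib.request import urlopen

    # Appending .diff to a pull request URL returns the raw unified diff as plain text;
    # .patch works the same way but includes commit metadata.
    url = "https://github.com/example-owner/example-repo/pull/123.diff"
    print(urlopen(url).read().decode("utf-8"))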

61. useful+D2l[view] [source] 2026-01-06 19:58:56
>>EGreg+(OP)
>>46510369