
I write and ship code ~20–50x faster than I did 5 years ago

3•EGreg•2h ago
I’ve been meaning to write this up because it’s been surprisingly repeatable, and I wish someone had described it to me earlier.

Over the last year or so, my development speed relative to my own baseline from ~2019 is easily 20x, sometimes more. Not because I type faster, or because I cut corners, but because I changed how I use AI.

The short version: I don’t use AI inside my editor. I use two AIs in parallel, in the browser, with full context.

Here’s the setup.

I keep two tabs open:

One AI that acts as a “builder”. It gets a lot of context and does the heavy lifting.

One AI that acts as a "reviewer". It only sees diffs and tries to find mistakes.

That’s it. No plugins, no special tooling. Just browser tabs and a terminal.

The important part is context. Instead of asking for snippets, I paste entire files or modules and explain the goal. I ask the AI to explain the approach first, including tradeoffs, before it writes code. That forces me to stay in control of architecture instead of accepting a blob I don’t understand.

A typical flow looks like this:

1. Paste several related files (often across languages).

2. Describe the change I want and ask for an explanation of the options. Have it read and summarize relevant concepts, Wikipedia pages, etc.

3. Pick an approach. Have extensive conversations about trade-offs, concepts, adversarial security, etc., and find ways to do what I need within what the OS allows.

4. Let the AI implement it across all files.

5. Copy the diff into the second AI and ask it to look for regressions, missing arguments, or subtle breakage.

6. Fix whatever it finds.

7. Ship.
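Step 5 above is just "get the changes in front of the reviewer as a unified diff". A minimal sketch of that capture step (the throwaway repo, file name, and function change here are purely illustrative, not from my actual projects):

```python
import subprocess, tempfile, pathlib

def run(args, cwd):
    # Small helper: run a git command and capture its output.
    return subprocess.run(args, cwd=cwd, capture_output=True, text=True, check=True)

def working_tree_diff(repo):
    """Uncommitted changes as a unified diff — exactly what gets pasted
    into the reviewer tab."""
    return run(["git", "diff"], repo).stdout

# Demo in a throwaway repo (illustrative only):
repo = tempfile.mkdtemp()
run(["git", "init", "-q"], repo)
f = pathlib.Path(repo, "app.py")
f.write_text("def fetch(url):\n    pass\n")
run(["git", "add", "app.py"], repo)
run(["git", "-c", "user.email=x@y", "-c", "user.name=x",
     "commit", "-q", "-m", "base"], repo)

# The "builder" AI changed a call signature:
f.write_text("def fetch(url, timeout):\n    pass\n")
print(working_tree_diff(repo))
```

The point of pasting only the diff is that the reviewer sees the change in isolation, without the builder's reasoning, so it has to re-derive whether the change is actually safe.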

The second AI catches a lot of things I would otherwise miss when moving fast. Things like “you changed this call signature but didn’t update one caller” or “this default value subtly changed behavior”.
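To make the "default value subtly changed behavior" case concrete, here is a hypothetical before/after (the `paginate` functions are invented for illustration): nothing errors, every type checks out, but every existing caller now gets different results.

```python
def paginate_v1(items, page_size=10):
    # Original: pages of 10.
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

def paginate_v2(items, page_size=20):
    # After a refactor the default doubled. Callers that relied on the old
    # default now get different pages, and no tool flags it as an error.
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

items = list(range(30))
print(len(paginate_v1(items)))  # 3 pages
print(len(paginate_v2(items)))  # 2 pages — behavior changed silently
```

This class of bug is exactly what a diff-only reviewer is good at: the changed default is one character in the diff, easy to skim past when you wrote it yourself.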

What surprised me is how much faster cross-stack work gets. Stuff that used to stall because it crossed boundaries (Swift → Obj-C → JS, or backend → frontend) becomes straightforward because the AI can reason across all of it at once.

I’m intentionally strict about “surgical edits”. I don’t let the AI rewrite files unless that’s explicitly the task. I ask for exact lines to add or change. That keeps diffs small and reviewable.

This is very different from autocomplete-style tools. Those are great for local edits, but they still keep you as the integrator across files. This approach flips that: you stay the architect and reviewer, the AI does the integration work, and a second AI sanity-checks it.

Costs me about $40/month total. The real cost is discipline: always providing context, always reviewing diffs, and never pasting code you don’t understand.

I’m sharing this because it’s been a genuine step-change for me, not a gimmick. Happy to answer questions about limits, failure modes, or where this breaks down.

Here is a wiki-type overview I put together for the developers on our team: https://community.intercoin.app/t/ai-assisted-development-playbook-how-we-ship-faster-without-breaking-things/2950

Comments

chrisjj•1h ago
But you aren't writing code. You are getting a machine to do it.
EGreg•35m ago
That could be said about compiling higher-level languages instead of rolling your own assembly and garbage collector. It's just working at a higher level. You're a lot more productive with, say, PHP than you are writing assembly.

I architect it and go through many iterations. The machine makes mistakes; when I test, I have to come back and work through the issues. I often correct the machine about stuff it doesn't know, or missed due to its training.

And ultimately I'm responsible for the code quality; I'm still in the loop all the time. But rather than writing everything by hand, following documentation, and making mistakes, I have the machine do the code generation and edits for a lot of the code. There are still mistakes that need to be corrected until everything works, but the loop is a lot faster.

For example, I was able to port our MySQL adapter to Postgres AND SQLite, something I had been putting off for years, in about 3-5 hours total, including testing, bugfixes, and massive refactoring. And it's still not in the main branch because there is more testing I want done before it's merged: https://github.com/Qbix/Platform/tree/refactor/DbQuery/platf...
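For readers who haven't done this kind of port: the core of it is pushing every backend difference (placeholder style, type names, etc.) behind one adapter boundary so queries are written once. This is a minimal Python sketch of that shape — it is not the actual Qbix code, and the class and placeholder names are invented; it assumes a neutral `%s` placeholder style that each backend translates to its own.

```python
import sqlite3

class DbAdapter:
    """Hypothetical adapter boundary: queries use a neutral %s placeholder,
    and each backend translates it to its own style."""
    placeholder = "?"

    def __init__(self, conn):
        self.conn = conn

    def execute(self, sql, params=()):
        # Translate neutral %s placeholders to this backend's style.
        return self.conn.execute(sql.replace("%s", self.placeholder), params)

class SqliteAdapter(DbAdapter):
    placeholder = "?"   # SQLite uses qmark-style parameters

# A PostgresAdapter would keep placeholder = "%s" and wrap a psycopg2
# connection instead; application code never changes.

db = SqliteAdapter(sqlite3.connect(":memory:"))
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.execute("INSERT INTO users VALUES (%s, %s)", (1, "alice"))
row = db.execute("SELECT name FROM users WHERE id = %s", (1,)).fetchone()
print(row[0])
```

The reason AI speeds this up so much is that the tedious part — finding every backend-specific SQL fragment and routing it through the boundary — is mechanical across many files, which is exactly the integration work described above.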