Over the last year or so, my development speed relative to my own baseline from ~2019 is easily 20x, sometimes more. Not because I type faster, or because I cut corners, but because I changed how I use AI.
The short version: I don’t use AI inside my editor. I use two AIs in parallel, in the browser, with full context.
Here’s the setup.
I keep two tabs open:
One AI that acts as a “builder”. It gets a lot of context and does the heavy lifting.
One AI that acts as a "reviewer". It only sees diffs and tries to find mistakes.
That’s it. No plugins, no special tooling. Just browser tabs and a terminal.
The important part is context. Instead of asking for snippets, I paste entire files or modules and explain the goal. I ask the AI to explain the approach first, including tradeoffs, before it writes code. That forces me to stay in control of architecture instead of accepting a blob I don’t understand.
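When pasting several files, it helps to label each one so the model knows where each piece lives. Something like this throwaway sketch (the format and wording are my own convention, nothing more) builds the blob that goes into the builder tab:

```python
def bundle(files: dict[str, str]) -> str:
    """files maps path -> contents; the result is pasted into the builder tab.

    Each file gets a header line, and the goal plus the "explain first"
    instruction goes at the end so the model sees everything before answering.
    """
    parts = [f"===== {name} =====\n{body}" for name, body in files.items()]
    parts.append(
        "Goal: <describe the change>. Explain your approach and its "
        "tradeoffs BEFORE writing any code."
    )
    return "\n\n".join(parts)

# Example: one hypothetical TypeScript file plus the goal prompt.
print(bundle({"api.ts": "export function getUser() {}"}))
```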
A typical flow looks like this:
1. Paste several related files (often across languages).
2. Describe the change I want and ask for an explanation of the options, having the AI read and summarize relevant concepts, Wikipedia articles, and so on.
3. Pick an approach. Have extensive conversations about trade-offs, concepts, adversarial security, and so on, and find ways to do things within what the OS allows.
4. Let the AI implement it across all files.
5. Copy the diff into the second AI and ask it to look for regressions, missing arguments, or subtle breakage.
6. Fix whatever it finds.
7. Ship.
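Step 5 amounts to wrapping the diff in a fixed instruction. A minimal sketch of that wrapper (the prompt wording is illustrative; in practice the diff comes from `git diff` and gets pasted into the reviewer tab by hand, with no tooling in the loop):

```python
def review_prompt(diff: str) -> str:
    """Wrap a diff in the kind of instruction the second, 'reviewer' AI gets."""
    return (
        "You are reviewing a diff, nothing else. Look for regressions, "
        "missing arguments, call signatures changed without updating every "
        "caller, and default values that subtly change behavior:\n\n" + diff
    )

# A hypothetical one-line diff where a timeout default changed.
example_diff = "-def fetch(url, timeout=30):\n+def fetch(url, timeout=5):\n"
print(review_prompt(example_diff))
```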
The second AI catches a lot of things I would otherwise miss when moving fast. Things like “you changed this call signature but didn’t update one caller” or “this default value subtly changed behavior”.
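A toy illustration of that second kind of catch (my own made-up example, not from any real codebase): no call site changes in the diff, yet every caller that relied on the old default silently loses behavior.

```python
# Before: retries defaulted to 3, so transient failures were absorbed.
def fetch_before(url, retries=3):
    return f"GET {url} with {retries} retries"

# After a "harmless" refactor the default became 0. Callers are unchanged
# in the diff, but their behavior is not.
def fetch_after(url, retries=0):
    return f"GET {url} with {retries} retries"

# Same call, different behavior; this is what the reviewer AI flags.
print(fetch_before("https://example.com"))
print(fetch_after("https://example.com"))
```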
What surprised me is how much faster cross-stack work gets. Stuff that used to stall because it crossed boundaries (Swift → Obj-C → JS, or backend → frontend) becomes straightforward because the AI can reason across all of it at once.
I’m intentionally strict about “surgical edits”. I don’t let the AI rewrite files unless that’s explicitly the task. I ask for exact lines to add or change. That keeps diffs small and reviewable.
This is very different from autocomplete-style tools. Those are great for local edits, but they still keep you as the integrator across files. This approach flips that: you stay the architect and reviewer, the AI does the integration work, and a second AI sanity-checks it.
Costs me about $40/month total. The real cost is discipline: always providing context, always reviewing diffs, and never pasting code you don’t understand.
I’m sharing this because it’s been a genuine step-change for me, not a gimmick. Happy to answer questions about limits, failure modes, or where this breaks down.
Here is a wiki-type overview I put together for the developers on our team: https://community.intercoin.app/t/ai-assisted-development-playbook-how-we-ship-faster-without-breaking-things/2950
chrisjj•1w ago
EGreg•1w ago
I architect it and go through many iterations. The machine makes mistakes; when I test, I have to come back and work through the issues. I often correct the machine about things it doesn't know, or missed due to its training.
And ultimately I'm responsible for the code quality; I'm still in the loop all the time. But rather than writing everything by hand, following documentation and making mistakes, I have the machine do the code generation and edits for a lot of the code. There are still mistakes to correct until everything works, but the loop is a lot faster.
For example, I was able to port our MySQL adapter to Postgres AND SQLite, something that I had been putting off for years, in about 3-5 hours total, including testing and bugfixes and massive refactoring. And it's still not in the main branch because there is more testing I want to have done before it's merged: https://github.com/Qbix/Platform/tree/refactor/DbQuery/platf...
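Part of why a port like that is fiddly: the common drivers don't even agree on SQL parameter placeholders. A minimal sketch of the kind of dialect shim involved (illustrative only, not the actual adapter code):

```python
# Placeholder styles per Python DB-API driver:
#   MySQL (mysqlclient / PyMySQL): %s
#   Postgres (psycopg2):           %s
#   SQLite (sqlite3):              ?
PLACEHOLDER = {"mysql": "%s", "postgres": "%s", "sqlite": "?"}

def render(sql_template: str, dialect: str) -> str:
    """Replace a neutral '?' marker with the dialect's placeholder."""
    return sql_template.replace("?", PLACEHOLDER[dialect])

print(render("SELECT * FROM users WHERE id = ?", "postgres"))
# SELECT * FROM users WHERE id = %s
```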
Here is my first speedrun: https://www.youtube.com/watch?v=Yg6UFyIPYNY
chrisjj•1w ago
You write the program as source code.
Prompting an LLM to cobble together lines from other people's work is not writing a program.
readthenotes1•1d ago
His language is LLM prompts. If he can check them into git and get reasonably consistent results when he runs the prompts multiple times, just like we expect from our JavaScript or C or assembly or machine code, I don't see the problem.
I knew a guy who could patch a running program by flipping switches on the front panel of a computer. He didn't argue my C language output 'is not writing a program'...
imiric•1d ago
You're joking, right? There's nothing "reasonably consistent" about LLMs. You can input the same prompt with the same context and get wildly different results every time, and that's from a single prompt. The idea that you can get anything close to consistent results across a sequence of prompts is delusional.
You can try prompt "hacks" like STRONGLY EMPHASIZING correct behaviour (or threaten to murder kittens like in the old days), but the tool will eventually disregard an instruction, and then "apologize" profusely for it.
Comparing this to what a compiler does is absurd.[1]
Sometimes it feels like users of these tools are in entirely separate universes given the wildly different perspectives we have.
[1]: Spare me the examples of obscure compiler inconsistencies. These are leagues apart in every possible way.
tpmoney•21h ago
And yet, I don't see a problem with saying directors made their movies. Sure, it was the work of a lot of talented individuals contributing collectively to produce the final product, and most of those individuals probably contributed more physical "creation" to the film than the director did. But the director is a film maker. So I wouldn't be so confident asserting that someone who coordinates and architects an application by way of various automation tools isn't still a programmer or "writing software"