But I'm negatively surprised by how much CC costs. A simple refactoring took me about 5 min plus 15 min of review and cost $4; had I done it myself it might have taken 15-20 min as well.
How much money do you typically spend on features using CC? Nobody seems to mention this
That being said, Claude Code produces the best code I've seen from an AI coding agent, and the subscriptions are a steal.
https://support.anthropic.com/en/articles/11145838-using-cla...
Could you spend that 15-20min on some other task while this one works in the background?
Due to oversupply. First you needed humans who can code. But now you need scalable compute.
The equivalent would be hiring those people to wave a flag in front of a car. They were replaced by modern cars, but you don't get to capture the flag waver's wage as value for long, if at all.
It is a project for model-based governance of Databricks Unity Catalog, with which I have quite a bit of experience, but none of the existing tooling feels flexible enough.
Eventually I ended up with three different subagents that supported the development of the actual planning files: a Databricks expert, a Pydantic expert, and a prompt expert.
The improvement to the markdown files with their aid was rather significant, ranging from catching old Pydantic versions and inconsistencies to correcting some misconceptions I had about Unity Catalog.
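In case it helps anyone reproduce this: each subagent is just a small markdown file with YAML frontmatter under .claude/agents/ in the repo. Roughly something like this (an illustrative sketch, not my exact setup; the name, description, and tool list here are made up):

```markdown
---
name: databricks-expert
description: Reviews planning docs for Databricks Unity Catalog correctness.
tools: Read, Grep, Glob
---
You are a Databricks Unity Catalog expert. Review the referenced planning
markdown for factual errors about catalogs, schemas, grants, and lineage,
and flag anything inconsistent with how Unity Catalog actually behaves.
```

The Pydantic and prompt experts follow the same pattern, just with different system prompts.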
Yesterday evening I gave it a run; it ran for about 2 hours with me only approving some tool usage, and after that most of the tools + tests were done.
This approach is so different from how I used to do it, but I really do see a future in detailed technical writing and making sure we're all on the same page. In a way I found it more productive than going into the code itself. A downside I found is that when reading and working on code I really zone in; with a bunch of markdown docs I find it harder to stay focused.
Curious times!
Waterfall ~1970, Agile ~2001, Continuous (DevOps) ~2015, Autonomous Dev ~2030, Self-Evolving Systems ~2040, Goal-Directed Ecosystems ~2050+
I plan to do a more detailed write-up sometime next week or the week after, when I've "finished" my 100% vibe-coded website.
This kind of AI-driven development feels very similar to that. By forcing you to sit down and map the territory you're planning to build in, the coding itself becomes secondary, just boilerplate to implement the design decision you've made. And AI is great at boilerplate!
To me it's always felt like waterfall in disguise and just didn't fit how I make programs. I feel it's just not a good way to build a complex system with unknown unknowns.
That the AI design process seems to rely on this same pattern feels off to me, and shows a weakness of developing this way.
It might not matter, admittedly. It could be that the flexibility of having the AI rearchitect a significant chunk of code on the fly works as a replacement to the flexibility of designing as you go.
It's a total waste of time to do TDD only to find out you made a bad design choice or discovered a conflicting problem.
Maybe it should be done that way with AI: experiment with AI if you need to, then write a plan with AI, then let the AI do the implementation.
It's interesting because I remember having discussions with a colleague who was a fervent proponent of TDD where he said that with that approach you "just let the tests drive you" and "don't need to sit down and design your system first" (which I found a terrible idea).
(I've certainly seen it done, though, with predictable results.)
Here is my playbook: https://nocodo.com/playbook/
Assumptions without evaluation are not trustworthy.
Need sales forecasting? Ten years ago this was an enterprise feature that would have needed a large team to implement correctly. Claude implements it as a Docker container in one afternoon.
It really changes how I see software now. Before, there were NDAs and intellectual property, and companies took great care not to leak their source code.
Now things have changed. Have a complex ERP system that took 20 years to develop? Well, Claude can re-implement it in a flash, and write documentation and tests for it. Maybe it doesn't work quite that well yet, but things are moving fast.
But since then I have come to always have it write ARCHITECTURE.md, IMPLEMENTATION.md, and CLAUDE-CONTINUE.md when doing a new feature. All three live in the respective folder of the feature (in my case it's often a new crate or a new module, as I write Rust).
The architecture one is usually the result of some back and forth with CC, much like the author describes. Once that is nailed down, it writes that out along with the implementation doc. These are not static, of course; they may get updated during the process, but the longer you spend discussing with CC and thinking about what you're doing, the less likely that is necessary. Really no surprise there -- this works the same way in meatspace. :]
I have an instruction in CLAUDE.md that it should update CLAUDE-CONTINUE.md with the current status, referencing both the other documents, when the context is 2% away from getting compacted.
After the compaction it reads the respective CLAUDE-CONTINUE.md (automatically, since it's referenced in CLAUDE.md) and then usually continues as if nothing happened. Without this my mileage varied, as it often needs to read a lot of code again first and re-establish which parts of the architecture and implementation it had already done before the compaction.
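To give an idea, the instruction is roughly along these lines (a paraphrased sketch, not my literal CLAUDE.md; the heading and wording are made up):

```markdown
## Session continuity
- When the context is about 2% away from auto-compaction, update the current
  feature's CLAUDE-CONTINUE.md with: what is done, what is in progress, open
  questions, and pointers to that feature's ARCHITECTURE.md and IMPLEMENTATION.md.
- After a compaction, read the feature's CLAUDE-CONTINUE.md before touching
  any code.
```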
I often also have it write out stuff that is needed in dependencies that I maintain or that are part of the project. It then creates ARCHITECTURE-<feature>-<crate>.md, which I just copy over to the respective repo, where I tell another CC instance to write the implementation document and send it off.
A lot of stuff I do is done via Terry [1] and this approach has worked a treat for me. Shout out to these guys, they rock.
Edit: P.S. I have 30+ years of R&D experience in my field, so I have a deep understanding of what I do (computer graphics and systems programming, mostly). I have quite a few friends with a decade or less of R&D experience, and they struggle to get the same amount of shit done with CC or AI.
The models are not there yet; you need the experience. I also mainly formulate concisely what I want and what the API should look like, and then go back and forth with CC, rather than starting with a fuzzy few sentences and crossing my fingers that what it comes up with is something I may like and can then mold a tad.
I also found that not getting weird bugs that the model may chase for several "loops" seems correlated with the amount of statically typed code. I.e., I've recently been working on a Python code base that interfaces with Rust, and the number of times CC shot itself in the foot because it assumed a foo was a [foo], and stuff like that, is just astounding. This obviously doesn't happen in Rust: the language/compiler catches it, and the model 'knows' it can't get away with it, so it seems to exercise more rigor (but I may be 'hallucinating' that).
TL;DR: I came to the conclusion that statically typed languages get you higher returns with these models for this reason.
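To make the foo vs [foo] point concrete, here is a minimal sketch (made-up names, not my actual code) of the kind of slip that Python only surfaces at runtime, if at all, while rustc rejects it on the spot:

```rust
// Hypothetical example: passing a single value where a slice is expected.
struct Foo {
    id: u32,
}

// Expects a slice of Foo, not a single Foo.
fn total(foos: &[Foo]) -> u32 {
    foos.iter().map(|f| f.id).sum()
}

fn main() {
    let foo = Foo { id: 7 };
    // total(&foo); // error[E0308]: mismatched types (expected `&[Foo]`, found `&Foo`)
    let foos = vec![foo];
    println!("{}", total(&foos)); // prints 7
}
```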
With OpenAI I find ChatGPT just slows to a crawl and the chat becomes unresponsive. Asking it to make a document, to import into a new chat, helps with that.
On a human level, it makes me think that we should do the same ourselves. Reflect, document and dump our ‘memory’ into a working design doc. To free up ourselves, as well as our LLMs.
For anything bigger than a small feature, I always think about what I do and why I do things. Sometimes in my head, sometimes on paper, a Confluence page, or a whiteboard.
I don't really get it. 80% of software engineering is figuring out what you need and how to achieve it. You check with the stakeholders, write down what you want to do and WHY you want to do it. You do some research.
The last 20% of the process is coding.
This was always the process. You don't need AI for proper planning and defining your goals.
And of course, for most things there's a pretty obvious way it's probably going to work, so there's no need to spend much time on that.
In almost all these cases, the development process is a mix of coding and discovering, updating the mental model of the code as you go. It almost never starts with docs, a spec, or tests. Some projects are a good fit for TDD, but some don't even need it.
And even for these use cases, AI coding agents change the game. Now it really does matter to first describe the idea, put it into a spec, and verbalize everything in your head that you think will matter for the project.
Nowadays, the hottest programming language is English, indeed.
Doing it for the LLM really highlights that limitation. They aren't trained statefully, not at the foundation-model level, where it matters. That state gets reproduced on top of the model in the form of "reasoning" and "chain of thought", but that level of scaffolding is a classic example of the bitter lesson, like the semantic trees of old.
The representation-learning + transformer model needs to evolve to handle state; then it should be able to do these things itself.
You can get an extra 15-20% out of it if you also document the parts of the codebase you expect to change first. Let the plan model document how they work, their architecture and patterns. Then plan your feature with this in the context. You'll get better code out of it.
Also, make sure you review, revise and/or hand edit the docs and plans too. That pays significant dividends down the line.
So: I have Gemini write up plans for something, having it go deep and be as explicit as possible in its explanations.
I feed this into CC and have it implement the change in my code base. This has been really strong for me in making new features or expanding upon others where I feel something should be considerably improved.
The product I've built from the ground up over the last 8 weeks is now in production and being actively demoed to clients. I am beyond thrilled with my experience and its output. As I've mentioned before on HN, we could have done much of this work ourselves with our existing staff, but we could not have done the front-end work. What I feel might have taken well over a year and way more engineering and data science effort was mostly done in 2 months. Features are added in seconds rather than hours.
I’m amazed by CC and I love reading these articles which help me to realize my own journey is being mirrored by others.
ticoombs•4h ago
My usage is nearly the same as OP's. Plan, plan, plan, save it as a file, then start a new context and let it rip.
That's the one thing I'd love: a good CLI (currently using Charm and CC) which allows me to have an implementation model, a plan model, and (possibly) a model per subagent. Mainly so I can save money by using local models for implementation and online models for plans or generation, or even swapping back. Charm has been the closest I've used so far, allowing me to swap back and forth and not lose context. But the parallel subagent feature is probably one of the best things Claude Code has.
(Yes, I'm aware of CCR, but I could never get it to use more than the default model, so :shrug:)
NitpickLawyer•3h ago
This is the downside of living in a world of tweets, hot takes and content generation for the sake of views. Prompt engineering was always important, because GIGO has always been a ground truth in any ML project.
This is also why I encourage all my colleagues and friends to try these tools out from time to time. New capabilities become apparent only when you try them out. What didn't work 6 months ago has a very good chance of working today. But you need a "feel" for what works and what doesn't.
I also place much more value on examples, blogs, and gists that show a positive instead of a negative. Yes, they can't count the r's in strawberry, but I don't need that! I don't care that the models get simple arithmetic wrong. I need them to follow tasks, improve workflows, and help me.
Prompt engineering was always about getting the "google-fu" of 10-15 years ago rolling, and then keeping up with what's changed, what works and what doesn't.
BiteCode_dev•3h ago
They are well documented because you need context for the LLM to be performant. And they are well tested because the cost of producing tests has gone down, since they can be half-generated, while the benefit of having tests has gone up, since they act as guard rails for the machine.
People constantly say code quality is going to plummet because of those tools, but I think the exact opposite is going to happen.
baq•2h ago
IOW, there's very clear ROI on docs today; it wasn't so earlier.