Hi HN,
I wrote this post after an experiment over the holidays. At work, I had Claude write a cross-stack feature touching our database, cloud infra, mobile app, and the embedded software running on our hardware devices. What would usually take me a week to write took an afternoon to generate.
But it still took weeks to test and merge.
The takeaway for me was that for teams operating with legacy debt, or teams where verification requires physical interaction (you can't throw prompt engineering at a hardware test bench), AI doesn't solve the bottleneck; it just shifts it. We are making code generation incredibly cheap, but the cost of verification and code review isn't shrinking, and the burden is falling on our most senior engineers.
I’d be curious to hear how others managing complex or non-standard codebases are adapting their CI/CD and review processes to keep up with the volume of AI-generated code.