This is the opinion of someone who has not tried Claude Code in a brand-new project with full permissions enabled and with a model from the last three months.
There are a lot of engineers who will refuse to wake up to the revolution happening in front of them.
I get it. The denialism is a deeply human response.
On the flip side, anyone who believes you can create quality products with these tools without actually working hard is also deluded. My productivity is insane and what I can create in a long coding session is incredible, but I am working hard the whole time: reviewing outputs, devising GOOD integration/e2e tests that actually exercise the system, manually testing throughout, and keeping my eyes open for stereotypically bad model behaviors like adding silent fallbacks or deleting code to satisfy some objective.
It's actually downright a pain in the ass and a very unpleasant experience working this way. I remember the sheer flow state I used to get into when doing deep programming, so immersed in managing the state and modeling the system. The current way of programming with the models doesn't seem to give me that, so there are aspects of how I have programmed my whole life that I dearly miss. Hours used to fly past without me being any the wiser thanks to flow. That's no longer the case most of the time.
Must be nice. Claude and Codex are still a waste of my time in complex legacy codebases.
Don't get me wrong, I do think AI coding is pretty dangerous for those without the right expertise to harness it with the right guardrails, and I'm really worried about what it will mean for open source and SWE hiring, but I do think refusing to use AI at this point is a bit like the assembly programmer saying they'll never learn C.
https://xcancel.com/hamptonism/status/2019434933178306971
And all that after stealing everyone's output.
By the time you do everything outlined here you’ve basically recreated waterfall and lost all speed advantage. Might as well write the code yourself and just use AI as first-pass peer review on the code you’ve written.
A lot of the things the writer points out also feel like safeguards against the pitfalls of older models.
I do agree with their 12th point. The smaller your task the easier to verify that the model hasn’t lost the plot. It’s better to go fast with smaller updates that can be validated, and the combination of those small updates gives you your final result. That is still agile without going full “specifications document” waterfall.
“Break things down” is something most of us do instinctively now but it’s something I see less experienced people fail at all the time.
The suggestions you make are all sensible but maybe a little bit generic and obvious. Asking ChatGPT to generate advice on effectively writing quality code with AI generates a lot of similar suggestions (albeit less well written).
If this was written with help of AI, I'd personally appreciate a small notice above the blog post. If not, I'd suggest to augment the post with practical examples or anecdotal experience. At the moment, the target group seems to be novice programmers rather than the typical HN reader.
I wrote this text myself, except for two or three sentences that I iterated on with an LLM to nail down flow and readability. I would consider that completely written by me.
> The suggestions you make are all sensible but maybe a little bit generic and obvious. Asking ChatGPT to generate advice on effectively writing quality code with AI generates a lot of similar suggestions (albeit less well written).
Before I wrote this text, I also asked Gemini Deep Research, but for me the results were too technical and not as structural or high-level as what I describe here. Hence the blog post, to share what I have found works best.
> If not, I'd suggest to augment the post with practical examples or anecdotal experience. At the moment, the target group seems to be novice programmers rather than the typical HN reader.
I have pondered the idea and even wrote up a few anecdotal experiences, but I deleted them again because I think it is hard to strike the right balance, and it is also highly dependent on the project, which makes examples a bit useless.
I also kind of like the short and lean nature it has kept over the last few days while I worked on the blog post. I might write a few more posts that expand on some of the points.
Thank you for your feedback!
1. Keep things small and review everything the AI writes, or 2. keep things bloated and let the AI do whatever it wants within the designated interface.
Initially I drew this line for API services / UI components, but it later expanded to other domains. E.g. for my hobby Rust project I try to keep "trait"s single-responsibility, non-overlapping, and easy to understand, but I never look at the AI-generated "impl"s as long as they pass some sensible tests and conform to the traits.
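A minimal sketch of what I mean, with made-up names (the real traits are more involved): the trait and the test are the parts I write and review by hand, and the impl is the part I let the model fill in and mostly never read.

    use std::collections::HashMap;

    /// Hand-written, single-responsibility trait: the part I review carefully.
    pub trait RateLimiter {
        /// Returns true if the caller identified by `key` may proceed right now.
        fn allow(&mut self, key: &str) -> bool;
    }

    /// The impl below is the part the model writes and I mostly don't read,
    /// as long as it compiles and the tests at the bottom pass.
    pub struct FixedWindow {
        limit: u32,
        counts: HashMap<String, u32>,
    }

    impl FixedWindow {
        pub fn new(limit: u32) -> Self {
            Self { limit, counts: HashMap::new() }
        }
    }

    impl RateLimiter for FixedWindow {
        fn allow(&mut self, key: &str) -> bool {
            let count = self.counts.entry(key.to_string()).or_insert(0);
            if *count < self.limit {
                *count += 1;
                true
            } else {
                false
            }
        }
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        /// The test exercises the trait contract, not the impl internals.
        #[test]
        fn allows_up_to_limit_then_refuses() {
            let mut rl = FixedWindow::new(2);
            assert!(rl.allow("alice"));
            assert!(rl.allow("alice"));
            assert!(!rl.allow("alice"));
            assert!(rl.allow("bob")); // a different key still has its own budget
        }
    }

As long as the tests are green and the impl satisfies the trait, I don't much care how the body looks.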
TL;DR: I don't mind reading Rust, I hate writing it, and the compiler meets me in the middle.
I find Rust generally easier to reason about, but I can't stand writing it.
The compiler works well with LLMs, and there's plenty of good tooling and LSP support.
I usually write the function signatures / module APIs myself. If I'm happy with the shape of the code and the compiler is happy that it compiles, the remaining errors, if any, are logical ones I should catch in review.
So I focus on function, the compiler focuses on correctness, and the LLM just does the actual writing.
I've always advocated for using a linter and consistent formatting. But now I'm not so sure. What's the point? If nobody is going to bother reading the code anymore, I feel like linting doesn't matter. I think in 10 years a software application will be very obfuscated implementation code plus thousands of solidly documented test cases, and, much like compiled code, how the underlying implementation looks or is organized won't really matter.
OptionOfT•1h ago
A lot of how I form my thoughts is driven by writing code, and seeing it on screen, running into its limitations.
Maybe it's the kind of work I'm doing, or maybe I just suck, but the code to me is a forcing mechanism for ironing out the details, and I don't get that when I'm writing a specification.
discreteevent•1h ago
We vibe around a lot in our heads and that's great. But it's really refreshing, every so often, to be where the rubber meets the road.
jeppester•1h ago
I think you have every right to doubt those telling us that they run 5 agents to generate a new SaaS product while sipping a latte in a bar. To work like that, I believe you'll have to let go of really digging into the code, which in my experience is needed if you want good quality.
Yet I think coding agents can be quite a useful help for some of the trivial, but time consuming chores.
For instance I find them quite good at writing tests. I still have to tweak the tests and make sure that they do as they say, but overall the process is faster IMO.
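A made-up example of what that tweaking looks like: the generated test's name often promises more than the body checks, so I tighten the assertion until it actually does what it says.

    // Hypothetical snippet: the model named the test after input rejection,
    // but the body it wrote only exercised the happy path.
    fn parse_duration_secs(s: &str) -> Result<u64, std::num::ParseIntError> {
        s.trim().parse::<u64>()
    }

    #[test]
    fn parse_rejects_garbage_input() {
        // What the model gave me (passes, but proves nothing about rejection):
        // assert!(parse_duration_secs("42").is_ok());

        // What the name actually promises:
        assert!(parse_duration_secs("not a number").is_err());
    }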
They are also quite good at brute-forcing some issue with a certain configuration in a dark corner of your Android manifest. Just know that they WILL find a "solution" even if there is none, so keep them on a leash!
Today I used Claude to bring a project I abandoned 5 years ago up to speed. It's still a work in progress, but the task seemed insurmountable (in my limited spare time) without AI; now it feels like I'm halfway there after 2-3 hours.
agumonkey•1h ago
Outsourcing this to an LLM is similar to an airplane stall... I just dip mentally. The stress goes away too, since I assume the LLM will get rid of the "problem", but I have no more incentive to think, create, or solve anything.
Still blows my mind how differently people approach some fields. I see people at work who are drooling over being able to have code made for them... but I'm not in that group.
acedTrex•43m ago
I think we realistically have a few years of runway left though. Adoption is always slow outside of the far right of the bell curve.
gtowey•41m ago
But it's also likely that these tools will produce mountains of unmaintainable code and people will get buried by the technical debt. It kind of strikes me as similar to the hubris of calling the Titanic "unsinkable." It's an untested claim with potentially disastrous consequences.
rapind•20m ago
It's not just likely, it's guaranteed to happen if you're not keeping an eye on it. So much so that it's really reinforced my existing prejudice toward typed and compiled languages, to reduce some of the checking you need to do.
Using an agent with a dynamic language feels very YOLO to me. I guess you can somewhat compensate with reams of tests, though (which raises the question: is the dynamic language still saving you time?).
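A tiny illustration of what I mean, with hypothetical names: encode the distinction in the types, and an agent-written call site that mixes things up fails to compile instead of needing a test to catch it.

    // Newtype wrappers instead of bare u64s: swapping the arguments is a
    // compile error rather than a runtime bug you hope a test catches.
    struct UserId(u64);
    struct OrderId(u64);

    fn cancel_order(user: UserId, order: OrderId) {
        println!("user {} cancels order {}", user.0, order.0);
    }

    fn main() {
        cancel_order(UserId(7), OrderId(42));
        // cancel_order(OrderId(42), UserId(7)); // rejected by the compiler
    }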
chasd00•53m ago
I also use these things to just plan out an approach. You can use plan mode for yourself to get an idea of the steps required and then ask the agent to write it to a file. Pull up the file and then go do it yourself.
wasmainiac•52m ago
Sometimes the AI does weird stuff too. I wrote a texture projection for a nonstandard geometric primitive; the projection used some math that was valid only for local regions… long story. Claude kept wanting to rewrite the function to what it thought was correct (it was not), even when I directed it to unrelated tasks. Super annoying. I ended up wrapping the function in comments telling it to f#=% off before it would leave it alone.
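For what it's worth, the wrapper ended up looking roughly like this (paraphrased into a sketch with placeholder math, and minus the profanity):

    // AI / Claude: DO NOT "fix" or rewrite this function.
    // The math below is intentionally only valid for local regions of the
    // primitive; a globally "correct" projection is NOT what we want here.
    fn project_local(u: f32, v: f32) -> (f32, f32) {
        // placeholder for the real local-region-only math
        (0.5 * u + 0.1 * v, 0.5 * v - 0.1 * u)
    }
    // AI / Claude: end of protected region. Leave everything above untouched.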
the_duke•30m ago
With AI, the correct approach is to think more like a software architect.
Learning to plan things out in your head upfront, instead of figuring things out while coding, requires a mindset shift, but it is important for working effectively with the new tools.
To some this comes naturally, for others it is very hard.
skydhash•18m ago
The same kind of planning you're describing can and does happen sans LLM, usually on the sofa or in front of a whiteboard, or by reading some research materials. No good programmer rushes to coding without a clear objective.
But the map is not the territory. A lot of questions surface during coding. LLMs will guess and the result may be correct according to the plan, but technically poor, unreliable, or downright insecure.
rapind•23m ago
How I see it is that we've reverted to a heavier, spec-type approach, but the turnaround time with agents is so fast that it can still feel very iterative, simply because the cost of bailing on an approach is so minimal. I treat the spec (and tests when applicable) as the real work now. I front-load as much as I can into the spec, but I also iterate constantly. I often completely bail on a feature, or on the overall approach to a feature, as I discover (with the agent) that I'm just not happy with the gotchas that come to light.
AI agents to me are a tool. An accelerator. I think there are people who've figured out a more vibey approach that works for them, but for now at least, my approach is to review and think about everything we're producing, which forms my thoughts as we go.