I won't regurgitate the common sentiment that it's like pair programming with a junior-engineer savant with an oddly spiky skill profile. I find my emotions getting activated too often by having to recontextualize, rein in, re-orient, or repeat.
If I wrote code that doesn't work, that's on me. If I have to read code that doesn't work, I didn't write it; its veracity and accuracy can only be uncovered by me, yet it's already being treated as gospel by my AI peer. And the LLM keeps asserting issues with "my" code, code I don't yet feel ownership over because I haven't fully ingested it, which is supremely tiring. The whiplash feels almost like an abusive relationship.
I want to get back to feeling accomplished once finished with a coding session, not like I just went through a Kafkaesque wringer.
I agree. The way I put it is that it feels like programming has been turned into the job of an editor, when it used to be that of an author. Of course, editing was always a huge part of programming, probably the biggest part, but I always kinda felt like reviewing and editing was more the "Brussels sprouts" part of the job and authoring the "ice cream". I was fine to eat my Brussels sprouts if I got to have some scoops of ice cream sprinkled throughout, but now it feels like just endless plates of Brussels sprouts.
So it's like being an architect, but one who has to triple-check every nail, bolt, and screw because otherwise the building will probably collapse.
If (a) isn't true for you, then you're operating in a pathological environment, context, organization, etc., which isn't representative of any kind of broader industry experience.
And (b) is usually clear to anyone who's, for example, taken a university course, and compared the efficacy of manual note-taking vs. (say) automatic transcriptions from lectures. The process of parsing the lecture through your brain and into notes that you write yourself, turns out to be essential to information retention and conceptual understanding.
But so much of the software we need to solve various problems is not mission-critical. Like tooling. Or any low-stakes software that can be easily replaced. Or some script that makes your life a little bit easier. Or maybe even a small GUI app that helps you compose some specific configuration for that other software. Tools where the output is easy to verify, can stand on its own, and can't be used by an attacker to exploit your systems.
If you put a lot of effort into tooling that you can throw away the moment better tooling appears, you can keep the mission-critical software leaner.
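To make the "easy to verify" point concrete, here is a minimal sketch of the kind of throwaway tool meant here (all names and fields are hypothetical): a Python script that composes a small JSON config and immediately round-trips it, so correctness is checked at the point of use rather than trusted.

    #!/usr/bin/env python3
    # Hypothetical throwaway tool: compose a JSON config and verify it by
    # round-tripping it. Low stakes: if it ever misbehaves, delete it;
    # nothing mission-critical depends on it.
    import json
    import sys

    def compose_config(host: str, port: int, debug: bool = False) -> dict:
        # Build the config dict; names and fields are illustrative only.
        return {"server": {"host": host, "port": port}, "debug": debug}

    def main() -> None:
        cfg = compose_config("localhost", 8080)
        text = json.dumps(cfg, indent=2)
        # Verification is trivial: parse it back and compare.
        assert json.loads(text) == cfg, "round-trip failed"
        sys.stdout.write(text + "\n")

    if __name__ == "__main__":
        main()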
> actually important for engineers to do in order to understand the system
It is one tool for that, and an important one. But it is not the be-all and end-all tool for understanding or trusting code. If that were the case, you'd have to rewrite all the code you've ever had to maintain. You rely on stacks of software all the time that you did not write.
AI indeed makes mistakes, but so do humans, so we have to validate the code we use with multiple such tools.
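One hedged sketch of what "validate with multiple such tools" can look like without reading every line, whether the author was a human, an LLM, or an upstream dependency: pin the behavior you actually rely on in a characterization test. The slugify function below is just a stand-in for any code you did not write.

    # Characterization test: instead of auditing the implementation, pin the
    # observed behavior you depend on. slugify() here stands in for any code
    # you didn't write (a dependency, or LLM output).
    import re

    def slugify(title: str) -> str:  # stand-in for third-party/generated code
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    def test_slugify_behavior_we_rely_on():
        # These cases encode *our* expectations; if an upgrade or a
        # regeneration changes them, the test fails and we re-review.
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  already-sluggy  ") == "already-sluggy"
        assert slugify("") == ""

    if __name__ == "__main__":
        test_slugify_behavior_we_rely_on()
        print("behavioral contract holds")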
This is in some sense a didactic assertion: if it's not the case, then your engineering team isn't providing any value beyond what a bash script, gluing together JIRA tickets and GitHub PRs via the LLM du jour, could do autonomously.
That being said, there are lots of Brussels sprouts in authoring sometimes: the boilerplate and rote stuff we've done a billion times. You can make Claude eat those Brussels sprouts, which is nice.
Also, very nice analogy, but don't besmirch the good Brussels sprout name. I think in real life, cooked properly, I might even prefer them to ice cream.
Good-tasting Brussels sprouts are a very recent invention. They used to be a lot more bitter, but a new cultivar was engineered in the 1990s that all but eliminated the bitter notes. Over the next 30 years, pretty much everybody switched over, so now Brussels sprouts are just drastically better than they were when they got the reputation for being gross.
It's less about code generation for me and more about opportunities to learn new things. It could be as simple as having it optimize my code in a way I hadn't even thought of, or it could be me wanting to learn a new complex architectural pattern I haven't had the time to deep-dive into but have wanted to. Now I can spin something up and have a base understanding of it in minutes. That's exciting to me more than anything else. In a way, it takes me back to way earlier in my career when every day felt like I was learning something new and cool on the job from more senior devs. I think as you get more senior and experienced, those "cool" learning moments start to happen a little less, so having AI be able to reignite that is exciting in a lot of ways.
When I had a small passive circuit in my head that I wanted to one-shot solder on a protoboard and didn't want to get bogged down in KiCAD, talking through it with an AI and repeatedly correcting its understanding really solidified my own. It's like mentoring while being tutored.
So I still see value in using them for smaller novel projects, but using LLMs to hit a deadline for production is not something I want to do any longer, for the time being.
The happy medium for me is either giving it very defined small tasks that I can review (think small edits or refactors over a few files), or babysitting it and reviewing every change it's about to make. It doesn't always pan out, but it's basically a super-powered auto-complete, and I can "steer" it in the direction I want. I can validate chunks much faster, and it's typically of good quality.
Or I make mental notes of what I want it to fix after the first pass and run it through again with the edits. In that way it does feel more collaborative and less like reviewing some unknown piece of code from someone else.
So far, the AI revolution has only given me more work. People have come at me with applications that are 80% done (really more like 50%) that they "vibe coded" and need an actual programmer to finish. These apps would most likely just be a spark in someone's imagination pre-AI.
In a way, this is a positive. I can't say it's been more fun though.
1. Write detailed prompts.
2. Don't trust generated code output: it must be thoroughly reviewed, and feedback should be given back to the LLM (a sketch of one such check follows this list).
3. AIs are very helpful for those learning new architectural patterns.
4. Try different tools.
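For point 2, here is a minimal sketch (the function names and the reference implementation are hypothetical) of one mechanical check that can run before a human even reads the diff: spot-check the generated function against a slow-but-trusted reference on many random inputs.

    # Hypothetical harness for point 2: before reviewing generated code line
    # by line, spot-check it against a trusted reference implementation.
    import random

    def reference_median(xs: list[float]) -> float:
        # Trusted but naive reference we already believe in.
        s = sorted(xs)
        n = len(s)
        mid = n // 2
        return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

    def generated_median(xs: list[float]) -> float:
        # Stand-in for the LLM output under review.
        s = sorted(xs)
        n = len(s)
        return (s[(n - 1) // 2] + s[n // 2]) / 2

    def spot_check(trials: int = 1000) -> None:
        rng = random.Random(42)  # deterministic, so failures are reproducible
        for _ in range(trials):
            xs = [rng.uniform(-1e6, 1e6) for _ in range(rng.randint(1, 50))]
            assert generated_median(xs) == reference_median(xs), xs

    if __name__ == "__main__":
        spot_check()
        print("generated code matches reference on 1000 random inputs")

The deterministic seed keeps failures reproducible, so any mismatching input can be handed straight back to the LLM as feedback.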
If you really want to learn about using AI to write code (including the pros and cons), I think this post, https://news.ycombinator.com/item?id=44159166, is excellent. Note the author of the Cloudflare OAuth component, kentonv, makes a lot of good, insightful comments in that thread IMO.
I'm planning on writing more that dives a bit deeper into the experimentation I've done with tooling, MCP servers, etc., which may be a bit more intriguing to those who have already dived into the AI side of things.
> But with AI, I’m producing more than ever before, with a level of speed, quality, and understanding I’ve never experienced. In many ways, it’s 10x’d me as an engineer.
> The best way I can explain my mindset is that I see AI as a multiplier of myself, not just a tool that does work for me on command
> But more often than not, it’s a sign that the prompt wasn’t clear or specific enough. With better framing, you can guide the model toward a much more useful and accurate response
Etc.
I long for an essay like Seven habits of effective text editing[0] by Bram Moolenaar with proper arguments made about AI Coding.
worldsayshi•21h ago
We fret about AI adding a bunch of bad code, but I see that there are clear methods for avoiding or mitigating this. (Just make sure to test and otherwise verify that the code does the right thing, make it transparent and easy to replace, etc.)
Sure, it can be used to add a bunch of technical debt, but wielded right it can just as well be used to cut down the debt.
dasil003•21h ago
Sure, investors and CEOs want to reduce software engineering costs, but at the end of the day software is built to serve human needs, and only humans can reason and make a judgment call about whether software systems are working well or not. Because software is so precise and deterministic, there will always need to be someone who thinks like a programmer to tell the AI what to do with sufficient precision to be useful. Sure, I can imagine AGI could at some point invalidate that thinking, but I believe we are very far from that point, if it's even possible. And even if we do reach that point, we'll need massive social change or the pitchforks will be coming out from many directions.
platevoltage•18h ago
I don't remember the WYSIWYG editor companies bragging about eliminating jobs by making web development more accessible.
I don't remember No-Code platforms bragging about eliminating jobs by making it easier to build your own website for your business.
I don't remember Arduino bragging about eliminating jobs by making embedded programming more accessible.
I'm not worried too much about my "job" being eliminated; I just have a really hard time giving money to people who want me to end up underneath an Oakland overpass in a tent.