I only commit code that is roughly the same as I would have written anyway.
It feels as good for developer ergonomics as the move away from CRT monitors.
I want to be better month after month, I want to be able to discover new areas.
Using AI tools makes sense to me. It’s important not to believe everything the hype men are telling you on Twitter, but it would also be a mistake to believe there is nothing valuable in this technology.
I kind of think CRT monitors were better for developer ergonomics than LCDs, because of the tendency to push modern monitors far back on the desk, which makes you lean forward to see them. CRTs forced you to sit with better posture.
You want a delivery service that takes 2 days instead of 30 minutes to bring you pizza, so that you don't forget how to ride your horse..?
But yes this is a very extreme position.
It’s about the same for AI coding, I just get better results.
I have no idea what the frontier will look like in a few years, but I don’t doubt local models like Qwen will still be a staple of my workflows.
And for what it’s worth, there are people out there who lose the ability to saw because a safety brake totals their blade, which then needs to be replaced for something like $100. Sometimes we pay extra for features we value. We can always pull out the hand tools if we have to. In the meantime, make hay I guess.
Another analog is using power tools to make jigs for hand tools. I’m constantly rigging up test or data wrangling harnesses to improve my ability to verify and refine solutions. It’s so ridiculously useful for improving outputs, even if it isn’t writing the code that makes it to production.
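A minimal sketch of that kind of jig, in Python. Everything here is illustrative, not the commenter's actual harness: a slow but obviously-correct reference implementation, a fast candidate (the kind an LLM might hand you), and a fuzzer that hammers both with random inputs and compares answers. The jig itself never ships; it just raises your confidence in what does.

```python
import random

def interval_overlap_naive(a, b):
    """Reference: O(n*m) check whether any interval in a overlaps any in b."""
    return any(x0 < y1 and y0 < x1 for x0, x1 in a for y0, y1 in b)

def interval_overlap_fast(a, b):
    """Candidate: sort both lists, then sweep, advancing whichever ends first."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    while i < len(a) and j < len(b):
        x0, x1 = a[i]
        y0, y1 = b[j]
        if x0 < y1 and y0 < x1:
            return True
        if x1 <= y1:
            i += 1
        else:
            j += 1
    return False

def fuzz(trials=1000):
    """The jig: compare candidate vs. reference on random interval sets."""
    for _ in range(trials):
        a = [(lo, lo + random.randint(1, 5)) for lo in random.sample(range(50), 5)]
        b = [(lo, lo + random.randint(1, 5)) for lo in random.sample(range(50), 5)]
        assert interval_overlap_fast(a, b) == interval_overlap_naive(a, b), (a, b)

fuzz()
```

When the fuzzer finds a mismatch, the assertion prints the offending inputs, which you can feed straight back into the next refinement round.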
Because that's what every AI usage I've experienced has been.
Faster, yes. Useful, yes. Not better "finish".
With AI coding we're talking about people producing abstract artifacts that most people do not understand and do not know how to test. These aren't just strips of board. They are little machines. So you shouldn't be asking whether you'd trust a table saw to cut your boards, you should be asking whether you'd trust someone who has never cut boards to build your table saw.
Not the hill I would die on.
And everyone having a calculator in school since grade 4 hasn't made everyone an accountant.
But to be fair, no one has ever experienced change as fast as our profession has.
When it comes to employment and other people paying you to code, though, not using AI is increasingly a non-starter for most of us.
- Vendors get to know everything about you
- Chips are becoming more politicized; I fear artificial scarcity, as with housing, will be imposed on chips, driving up prices.
- It causes a lot of centralisation. No, I cannot run DeepSeek at home. I don't have $100,000+ lying around; 1TB of VRAM is not chump change.
- It can be a threat to the flourishing of open source. There is no longer a reason for me to work with other devs to build something in public together. I just have the LLM write what I need. It isolates.
These are the only drawbacks. Everything else is clearly the artisans' ego getting in the way. That being said, if a piece of code is critical infra on which many other things hinge, I will still hand code it.
> The thing is that even if I was wrong (I'm not) and AI was somehow helpful for software engineering (it isn't), I still wouldn't want to use it.
So even if you were wrong on the facts (you are) you still wouldn't change your mind? In other words, you're unreasonable and know you're unreasonable and think that's totally fine?
Well, cool. Next time, lead with that.
When doing agentic development, you need to be in control, at least for now. Every frontier model will still do incredibly stupid stuff, and if you let it cook unchallenged, you'll have a codebase that doesn't scale. Claude will happily keep piling turds upon your tower of turds, but at some point, even an LLM will have a hard time working in it.
When you are at the wheel, the engineering hasn't changed. You're still solving all the same problems, but you can iterate a lot faster. Code is now ~free, and the cost of having a bad idea is now much cheaper, because you can quite literally speak the solution out loud and fix it in a few minutes.
Your craft can be typing out code on a keyboard; or it can be building things in the best possible way with the best available tools.
People used to drive manual. Now it’s nearly all automatic transmission. Some cars even drive themselves.
People used to proudly use Vi to write code. But now IDEs are commonplace.
People used to write asm by hand. Transport Tycoon was written in assembly. But these days that would be insane.
Technological progress is an absolute thing. It produces too much convenience and wealth to ignore.
At one point the author writes
> AI is a tool that can only produce software liabilities
which I would argue is entirely caused by misuse of AI. Sure, you can have AI write a ton of code that often comes with subtle bugs. But using AI doesn't mean it has to write any code for you at all. I've been using LLMs for security analysis, and the results are quite good: vulnerabilities we had collectively missed were surfaced, and we could fix them ourselves.
In this case, instead of creating liabilities, we were able to use an LLM to get more information about our code. It's entirely possible we could have deduced this information on our own, but we didn't, and an LLM can do it much more quickly than a human.
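One way to run this kind of analysis-only workflow is to constrain the prompt so the model reports findings but never generates replacement code. A minimal Python sketch; the function name and prompt wording are my own assumptions, not the commenter's actual setup:

```python
def build_review_prompt(code: str, filename: str = "snippet.py") -> str:
    """Wrap source code in a review-only prompt: the model reports, never rewrites."""
    return (
        f"Review {filename} for security vulnerabilities "
        "(injection, path traversal, unsafe deserialization, auth bypass). "
        "List findings with line references; do not rewrite the code.\n\n"
        "```\n" + code + "\n```"
    )
```

The prompt string can then be sent to whatever model you run, local or hosted; keeping code generation explicitly off the table is what turns the LLM into an information source rather than a liability factory.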
walrus01•48m ago
https://www.penny-arcade.com/comic/2004/03/19/green-blackboa...
JSR_FDED•48m ago
Give me something with an opinion, personality and evidence of battle scars any day. There’s actually extra signal here that helps me process what I’m reading. When I understand where the author is coming from I can extrapolate, attenuate and compare/contrast the content with my existing mental model far better.
mcbits•24m ago
My mental AI detector would classify that passage as AI-generated with confidence around 85%. It would be 95% if the list had stopped at three items. Regardless of who wrote it, it's the same style.