"you can outsource your thinking but not your understanding"
There's just no way around it: with LLMs we generate far more code than we ever would as humans, so structuring code well becomes more important than ever before.
But that's simply not how it is. Every competent developer I know is delivering significantly more after becoming AI-enabled.
Anyone seriously using the tools without a chip on their shoulder is going to say the same.
Are the tools delivering perfect code 100% of the time? No, of course not. But that's the new skill: guiding them so they deliver good-enough code at 5-50x the velocity. As the models improve and the ecosystem tries out new workflows, the skill changes and the output gets better and better.
What we're capable of delivering now is incredible and would have been unimaginable just a few years ago.
The huge problem with this is whether anyone can actually take accountability for code produced at that rate.
Of course you can let AI do reviews, but my experience so far is that it's, broadly speaking, not working.
> What we're capable of delivering now is incredible and would have been unimaginable just a few years ago
What I mean is - are there concrete examples, real world "things" that came from AI programming, that are incredible, and someone can talk about and point to how AI led to the thing being possible?
Even if I'm reviewing more, I built the feature without even opening my editor.
My workflow is:
1. Plan mode
2. Read thoroughly, or skim if it's an easy task
3. /draft command that puts a draft PR on GitHub
4. Review closely, then send to team
:blinks: You are producing in a week what used to take you a year?
1. The software is simple because lowly humans wrote it, debugged it, and maintained it.
2. The humans are competent in software engineering.
3. All of a sudden we now have help from AI.
Point 3 is here to stay, but 1 and 2 could disappear.
With AI I can build. I'm having so much fun turning ideas into code. I can do a week's worth of work before lunch. I can ask AI to add comments so detailed that my code becomes a refresher tutorial.
It's so exciting to be able to bring my ideas to life, make use of my experience, and not be hobbled by my somewhat atrophied hands-on coding skills. I for one welcome this revolution.
We can't even decide if type systems have made us more productive. It's barely been studied. Same with test-driven development.
What it sounds like we'll see, from your description of AI-enabled developers, is a commensurate (perhaps linear) increase in the rate of errors reaching production systems. Every line of code is a liability. Now everyone has a fire-hose they can aim at a production environment.
At least time and effort prevented some bad ideas and potentially bad code from reaching production.
I'm sure the platforms providing these tools are going to be happy with the results when every business writing code this way becomes dependent on them and has no exit strategy. The prices increase, the service gets worse, and you're locked in. Sounds real productive.
Maybe I am not a "competent" developer, but the point has some merit.
And it is great. It does produce fixes, produces a facsimile of understanding. It answers my questions, and is often right. And tinkering with the process is satisfying. Integrating more and more data, writing better specs, you can get better results. It's tempting to think that this way of working could be sustainable, but it's also so scary to lose the understanding, to not have the confidence in how things work. Finding duplicated stacks using different libraries, or even the same library, is becoming more and more common. Even our debugging tools, our tracing, grow fragmented and unstandardized.
I liked the old way of working. It was fun for me, if often frustrating. It was like solving a hard sudoku on the train. This new way is lower friction, but more stress. It's like steering a rocket ship while using chopsticks to hold the wheel. You desperately want to slow things down and work methodically, to be sure, and safe. But you won't get anywhere near as far if you do that.
Somewhere quiet, the tech debt demon smiles.
How long have you been doing this?
Are you at a product company, a consultancy, a place where technology is an enabler but not core, or somewhere else?
What happens when there are bugs or an outage due to that 3k LoC PR?
We're at a product company, not a consultancy. Hard to say exactly where tech sits: the tech essentially is the product, but it's B2B, so massive contracts move like glaciers, and customer purchase decisions are often as much or more about the claims we made as about the reality of the code.
As for outages, it's the same as it always was. We have our testing, in layers: unit tests, integration tests, e2e, staging envs. Layers and layers before anything reaches the customer. If something ever does reach there, as has happened, it's hard to pin the blame on AI, and of course we run a blameless culture here anyhow. Tickets are assigned, emergency patches are made, and the behemoth lumbers on.
I don't pin blame on stupid management or whatever; I think this is complacency rather than a specific effort to push AI, as some claim. AI has just made it easier to work on more and understand less, and this is the result, no external intervention needed. I don't have a solution, other than observing that trying to stop this is fighting the tides. People used to hate working on legacy codebases, where the original developers weren't around to explain themselves; now everything is a legacy codebase right from inception. Even if you personally don't use AI, the job is fundamentally different.
Same - literally found a re-build of a library feature for use with that same library the other day (e.g., a MyCustomFooProviderFor(Bar) wrapper, when Bar already literally has a `.foo` method). No, it didn't need to be there.
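For anyone who hasn't run into this pattern, here's a minimal sketch of the kind of duplication described above. The Bar/foo names are the commenter's hypothetical; the rest is my illustration, not the actual code:

```python
# Stand-in for the third-party library class the comment alludes to.
class Bar:
    def foo(self):
        return "library foo"

# The redundant, AI-generated layer: it wraps Bar only to re-expose
# functionality Bar already provides directly via .foo().
class MyCustomFooProviderFor:
    def __init__(self, bar):
        self.bar = bar

    def provide_foo(self):
        # Adds nothing over calling bar.foo() directly.
        return self.bar.foo()

bar = Bar()
# The wrapper and the direct call are identical in behavior.
assert MyCustomFooProviderFor(bar).provide_foo() == bar.foo()
```

The wrapper isn't wrong, which is exactly why it slips through review: it works, it's tested, and it quietly doubles the surface area you have to understand later.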
With AI agents, I'm significantly more productive, but it feels an awful lot more drudgelike to sit and type into a chat bot. For me personally, the most intellectually stimulating parts of the job were automated away first, and I am getting increasingly sick of dealing with project management frenzy and pressing enter all day.
I'm not having fun any more, and I've decided to leave the field and become a teacher. I won't earn nearly as much money but I expect to feel more fulfilled, and I hope I can help make a difference to some young people.
Complaining about this is rather tone-deaf of me, given that I've had an extraordinarily privileged career. Many people do not have the luxury of enjoying what they do to make ends meet. But I've made the calculation that I'd rather at least try to enjoy what I do day to day than persist in this.
Probably because they mandate its adoption. And while there are plenty of developers who will happily comply and see it as a good thing, there are others who will do it because they have to or risk losing their jobs.
It's a bit of a silly thing to claim. "We made everyone use it, so they did, and now adoption is going up!"
It seems like they're overgeneralizing quite a bit here and focusing on a narrow subset of the population while ignoring the people who are actually thriving with their new AI-enabled dev workflows.
LLMs are not a panacea by any means and they have lots of cons. But I for one would find it difficult to go back to a world where I can't lean on LLMs in my day-to-day.
One very specific example that could not possibly contribute to the brainrot mentioned in this article: AI saves time and reduces the headache of having to pore through pages of documentation (if there even is any) to find how that one method works or what arguments it can take. This alone is immensely helpful and can keep you in a state of flow instead of sending you off on a potentially fruitless side quest that derails your whole train of thought.
It's also taken me quite a bit of time, effort, and experimentation to find the right tools and the right ways to work AI into my workflows - exploration I'd bet the developers mentioned in this article haven't done deeply, if at all.
Claiming AI is rotting your brain because you can't one-shot an entire app or even a single feature is a straw man fallacy.
And I used to love my work :(
deweller•41m ago
This has not been my experience. Sure, it sometimes feels like more work to fix the AI code's problems - it's a different skillset than writing code from scratch. But the speed that I can deliver software has significantly increased by using coding agents.
jjulius•27m ago
>But the speed that I...