Knowing some machining still lets you design parts and assemblies that are some combination of cheaper, better, and so on. This is especially noticeable with precision or high-performance assemblies, and in how many revisions are needed.
I still reject more than 50% of AI suggestions because they're too mediocre: moving code around for no reason, or sometimes just plain wrong.
If this were an actual paid job, I do wonder how that would change my LLM use. The reason I'm a software developer at all is because I love the craft. The act of building, of using my brain to transform ideas into code... that's what I enjoy. If the job were just prompting an LLM, would I still do it? I don't know. I'd probably start looking into the idea of switching careers, at least.
Nothing stopping you from iterating with the agent till the code is the exact same quality that you yourself would write
Sometimes, after iterating so much, I'm not sure I've even saved time, because going from “it works” to “it's up to quality” takes so long.
And yeah, it usually does for me.
If you are coding by hand like in the old days, you are probably not literally writing everything from scratch anyway; you are copy-pasting a bunch of shit off Google and Stack Overflow or installing open source libraries.
I don't want my code quality, I want AGI code quality - that's what I was promised and jetpacks and flying cars too!
I think many people already recognize the problem:
- “Our ability to write code is being damaged.”
- “If our ability to write code declines, our ability to recognize good code also declines.”
But the problem is that the market no longer works without LLMs.
Freelance rates and deadlines are now calibrated around LLM-assisted output. Even clients who write “do not vibe code” often set deadlines that are impossible to meet unless you use something like vibe coding. The client’s expectations themselves are becoming abnormal.
That is the irony of the market.
I honestly do not know what to do.
Recent Hacker News discussions are mostly a negative echo chamber about AI use. In other places it is often the opposite: a purely positive echo. But almost nobody discusses actual solutions.
The main topics I keep seeing are roughly these:
1. Is the large-repository PR system failing a fundamental stress test? Or should AI-generated code simply not be merged? If PR review is moving from handmade production to mass production, how should the PR system change? Or should it remain the same?
2. As vendor lock-in continues, can we move toward local LLMs to escape it? Are cost and harness design manageable? What level of local model is required to reach a similar coding speed?
3. If we are forced to use agentic coding, how do we avoid damaging our own ability to code? There is a passage from Christopher Alexander that I keep thinking about:
“A whole academic field has grown up around the idea of ‘design methods’—and I have been hailed as one of the leading exponents of these so-called design methods. I am very sorry that this has happened, and want to state, publicly, that I reject the whole idea of design methods as a subject of study, since I think it is absurd to separate the study of designing from the practice of design. In fact, people who study design methods without also practicing design are almost always frustrated designers who have no sap in them, who have lost, or never had, the urge to shape things.” — Christopher Alexander, 1971
This quote feels relevant to programming now. If we separate the study and supervision of programming from the actual practice of making, something important may be lost.
In architecture, there is this idea that without practice, the architect loses meaning. But now the market is forcing the separation.
People with enough symbolic capital and high status have the freedom not to use AI. But people lower in the market are under pressure to use it.
So I think the discussion now needs to move beyond whether AI coding is good or bad.
The real question is: how do we keep using AI, because the market demands it, while still preserving the human practice that makes programming meaningful and keeps our judgment alive?
I think these are the important questions. How do you maintain market value without using AI?
Or, if you do use AI, how do you avoid being treated as low-quality?
If you do not use AI, how can you remain more competitive than people who do use it?
If you do use AI, what advantage do you have over people who do not use it, and how should you position yourself?
I know that agentic coding can cause skill degradation. I can feel it happening to me already. But for someone like me, who does not have strong status, credentials, or symbolic capital, social and market pressure makes AI almost unavoidable.
What frustrates me is that I do not see practical answers anywhere.
Stop using AI for coding. Period... there is no other solution. You can't make it work, and nobody else can either. Without determinism, the entire process is useless. We need to stop pretending this isn't true when we all know it is. We have given it a chance, it failed, and it's time to move on to something else, no matter how much the VCs and execs don't want to. Those who do move on have a chance; the others have no future in software.
So while I agree with your point, it does not feel like a practical answer for my situation. For someone who is already well known and has enough reputation, refusing to use AI may be a matter of principle. But I am dealing with survival.
I do not think your answer is bad. But because this is a survival problem, it is difficult for me to risk everything on principle.
In other words, I know that your answer may be the morally correct one. If everyone boycotted this, perhaps it would not be adopted so aggressively.
But I cannot do that.
What I need is a way to use AI while degrading my own ability as little as possible, and while still preserving my skills.
I am not saying you are wrong. I am saying that your answer is too idealistic for someone in my position.
The market realigns, and unless you can handwrite the highest-quality code at a quick pace, you won't be competitive with the vibe-coders who can fix a hundred issues a month.
It was the same with GPS-assisted driving: now most people can't orient themselves autonomously. Worse, road signs with directions are no longer installed, meaning you are stuck using the GPS.
That's exactly what I do. I know I am lucky to be gifted in this skillset. But that's not a good reason to excuse people destroying the market for everyone.
There is skill loss from heavy AI use.
But I want to acknowledge the awkward elephant in the room: AI is making people too fast. I don't mean that faster output is bad in itself. The problem is faster output of code without the full understanding and experience that come from producing it. It rewards people who talk about business value rather than the people who are building things and making safe decisions with deep knowledge.
AI: Yes, it's good and it can produce some good solutions, but it ultimately doesn't know what it's doing, and in the best of cases needs strong orchestrators.
We're in a cesspit of business-driven development, and they're not getting the harsh reputational punishments they deserve for bad decisions.
Now if your career is built on writing out the same boilerplate code in its infinite slight variations every day, congrats, you've been automated. Thank god we can free up our intellects to focus on the actual hard problems, the ones that are somewhat cutting edge, the ones that actually push our field and humanity forward.
Literally every example of AI-generated code (without significant human input) is just basic stuff that is wholly unimpressive. Oh wow, you had an AI generate a Next.js app? It's writing HTML for you? It made a generic SaaS? Guess I'll become a farmer now.
Or, wait, I'll continue to write my multithreaded real-time multiplayer networking for an MMO, since the AI currently generates something that would get me fired within ten seconds if I tried to push it to production.
It's amazing how you introduce just the slightest difficulty or novelty to an AI and it just craps the bed. And then you go online and apparently we were all supposed to be replaced six months ago or something.
People need a reality check.
it's easier for me to code now, because it's like i have a 24/7 insane intern that needs to be supervised via pair programming but also understands most topics enough to be useful/dangerous.
ironically i've been spending much of my time iterating on ways to improve model reasoning and reliability, and aside from the challenge of benchmark design, i've had some pretty good success!!
my fork of omp: https://github.com/cartazio/oh-punkin-pi has a bunch of my ideas layered on top. ultimately it's just a bridge till i've finished the build of the proper 2nd gen harness with some other really cool stuff folded in. not sure if there's a bizop in a hosted version of what i've got planned, but the changes i've made in my forks have made enough difference that i can see the difference in per-model reasoning.
I have been described as a decel and a Luddite though, so be wary of my opinions.
Re the understanding code point: you can still use LLMs to understand code. If you write the spec without knowing anything about the code, of course the architecture might suck. Maybe there is already a subsystem that you can modify and extend instead of adding a completely new one for the new feature you are adding, etc.
I use LLMs for my daily workflows and they understand code perfectly, and much more quickly than I could by reading it myself.
> An increase in the complexity of the surrounding systems to mitigate the increased ambiguity of AI's non-determinism.
My question is: why isn't there an effort from the author to mitigate the insane things that LLMs do? For example, I set up a hexagonal design pattern for our backend, and Claude Code printed out directionally OK but actually nonsensical code when I asked it to riff off the canonical example.
Then, I built linters specific to the conventions I want. For example, all hexagonal features share the same directory structure, and the port.py file has a Protocol class suffixed with “Port”.
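To make that concrete, here is a minimal sketch of what such a convention linter can look like. The per-feature directory layout, the port.py filename, and the "Port" suffix come from the conventions above; the `src/features` path and everything else is illustrative:

```python
# Minimal sketch of a convention linter for the layout described above.
# Assumptions: features live under src/features/<feature>/, each with a
# port.py declaring at least one Protocol class whose name ends in "Port".
import ast
import sys
from pathlib import Path

FEATURES_DIR = Path("src/features")  # hypothetical repo layout

def check_port_file(port_file: Path) -> list[str]:
    errors = []
    tree = ast.parse(port_file.read_text())
    # Only handles `class FooPort(Protocol):` with a bare Protocol base;
    # a real linter would also resolve `typing.Protocol`.
    port_classes = [
        node for node in ast.walk(tree)
        if isinstance(node, ast.ClassDef)
        and any(getattr(base, "id", "") == "Protocol" for base in node.bases)
    ]
    if not port_classes:
        errors.append(f"{port_file}: no Protocol class found")
    for cls in port_classes:
        if not cls.name.endswith("Port"):
            errors.append(f"{port_file}: {cls.name} is not suffixed with 'Port'")
    return errors

def main() -> int:
    errors = []
    for feature in FEATURES_DIR.iterdir():
        if not feature.is_dir():
            continue
        port_file = feature / "port.py"
        if port_file.exists():
            errors.extend(check_port_file(port_file))
        else:
            errors.append(f"{feature}: missing port.py")
    print("\n".join(errors))
    return 1 if errors else 0

if __name__ == "__main__":
    sys.exit(main())
```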
That was better, but there was a bunch of wheel-spinning, so I then built a scaffolder into the linter that prints out templated code depending on what I want to do.
Then I was worried it was hallucinating the data, so I wrote a fixture generator that reads from our db and creates accurate fixtures for our adapters.
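A rough sketch of that idea, assuming a SQLite database; the table and column names here are hypothetical, and a real generator would match your own schema and redaction rules:

```python
# Minimal sketch of a fixture generator: pull a few real rows, redact
# sensitive columns, and write JSON fixtures the adapter tests can load.
# Assumptions: a SQLite db; table and column names are hypothetical.
import json
import sqlite3
from pathlib import Path

REDACTED_COLUMNS = {"email", "full_name"}  # assumed set of sensitive columns

def dump_fixture(db_path: str, table: str, out_dir: Path, limit: int = 20) -> Path:
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    # table names come from our own code here, never from user input
    rows = conn.execute(f"SELECT * FROM {table} LIMIT ?", (limit,)).fetchall()
    fixture = [
        {k: ("<redacted>" if k in REDACTED_COLUMNS else row[k]) for k in row.keys()}
        for row in rows
    ]
    out_file = out_dir / f"{table}.json"
    out_file.write_text(json.dumps(fixture, indent=2, default=str))
    return out_file

# usage (hypothetical paths): dump_fixture("app.db", "orders", Path("tests/fixtures"))
```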
Since good code has never been “explained for itself 100%, without comments”, I employ BDD so the LLM can print out, in a human-readable way, what the expected logical flow is. And, for example, any disable of a custom rule I wrote requires an explanation of why as a comment.
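The disable-with-reason rule is easy to enforce mechanically. A minimal sketch, where the `# lint-disable: RULE -- reason` marker syntax is just an invented placeholder for whatever suppression comment your own linter uses:

```python
# Minimal sketch: any disable of a custom rule must carry a reason.
# The "# lint-disable: RULE -- reason" marker syntax is invented here.
import re
import sys
from pathlib import Path

DISABLE_RE = re.compile(r"#\s*lint-disable:\s*(?P<rule>[\w-]+)(\s*--\s*(?P<reason>\S.*))?")

def check_file(path: Path) -> list[str]:
    errors = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        match = DISABLE_RE.search(line)
        if match and not match.group("reason"):
            errors.append(f"{path}:{lineno}: disabling {match.group('rule')} requires a reason")
    return errors

if __name__ == "__main__":
    problems = [e for p in sys.argv[1:] for e in check_file(Path(p))]
    print("\n".join(problems))
    sys.exit(1 if problems else 0)
```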
Meanwhile, I'm collecting feedback from the agents along the way about where they get tripped up and what can improve in the architecture, so we can promote more trust in the output. For example, I only have a fixture printer because an agent called out that real data (redacted, yes) would be a better source of truth than any mocks I made.
Finally, code review is now less focused on the boilerplate and much more on control flow in the use_case.
The stakes of having shitty code in these in-house tools are almost zero, since new rules and rule version bumps are enforced with a ratchet pattern. Let the world fail on first pass.
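For anyone unfamiliar with the term: a ratchet here means a rule only has to not get worse, and the baseline tightens whenever the violation count drops. A minimal sketch (the baseline file name and schema are assumptions):

```python
# Minimal sketch of a lint ratchet: a rule only has to not get worse.
# The baseline file name and schema are assumptions.
import json
from pathlib import Path

BASELINE_FILE = Path(".lint-ratchet.json")

def ratchet(rule: str, current_violations: int) -> bool:
    """Return True if the rule passes, i.e. no new violations vs. the baseline."""
    baselines = json.loads(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else {}
    baseline = baselines.get(rule, current_violations)  # first run seeds the baseline
    if current_violations > baseline:
        return False  # regression: new violations were introduced
    # progress (or first run): tighten the ratchet so we can't slide back
    baselines[rule] = current_violations
    BASELINE_FILE.write_text(json.dumps(baselines, indent=2, sort_keys=True))
    return True
```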
Anyway, it seems to me like with investment you can slap rails on your code and stay sharp along the way. I have a strong vision for what works, am able to prove it deterministically with my homespun linters, and am being challenged by the LLMs daily with new ideas to bolt on.
So I don’t know, seems like the issue comes down to choosing to mistrust instead of slap on rails.
Edit: I wanted to ask if anyone is taking this approach or something similar, or has thought about things like writing linters for popular packages that would encourage a canonical implementation (I have seen some crazy, crazy modeling with ORMs just from folks not reading the docs). HMU, would love to chat: youngii.jc@gmail
Why make this assumption so confidently?
The arrival of the electronic computer did not turn human computers into programmers, it simply eliminated them en masse.
My sense is that, a decade from now, the people who generally see their place as the driver's seat but recognize when it's not are going to be writing the code that matters.
You can debate, with agentic coding, who is monitoring and who is flying, but if we assume the user is monitoring, what that means in practice for me is that I'm reading and making sure I understand all the changes the agent is proposing to make. That includes reading and understanding all the code.
The result of that, though, would be the establishment of development patterns that are good practices.
The rule of thumb is: An agent can write it, but a human has to understand it before it gets pushed to prod.
I'm still not convinced about the doom and gloom over developers being replaced. I'm not a dev as part of my main job function, but where I do use LLMs, it has been to do things I couldn't have done before because I just didn't have time, and had to de-prioritize. You can ship more and better features. I think LLMs being tools and all, there is too much focus on how the tool should be used without considering desired and actualized results.
If you just want an app shipped with little hassle and that's it, just let Claude do most of the work and get it over with. If you have other requirements, well that's where the best practices and standards would come in the future (I hope), but for now we're all just reading random blog posts and see how others are faring and experimenting.
The article essentially claims that no, that line of thinking is false. If the agent writes all of it (or too much of it, where "too much" is still not well defined), then your ability to understand it will atrophy with time, and you will either a) never push to prod, because you can't understand it well enough, or b) push to prod anyway, and cause bugs and outages.
I think the article is correct.
> I'm still not convinced about the doom and gloom over developers being replaced.
Agreed. The agents are just not good enough to write code unsupervised, or supervised by people without senior-level skills. And frankly it's hard to imagine them getting there. Each new release of the coding tools/models is a mixed bag. Some things are better, some things are worse, and the gains are diminishing with each iteration. I am afraid that we're going to hit a ceiling at some point, at least with the transformer architecture.
> but for now we're all just reading random blog posts and see how others are faring and experimenting.
Yes, exactly, and many people are not faring well. The article cites several examples of people feeling less capable after using LLMs to write code for a while.
Yeah, likely
> development patterns that are good practices.
Wait, now you lost me
The claim that you should know everything about everything you work on is an intensely naive one. If you've worked on a team of more than one person, there's a lot of stuff you don't totally grok. If you work in an old code base, almost every bit of it is unfamiliar. If you work in a massive monorepo built over decades, you're lucky if you even understand the parts everyone considers you an expert in.
I often get the impression that folks making these claims are either very junior themselves, work basically alone, or have worked on some project for 20 years. No one who works in a team or larger org can claim they know everything in their code base. No one doing agentic programming can either. But I can at least ask the agent a question and it will be able to answer it. And after reading other people's code for most of my adult life, I absolutely can read the LLM's. The fact that a machine wrote crappy code rather than a human bothers me not in the least, and at least the machine will take my feedback and act on it.
What is important is not being afraid to learn the rest of your system and keeping an index.
Most importantly it's about being able to spin up on anything quickly. That's how you have wide reach. Digging in when you have to, gliding high when you have to. Appropriate level for the problem at hand.
When I was in college eons ago they taught CS folks all of engineering. "When do I need to know chem-e or analog control systems?" We asked. "You won't. You just need to be able to spin up on it enough to code it and then forget it. We're providing you a strong base."
That holds even within just large code bases.
From a person perspective though, I'm apprehensive about the effect AI will have on the human "very well read intern." People who know a lot very deeply about specific areas are fascinating to talk to, but now almost everyone is able to at least emulate deep knowledge about an area through the use of AI. The productivity is there, but the human connection is missing.
Nothing in the article made that claim.
Also, let’s not forget. The developer is rarely the person pitching the feature, and is normally given the constraints and the PRD…
Soooo people can keep tiptapping on the keyboard, but eventually they need to open their mind to the possibility that “the old way” is actually dead.
Wait, is this the same AWS I have been using?
The questions came flying in fast, without any introduction, and this was about one external integration out of a dozen. They have their own lingo, different from ours, which makes the situation worse.
I had a _very hard time_ making sense of the questions, as I indeed relied heavily on a model to produce these integrations (extremely boring job + external thick specs provided).
I'm still positive these integrations simply would not have happened, even with 10x the time, if I had not used models. However, I'm now carefully considering re-documenting the "ohhs" and "aahs" of these so that this kind of uncomfortable moment never happens again.
I haven't felt so clueless and embarrassed in a meeting, ever. All I could say was "I'll get back to you on that one, and that one, and this one".
Cognitive debt is very real, and it hurts worse than technical debt on a personal level! Tech debt is shared across the team, cognitive debt is personal, and when you're the guy that built the thing, you should know better!
To be continued... But from now on, the work isn't done until I produce a little five-minute, flash-card-style markdown glossary of "what is this" and "what is that".
"I'll need to study the docs and code to answer these questions properly" is a perfectly fine (and very diplomatic) response to treatment like that.
An additional factor: to find issues in generated code, the developer has to care. Many developers (especially at big firms) are already profoundly checked out from their work and are just looking for a way to close their tickets and pass the buck with the minimum possible effort. Those developers - even the capable ones - aren't going to put in the effort to understand their generated code well enough to find issues that the agents missed. Especially during the current AI-driven speed mania.
Does anyone really do this? You want verification and self-correction, not rerolling and cherrypicking. The non-determinism point is really tiresome to hear over and over.
Yes, lots of people. It’s a whole issue.
However, the code review study needs to compare surface scanning against reviewing long enough to cross a threshold of perspective: when you assume the coding chair and are in the author's frame, does the brain shift into a different cognitive mode?
Otherwise, just stamping "Looks good to me" is likely to lead to the same atrophy. There's no critical thought, not even a self-summary of the change or active questioning.
Thoughtful, deliberate code review just plain takes longer. AI can help a lot here, although it still takes over the "get into review mode" process.
Code review alone is kind of like being able to understand a foreign language well enough to read it, but not really understanding it in flowing conversation or being able to speak it, much less construct a complex piece of literature.
Retention also suffers, as you will quickly forget what you just reviewed. What is the last PR you remember?
And they will deserve it.