If that's the case, zero programmers should be worried.
Beautiful code only tends to exist in open source.
This hasn't been my experience at all. Beautiful code happens when strong experienced developers are present - either as the authors or as leads instilling good culture on teams. It exists where teams acknowledge that someone (maybe even them) will have to come back to this code in the future. There is plenty of beautiful code inside the non-OSS repositories of Google, Microsoft, and others.
I think the opposite trend will also emerge and more than offset this. Yes, vibe-coded tools may fit the needs of low-stakes applications for some companies, but if it was low-stakes enough to start with, they likely didn't hire many devs, if any. More likely we'll see non-tech companies starting to hire a dev to build custom software (e.g., ERP) suitable for their use cases vs. buying SaaS and paying consultants to customize it.
Look at how much they are spending. Think about how HN cheered Tesla's innovation in disintermediating vehicle sales by selling direct. Now think about OpenAI or whomever selling directly to enterprises. It's the same proposition:
"That Enterprise SaaS startup adds a markup to the AI that is powering your app."
Again, this is only IF the AI-vangelists are correct that some startups will collapse to a solopreneur. I am not sure that those same startups won't vanish entirely.
———
There are some high-stakes apps where Enterprises prefer vendors for extrinsic reasons. For example, some apps must have certain certifications, and in a vibe-coded future, the cost of certification could exceed the cost of development by multiple orders of magnitude. And it needs to be kept up. Another example would be apps where for liability reasons, it is helpful that there be an "industry standard."
But lots and lots and lots of apps are not nearly so consequential.
But... Anyone who has vibe coded knows that world is really far away; our careers will be over before it arrives. This is like saying we should be telling kids to use Waymo instead of learning to drive. It is a nice dream, but I am definitely not gonna bet on it.
I've seen it in juniors and in myself: entering a company excited after hearing all this advice, and realizing that the job is really to "just code".
You do not communicate with the customer, you do not decide the business direction; you simply code, and make sure there are as few issues as possible (usually not stated explicitly), and that new features (as requested by management/customers) are implemented as quickly as possible.
I don't know how to respond to this "developers shouldn't focus on quality" argument any more. Is shipping fast important? Of course. Is understanding the customer important? Definitely. But why is there always this animosity towards "developers focused on quality"?
Where I've worked, my experience has been the exact opposite: many coworkers were writing slop long before LLMs were popular, all kinds of horrible hacks. The constant talk about quality was not "we are working on quality all the time" but "quality is so bad, my life is so painful".
So, my question to all who fight this "quality-obsessed developer" is: why do I hear about developers obsessed with code quality all the time, yet I have never met one?
EDIT: To be more concise, I think that the "perfectionist developer" is simply a scapegoat for the inherent difficulties and challenges of software development.
I care about quality because tomorrow I still have to work with this stuff. In 10 years, I have to remember why I decided to do it this way, and I have to be able to extend it to satisfy the business needs then. I leave it in a way I won't want to find the original developer (me) and go full Law Abiding Citizen on them.
AFAIK, most developers out there want to spend the minimum time on the problem right now, and plan to jump ship tomorrow into somebody else's mess.
Why not? Did no one in your life teach you to build things well and take pride in your work?
I wonder where those guys are too, since most of the things I see are poorly made slop. This is similar to the "premature optimization" scare: I've never seen those engineers going into the weeds optimizing bytes in code... but I've seen so much code that was 10,000 to 1,000,000x slower than it had to be and hurt users.
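To make the "slower than it had to be" point concrete, here's a contrived sketch (hypothetical names) of the kind of accidental slowdown I mean: the same de-duplication written two ways, one of which quietly does quadratic work:

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class Dedup {
        // O(n^2): List.contains rescans the whole list for every element.
        static List<String> slow(List<String> items) {
            List<String> seen = new ArrayList<>();
            for (String item : items) {
                if (!seen.contains(item)) { // linear scan per element
                    seen.add(item);
                }
            }
            return seen;
        }

        // O(n): a HashSet makes each membership check roughly constant time.
        static List<String> fast(List<String> items) {
            Set<String> dedup = new HashSet<>();
            List<String> out = new ArrayList<>();
            for (String item : items) {
                if (dedup.add(item)) { // add() returns false for duplicates
                    out.add(item);
                }
            }
            return out;
        }
    }

On large inputs, the gap between these two easily reaches the orders of magnitude described above, and nobody ever "prematurely optimized" anything to get there.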
A business maps org structures to reflect the problem space's topology.
Code maps to reflect the topology and ontology of the problem space being solved.
Spaghetti code is where there is no structural mapping, where every workflow/pipeline/process exists as if it is the only one and any code that fulfills that specific request will do.
Ain't nothing wrong with spaghetti code... except horrifyingly bad unintended consequences: race conditions, data corruption, security holes.
Is your code a mountain, where every drop of water that falls on it has a deterministic path to the base, and little channels and protrusions can be dug or filled in locally and with ease? Or is it a poorly knit sweater, where any single thread failure, or any need to change the pattern, causes unraveling, so that altering the sweater requires massive disruption?
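A toy sketch of what "no structural mapping" looks like in practice (hypothetical names): one concept in the problem space, duplicated inline in every workflow instead of mapped to one place in the code:

    // Spaghetti: each workflow re-implements the member-discount rule inline,
    // as if it were the only one; changing the rule means hunting every copy.
    class Checkout {
        double total(double subtotal, boolean isMember) {
            return isMember ? subtotal * 0.9 : subtotal; // copy #1
        }
    }
    class Invoicing {
        double total(double subtotal, boolean isMember) {
            return isMember ? subtotal * 0.9 : subtotal; // copy #2, free to drift
        }
    }

    // Mapped: the single concept ("member discount") lives in a single place,
    // mirroring the topology of the problem space.
    class Pricing {
        static double applyMemberDiscount(double subtotal, boolean isMember) {
            return isMember ? subtotal * 0.9 : subtotal;
        }
    }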
So, sure, once there's some bare minimum qualification that one must attain to be an "owner-builder" of software, do that. Until then, vibe-coding perfectly describes what vibe-coders do -- except for the vibes, which aren't (obviously).
I long for articles that have been written in more time than it takes to read them.
No one has, to my knowledge, demonstrated a machine learning program with any understanding or complexity of behaviour exceeding that of a human.
LLMs don't have understanding.
Frees up who, the LLM or the human? Same question for "they".
What does symmetrical, fractal code look like in this context? How does this property assist the LLM's parser?
That's a strong reason we want modular, concise, clean code: because tomorrow, we will want to solve a slightly different problem, and if you have a nice clean base, you can reuse it. If you don't, you need to rebuild from scratch (which may not be a problem with vibe coding) and rebuild the trust that the new tool is doing what it is supposed to do (which is a problem).
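As a toy illustration of the reuse point (hypothetical names): a small, generic building block is trivial to point at tomorrow's slightly different problem, where a hard-coded loop would have to be rebuilt:

    import java.util.List;
    import java.util.function.Predicate;

    class Reports {
        // One clean, generic building block...
        static <T> List<T> select(List<T> items, Predicate<T> rule) {
            return items.stream().filter(rule).toList();
        }
    }

    // ...reused as the problem shifts (Invoice is a stand-in type):
    //   List<Invoice> overdue = Reports.select(invoices, Invoice::isOverdue);
    //   List<Invoice> large   = Reports.select(invoices, i -> i.total() > 10_000);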
I've thought about how I could possibly vibe code from scratch something I've built, and I just don't think it's possible. So much of the API contract and UI behavior is implicit that there is no way you could clone it without missing edge cases. And that's assuming the prompt is "take this code and do X with it." Starting from scratch, like blank slate... impossible.
You simply can't adequately describe a sufficiently complex app in natural language well enough to get a perfect copy. You'd have to detail every business rule, every quirk, everything that makes your app work. Try and condense the hundreds, thousands of tickets and bug reports into prompts.
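For a flavor of what those condensed tickets look like in code (hypothetical names and ticket number), consider a quirk like this, which no from-scratch prompt would ever reproduce:

    class Accounts {
        static String normalizeAccountId(String raw) {
            String id = raw.trim().toUpperCase();
            // Ticket #1423 (hypothetical): the legacy importer prepended "00"
            // to accounts migrated in 2014; strip it or lookups silently fail.
            if (id.startsWith("00") && id.length() == 12) {
                id = id.substring(2);
            }
            return id;
        }
    }

Multiply that by a few thousand tickets and the "just describe your app to the model" plan falls apart.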
Folks whose hedge is that GenAI will just constantly rewrite from scratch whenever they declare technical bankruptcy are in for a rude awakening (and tons of bugs).
Even in my own use of AI, letting the AI get away with shit code means that it continues to do a worse job. When it sees examples of good code, it does better. What happens when there's nobody at the wheel?
I personally like the builder style when doing OOP: new Client().withTimeOut().ignoreHttpErrors()
Not everyone would consider that clean when using it in your code base.
And let's face it: all code has hacks and patches, just to get it out before the deadline. Then there are more things to do, so it will just stay that way.
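For reference, a minimal sketch of the fluent/builder style in question (hypothetical Client class; the no-arg withTimeOut from the example is given a parameter here):

    class Client {
        private long timeoutMillis = 30_000;
        private boolean ignoreHttpErrors = false;

        // Each setter returns 'this', which is what makes the chaining work.
        Client withTimeOut(long millis) {
            this.timeoutMillis = millis;
            return this;
        }

        Client ignoreHttpErrors() {
            this.ignoreHttpErrors = true;
            return this;
        }
    }

    // Usage, as in the comment above:
    //   Client c = new Client().withTimeOut(5_000).ignoreHttpErrors();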
I don't know if I like the builder style; I could go either way. But if I saw that, I'd still consider that clear and well designed. But I've seen some truly ugly code from both people and AI.
And yet, it would be ridiculous to pretend that we cannot say that there is an advantage in avoiding cooking a dish made with dirt and toxic waste. The fact that we cannot define an absolute objective "good food" is not at all a problem.
I think when someone designs a software system, this is the root process, to break a problem into parts that can be manipulated. Humans do this well, and some humans do this surprisingly well. I suspect there is some sort of neurotransmitter reward when parsimony meets function.
Once we can manipulate those parts we tend to reframe the problem as the definition of those parts, the problem ceases to exist and what is left is only the solution.
With coding agents we end up in a weird place: either we just give them the problem, or we give them the solution. Giving them the solution means that we have to give them more and more details until they arrive at what we want. Giving an agent the problem, we never really get the satisfaction of the problem dissolving into the solution.
At some level we have to understand what we want. If we don't, we are completely lost.
When the problem changes, we need to understand it, orient ourselves to it, and figure out which parts still apply, which need to change, and what needs to be added. If we had no part in the solution, we are that much further behind in understanding it.
I think this, at an emotional level, is what developers are responding to.
Assumptions baked into the article are:
You can keep adding features and Claude will just figure it out. Sure, but for whom, and will they understand it?
Performance won't demand you prioritize feature A over feature B.
Security (that you don't understand) will be implemented over feature C, because Claude knows better.
Claude will keep getting more intelligent.
The only assumption I think is right, is that Claude will keep getting better. All the other assumptions require you know WTF you are doing (which we do, but for how long will we know what we are doing).
We don’t care about “clean code” (that’s mostly just juniors yak shaving their way into a slow system anyway). We care about correct code: code that solves the actual problem (as close to correct as possible, anyway).
Using an LLM and expecting it to be more than 60-70% correct is a bad idea. And as for using it “as a tool”: we have a hard time believing a dev reviewed, understood, and verified 130,000-line PRs every day.
Additionally, it is still unclear if the generated code violates licenses / actually “becomes” your IP.
And nobody was worried about correctness beyond the obvious.
Computers and LLMs!
I wonder whether 'enshittification' will hit the installed software base hard in the future, while the software engineers tasked with fixing it will not have been brought up through the rigors of designing and implementing complex systems on their own, without LLMs.
This doesn't address the degradation that may happen when LLMs are increasingly trained on LLM output.
BTW I used to work in AI and I don't think scaling is a solution here.
> But what if the next “person” isn’t a person?
There's certainly a hypothetical future where AI writes and maintains enterprise software and airline baggage control systems consisting of millions of lines of spaghetti code, code that violates all the principles of good software design that we currently value, and everything turns out peachy.
Nobody but the AI understands the code, but it mostly works, and the AI fixes it when it doesn't. We lose the capability to understand and test the code ourselves, but the AI says "trust me bro - it's all been tested" (even though bugs keep turning up).
But, I doubt it.
First off, never mind all the "parroting" nonsense; LLMs are nonetheless auto-regressively trained and therefore fundamentally a copying technology, so as long as the LLM creators make an effort to train on high-quality code, what's generated should at least match those high-quality patterns to some degree.
Secondly, humans' hard-won best practices for designing code are there for a reason, and it's not just because of the limits of our feeble minds when faced with anything-goes spaghetti code. The reason we prefer code that is modular, with thin/clean interfaces between modules, shortish functions, etc., is that these practices fundamentally do make for code that is easier to reason about, to test and debug, and to update and extend without breakage.
Per the Halting Problem, we know that ultimately the only way to know what code does is to run it, and therefore even if LLMs/AI were to exceed humans in general reasoning ability, they would never gain some magical ability to write arbitrarily complex/unstructured code and still successfully reason about what it is doing, its correctness, etc. Following human best practices not only helps create code that is testable (possible to analyze and create test cases for all paths through a piece of code), but also code that can more easily be reasoned about, whether by man or machine.
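A tiny sketch of what "all paths testable" means here (hypothetical example): a small, pure function whose branches can be exhaustively enumerated without running the whole program:

    class MathUtil {
        // Three branches, three obvious test cases: below, above, inside.
        static int clamp(int value, int lo, int hi) {
            if (value < lo) return lo;
            if (value > hi) return hi;
            return value;
        }
    }

Bury the same logic in a 500-line method full of shared mutable state, and neither a human nor an LLM can enumerate the paths anymore.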
In terms of where we are right now regarding AI's ability to write good-quality code, it's perhaps informative to look at Claude Code, which is of late writing most of its own code under the guidance of its creator, and which, despite being on the very simple end of the spectrum in terms of software complexity*, currently has about 5,000 issues filed against it on GitHub.
* A minimal CLI agent is a few hundred lines of code vs., for example, the 15 million LOC of gcc.
If you’re operating exclusively at the level of business problems, you’ve always been “vibe coding”, by hiring developers to write code for you. The question of whether “good code” is important is not something you’re qualified to discuss as a technical problem, only as a resource management problem.
The developers will probably tell you they need to write “good code” and that, while it may seem expensive in the short term, it’s worth it in the long term. You can believe them, or not.
If you’re a developer, then you are operating at the level of code, not just business problems. And you do need to be able to read the code and make technical judgments about it, because that’s your job. If you aren’t doing that, you have no reason to be involved.
I think LLMs won’t be any better at maintaining spaghetti code than humans are. I don’t see why the principle of modularity, or any other “good code” principle, would change just because a computer rather than a human is reading the code.
This is not good universally. Even if you become a 10x coder along with your team, the users are not going to become 10x users instantly. They will pay the same and will want the software to stay more or less the same day to day. If you develop 10x features a week, they will just get frustrated after a while as your software becomes completely unpredictable under the weight of unneeded changes.
Does every photo editing app need to be Photoshop? No. Users come in all different capabilities, and you as a project leader have to decide which features are gonna be important for your users; for many apps that list is limited. We can all think of an app we used to like that became a complete mess as more ideas were added. HN itself is a good example of not putting every idea into a project.
This is the dumbest take; the author wants to believe it and is stating it like fact, which arguably exposes that the author knows very little about reliable, maintainable software.
> If your job is only to write beautiful code, you have a problem.
Definitely no one at senior, or even mid, level thinks this is what their job is about. Something like modularity is beautiful because it works and because of what it enables; we don't try to shove it into place because it's beautiful. Talking about it the other way around sounds like a manager who does not understand engineering trying, poorly, to paraphrase what engineers are saying. Indeed, quoting from this self-described entrepreneur's "About Me" page:
> While I was there, I took a couple computer science courses, decided I was terrible at it and swore to never write software again.
I guess that's thought-leadership for you.
Also, a big issue with AI, IMO: it allows people who stopped writing code ages ago to write again, because they think they can somehow work at a higher level.