I think AI has already gotten to the point where it can help skilled devs be more productive.
I got tired of hearing about vibe coding day in and day out, so I gave in, and I tried everything under the sun.
For the first few days, I started to see the hype. I felt much faster at coding: I could just type a short prompt to the LLM and it would do what I wanted, much faster than I could have. It worked! I didn't bother looking at the code thoroughly; I was more vigilant the first day, and then less so. The code looks fine, so just click "accept".
At first I was amazed by how fast I was going, truly, but then I realized that I didn't know my own code. I didn't know what was happening when, or why. I could churn out lots of code quickly, but after the first prompt, the code was worth less than toilet paper.
I became unable to understand what I was doing, and as I read through the code, very little of it made any sense to me. Sure, the individual lines were readable and the functions made some semblance of sense, but there was no overall logic.
Code was just splattered around without any sense of where anything belonged, global state was used religiously, and slowly it became impossible for me to understand my own code. If there was a bug, I didn't know where to look; I had to approach it as I would when joining a new company, except that in all my years, even the worst human-written code I have ever seen was not as bad as the AI code.
I have tried being specific; I have even gone as far as feeding it a full requirement document for a feature (1000+ words) in the prompt, and it did not seem to make any significant difference.
I'm finding the inverse correlation: programmers who are bullish on AI are actually just bad programmers. AI use is revealing their own lack of skill and taste.
I'm not chasing the AI train to be on the bleeding edge, because I have better things to do with my time.
Also, I'm trying to build novel things, not replicate well-represented libraries and algorithms.
So... maybe I'm just holding it wrong, or maybe it's not good at doing things that you can't copy and paste from GitHub or Stack Overflow.
Nope. The factors are your prompt, the thousands of lines of system prompts, whatever bugs may exist inside the generator system, and the weights born from the examples in the data, which can be truly atrocious.
The user is only partially in control (and a minimal part at that). With a standard programming workflow, the outputs are deterministic, so you can reason out the system's behavior and truly be in control.
LLMs write code according to thousands of hidden system prompts, weights, training data, and a multitude of other factors.
Additionally, they’re devoid of understanding.
I truly hope you do better in the future.
That would be more in line with how the technology works and less a sign of your inherent genius, though.
Maybe taking the AI code as input and refactoring it heavily will result in a better step 2 than your previous step 0 was.
Among my peers, there seemed to be a correlation between programming experience and agreement with my personal experience. I showed a particular colleague a part of the code and he genuinely asked me which one of the new hires had written it.
As for management, well, let's just say that it's going to be an uphill battle :)
> What steps could realistically reverse the momentum of vibe coding?
For me and my team it's simple: we just won't be using those tools, and we will keep being very strict that new hires do the same.
For the software industry as a whole, I don't know. I think it's at least partially a lost cause, just as the discussions about performance, and about caring for our craft in general, are.
> I have tried being specific; I have even gone as far as feeding it a full requirement document for a feature (1000+ words) in the prompt, and it did not seem to make any significant difference.
Yes, this happens, and it's similar to when you first start working on a codebase you didn't write.
However, if instead of giving up you keep going, eventually you do start understanding what the code does.
You should also refactor the code regularly, and when you do, you get a better picture of where things are and how they interact with each other.
I believe you missed the part of my comment saying that I have been coding professionally for 20 years. I have seen horrible codebases, and I'm telling you I'd rather see the switch statement with 2000 cases (real story), many of which were hundreds of lines long, with C macros used religiously (same codebase). At a bare minimum, once you get over the humps with human-written code, you will always find some realm of reason: a human thought to do this, and they had some kind of logic. I couldn't find that with the AI code; I just found cargo-cult programming plus things shoved where they make no sense.
If you stay on top of the code you are getting from the AI, you end up molding it and understanding it.
The AI can only spit out so much nonsense before the code just doesn't work. This varies with codebase complexity.
Usually, when starting from scratch, you can get pretty far while barely even looking at the code, but with bigger repos you'll have to be actively involved in the process and apply your own logic to what the AI is doing.
If the code of what you are building doesn't make sense, it's essentially because you let it get there. And at the end of the day, it's your responsibility as the developer to make it make sense. You are ultimately accountable for the delivery of that code. AI is not magic; it's just a tool.
Whenever I try to get it to do my work for me, it ends badly.
It can be my syntax gimp though, sure.
At that point however, is it really saving you that much time over good snippets and quick macros in your editor?
For me, writing the code is the easiest part of my job. I can write fast, and I have my vim configured in such a way that it makes writing code even faster.
I had someone say that to me ~a month ago. I had mentioned online that one of the things I had the AI tooling do that morning was to convert a bunch of "print" statements to logging statements. He said that was something he'd just have his editor's find/replace do. I asked him what sort of find/replace would, based on the content of the log message, appropriately select between "logging.debug", "info", "warning", and "error", because the LLM did a good job of that. It also didn't fall into issues like "pprint()" turning into "plogging.debug()".
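To make that concrete, here's a minimal Python sketch of the gap; the keyword heuristics are invented for illustration, not what the tooling actually did:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)

# A blind editor replace such as s/print(/logging.debug(/ sends every message
# to the same level, and happily mangles pprint( into plogging.debug(.
# Choosing the level from the message content is the part a plain
# find/replace can't do:
def level_for(message: str):
    lowered = message.lower()
    if "error" in lowered or "failed" in lowered:
        return log.error
    if "warn" in lowered:
        return log.warning
    if "starting" in lowered or "finished" in lowered:
        return log.info
    return log.debug

# Former print() calls, now each routed to a plausible level:
for msg in ["starting sync", "warning: stale cache entry", "failed to connect"]:
    level_for(msg)(msg)
```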
for you to choose to be minified later?
Not really. The nice thing about knowing what to do is that you can just turn off your brain while typing the code and think about architecture in the meantime. Then you just run the linter for syntax mistakes and you're golden. Little to no mental load.
And if you've been in the same job for years, you have mental landmarks all over the codebase, the internal documentation, and the documentation of the dependencies. Your brain runs much faster than your fingers, so it's faster to figure out where the bug is and write the few lines of code that fix it (or the single-character replacement). The rest of the time is spent thinking about where similar issues may be and whether you've impacted something down the line (aka caring about quality).
What happens when other yous start using AI? I suspect they will obviously outperform you in sheer typing speed.
I find CTO as a title meaningless. At most it means they've got the signature powers.
In a sense, you are right that the title is meaningless, and only means signature powers.
And you are right that other principals rewrote legacy stuff. But I also participated at the same level. I just knew I was only above them due to some market luck lining up with my experience. I'll never know exactly how much influence I had over that situation.
But I have a jazz background. It is easy for me to respect everyone on the team equally. Sports can do the same, but I think music is probably more effective, because it is inherently more of a spectator activity than a competitive one. Musicians tend to make incredible team players, if they've stuck with it long enough. Many leaders in band are also leaders in sports in school. And formal-language abilities are already demonstrated in musicians.
Well, that went in a strange direction.
Skills don't work like muscles; please stop with this mental model of the world. No one is going to fire you because you don't have the same speed of recall of language constructs and have to look more things up. Speed of coding is not the damn bottleneck.
Plus have a little faith in your brain that you could get back to that point if you wanted to.
If people stop manually coding, their ability to do so WILL atrophy. Take away the coding agents and you'll soon have a generation of graduates wondering why their tab complete isn't writing the entire feature for them.
Surely you realize this is the problem. I just landed a mid 6-figures job _without_ grinding leetcode. They’re out there. This game everyone plays is an abomination.
The arcane rituals of jumping through leetcode hoops remind me of pledging a fraternity.
> I just landed a mid 6-figures job _without_ grinding leetcode. They’re out there. This game everyone plays is an abomination.
I offered an anecdote.
As others have mentioned, this is the problem. Not being able to pull up the most efficient binary tree implementation on the spot should not be the criterion for identifying a good developer.
>Sure someone won't fire you because you can't balance a binary tree by hand, but it certainly might exclude you from getting that next job
People who do this need to change their behavior. Testing people on quickly findable implementations is an absolute circus. Obviously there's an exception if the job actually involves writing CS algorithms, but most of them do not.
There’s one optimization path that people seem to barely explore (other than editor nerds): navigation based on search, and marking stuff for further actions. You see people using VS Code like Notepad and then they go on to complain that coding is tedious.
As for the second part: learn Vim, Emacs, or Kakoune, or try to be fluent in your current editor. The reason I put Vim and Emacs at the top of the list is that they have powerful primitives for coding and they're not Notepad patched with external tools.
What you say would only be true in an anarchist island colony devoted to software craftsmanship where everyone is healthy and under 40 years old.
I would say in a generation or two, you could operate in a way where you rarely have to dive into the code.
Can you clarify what "non-trivial" means here? To me "non-trivial" is writing FoundationDB or Kafka from scratch.
I’ve heavily benefited from AI when I’m inexperienced or willing to trust the AI. The devs I’ve seen happy with AI either fit that use case or are subpar devs who don’t really understand what they’re doing. I have multiple coworkers who are the latter - code works for their single happy path and they’re on to the next thing.
For example, giving it a meta-process to follow before making an edit. I've also had the models keep a document of the architecture and architecture decisions.
I think, based on that, Rust will be the most popular and powerful language by next year. Of course, I might be completely wrong.
The author repeatedly states they have little knowledge of the tech they're using. But they're CERTAIN about what the industry will be in the future.
Hubris
“Oh you’ll never have to do this tech stuff ever again! How amazing! Ai all the things!”
Like, ok great. Good for you. Leave the rest of us out of whatever mission-to-replace-some-thing-you-don’t-like.
Or even better, if you don’t like it, go away and do something else. I’m not big into jogging, but I don’t go around telling runners that their hobby is redundant and that “nobody will run now that we have Segways”.
Author, if you’re reading this, “let’s” => “lets” (2 occurrences).
Often I'll ask the AI to do something and it goes sideways. Sometimes it really saves me time, but many times not. I'll break down and actually type out commands or even Google what to do instead of using the AI because it's still faster.
It's true that my trainee uses the AI more because there are fewer commands in his muscle memory. But it's still not great yet.
Further, the AI must have each one of its actions approved. I've tried fully automatic mode. It's bad.
AI is more like a lawn mower. It's self-propelled, but you still have to hold on to it, and sometimes you've got to turn it off and just pull stuff out of its way or it gets stuck.
Wait, what?
> I was never particularly interested in the code itself
> Instead, I was always more interested in the product
Confusing contradictions aside, I had trouble engaging with this article.
The author seems to think every developer thinks like they do. Some people actually enjoy helping their business/users.
The author also has trouble imagining other perspectives as a people manager. From the linked article,
> I do not get any sort of high from managing people. I don’t think anyone gets that same high from this role
Hate to break it to the author again, but some people actually enjoy seeing those they mentor/manage succeed.
Being a people manager isn’t the right fit for everyone. Perhaps being a developer in the next 20, 5, or 1 year won’t be the right fit for the same people it is for today.
With all due respect, this perspective baffles me. Some see it your way, others see so much opportunity.
Beyond that, doing coding without solving problems or enabling anyone/anything is just doing art for art's sake. It may have a place, but it's more personal, a hobby, an expression, than anything tangible to be used in the real world, leaving business aside.
Product is not the same as code. We code to build a product, sure, but I think the author means they are interested in designing the product to solve users' problems (a.k.a. UX).
Don't worry, there's still more coding problems after that one is solved by AI, because the "last 10%" is another 90% of work when it comes to polishing.
However, at my current job and role, my manager has left (or taken quite a long leave) for the second time now. Although both my team and I are assigned to another manager in a different region (BigTech), it is not the same thing...
Why I mention this: I am going to avoid doing any and all managerial work, because last time I did a lot of managerial work without the benefits: reporting, keeping team morale (happiness) up, keeping our interests above a lot of inter-team fighting/prioritization, etc.
In turn, I got no appreciation or compensation out of it, even though I partially did the jobs of other people (collecting artifacts, reporting up the chain, etc.) so that nobody would get a _bad_ performance review (or worse, laid off...).
But I agree with the author: I got no dopamine out of any of this. Yes, I was solving some problems, but they were things like NPM peer-dependency conflicts. They provided no value to me, no improvement in my own performance, and worse, no goal or direction at all!
PS: My team is purely a DevOps team; in Big-Tech terms, a support team. What we do is the grunt work of various other teams to keep them up to date, which is why overall job satisfaction is quite below average...
Now, I am refusing to do the same work again. My manager has been on parental leave since mid-June. He has _not_ been doing a good job in terms of job satisfaction and team morale since he joined. I slowly scaled back the low-key managerial work, and he has not been taking it over. With the long leave in progress, I just stopped taking care of it.
Since I stopped doing the managerial grunt work, 2 people have already left the team of, well, 8 engineers.
Since I am also taking over the work that was done by the other engineers, I have noticed a couple of things: 1. The code quality is somewhat okay, but there are obvious "useless" AI-generated areas. 2. Commit messages yield little to no value, as the review process happened only among the people who worked on the project (e.g., several "fix bugs" commits back to back yield no value). 3. The people who left or stayed have no recollection of the things I helped them with or the problems I solved (i.e., unblocking those who were stuck), and no appreciation for the "space" I was able to get them (even though I was quite explicit with each person). 4. I am one of those special engineers you can put into any domain/language whatsoever and I will do a good/decent job at it (jack of all trades, Swiss-army knife, whatever you call it). I also fix issues as I go, with bug fixes, features, whatnot. 5. The product/project manager actively sabotages these tech-debt fixes, or the "refactoring" of the AI-generated code into simpler, more readable versions.
Which is why, unlike the CTO in question in the article, I started caring less and less about these things. Now I also produce code with AI agents, as the leadership loves the AI-slop metrics.
At some point, this AI-generated code will fail to do something, and we'll need to fix it or replace it. This boils down to 2 different scenarios: 1. If this code is running an airplane, then it is a disaster: maybe your engines fail and, at best, you have to crash-land somewhere. 2. If this code is running a rocket, then it has a limited lifetime anyway. It does not matter if it has a memory leak; the lifetime is so short that the rocket will never hit those resource limits.
I guess most of the leadership is currently betting on most problems being #2. With software engineering moving so fast and rewrites always around the next corner, what is the point of "maintaining" the codebase?
Meanwhile, I am not sure I will be there to solve an airplane-type problem when it occurs. I just wish the best of luck with the AI agents to the leaders who have attached a pair of rocket boosters instead of actual jet engines to an airliner!