If you find it's quicker not to use it, you might hate it, but I think it's probably better in some cases and worse in others.
I strongly disagree. Struggling with a problem creates expertise. Struggle is slow, and it's hard. Good developers welcome it.
I think we'll find a middle ground though. I just think it hasn't happened yet. I'm cautiously optimistic.
A third? I would expect at least a majority based on the headline and tone of the article... Isn't this saying 66% are down on vibe coding?
I thought vibe coding meant very little direct interaction with the code, mostly telling the LLM what you want and iterating using the LLM. Which is fun and worth trying, but probably not a valid professional tool.
From Karpathy's original post I understood it to be what you're describing. It is getting confusing.
And then more people saw these critics using "vibe coding" to refer to all LLM code creation, and naturally understood it to mean exactly that. Hence the recent articles we've seen about how good vibe coding starts with a requirements file, then tests that fail, then tests that pass, etc.
Like so many terms that started out being used pejoratively, vibe coding got reclaimed. And it just sounds cool.
Also because we don't really have any other good memorable term for describing code built entirely with LLMs from the ground up, separate from mere autocomplete AI or using LLMs to work on established codebases.
I’m willing to vibe code a spike project. That is to say, I want to see how well some new tool or library works, so I’ll tell the LLM to build a proof of concept, and then I’ll study that and see how I feel about it. Then I throw it away and build the real version with more care and attention.
E.g., one tool packages a debug build of an iOS simulator app with various metadata and uploads it to a specified location.
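A minimal sketch of that kind of packaging step (the function name, directory layout, and metadata fields here are all illustrative, not the commenter's actual tool; the upload half is deliberately omitted):

```python
import json
import zipfile
from pathlib import Path

def package_debug_build(app_dir: str, metadata: dict, out_path: str) -> str:
    """Bundle a simulator .app directory plus a metadata.json into one zip."""
    out = Path(out_path)
    root = Path(app_dir)
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in root.rglob("*"):
            if f.is_file():
                # Store paths relative to the .app's parent so the
                # archive unpacks as Demo.app/... rather than bare files.
                zf.write(f, f.relative_to(root.parent))
        zf.writestr("metadata.json", json.dumps(metadata, indent=2))
    return str(out)

# Uploading "to a specified location" would follow here; in practice
# that might be an HTTP PUT or a cloud-storage copy, depending on the shop.
```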
Another tool spits out my team's GitHub velocity metrics.
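The core of a velocity script like that can be a small pure function over pull-request records; this sketch assumes the ISO-8601 `created_at`/`merged_at` fields that GitHub's REST API returns for PRs, and leaves fetching them out:

```python
from datetime import datetime

def velocity_stats(prs: list[dict]) -> dict:
    """Summarize merged PRs: count, and mean hours from open to merge."""
    merged = [p for p in prs if p.get("merged_at")]
    hours = [
        (datetime.fromisoformat(p["merged_at"].replace("Z", "+00:00"))
         - datetime.fromisoformat(p["created_at"].replace("Z", "+00:00"))
         ).total_seconds() / 3600
        for p in merged
    ]
    return {
        "merged": len(merged),
        "mean_hours_to_merge": round(sum(hours) / len(hours), 1) if hours else 0.0,
    }
```

Keeping the metric computation separate from the API calls is what makes a script like this quick to review and test, which is the commenter's point below.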
These were relatively small scripting apps that, yes, I code-reviewed and checked for security issues.
I don't see why this wouldn't be a valid professional tool. It's working well, saves me time, is fun, and is safe (assuming proper code review and LLM tool usage).
With the little scripts it creates, it's actually pretty quick to validate their safety and efficacy. It's like verifying solutions to NP problems: checking an answer is far cheaper than producing one.
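The NP analogy can be made concrete with subset-sum, a classic NP-complete problem: finding a subset that hits a target is expensive in the worst case, but checking a proposed subset is a linear scan. A toy verifier (names are illustrative):

```python
def verify_subset_sum(numbers: list[int], target: int, certificate: list[int]) -> bool:
    """Check a proposed solution to subset-sum.

    Verification is cheap even though searching for a certificate
    is exponential in the worst case -- the asymmetry the NP analogy
    is pointing at.
    """
    pool = list(numbers)
    for x in certificate:
        if x not in pool:
            return False  # certificate uses a number we don't actually have
        pool.remove(x)   # consume it, so duplicates are counted honestly
    return sum(certificate) == target
```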
This is complicated by the fact that some people use “vibe coding” to mean any kind of LLM-assisted coding.
But trying to use it like “please write this entire feature for me” (what vibe coding is supposed to mean) is the wrong way to handle the tool IMO. It turns into a specification problem.
I've also been close to astonished at the capability LLMs have to draw conclusions from very large, complex codebases. For example, I wanted to understand the details of a distributed replication mechanism in a project that is enormous. Pre-LLM, I'd have spent a couple of days crawling through the code using grep and perhaps IDE tools, making notes on paper. I'd probably have to run the code or instrument it with logging, then look at the results in a test deployment. But I've found I can ask the LLM to take a look at the p2p code and tell me how it works. Then ask it how the peer set is managed. I can ask it whether all reachable peers are known at all nodes. It's almost better than me at this, and it's what I've done for a living for 30 years. Certainly it's very good for very low cost and effort. While it's chugging I can think about higher-order things.
I say all this as a massive AI skeptic dating back to the 1980s.
That makes sense, as you're breaking the task into smaller achievable tasks. But it takes an already experienced developer to think like this.
Instead, a lot of people in the hype train are pretending an AI can work an idea to production from a "CEO level" of detail – that probably ain't happening.
Older devs are not letting the AI do everything for them. Assuming they're like me, the planning is mostly done by a human, while the coding is largely done by the AI, but in small sections with the human giving specific instructions.
Then there's debugging, which I don't really trust the AI to do very well. Too many times I've seen it miss the real problem, then try to rewrite large sections of the code unnecessarily. I do most of the debugging myself, with some assistance from the AI.
I've largely settled on the opposite. AI has become very good at planning what to do and explaining it in plain English, but its command of programming languages still leaves a lot to be desired.
And it remains markedly better than when AI makes bad choices while writing code. Those are much harder to catch, and require poring over the code with a fine-tooth comb to the point that you may as well have just written it yourself, negating all the potential benefits of using it to generate code in the first place.
While I'll say it got me started, it wasn't a snap of the fingers and a quick debug to get something done. It took me quite a while to figure out why something appeared to work but really didn't (the LLM used command-line commands whose results Bash doesn't interpret the same way).
If it's something I know, I probably won't use an LLM (it doesn't match my style). If it's something I don't know, I might use it to get me started, but I expect that's all I'll use it for.
For me, success with LLM-assisted coding comes when I have a clear idea of what I want to accomplish and can express it clearly in a prompt. The relevant key business and technical concerns come into play, including complexities like balancing somewhat conflicting shorter and longer term concerns.
Juniors are probably all going to have to be learning this kind of stuff at an accelerated rate now (we don't need em cranking out REST endpoints or whatever anymore), but at this point this takes a senior perspective and senior skills.
Anyone can get an LLM and an agentic tool to crank out code now. But the real skill is getting them to crank out code that does something useful.
I feel no shame in doing the latter. I've also learned enough about LLMs that I know how to write that CLAUDE.md so it sticks to best practices. YMMV.
If I don't know how to structure functions around a problem, I will also use the LLM, but I am asking it to write zero code in this case. I am just having a conversation about what would be good paths to consider.
Found myself having 3-4 different sites open for documentation, context switching between 3 different libraries. It was a lot to take in.
So I said, why not give AI a whirl. It helped me a lot! And since then I have published at least 6 different projects with the help of AI.
It refactors stuff for me, it writes boilerplate for me, most importantly it's great at context switching between different topics. My work is pretty broadly around DevOps, automation, system integration, so the topics can be very wide range.
So no I don't mind it at all, but I'm not old. The most important lesson I learned is that you never trust the AI. I can't tell you how often it has hallucinated things for me. It makes up entire libraries or modules that don't even exist.
It's a very good tool if you already know the topic you have it work on.
But it also hit me that I might be training my replacement. Every time I correct its mistakes I "teach" the database how to become a better AI and eventually it won't even need me. Thankfully I'm very old and will have retired by then.
binarymax•3h ago
But I’ve come full circle and have gone back to hand coding after a couple years of fighting LLMs. I’m tired of coaxing their style and fixing their bugs - some of which are just really dumb and some are devious.
Artisanal hand craft for me!
baq•3h ago
Usually it isn't, though - I just want to pump out code changes ASAP (but not sooner).
binarymax•3h ago
It’s just not worth it anymore for anything that is part of an actual product.
Occasionally I will still churn out little scripts or methods from scratch that are low risk - but anything that gets to prod is pretty much hand coded again.
gardnr•1h ago
https://github.com/BeehiveInnovations/zen-mcp-server/blob/ma...
It basically uses multiple different LLMs from different providers to debate a change or code review. Opus 4.1, Gemini 2.5 Pro, and GPT-5 all have a go at it before it writes out plans or makes changes.
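The aggregation pattern (not zen-mcp-server's actual API, which I haven't reproduced here) can be sketched with stub reviewer callables standing in for the real model clients:

```python
from typing import Callable

def consensus_review(diff: str, reviewers: dict[str, Callable[[str], str]]) -> dict:
    """Collect independent verdicts from several models, then only
    proceed when a strict majority approves.

    Each reviewer callable stands in for a real API client (Opus,
    Gemini, GPT, etc.); the 'approve'/'reject' verdict convention
    is an assumption for this sketch.
    """
    verdicts = {
        name: ask(f"Review this change and answer approve/reject:\n{diff}")
        for name, ask in reviewers.items()
    }
    approvals = sum(
        1 for v in verdicts.values() if v.strip().lower().startswith("approve")
    )
    return {"verdicts": verdicts, "approved": approvals * 2 > len(reviewers)}
```

The appeal of the debate setup is that the models fail differently, so a bug one of them hallucinates past is more likely to be flagged by another before anything is written out.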