This still happens quite a bit, and it's just like taking away a hard task from someone less experienced. The difference is there is no point in investing your time teaching or explaining anything to the AI. It can't learn in that way, and it's not a person.
I like to vibe code single self-contained pages in HTML, CSS, and JavaScript, because there's a very slim chance that something in the browser is going to break my computer.
This is the problem I have seen a lot, from professionals to beginners: unless you actually carved the rock yourself, you don't really have insight into every detail of what you see. This is incidentally why programmers tend to prefer rewriting stuff to fixing it. We're lazy.
Hmm. And maintenance?
But this is, properly expressed, "see if it works, based on my incomplete understanding of the code, an understanding I haven't worked through and corrected by trying to write it myself, but have nevertheless communicated directly to an AI that may not have correctly understood it, and that therefore may not have even the vaguest inkling of the edge cases I haven't thought to mention yet".
Vibe-coded output could only properly be said to "work" if we were communicating our intentions in formal logic.
What you mean is "apparently works".
You notice how LLMs do well on small tasks and on small projects? The more LLM code you add to your projects, the slower and worse (read: more tokens) they perform. If this were by design (create bigger, unmaintainable projects so you can slowly squeeze more and more tokens out of your users), I'd have applauded the LLM creators, but I think it's by accident. Still funny though.
Except someone less experienced never gets to try: all the experienced programmers are now busy shepherding AI agents around, so they aren't available to mentor the next generation.
There will be bugs that the AI cannot fix, especially in the short term, which will mean that code needs to be readable and understandable by a human. Without human review that will likely not be the case.
I'm also intrigued by "see if it works". How is this being evaluated? Are you writing a test suite, or manually testing?
Don't get me wrong, this approach will likely work in lower-risk software, but I think you'd be brave to skip human review in any non-trivial domain.
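For what it's worth, "see if it works" can be backed by even a minimal automated check. Here's a sketch of the kind of smoke test I mean, using Node's built-in test runner - the cart.js module and calculateTotal function are made up for the example:

    // smoke.test.js - run with: node --test
    // Uses the test runner built into Node 18+, so no extra dependencies.
    import test from "node:test";
    import assert from "node:assert/strict";
    import { calculateTotal } from "./cart.js"; // hypothetical module under test

    test("happy path", () => {
      assert.equal(calculateTotal([{ price: 2 }, { price: 3 }]), 5);
    });

    test("edge case the prompt never mentioned", () => {
      // Vibe-coded functions tend to fail exactly here: inputs nobody described.
      assert.equal(calculateTotal([]), 0);
    });

Even a handful of tests like this turns "it apparently works" into something you can re-run after every prompt.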
Reminds me of Steve Yegge's short-lived CHOP - Chat Oriented Programming: https://sourcegraph.com/blog/chat-oriented-programming-in-ac...
I remain a Karpathy originalist: I still define vibe coding as where you don't care about the code being produced at all. The moment you start reviewing the code you're not vibe coding any more, by the definition I like.
My wife used the $20 claude.ai plan and Claude Code (the latter at my prompting) to vibe-code an educational game to help our five-year-old with phonics and basic math.
She noticed that she was constantly hitting token limits and that tweaking or adding new functionality was difficult. She realized that everything was in index.html, and she scrolled through it, and it was clear to her that there was a bunch of duplicated functionality.
So she embarked on a quest to refactor the application, move stuff from code to config, standardize where the code looks for assets, etc. She did all this successfully - she's not hitting token limits anymore and adding new features seems to go more smoothly - without ever knowing a lick of JS.
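(For anyone curious what a refactor like that can look like, here's a minimal sketch of the code-to-config move - the names and structure are illustrative, not from her actual project:

    // Before: each mini-game duplicated its own setup logic and asset paths.
    // After: one config array describes the activities, and shared code reads it.
    const ACTIVITIES = [
      { id: "phonics", title: "Letter Sounds", asset: "letters.json" },
      { id: "counting", title: "Numbers", asset: "numbers.json" },
    ];

    // One standard place to resolve asset paths instead of strings scattered everywhere.
    const assetUrl = (file) => `assets/${file}`;

    // One generic loader replaces a copy-pasted loader per activity.
    async function loadActivity(id) {
      const activity = ACTIVITIES.find((a) => a.id === id);
      if (!activity) throw new Error(`Unknown activity: ${id}`);
      const response = await fetch(assetUrl(activity.asset));
      return { ...activity, data: await response.json() };
    }

Adding a new activity then becomes a one-line config change instead of another copy of the loader, which is also plausibly why the token limits eased: there's far less duplicated code for the model to re-read.)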
She's a UX consultant so she has lots of coding-adjacent skills, and talks to developers enough that she understands the code / content division well.
What would you call what she's doing (she still calls it vibe coding)?
But I think the fact that she's managing without even knowing (or caring to know) the language the code base is written in means that it isn't really "coding" either.
She's doing the application architecture herself without really needing to know how to program.
I've copy-pasted snippets for tools and languages that I do not know. Refactored a few parameters, got things working. I think that counts as programming in a loose sense. Maybe not software development or engineering, but programming.
The first non-toy program I ever wrote was in BASIC, on ZX Spectrum 48, and although I don't have it anymore, I remember it was one of the shittiest, most atrocious examples of spaghetti code that I've ever seen in my life.
Everyone starts somewhere.
I do think we need a new definition for vibe-coding, because the way the term is used today shouldn’t necessarily include “not even reading the code”.
I’m aware that Karpathy’s original post included that idea, but I think we now have two options:

- Let the term vibe-coding evolve to cover both those who read the code and those who don’t.
- Or define a new term, something that also reflects production-grade coding where you actually read the code.

If that’s not vibe-coding, then what is it? (To me, it still feels different from traditional coding.)
I have a few problems with evolving "vibe coding" to mean "any use of LLMs to help write code":
1. Increasingly, that's just coding. In a year or so I'll be surprised if there is still a large portion of developers with no LLM involvement in their work - that would be like developers today who refuse to use Google or find useful snippets on Stack Overflow.
2. "Vibe coding" already carries somewhat negative connotations. I don't want those negative vibes to be associated with perfectly responsible uses of LLMs to help write code.
3. We really need a term that means "using prompting to write unreviewed code" or "people who don't know how to code who are using LLMs to produce code". We have those terms today - "vibe coding" and "vibe coders"! It's useful to be able to say "I just vibe-coded this prototype" and mean "I got it working but didn't look at the code" - or "they vibe-coded it" as a warning that a product might not be reviewed and secure.
Just like no one speaks of vibe-aeronautics-engineering when they’re “just” using CAD.
More specifically, GAIA in SDE produces code with a human in the loop to systematically ensure correctness - e.g., the way tptacek has been describing recently [2].
[1] https://en.m.wikipedia.org/wiki/Gaia
[2] https://news.ycombinator.com/item?id=44163063
Briefly summarized here I guess: https://news.ycombinator.com/item?id=44296550
Blind-coding.
This is very dumb. Of course you can.
When it's your problem being delegated, you can't delegate the consequences away. I can eat for you, but you won't be satiated that way.
You cannot delegate the act of thinking because the process of delegation is itself a decision you have made in your own mind. The thoughts of your delegates are implicitly yours.
Just as when you include a library in your code, you are implicitly hiring the developers of that library onto your project: your decision to use the library hires the people who wrote it to write the code it replaces. (This is something I wish more people understood.)
That's not to say that these models don't provide value, especially when writing code that is straightforward but can't be easily generalized/abstracted (e.g., some test-case writing, lots of boilerplate idioms in Go, and basic CRUD).
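To make the "straightforward but hard to abstract" category concrete, here's the shape of basic CRUD boilerplate I mean - a minimal in-memory sketch, with every name invented for the example:

    // Four near-identical operations: repetitive enough that an LLM writes them
    // reliably, but each real-world version accretes its own validation and
    // quirks, which is what makes them awkward to generalize away.
    const store = new Map();
    let nextId = 1;

    const createItem = (fields) => {
      const item = { id: nextId++, ...fields };
      store.set(item.id, item);
      return item;
    };

    const readItem = (id) => store.get(id) ?? null;

    const updateItem = (id, fields) => {
      const existing = store.get(id);
      if (!existing) return null;
      const updated = { ...existing, ...fields, id }; // keep the original id
      store.set(id, updated);
      return updated;
    };

    const deleteItem = (id) => store.delete(id);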
In terms of labor, I potentially see this increasing the value (and therefore cost) of genuinely experienced developers who can approach novel and challenging problems, because their productivity can be dramatically amplified through proper use of AI tooling. At the other end of the spectrum, someone who just writes CRUD all day is going to be less and less valuable.
That said, if you spend most of your time sussing out function signatures and micromanaging every little code decision the LLM makes, then that's time wasted imo and something that will become unacceptable before long.
Builders will rejoice, artisan programmers maybe not so much.
Maintainers definitely not so much.
A requirement to do so might lead to more, like loss of a job for the illiterate "programmer".
So you measure “productivity” in lines of code? Say no more.
So no, you don’t _need_ to read code anymore. But not reading code is a risk.
That risk is proportional to characteristics that are very difficult, and in many cases impossible, to measure.
So currently best practice would be to continue reading code. Sigh.
This is the logical conclusion of the indiscipline of undereducated developers who have copied and pasted throughout their careers.
This reality is then expressed as "but humans also copy and paste", as if that makes it OK to just hand the task over to an AI that might do it better, when the real solution is to train people not to copy and paste.
Everything about AI is the same story, over and over again: just pump it out. Consequences are what lawyers are for.
It's really interesting to me that within basically a generation we've gone from sneering at developers with old-fashioned, niche development skills and methodologies (Fortran, COBOL, Ada) to sneering at people with the old-fashioned mindset that knowing what your code is doing is a fundamental facet of the job.
I guess it's really hard to work with AI agents if you don't have real project experience in a more senior position.
* Treat the AI/ML as a junior programmer, not a senior - albeit one willing to take a leap on basically any subject. A junior is someone whose code must always be questioned, reviewed, and understood before execution. Senior code is only admissible from a human being. However, human beings may have as many junior AIs in their armpit as they want, as long as those humans do not break this rule.
* Have good best practices in the first f’in place!!
Vibe-coding is crap because ‘agile hacking’ is crap. Put your code through a proper software process, with a real workflow - i.e., don’t just build it and ship it. Like, ever. Even if you’ve written every line of code yourself - but especially if you haven’t - never ship code you haven’t tested, reviewed, proven, demonstrated in a production-analog environment, and certified before release. Yes, I mean it: your broken FNORD hacking habits will be force-magnified immediately by any AI/ML system you puke them into. Waterfall or gtfo, vibe-coders…
* Embrace Reading Code. Look, if you’re gonna churn milk into butter, know what both milk and butter taste like, at least for your own sake. Don’t ship sour butter unless you’re making cheese, and even then, taste your own cheese. AI/ML is there to make you a more competent human being; if you’re doing it to avoid actually doing any work, you’re doing it wrong. Do it to make work worth doing again…