But we definitely need a word that distinguishes between an LLM autonomously generating software and a human ultimately curating all the code (even if they're using an LLM to generate it).
If not vibe coding, what should we call that?
In this case, most people have to be told what "vibe coding" is. I think hardly anyone guesses what it is by just hearing the name.
So I'm not at all surprised that people are using it to mean other things already.
Why do we need a term anyway? Neologisms are nice for writing articles, but I feel like we get lost in the weeds unnecessarily trying to categorize things. It's like trying to come up with genre names for electronic music. The more terms we create, the less useful they become.
I always thought of it as a temporary term anyway, while we reconcile this technology with the status quo. In a few years I'll bet it will go back to being just "programming" or "development".
Or were your customers/bosses vibe coding when they told you their requirements?
For the same reason I think it’s useful to distinguish between purely AI-generated imagery and using AI tooling in some specific and restricted way. For example, if someone uses an AI de-noising program on a RAW photograph, the implications are entirely different from generating the entire image.
Until recently, it was not possible to generate sophisticated programs from scratch, across a wide variety of problem domains, without being involved in writing the code. This type of “development” is entirely unlike a more limited “AI-assisted” workflow, and the implications of each are quite different.
As a viewer of artwork, how the art was produced entirely changes how I value it.
As a user of software - especially in certain categories - how the software is produced also changes how I value it, whether or not I trust it with my data, whether I'd be willing to install it on my computers, etc.
To your point, there’s a spectrum of AI involvement. But I think it’s necessary and useful to have language that helps identify software that sits on the extremes of that spectrum. Classifying things across the range is more difficult.
Usually spam
Otherwise all my customers/bosses did vibe coding way before LLMs.
So why do we need a term? What difference in action will hearing a different term drive?
On that front, I like the fact that the term "vibe coding" exists because anyone who uses that term unironically has just told me that they care more about the process than the result. Now that is information that changes my actions because it changes how seriously I take them.
I know a crash-course day's worth of programming languages (Python and C), but I have a very strong understanding of the principles of programming. So LLMs are a godsend, because I can basically write code, function by function if need be, in English.
"Create this variable, do these mathematical transformations on it with retrieved value from API, display the output of it here, also compare it to the output of everything in the test set, store differences greater than 30% to their own set, store all values in an SQLite database, create a simple GUI with a field showing each output, make a button that outputs the results to a .pdf, give it a title block with labeled results listed, etc. etc."
Does coding become vibecoding when you do it in English? hah
"There’s a lot of hype about vibe coding"
There is incredibly little hype around vibe coding. 99% of the comments about vibe coding are people propping it up as a strawman to knock down. Otherwise it fills the void left by Web 3.0's decline into irrelevance: a bunch of useless, masturbatory noise from people trying to get in front of something they think will be a thing. Maybe they can put it on the blockchain.
All of us normal people incorporate LLMs into our work process. No vibes at all. Just another tool in our belt.
EDIT: Just discovered that my comment is apparently dead by default, which is...interesting.
What the author is proposing isn't really "vibe" anything, it's just dedicating a small amount of time to fixing tech debt in a way that happens to involve an LLM as an assistant. The LLM in this model is honestly mostly superfluous.
Don't get me wrong, this absolutely is how LLMs should be used in a professional setting, but I just question why we needed a name and a blog post for it. This is just responsible code maintenance as it's always been.
[0] https://x.com/karpathy/status/1886192184808149383?lang=en
One is ok, the other is risky
Judging from the style and complete lack of substance I don't think there is an author, unless you count ChatGPT as one.
That didn't stop the submission from rocketing to #1 on here though since it jingles the right set of keys.
I have a feeling a lot of "vibe coders" are joining the community. Could be bot spam too. At this point I'm starting to miss the liberal thought pieces.
I have been coding since I was 12 years old. I always loved writing code. But I also always loved reading code. I don't know, for me code is a kind of art. I've never met anyone else who sees it like this. When a friend was hiring for his startup a while ago, I was happy to sit down for multiple hours, read all the code the applicants wrote, and give him advice on whom to hire.
So for me, the new times are paradise. I try not to touch code directly anymore. I write prompts that would enable a really good developer to implement features, and then let various LLMs work on it. Afterward, I rate the results so I have an overall score for each LLM. I pick the best solution for my codebase and manually fine-tune it to perfection.
After each commit, I also ask the LLMs whether they can find anything in the files that can be refactored to make the code shorter or more logical. The LLMs often find stuff to improve: they usually come up with ten ideas I dislike, but also one idea that I like. And so the codebase becomes better and better over time, instead of worse and worse like in the past, when you had to balance refactoring for the sake of beautiful code against building new features. Nowadays, refactoring becomes more and more free.
Citation needed? The worst kind of code tends to be clever code. This seems like a lot of code churn for no real benefit other than some loose definition of "better". How do you prevent bugs with these constant refactors?
Sonnet 3.7 with thinking is my go-to.
Deepseek R1
Gemini 2.5 pro (I've heard it said Gemini is outperforming Sonnet, but I find Sonnet more consistent)
o1-mini
Depending on what I'm doing it's generally either via Cody or Aider.
Maybe I'm just bad at getting it to do things, but I think your question about "letting go" is the real story. A lot of people aren't paying close enough attention to what's coming out of the LLM, and the tech debt building up is going to come back to bite them when it reaches the point where the LLM can no longer make progress and they have to untangle the mess.
I ask the LLM "Can you find anything in this file(s) that can be made shorter or more logical?"
And then, as I said, I like less than 10% of the ideas the LLM comes up with. But it is so fast to read through ten ideas (a minute or so) that it is well worth it.
Probably an opportunity to category kill that niche.
"Got a vibe coded prototype you want to make more robust?"
Proper refactoring with an LLM requires full test coverage, and by the time you do all that, is the refactoring really necessary? I prefer 100% stability. If you're refactoring because the code is poorly structured and unreadable, that's okay... LLMs can help you understand it.
In my use of LLMs, I find it's actually much easier to rebuild something from scratch than to refactor flawed code. It's much less likely to inherit strange assumptions and code smells that way.
With all that said, the one prompt I do use when refactoring is to tell the LLM to do a lossless refactor and then follow up with "was this really lossless?" It's not foolproof. LLMs love to lie.
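One way to trust a "lossless" claim more than the model's own answer is a cheap equivalence spot-check between the old and new code. A hypothetical sketch (both functions stand in for real before/after versions):

```python
import random

def original(xs):
    # Pre-refactor version: sum of squares via an explicit loop.
    total = 0
    for x in xs:
        total += x * x
    return total

def refactored(xs):
    # The LLM's supposedly "lossless" rewrite of the same function.
    return sum(x * x for x in xs)

def spot_check_lossless(f, g, trials=1000):
    """Compare f and g on random inputs. Not a proof of equivalence,
    but more trustworthy than asking the model 'was it lossless?'."""
    rng = random.Random(0)  # fixed seed so the check is reproducible
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 10))]
        if f(xs) != g(xs):
            return False
    return True
```

For code with side effects or I/O this naive check doesn't apply directly, but for pure functions it catches most behavior changes immediately.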
Proper refactoring WITHOUT an LLM requires full test coverage! But most definitely WITH an LLM.
In cases where there is no test coverage, the first thing I do is have the LLM write one test at a time. The problem there is that if you truly wanted valid tests, you'd have to actually break the code, watch the test fail to prove it was valid (basically, the inverse of TDD) and then re-fix the code and start on the next test, but in practice, it is difficult to get an LLM to stick to this loop. I wish someone would train or refine some coding LLM to use either TDD or this form of "inverse TDD" where you're applying tests after the fact and also want to check their validity. (Or tell me how to do it.) Because mere prompting doesn't seem to stick- it always regresses to the mean eventually.
(I'm currently seeking work, btw, and would probably be happy to help refactor old code, advise people on code, etc. Sorry for the self-promo.)
Coming in as a gray hair to fix their disaster definitely pays the bills.
i.e. "I have this function. Can you suggest ways to make it more efficient?" etc.
Sometimes, it gives good feedback, sometimes, not. I almost always need to modify whatever it gives me.
Or will they do what most companies do when they sink millions of dollars into a codebase that doesn't work: dump the codebase, dump the team, hire a new one, and build from scratch?
What does everybody think?
edit: oxford comma is life.
Some questions:
Does repomix do anything GitHub Copilot should be doing? It seems like this should be something Copilot does automagically.
Does it work on any language? I notice the repomix GitHub page suggests a different tool if you're using Python.
It seems straightforward to create an output.xml on the repomix site, but is there an opinionated try-it-free AI to use that output with?
I'm tired of trying things only to get AI slop. If this cures the slop, I would be interested.
edit: it seems the HN discussion is dominated by the definition of "vibe coding" and not at all interested in what the article presented as a solution... nice.