And apparently the original cliché as well.
Ask Nokia, BlackBerry and Kodak.
LLMs have been so spectacularly useless the couple of times I've tried to use them for programming that I can't really wrap my head around what this experience must be like.
However, for most cases I've tried, I get wildly incorrect and completely non-functional results. When they do "function", the code uses dangerously incorrect techniques and gives the wrong answer in ways you wouldn't notice unless you were familiar with the problem.
Maybe it's because I work in scientific computing, and there just aren't as many examples of our typical day to day problems out there, but I'm struggling to see how this is possible today...
My favorite comment so far (I haven't gotten to the end) paraphrased:
"I don't know what Swagger is, but let's just paste it in here."
Somehow he figured out that Swagger docs tell Cursor enough to figure out how to talk to this API. Which is exactly what Swagger is for!
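For anyone as lost as the commenter below: a Swagger (OpenAPI) document is just a machine-readable JSON/YAML description of an API's endpoints and parameters. A rough Python sketch of why pasting one into a tool gives it everything it needs (the URL here is hypothetical; any OpenAPI JSON works the same way):

    import json
    from urllib.request import urlopen

    # Hypothetical spec URL; any OpenAPI/Swagger JSON document works the same way.
    SPEC_URL = "https://api.example.com/openapi.json"
    spec = json.load(urlopen(SPEC_URL))

    # The spec enumerates every endpoint, method, and parameter, which is
    # exactly the information an LLM needs to write calls against the API.
    for path, methods in spec.get("paths", {}).items():
        for method, details in methods.items():
            if isinstance(details, dict):
                print(method.upper(), path, "-", details.get("summary", ""))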
Seems like the odd, formal syntax of programming languages is the major barrier keeping many people from doing software development. Because he is doing every other step a professional developer does when building an application.
Just from what you wrote, I don't know Cursor, but it sounds like something to do with word processing, so maybe it helps write docs. Then Swagger, that sounds like maybe it goes around grabbing free stuff? Maybe it's a dependency manager?
Not a dev but been “vibe coding” since ChatGPT came out. The LLMs can write a book… if you try to accomplish it with a single prompt it’s trash. If you construct the book chapter by chapter it’s a lot better and more cohesive.
You don’t build the app with a single prompt - you build a function or file at a time in a modular, expandable format.
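A minimal sketch of that file-at-a-time workflow using the OpenAI Python client (the model name and module specs below are stand-ins, not recommendations):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical decomposition: one prompt per file, never one prompt per app.
    modules = {
        "db.py": "SQLite helpers: connect, create tables, CRUD for a 'notes' table",
        "app.py": "Flask app exposing /notes endpoints, importing helpers from db.py",
    }

    for path, spec in modules.items():
        response = client.chat.completions.create(
            model="gpt-4o",  # stand-in model name
            messages=[
                {"role": "system",
                 "content": "You write one module of a larger app. Output only code."},
                {"role": "user", "content": f"Write {path}. Requirements: {spec}"},
            ],
        )
        with open(path, "w") as f:
            f.write(response.choices[0].message.content)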
Hackers are comfortable working in the dark— navigate with a flashlight (some background knowledge, understanding of syntax, data structures, secure coding practices etc) and you can get where you’re going a lot quicker and can try out a lot of different routes you may not have seen or had an opportunity to explore otherwise- maybe stumble upon an Easter egg along the way.
You don’t necessarily need to spend hours reading the documentation on an unfamiliar library if you know how to get the AI to understand it, reinforce it with some examples and use it- maybe in that process it expands your perspective or gives you an idea to incorporate into your production grade environment.
With how quickly things advance- it seems rapid prototyping would allow you to qualify what’s worth investing time in vs what’s not.
If you know about DAST, SAST and containers you can probably create a workflow for prototype qualification that isn’t total trash, then pass it to a more technically savvy, specialized team member if warranted?
Exploratory data analysis doesn’t seem wholly dissimilar in value- never know when you’ll stumble across a good nugget to feature engineer if you aren’t actively mining and exploring.
“Vibe coding”==you’re getting the model to do what YOU want. Craft some nefarious things to understand how to hold the reins on the beast and that’s a decent starting point.
If the LLM is useless- learn up on NLP, word embeddings and BERT and fine-tune one to your specific use case. Don’t use the same chat session to make every file- manage the memory and tokens strategically and use few-shot and multi-shot prompting to specialize the session’s knowledge.
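To make the few-shot idea concrete: you seed the session with worked input/output pairs before the real request, so the model mimics the pattern. A sketch with placeholder examples:

    # Few-shot prompting: seed the session with example input/output pairs
    # so the model mimics the pattern. The examples here are placeholders.
    messages = [
        {"role": "system", "content": "Convert plain-English rules to SQL."},
        # shot 1
        {"role": "user", "content": "All users who signed up this week"},
        {"role": "assistant",
         "content": "SELECT * FROM users WHERE signup_date >= date('now', '-7 days');"},
        # shot 2
        {"role": "user", "content": "Count of orders per customer"},
        {"role": "assistant",
         "content": "SELECT customer_id, COUNT(*) FROM orders GROUP BY customer_id;"},
        # the real request
        {"role": "user", "content": "Customers with no orders in the last 90 days"},
    ]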
Maybe things become a lot more bespoke and require fewer dependencies- less susceptible to supply chain attacks. More variety could make your system less susceptible to automated attacks and make the pyramid of pain stronger.
If everyone reverse engineers the dependencies and builds most things in house with their own twist, maybe that enables more flexibility with custom encoding and makes it less intuitive for an attacker to analyze your tech stack and infer how it operates.
—surely oversimplifying a few things and missing out on some production grade concepts, but just grasping that the same thing that’s viewed as creating security gaps could also be used as a mechanism to close some if used efficiently and strategically. — it’s not competition to a dev, use it so you can learn more and do better
I'd be concerned purchasing a book from a "programmer" who claims to teach people how to code without code. Kinda sounds like an "author" who publishes books without writing books.
If vibe coding somehow becomes the method of programming, then code will become obsolete. Hear me out:
Why code when you can just ask the computer to do what you want and get the results? The coding part is abstracted deep in the background where no human need venture.
When vibe coding dominates, it's not that people won’t know how to code anymore, it's that coding becomes irrelevant. The same way that there are people who still know how to ride horses, but it's irrelevant to transportation. When vibe coding reaches its peak, programming languages will evolve into something unrecognizable. Why do we need a human readable programming language when no human needs to read it? I picture a protocol agreed upon by two computers, never released to us humans.
Because then you won't know the design of the code or how it even works.
The hard part of coding isn't writing the code itself. It's the design of the code that takes skill, and if you leave that part completely up to AI, you are taking your life in your hands. Bad idea.
When the person building the application doesn't know or care, the application will still be deployed.
Resistance is futile.
We will adapt.
Something needs to be done. It should be uncontroversial to require solid understanding of fundamentals from software professionals, yet here we are discrediting knowledge by calling such things "gatekeeping." It's reckless behavior as the industry is hellbent on hoarding as much personal information as it possibly can. Information that any responsible professional should be working to keep secure at the very least.
Emergency services, hospital infrastructure, financial systems (like Social Security, where a missed check may actually mean people starve) are all places where you don't want to fail because of a weird edge case. It also feeds into the fact that fixing those edge cases requires some understanding of design in general, and of the design as implemented.
Then there's the question of liability when something goes wrong. LLMs are still computers right now: they do exactly, and only, what you tell them to do.
I would argue that this is already true for people who practice vibe coding, because otherwise they'd realize they could spend less time just banging it out themselves instead of twisting prompts to get something that mostly works and needs hours of debugging.
Then any end users with the proper credentials can vibe code UIs (web apps, iOS and Android apps) that call those APIs to their heart's content.
We may also need operating systems and web browsers hardened in new ways to survive vibe coded apps.
That does mean it's hard to break the app, but it also means people quite frequently run into those limits.
If you're writing code for production, even if you get an LLM to put together bits of it for you, that's programming. It's pretty much copy-and-paste-from-StackOverflow, if StackOverflow had a massively larger library of snippets that almost always included the thing you needed at that exact moment.
Professional programmers still need to take responsibility for making sure the code they are producing actually works!
It's well past time for traditional "high level" programming languages to meet the same fate.
Natural language to formal language does not provide that. How the hell would I debug or operate a system by just looking at a prompt? I can't intuit why the LLM generated anything the way it did. I will always have to be able to read the output.
AFAICT, the only people who say you can remove code are people who don't code. I never hear this from actual devs, even if they are bullish on AI.
SIMD optimization is already handled well by the current generation of models [1]. There will be no point in doing that by hand before too long. An exercise for antiquarians, like building radios from vacuum tubes.
I never hear this from actual devs, even if they are bullish on AI.
You're hearing it from one now. Five years from now the practice of programming will look quite different. In ten to fifteen years it will be unrecognizable.
> How many C++ coders these days can read x86 or ARM assembly? Not many, because they almost never need to. It's well past time for traditional "high level" programming languages to meet the same fate.
There is a misunderstanding, let me rephrase. How will I operate and maintain that software without a high level language to understand it? Or do you think we will all just be debugging asm? The same language you just said people don't bother to learn? Or am I supposed to debug the prompt, which will nondeterministically change the asm, which I can't verify because I can't read it?
Doesn't matter how it evolves, some human-readable high level language that deterministically generates instructions will always be needed. To try and replace that is kind of counterproductive, imo. LLMs are good at generating high level language. Leave the compilers to do what they are good at.
Data compression on a massive scale and NLP search on top of that will not be the thing that finally does it. Code is logically constrained so it can be load bearing.
If NLP coding is ever solved that might change. But LLMs did not solve NLP; they improved massively on the state of the art, but they are still riddled with glaring issues, like devolving into nonsense often and in unpredictable ways.
All LLM-as-AI hype hinges on some imaginary version of it that is just around the corner and solves the current limitations. It's not about what is there, but what ought to be in the mind of people who think it's the silver bullet.
I was big into writing text adventures in those days, where the central problem was how to get the computer to understand what the user was saying. It was common for simple imperative sentences to be misinterpreted in ways that made players want to scream in frustration. Even the best text parsers — written by the greatest minds in the business — could seem incredibly obtuse, because they were. Now the computer is writing the fucking game.
You had to be there to understand what a big deal this is, I guess. If you went back in time and brought even our current primitive LLM tech with you, you'd be lucky not to be burned at the stake. NLP is indeed 'solved,' in that it is now more of a development problem than a research problem. The Turing Test has been passed: you can't say for sure if you're arguing with a bot right now. That's pretty cool.
But you're right. The NLP interface is cool. It's kinda like VR. It would be awesome if it worked the way we dream it could.
Maybe that's why we keep getting hung up on implementations that make it seem like they got it figured out. Even when they clearly haven't, we still avert our eyes from the fraying edges and make believe.
Maybe that's why both fields are giant money pits.
... which is my point as well. You can no longer tell if your interlocutor is even human, and yet you're still thinking and talking about books written in the 1980s.
(NLP wasn't much of a money pit back in the 80s, I know that much. If it was, somebody else must've been getting all the money...)
I don't really think modern chatbots pass the Turing test. It's not that hard to figure it out.
> (NLP wasn't much of a money pit back in the 80s, I know that much. If it was, somebody else must've been getting all the money...)
No, the point is that it's become one now that LLMs make it seem like we finally got the NLP interfaces we've been dreaming about for decades.
A lot of people on Reddit didn't figure it out, if you've followed that story ( https://old.reddit.com/r/changemyview/comments/1k8b2hj/meta_... ).
You can file that under the apparently-infinite set of Things That Are Only Going to Get Worse.
The initial PR did introduce a buffer overflow.
Also keep in mind that there was no novel vectorization, there were already multiple SIMD implementations for other ISAs.
This is what it must have felt like when a few people started suggesting that horses were probably not going to be the way people got around for much longer, and other people giggled and guffawed at them and said "LOL" in Morse code, or whatever the memetic currency of the day was.
All the first group could do was wrinkle their noses and reach for the doorknob, once they realized that winning an argument with such people was neither possible nor necessary.
Even Star Trek has engineers with programming experience who can dig in when the voice-controlled computer goes awry.
Below worked for me
intext:"vibe coding" before:2025/02/01
The first result I got was this one https://vibecoding.vercel.app/ - which is shown as being from 2023... but that's fake.
Read the page and it starts by saying:
> On November 30, 2023, Andrej Karpathy coined the term "vibecoding"
And then includes a screenshot of Andrej's tweet - this one: https://twitter.com/karpathy/status/1886192184808149383
But that tweet is dated 2nd February 2025. The Vercel-hosted site is lying about the date.
I don't think Google's "before" filter actually works. It looks to me like blog spammers have figured this out and now habitually back-date their posts to a date well before they were written.
Another clue: that article links to this one ("describe what you want") https://alitu.com/creator/workflow/what-is-vibe-coding/ - which links to Andrej's February tweet as the origin of the term.
It's funny, I tried searching those exact keywords and this exact comment is on the top page.
Friend of mine suggested “apping”.
I ‘apped’ this in 2 hours vs I ‘vibe coded’ this in 2 hours.
I AOCSed that shit together in 2 hours.
Doubles as a play on awk, the Unix text-manipulation command.
Picture this- are tools like Devin "vibe coding"?
If we break down the mechanics of what interfaces it's looping through:
1) Chat 2) IDE 3) CLI 4) Dev console/Browser
and it's effectively copying and pasting what it sees while trying to complete an objective it doesn't fully comprehend. Blissfully ignoring the ramifications of desired combinations as long as decent version control practices are being applied. Iterating on and subtly adjusting prompts along the way to debug when getting stuck in a thought loop. Changing your prompt from "fix it" to something with more "pizazz" as the key to breaking this cycle.
how is it any different than when I do all this manually?
Slog through this game of 4 square long enough and you can pretty much vibe anything together.
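A rough sketch of the loop described above (the surfaces and stub functions are hypothetical stand-ins, not Devin's actual API):

    SURFACES = ["chat", "IDE", "CLI", "browser"]

    def observe(surface):
        """Read the current state of one surface (stub)."""
        return f"current state of {surface}"

    def act(surface, observation, prompt):
        """Copy/paste/edit based on what was seen (stub); report success."""
        print(f"[{surface}] applying {prompt!r} to {observation!r}")
        return False  # pretend the objective isn't met yet

    prompt = "fix it"
    for attempt in range(3):
        if any(act(s, observe(s), prompt) for s in SURFACES):
            break  # objective met
        # stuck in a thought loop: re-prompt with more pizazz
        prompt = "pretty please, with pizazz: fix it"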
To the publishers it's not a mistake, it's just clever marketing. Consider which of these two jumps off that glossy cover and into the distracted eye of a Technical Program Manager most readily: AI-Assisted Programming, or Vibe Coding?
Now consider whether either of those parties feels an obligation to help maintain coherence of the software community's technical discourse.
---
The part we are seriously missing is a text specification of the software. For any AI-coded software, the modular text spec must be neatly written and organized in the ./docs/spec/ folder. It is the foundation for any new model to regenerate the entire software.
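A sketch of what that could look like in practice: modular spec files under ./docs/spec/, concatenated in order to give a model the full context for regeneration (the file names are hypothetical):

    from pathlib import Path

    # Hypothetical layout:
    #   docs/spec/00-overview.md
    #   docs/spec/10-data-model.md
    #   docs/spec/20-api.md
    spec_dir = Path("docs/spec")

    # Concatenate the modular spec files in order, ready to hand to a
    # model as the foundation for regenerating the entire software.
    spec = "\n\n".join(f.read_text() for f in sorted(spec_dir.glob("*.md")))
    print(spec[:500])  # preview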
billy99k•2mo ago
It's similar to the whole hacker/cracker debate. Words become defined by the one that has the most influence over the community and sometimes evolve on their own through places like social media.
esperent•2mo ago
Next one down is dictionary definition (or claim to authority, for example a tweet where the term was first used). But community meaning takes precedence.
Authors are free to use a nonstandard meaning but should provide readers with their definition if they want to be understood.
9rx•2mo ago
No. That would make it impossible for someone new to a community to communicate with a community, all while at the same time a community isn't going to accept someone new who isn't communicating. Yet clearly people do join communities.
What happens in reality is that everyone accepts that the speaker's definition reigns supreme, and if there is suspicion – absent of a definition – that there is a discrepancy between the speaker's definition and one's own definition, the speaker will be asked to clarify.
A speaker may eventually adopt a community's definition, but that doesn't always happen either. Look at the Rust users here. They hang desperately onto words like enum that do not match the community definition. But it doesn't really matter, does it? We remember that when they say enums they really mean sum types and move on with life.
esperent•2mo ago
"Communities" here also have a hierarchy:
1. All speakers of a language
2. Speakers in a region (dialect)
3. Subgroups like teenagers, engineers, gamers, redditors
4. Sub-sub groups like teenagers from a specific school, engineers of a specific discipline, gamers of a certain game
x. Personal dialect (idiolect), which is a subgroup that exists separately from all of these and is most closely related to family and the friend group you had as a child and teenager.
We are all members of many different groups which intermesh, and we seamlessly switch meaning and pronunciation as we do that (code switching).
When joining a community, people start at the top (learn French) then move through subgroups. Dictionaries and textbooks are designed to match as closely as possible to the top level community's consensus of word meaning, which, being large, is also the most static. But any language learner will know that textbooks can only get you so far in learning how language is actually spoken day to day. You have to join the community to learn that.
The same applies as you go down to dialects and other sub groups. You can learn a bit before you join, but becoming a part of the community, linguistically, can only happen after joining.
9rx•2mo ago
A good speaker will define the words as he speaks (still relying on some baseline shared understanding of the most common words, of course; there is only so much time in the day...) so there is no room for confusion, but in absence of that it is expected that the listener will question any words that show an apparent disconnect in meaning, allowing the speaker to clear up what was meant, to ensure both parties can land on the same page.
sampullman•2mo ago
Even more so when it's an author/reader relationship. The reader is free to interpret the book/article/etc. how they want, and if enough agree, it becomes the consensus.
9rx•2mo ago
Where audience understanding is important, you will definitely go out of your way to ensure that definitions are made abundantly clear and that the audience agrees that they understand.
But, in actual practice, most of the time the audience understanding really doesn't matter. Most people speak for the sake of themselves and themselves alone. If the audience doesn't get it, that's their problem. Like here, it means nothing to me if you can't understand what I'm writing.
9rx•2mo ago
While the work is being written, the author is the only known audience. In practice, if an author cannot find enough motivation to write something for himself, it simply won't get written. Anyone else who happens to become the audience is gravy, but the work cannot be written for them as they only become known as an audience later.
sampullman•2mo ago
I think we're quibbling over what "matter" means. If you look at it from this perspective, I understand where you're coming from.
In my opinion it's almost exactly the opposite - once something is written the only thing that matters is the audience.
9rx•2mo ago
But that doesn't mean the audience came from the same place. It is very possible, and often happens, that they heard/created an entirely different definition for the same word. The speaker cannot possibly use their definition before even knowing of it.
afiori•2mo ago
If you are trying to tell something to someone, you should say it so that they understand what you want them to understand.
It does not mean that the audience can redefine your words on the fly, it just means that you cannot expect them to magically read your mind.
9rx•2mo ago
I know the tech crowd in particular loves to make up their own pet definitions for words and then double down on refusing to acknowledge that any other definition is possible, thereby continually talking past each other because there is no shared lexicon to hilarious effect, but that's not the norm, thankfully.
To each their own. However, we're all in this together.
afiori•2mo ago
How would you even know which language anyone is speaking?
Counterproposal: words are a tool for communication and meaning is something we gather from the communication. In this, words are no different than hand gestures, facial expressions, and body language.
The parties to a communication can only communicate effectively if they agree enough on the meaning of words/gestures/expressions/actions (which is why we cannot speak a language we do not know)
9rx•2mo ago
You can, however, speak a language you do know even when the listener doesn't know said language. Thus proving that the words spoken are defined by the speaker. It must be that way, fundamentally – as you suggest, you can only speak the language you know. If that tells you that words have no meaning, sure. That assertion means nothing anyway.
afiori•2mo ago
What they are trying to say is
"the speaker defines the meaning of words" -> everybody understand things differently and there is no external absolute authority of meaning
and
"the listener defines the meaning of words" -> people are going to understand their interpretation of what you say, not what you mean
Both are useful and important statements people can learn from, while "X defines the meaning of words" is meaningless.
9rx•2mo ago
Of course, at this point we're ultimately getting stuck in that nerd thing I spoke of in another comment: "I know the tech crowd in particular loves to make up their own pet definitions for words and then double down on refusing to acknowledge that any other definition is possible, thereby continually talking past each other because there is no shared lexicon to hilarious effect, but that's not the norm, thankfully."
So, while hilarious, I will break the cycle and ask you to clarify what you mean by "define" so that I can shift to using your definition. I'd have used it from the start but I haven't quite figured out how to read your mind yet, so I, as the speaker, unfortunately, was beholden to defining it as I understand it.
smokel•2mo ago
Another unfortunate example is the increasingly negative connotation assigned to the word "algorithm".
dylan604•2mo ago
bet
cruffle_duffle•2mo ago
And while the general public might not know the fine distinctions between these, I think society does get that there’s a whole spectrum of actors now. That wasn’t true in 2000—the landscape of online crime (and white hat work) hadn’t evolved yet.
Honestly, I’m just glad the debate’s over. “Cracker” always sounded goofy, and RMS pushing it felt like peak pedantry… par for the course.
That said, this whole “vibe coding” thing feels like we’re at the beginning of a similar arc. It’s a broad, fuzzy label right now, and the LLM landscape hasn’t had time to split and specialize yet. Eventually I predict we’ll get more precise terms for all the ways people build software with LLMs. Not just describing the process but the people behind the scenes too.
I mean, perhaps the term “script kiddie” will get a second life?
tptacek•2mo ago
https://news.ycombinator.com/item?id=1865063
jimbokun•2mo ago
The article acknowledges this.
alganet•2mo ago
Ideas, ideas are much sturdier.
If you change the meaning of something too radically, it has a tendency to snap back.