Worse yet, there are grifters like levelsio who will pretend like it's great because they know that's what their followers want to hear.
Similarly, ~100% of the written word will probably be penned by LLMs soon, if not already, but that doesn't necessarily mean the writing is any good, only that LLMs can type really fast for pico-pennies on the dollar.
But it is slower than someone who is knowledgeable about the library and language, unless you don’t care about correctness.
Having inherited a couple of hastily written but dirt cheap offshored codebases to fix, I would rather vibe code than try to untangle those garbage codebases.
Vibe coding is terrible compared to the thoughtful output of a seasoned dev, but it is significantly cheaper, faster, and easier to iterate on than the lowest-bidder offshore work. Undoubtedly most offshore coders will just vibe code anyway, so in the near term I suspect LLMs will impact these “programmers” the most.
I’m using AI to increase my productivity, but whenever I’ve vibe coded (not intentionally, just by getting caught up in the dumb vibes) I’ve regretted it. I’ve ended up with a tangled mess.
It sorta worked, and sorta didn’t. I’m seeing no evidence that this round is different. LLMs allow coding via natural language, assuming a lot of context that is typical of human conversation, but a lot of coding is delving into nuance, which is going to be work no matter the tool.
Vibe coding promises the world and, behold, fails to deliver.
All the excitement I felt was because it was new to me, but it was really just a very basic toy, not a real piece of software.
Meanwhile there were seasoned pros who could probably do amazing things even with GWBASIC. But that was more in spite of it than because of it.
This is the core issue: the best language we have for specifying this nuance is still programming languages, not English.
Moving up from ASM -> C -> Python, we're mainly abstracting over performance and implementation details, not functionality.
Here though, in making the next jump from high-level Python to English-specified LLM output, we're trying to abstract over functionality. This doesn't work; it's flawed from the start.
I really do feel like we're living in completely separate worlds, so many people are very enthusiastic about LLMs, and every time I try them, they leave me completely disappointed.
The decision to automate is already made; no need to rush through it.
It doesn't feel that different, but it is a little faster.
If you have to understand something you didn't already understand, it probably takes a fair amount of time to read and verify what the script does. This can be a good learning experience and reveal unknown unknowns, but it probably isn't a massive speedup.
My most recent example of this: write a script to delete all git branches that point to HEAD.
Go ahead, see how long it takes you to figure out the command for that, because it took me under 60 seconds with AI.
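For reference, a minimal sketch of what such a script could look like, assuming a reasonably recent git (`--points-at`, `--show-current`); this is just an illustration wrapped in Python, not the exact command the AI gave me:

```python
import subprocess

def git_lines(*args):
    """Run a git command and return its output as a list of non-empty lines."""
    out = subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout
    return [line.strip() for line in out.splitlines() if line.strip()]

# Local branches whose tip is the same commit as HEAD.
branches = git_lines("branch", "--points-at", "HEAD", "--format=%(refname:short)")

# The branch we're currently on (empty if HEAD is detached); never delete it.
current = subprocess.run(["git", "branch", "--show-current"],
                         capture_output=True, text=True, check=True).stdout.strip()

for branch in branches:
    if branch != current:
        subprocess.run(["git", "branch", "-D", branch], check=True)
```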
Learning about `git rev-parse` through documentation and learning about `git rev-parse` through AI fundamentally have the same outcome at the end of the day: you have learned how to use `git rev-parse`.
But with a non-zero chance of hallucination.
http://www.catb.org/jargon/html/S/script-kiddies.html
If you are using AI to learn, understand, and verify what it spit out, then by definition you aren’t a script kiddie. My argument was about how you use AI rather than a commentary on whether you should.
Yeah now go and do that in under 60 seconds.
You seem to be a bit confused. I understand Git very well. But AI can simply do this much faster than I could.
Just because the parts are available doesn't mean every possible combination of them has already been made
Even though, from personal experience, at scale it still falls apart
I'm sure it will go down in history as some sort of intentional movement, and then someone will occasionally point out "did you know that originally this idea was never meant to be used for serious work, Andrej literally said it was only okay for throwaway weekend projects." Big not-always-good things[1] started this way — by misinterpreting something.
Of course, it's possible that in the future code written this way will actually be maintainable, performant, and secure enough that you could start trusting it, and this will morph into a legit way of writing software, but right now the hype is way ahead of what Andrej meant when he coined the term.
[1]: Examples that come to mind: Agile, Scrum, Javascript (originally a two-week project); there are definitely more.
But if you don't review the code, you're asking for hurt. It has literally written stuff like "if pending changes > 20, delete all", etc.
I tried three times, just for fun, to take a decent-sized project from start to finish without intervention. All failed. The AI doesn't seem to have a far enough horizon to see the project through to completion.
AI needs a super well-defined scope to thrive, for example an IDE companion, a logo generator, or writing a specific class implementation.
E.g. I still kind of write Python as if I were writing C++. So sometimes I’ll write a for loop iterating over integer indexes and tell the LLM “Hey, can you rewrite this more pythonically?” and I get decent results.
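To make that concrete, the before/after is roughly this (the function names and the `price` attribute are made up for the example):

```python
# C++-style: loop over integer indexes
def total_price_indexed(items):
    total = 0
    for i in range(len(items)):
        total += items[i].price
    return total

# More Pythonic: iterate over the items directly
def total_price_pythonic(items):
    return sum(item.price for item in items)
```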
I'm going to keep shilling this until people start doing it: I'd love to see everyone who says they've vibecoded something non-trivial and posts/blogs about it start dumping and attaching their full chats to it. This would really help people understand what's hype/exaggeration and what's real, and would help others get better at the real part.
But it felt magical the one time it happened. Every other time, I'm mostly using tab autocomplete and that works quite well too.
And when I code in Elixir, I have a 20% success rate even with small requests.
But since my free time is so extremely limited, it would be really nice to get better LLM and agent support. So I hope the lesser used languages and frameworks get more training love, soon. Lest we all only use Next for hobby projects in the future.
Was this vibe-blogged?
'Vibe coding' is here to stay, and the future of software development is assistant-led interfaces. I pair program daily with Claude Code on distributed systems and a handful of other apps as needs arise. Most of my work is puppeteering, as well as careful directing, and I can't sit by blindly, but we're producing valid software at a pace I've never experienced in my career, with additional bandwidth to do some leisure activities in real time without losing productivity. Vibing.
Seems like we need a term for something between basic tab autocomplete and purely vibing without glancing at the code. If only to have separate conferences in 2-3 years.
Citation needed.
Serious question, is this self-reported?
Honest question here. I'd be curious too what scenarios in general folks find LLMs useful re coding.
Also, why is it so hard for LLMs to generate code without stupid comments everywhere? I don't want to see // frobnicate followed by thing.frobnicate(). Even when I include clear, strict instructions, they'll ALWAYS generate comments like that. It pisses me off.
I’ve been able to extend its functionality several times without it breaking, and had Claude separate all functionality as components so it’s easier to manage. I could never have done that myself.
I consider it a very good prototype, even though we use the tool internally on a weekly basis. If we’re going to continue using it long term, I will probably find an experienced programmer to rebuild it from scratch, and support it.
It took 21 iterations of code updates to get a working version. It isn't bad, but it doesn't pull the latest articles (I don't know why), and hell if I can debug it.
This is the market disrupted by vibe coding and AI-generated copy/images.
These Wordpress or Wix/Squarespace sites account for MOST of the web, but only a minuscule fraction of highly paid devs. For those devs, AI just doesn't change very much, because it has no idea what questions need to be asked to actually deliver what management wants, let alone the ability to problem-solve and deliver novel solutions to issues.