Posts like these will always be influenced by the author's experience with specific tools, by the languages they use (lesser-used languages/frameworks presumably have less training material, and thus lower-quality output), and by the choice of LLM that powers it all behind the scenes.
I think it is a 'your mileage may vary' situation.
Is there a market for such fast-rotating code? Yes. And that market will probably grow as it gets flooded with cheap labor and attention – I'm sure people will find new use cases as well. But, crucially, this is not the market we have. You can bet all you want on AI, but in all likelihood the market's needs will largely remain the same.
The only use I seem to get out of LLMs in my work is writing mundane, brainless stuff like arrays for JSON responses etc., which saves me 5 minutes so I can browse ycombinator and write these comments.
Free ChatGPT?
Codex?
Jules?
Cline?
Cursor?
Claude Code with Opus?
Tight leash with conventions, implementation plan?
YOLO vibe coding?
It's got to the point where if someone talks about "vibe coding" you have to confirm with them which definition they are using, because otherwise you risk people talking right past each other because they're not actually talking about the same thing.
Just like every HN thread about vibe coding!
Now you might tell me: make the tasks smaller and more focused. Yes, that's true, it performs better, but then I'm just faster coding it myself. Most of what I use CC for is exploration and frontend; generating slabs of mind-numbing crap React and Tailwind is fantastic. But for most other stuff, we have patterns and libs in place where I'm much faster and better than AI, as it's not much code, just more logic/thinking.
> My take on AI for programming and "vibe coding" is that it will do to software engineering what fast fashion did to the clothing industry: flood the market with cheap, low-quality products and excessive waste.
> cheap, low-quality products
Product quality != code quality.
Years ago, I was an amazing C++ dev. Later, I became a solid Python dev. These days, I run a small nonprofit in the digital rights space, where our stack is mostly JavaScript. I don't code much anymore, and honestly, I'm mediocre at it now. For us, AI coding agents have been a revelation. We are a small team lacking resources, and agents let us move much faster, especially when it comes to cleaning up technical debt or handling simple, repetitive tasks.
That said, the main lesson I learned about vibe coding, or using AI for research and any other significant task, is that you must understand the domain better than the AI. If you don’t, you’re setting yourself up for failure.
Only if you fully trust that it works. You can also first take the time to learn about the domain and use AI to assist you in learning it.
This whole thing is really about assistance. I think in that sense, OpenAI's marketing was spot on. LLMs are good at assisting. Don't expect more of them.
This metaphor is too limiting though. You can do so much more with software than you can with clothes. Take a look at what injidup wrote. People are creating small home brewed projects for personal use.
So a lot of "fast fashion software" is going to be used at home. And let's face it, for our own home brewed projects for personal use, standards have always been lower because we know our own requirements.
I think in this "shadow economy of personal software use", LLMs are a boon.
1. The Tea App wasn't vibe coding - it was built before vibe coding, and the leak was caused by an incorrectly secured Firebase instance: https://simonwillison.net/2025/Jul/26/official-statement-fro...
2. The Replit "AI deleted my database" drama was caused by a guy getting inaccurate AI support. All he needed to do was click a "Rollback Here" button to instantly recover all his code and data: https://x.com/jasonlk/status/1946240562736365809
What does this eagerness to discredit vibe coding say about us?
If you know the domain it's a 3-6X efficiency improvement.
Amazing how well LLMs work on airplane wifi. Just text after all.
However, I find the analogy a bit off the mark. LLMs are, fundamentally, tools. Their effectiveness and the quality of output depend on the user's expertise and domain knowledge. For prototyping, exploring ideas, or debugging (as the author's Docker Compose example illustrates), they can be incredibly powerful (not to mention time-savers).
The risk of producing bloated, unmaintainable code isn't new. LLMs might accelerate the production of it, but the ultimate responsibility for the quality and maintainability still rests with the person pressing the proverbial "ship" button. A skilled developer can use LLMs to quickly iterate on well-defined problems or discard flawed approaches early.
I do agree that we need clearer definitions of 'good quality' and 'maintainable' code, regardless of AI's role. The 'YMMV' factor is key here: it feels like the tool amplifies the user's capabilities, for better or worse.
Middling programmer: Don't use AI. It creates bad legacy code that no one understands and is hard to debug. Machines will never write code as beautiful as true human artisans. Even if you're saving time, you're actually wasting time.
Advanced programmer: AI is really useful
koakuma-chan•15h ago
ben_w•15h ago
LLMs are very useful tools, but if they were human, they'd be humans with sleep deprivation or early stage dementia of some kind.
mrweasel•15h ago
All code needs to be carefully scrutinized, AI-generated or not. Maybe always prefix your prompt with: "Your operations team consists of a bunch of middle-aged, angry Unix fans, who will call you at 3:00AM if your service fails and belittle your abilities at the next incident review meeting."
As for the 100% vibe coders, please let them. There's plenty of good money to be made cleaning up after them and I do love refactoring, deleting code and implementing monitoring and logging.
varjag•15h ago
lemiffe•15h ago
WesolyKubeczek•15h ago
svantana•15h ago
What the vibe-coded software usually lacks is someone (man or machine) who thought long and hard about the purpose of the code, along with extended use and testing leading to improvements.
thefz•15h ago
I asked for a very, very simple bash script to test code generation abilities once. The AI got it spectacularly wrong. So wrong that it was ridiculous. Here's my reason for thinking it produces low-quality code: because it does.
KronisLV•15h ago
> "Here's a link to the commits in my GitHub repo, here's the exact prompts and models that were used that generated bad output. This exact example proves my point beyond a doubt."
I've used Claude Sonnet 4 and Google Gemini 2.5 Pro with RooCode to pretty good results otherwise - telling it what to look for in a codebase, having it come up with an implementation plan, and chatting with it about the details until it fills out a proper plan (sometimes it catches edge cases that I haven't thought of). Around 100-200k tokens in, it can usually knock out a decent implementation for whatever I have in mind; throw in another 100-200k tokens and it has made the tests pass and also written new ones as needed.
Another 200k-400k goes to reading the codebase more in depth and doing refactoring (e.g. when writing Go it has a habit of doing a lot of stuff inline instead of looking at the utils package I have; that's less of an issue with Spring Boot Java apps, because there the service pattern is pretty common in the code it's been trained on, I'd reckon). Adding something like an AI.md or a gradually updated CODEBASE.md, or indexing the whole codebase with an embedding model and storing it in Qdrant or something, can help save tokens there somewhat.
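For illustration, a minimal sketch of that indexing idea - my assumptions, not a fixed recipe: qdrant-client and sentence-transformers installed, a Qdrant instance on localhost, and placeholder names like "codebase", "src" and the 2000-char chunk size:

    import os
    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, PointStruct, VectorParams
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")   # small local embedding model, 384 dims
    client = QdrantClient(url="http://localhost:6333")

    client.create_collection(
        collection_name="codebase",
        vectors_config=VectorParams(size=384, distance=Distance.COSINE),
    )

    points, next_id = [], 0
    for root, _, files in os.walk("src"):
        for name in files:
            if not name.endswith(".go"):
                continue
            path = os.path.join(root, name)
            with open(path, encoding="utf-8") as f:
                text = f.read()
            # naive fixed-size chunks; a real setup would split on functions
            for off in range(0, len(text), 2000):
                chunk = text[off:off + 2000]
                points.append(PointStruct(
                    id=next_id,
                    vector=model.encode(chunk).tolist(),
                    payload={"path": path, "offset": off, "text": chunk},
                ))
                next_id += 1

    client.upsert(collection_name="codebase", points=points)

    # at prompt-assembly time, pull only the most relevant chunks
    hits = client.search(
        collection_name="codebase",
        query_vector=model.encode("string helpers in the utils package").tolist(),
        limit=5,
    )
    for hit in hits:
        print(hit.score, hit.payload["path"])

The point being that you paste the retrieved chunks into the prompt instead of whole files, which is where the token savings come from.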
Sometimes a particular model does keep messing up; switching over to another and explaining what the first one was doing wrong can help get rid of that spiraling. Other times I just have to write all the code myself anyway because I have something different in mind, sometimes stopping it in the middle of editing a file and providing additional instructions. On average it's still faster than doing everything manually; it sometimes overlooks obvious things, but other times it finds edge cases or knows syntax I might not.
Obviously I use a far simpler workflow for one-off data transformations or knocking out Bash scripts etc. I could probably save a bunch of tokens if not for the RooCode system prompt; that thing was pretty long last I checked. It's especially good as a second set of eyes with no human pleasantries and a quick turnaround (before actual human code review, when working in a team) - not really nice for my wallet, but oh well.
simonw•14h ago
smartmic•15h ago
wobfan•15h ago
Which is fine, as long as people are aware of it.
simonw•14h ago
The LLM vendors are all competing on how well their models can write code, and the way they're doing that is to refine their training data - they constantly find new ways to remove poor quality code from the training data and increase the volume of high quality code.
One way they do this is by using code that passes automated tests. That's a unique characteristic of code - you can't do that for regular prose, or legal analysis or whatever.
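A toy sketch of that kind of filter - my own illustration, not anything a vendor has published. It just runs pytest over candidate (implementation, tests) pairs and keeps the ones that pass, assuming pytest is installed:

    import subprocess, sys, tempfile
    from pathlib import Path

    def passes_tests(code: str, test_code: str) -> bool:
        """Keep a candidate training sample only if its bundled tests pass."""
        with tempfile.TemporaryDirectory() as d:
            Path(d, "sample.py").write_text(code)
            Path(d, "test_sample.py").write_text(test_code)
            try:
                result = subprocess.run(
                    [sys.executable, "-m", "pytest", "-q", d],
                    capture_output=True, timeout=60,
                )
            except subprocess.TimeoutExpired:
                return False
            return result.returncode == 0

    # two toy candidates: the second implementation is buggy and gets filtered out
    test = "from sample import add\ndef test_add(): assert add(2, 3) == 5\n"
    candidates = [
        ("def add(a, b):\n    return a + b\n", test),
        ("def add(a, b):\n    return a - b\n", test),
    ]

    kept = [code for code, t in candidates if passes_tests(code, t)]
    print(f"{len(kept)} of {len(candidates)} samples survive the filter")

The real pipelines are far more elaborate, but the principle is the same: an executable pass/fail signal you simply cannot get for prose.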
"Even if you describe your problem (prompt) to a high standard, there is no way it can deliver a solution of the same standard."
My own experience doesn't match that. I can describe my problems to a good LLM and get back code that I would have been proud to have written myself.
tgv•15h ago
What it does do perfectly: convert code from one language to another. It was a fairly complex bit, and the result was flawless.
mettamage•15h ago
I've seen both happen. Sometimes it produced fairly good quality code on small problem domains. Sometimes it produced bad code on small problem domains.
On big problem domains, the code consistently ranges from not that great to outright bad.
jeltz•13h ago
If someone on my team who was a software engineer and not very junior consistently produced such low quality code I would put them on a performance improvement plan.