What the fuck? Are people taking "vibe coding" as a serious workflow? No wonder people's side projects feel more broken and buggy than before. Don't get me wrong, I "work with" LLMs, but I'd never merge/use any code that I didn't review, none of the models or tooling is mature enough for that.
Really strange how some people took a term that was supposed to be a "lol watch this" and started using it for work...
don't forget about the insane amount of marketing around AI code companies and how they put "vibe coding" in front of everyone's face all the time.
You tell someone something enough times and they'll believe it.
As an aside, I really hate how cynical I feel I've been compelled to become at the arrival of such a genuinely innovative technology. Like, with this very article I can't help but think there are ulterior motives behind its production.
Yeah I'm feeling like this too. This should be so exciting! We're getting close to the Star Trek dream of just telling the computer to do work and it works!
I've been trying to examine why it's not exciting for me, and I'm actually pretty repulsed by it.
I think it's a combination of things
To start, I'm pretty disgusted by the blatant and unapologetic scraping of every single scrap of public data, regardless of license or copyrights.
I'm also really discouraged by how this is turning out to be another tool in the capitalist toolbox to justify layoffs, increase downward pressure on salaries, and once again extract more value per hour worked from employees
I also don't feel like the technology is actually that good or reliable yet. It has transformed my workflow, but for the worse. Because my company is very bullish on AI, it has resulted in me losing what little control I had to choose the tools that I feel are best for my job, in favor of what they want me to use because of the hype.
Ultimately I'm cynical because I don't feel like this is making my life better. It feels like it is enriching other people at my expense and I am very bitter about it
I can't imagine doing it for anything serious though.
But this survey seems to span much more than just one-off tiny things, and gives the impression that people working as professionals in companies are actually doing "vibe-coding" not as a joke, but as a workflow, for putting software into production.
As capable as the models are, what matters more is how competent they are perceived to be, and how that is socialized. The hype machine is at deafening levels currently.
But I'm also an experienced developer and at this point, an experienced "vibe coder". I use that last term loosely because I have a structured set of rules I have AI follow.
To really understand AI's capability you have to have experienced it in a meaningful way with managed expectations. It's not going to nail what you want right away. This is also why I spend a lot of time up front to design my features before implementing.
Right, but what defines whether what you're doing is "vibe-coding" or not is whether you actually review the code it produces at any point in the workflow. You're "vibe-coding" if you're merging/pushing without reviewing the code.
I'm also an experienced developer, and I use LLMs a lot, but I've never pushed/merged anything into production that I haven't read and understood myself.
> "Among those who feel AI degrades quality, 44% blame missing context; yet even among quality champions, 53% still want context improvements."
Is this even true anymore? Doesn't happen to me with claude 4 + claude code.
I cannot believe what's said in the report because it doesn't even reflect what my pro-AI coding friends say is true. Every dev I know says AI-generated suggestions are often full of noise, even the pro-AI folks.
"It's full of noise but I'm confident I can cut through it to get to the good stuff" - Pro AI
"It's full of noise and it takes more effort to cut through than it would take to just build it myself" - Anti AI
I'm pretty Anti myself. I think "I can cut through the noise" is pretty misplaced overconfidence for a lot of devs
But if you're getting a lot of noise, I'd suggest immediately adjusting your system/user prompt so you never get that noise in the first place. I'm currently using a variation of https://gist.github.com/victorb/1fe62fe7b80a64fc5b446f82d313... which is basically my personal coding guidelines "codified" as simple rules for LLMs to understand.
For anything besides the dumb models, I get code that more or less looks exactly like how I would have written it myself. When I find I get code back that I'm not happy with, I adjust the system/user prompt further so this time and the next it returns code like how I would have done it.
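To make that concrete, here is a minimal sketch of what I mean by "codified" guidelines riding along with every request. Everything here is illustrative, not my actual gist: the guideline text, the model name, and the ask_for_code helper are placeholders, and it assumes the OpenAI Python client.

```python
# Minimal sketch (not the actual gist linked above): pin personal coding
# guidelines as a system prompt so every request follows the same rules.
from openai import OpenAI

# Illustrative guidelines, codified as short rules the model can follow.
CODING_GUIDELINES = """
- Return only the code that was asked for; no extra classes or files.
- No redundant comments; comment only non-obvious decisions.
- Prefer small functions over new abstractions.
- Match the existing style of the surrounding code.
"""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask_for_code(task: str) -> str:
    """Ask the model for code, with the guidelines pinned as the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model works the same way
        messages=[
            {"role": "system", "content": CODING_GUIDELINES},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content
```

The point is just that the rules are attached to every request, so you fix the prompt once instead of cleaning the same noise out of every response.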
When it comes to judging the quality of AI output, I do agree with "AI is ok at some stuff"
When I say I tend to fall on the Anti AI side, I am saying "But I still don't think it's worth using much"
I don't really want to lean on tools that are just ok at some stuff.
So I guess that puts me into "pro AI" camp, but it's not like we actually disagree.
I don't really find that typing is my bottleneck mostly. AI saving me time spent typing code also just costs me time spent prompting and re-prompting the AI so... Kinda a wash mostly?
> 25% of developers estimate that 1 in 5 AI-generated suggestions contain factual errors or misleading code.
Seem incompatible with "often full of noise", to you?
I can't speak for factual errors, but I'd say less than 20% of the code ChatGPT* gives me contains clear errors — more like 10%. Perhaps that just means I can't spot all the subtle bugs.
But even in the best case, there's a lot of "noise" in the answers they give me: Excess comments that don't add anything, a whole class file when I wanted just a function, that kind of thing.
* Other LLMs are different, and I've had one (I think it was Phi-2) start bad then switch both task *and language* mid-way through.
My experiences range from helping design Penn's new AI degree programs, to hearing from friends at algorithmic hedge funds and at startups, to my own development.
As an example, I asked one of my devs to implement a batching process to reduce the number of database operations. He presented extremely robust, high-quality code and unit tests. The problem was that it was MASSIVE overkill.
The AI generated a new service class, a background worker, several hundred lines of code in the main file, and entire unit test suites.
I rejected the PR and implemented the same functionality by adding two new methods and one extra field.
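For a rough idea of the difference, the accepted version boiled down to something like this (a simplified, from-memory sketch with made-up names, not the actual code): buffer pending rows on the existing class and flush them in one database call once a threshold is hit.

```python
# Simplified sketch of the "two methods and one extra field" approach
# (illustrative names, not the real codebase): batch pending rows in memory
# and write them in a single bulk operation instead of one insert per row.
class OrderRepository:
    BATCH_SIZE = 100

    def __init__(self, db):
        self.db = db
        self._pending = []  # the one extra field: rows waiting to be written

    def add(self, row):
        """Queue a row and flush automatically when the batch is full."""
        self._pending.append(row)
        if len(self._pending) >= self.BATCH_SIZE:
            self.flush()

    def flush(self):
        """Write all pending rows in one operation (assumes a bulk_insert on the db layer)."""
        if self._pending:
            self.db.bulk_insert(self._pending)
            self._pending.clear()
```

Same behaviour as the generated PR, with no new service class and no background worker.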
Now I often hear comments about how AI can generate exactly what I want if I just use the correct prompts. OK, how do I explain that to a junior dev? How do they distinguish between "good" simple and "bad" simple (or complex)? Furthermore, in my own experience, LLMs tend to pick up on key phrases or technologies and then build their own context around what they think you need (e.g. "Batching", "Kafka", "event-driven", etc). By the time you've refined your questions to the point where the LLM generates something that resembles what you want, you realise that you've basically pseudo-coded the solution in your prompt - if you're lucky. More often than not the LLM responses just start degrading massively to the point where they become useless and you need to start over. This is also something that junior devs don't seem to understand.
I'm still bullish on AI-assisted coding (and AI in general), but I'm not a fan at all of the vibe/agentic coding push by IT execs.
They could iterate with their LLM and ask it to be more concise, to give alternative solutions, and use their judgement to choose the one they end up sending to you for review. Assuming of course that the LLM can come up with a solution similar to yours.
Still, in this case, it sounds like you were able to tell within 20s that their solution was too verbose. Declining the PR and mentioning this extra field, and leaving it up to them to implement the two functions (or equivalent) that you implemented yourself would have been fine maybe? Meaning that it was not really such a big waste of time? And in the process, your dev might have learned to use this tool better.
These tools are still new and keep evolving such that we don't have best practices yet in how to use them, but I'm sure we'll get there.
> Assuming of course that the LLM can come up with a solution similar to yours.
I have idle speculations as to why these things happen, but I think in many cases they can't actually. They also can't tell the junior devs that such a solution might exist if they just dig further. Both of these seem solvable, but it seems like "more, bigger models, probed more deeply" is the solution, and that's an expensive solution that dings the margins of LLM providers. I think LLM providers will keep their margins, providing models with notable gaps and flaws, and let software companies and junior devs sort it out on their own.
Imagine if wat (https://www.destroyallsoftware.com/talks/wat) appeared on the internet, and execs took it seriously and suddenly asked people to explicitly make everything into JS.
This is how it sounds when I hear executives pushing for things like "vibe-coding".
> More often than not the LLM responses just start degrading massively to the point where they become useless and you need to start over
Yeah, this is true. The trick is to never go beyond one response from the LLM. If they get it wrong, start over immediately with a rewritten prompt so they get it right on the first try. I'm treating "the LLM got it wrong" as "I didn't make the initial user/system prompt good enough", not as in "now I'm gonna add extra context to try to steer it right".
But that’s the point. The feedback loop is faster; AI is much worse at coping with poor code than humans are, so you quickly learn to keep the codebase in top shape so the AI will keep working. Since you saved a lot of time while coding, you’re able to do that.
That doesn’t work for developers who don’t know what good code is, of course.
I disagree. I expect that companies will try to overcome AI-generated technical debt by throwing more AI at the problem.
"If the code doesn't work just throw it away and vibe code new code to replace it"
It's something that is... Sort of possible I guess but it feels so shortsighted to me
Maybe I just need to try and adjust to a shortsighted world
In part this is because the process of development leans less hard on the discipline of devs, i.e. humans. Code becomes more formal.
I regularly have a piece of vibe-coded code in a strongly typed language, and it does not compile! (Would that count as a hallucination?) I have thought many times: in Python/JS/Ruby this would just run, and only produce a runtime error in some weird case that likely only our customers on production will find...
I'm a proponent of functional programming in general, but I don't think either types (of any "strength") or functional programming makes it easier or harder to write bad code. Sure, types might help avoid easy syntax errors, but they can also give the developer false confidence: "if it compiles it works :shrug:". Instruct the LLM to keep reworking the solution until it compiles, and you'll get the same false confidence if there is nothing else asserting the correct behavior, not just the syntax.
> in Python/JS/Ruby this would just run
I'm not sure how well versed you are with dynamic languages, especially when writing code for others, but in 99% of cases you'll cover at the very least all the happy paths with unit tests, and if you're planning on putting it in a production environment, you'll also cover the "sad" paths. Using LLMs or not shouldn't change that very basic requirement.
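As a minimal sketch of that baseline, regardless of whether the function under test was written by a human or an LLM (parse_price here is a made-up example, not from any real project):

```python
# Minimal sketch of baseline test coverage: one happy path, a couple of sad paths.
# parse_price is an illustrative example function, not from any real codebase.
import unittest


def parse_price(text: str) -> float:
    """Parse a price string like '$12.50' into a float."""
    cleaned = text.strip().lstrip("$")
    if not cleaned:
        raise ValueError("empty price string")
    return float(cleaned)


class ParsePriceTests(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(parse_price("$12.50"), 12.50)
        self.assertEqual(parse_price("3"), 3.0)

    def test_sad_paths(self):
        with self.assertRaises(ValueError):
            parse_price("")        # empty input
        with self.assertRaises(ValueError):
            parse_price("$abc")    # non-numeric input


if __name__ == "__main__":
    unittest.main()
```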
LLMs remove the easy work from the junior dev's task pile. That will make it a lot more difficult for them to do the actual hard work required of a dev. They skip the stepping stones and critical thinking phase of their careers.
Senior devs are senior because they’ve done the easy things so often it’s second nature.
Tools can't replace human understanding of a problem, and that understanding is the foundation for effective growth and maintenance of code.
Maybe an AI would be better on the easy cases: slightly faster and cheaper. But it would mean that she would never develop the skills to tackle the problems that AI has no idea how to handle.
And a third AI to review the pseudocode, I guess.
More seriously, I think that this is generally the correct approach: create a script that the AIs can follow one step at a time; update the script when necessary.
It’s hard to predict how this plays out IMO. Especially since this industry (broadly speaking) doesn’t believe in training juniors anymore.
I've mostly used LLMs with python so far and I'm looking forward to using them more with compiled languages where at least I won't have mismatching types a compiler would have detected without my help.
My experience (using a mix of Copilot & Cursor through every day) is that AI has become very capable of solving problems of low-to-intermediate complexity. But it requires extreme discipline to vet the code afterward for the FUD and unnecessary artifacts that sneak in alongside the "essential" code. These extra artifacts/FUD are to my mind the core of what will make AI-generated code more difficult to maintain than human-authored code in the long-term.
This requirement to be commercially useful and valuable, and to aid all kinds of businesses everywhere, gave a bad reputation to what is otherwise an amazing technological achievement. I am an outspoken AI enthusiast, because it is fun and interesting, but I hate how it is only seen as useful when it can do actual work like a human.