But I think this is a great thing to show: they're pushing to outsource coding to a bot, and to shame them that their plan isn't working out nearly as well as they're trying to force people to believe.
I think it may help if we start personalizing these trends with the people who are amplifying them, e.g. Jassyslop, Siemiatbot (the Klarna CEO was bold enough to brag he dropped 80% of a role for AI), etc.
Only among people who don't value the quality of their output. There are, fortunately, many who do value quality and are not using AI tools until they get to the point where they can usefully contribute.
I value the quality of my output and I make extensive use of AI tools.
That's why the original definition of "vibe coding" is useful: creating code with AI tools without reviewing or caring about the quality of that code.
It's also possible to use AI tools as part of a responsible engineering process that is intended to produce production quality software.
AI tools can absolutely contribute usefully. I've lost count of the times an AI pointed out an edge case I hadn't thought about, then helped me write the fix and a test for the issue.
I'm not vibe coding, as I'm reviewing the code, but saying they can't be useful means you haven't taken the time to look at the state of them recently.
Ha, gotcha, AI slop poster!
I know you didn't, but this is where we'll end up if people just write off everything as 'bad because AI' instead of critically assessing the quality of something on its own merits, rather than by the (very ironic) 'vibe' that it was generated rather than written.
Also, it's not entirely obvious to me that the vulnerability was introduced by vibe coding.
https://github.com/jamubc/gemini-mcp-tool
Disclosure: I work at Google, but not on anything related to this.
IDK why people act as if vibe coding invented software bugs that lead to vulnerabilities, as if human programmers weren't already producing those.
Here's one set of numbers from the CATO institute: https://www.cato.org/policy-analysis/illegal-immigrant-murde...
The only way your statement holds up is if you treat the act of existing while undocumented as a crime for this comparison, in which case sure - it's a tautology.
This whole website and everything around it are almost ironic.
One of the "fun" hallmarks of many of these LLM assisted websites is that they seem to completely disregard basic accessibility (especially Web Content Accessibility Guidelines [1]). That small dark gray subtext on a black background is just horrific.
The AI is already substantially better than most humans for a huge spectrum of at least narrow tasks. Those 'skills' will expand in scope; the evidence is overwhelming and unequivocal.
Within 12 months it will be considered a 'security concern' to not have AI at least to some degree of autonomous review.
It's very easy to overstate the impact of AI (and sometimes it's annoying), but it's just unreasonable to be in 'denial' at this stage.
The only real question is how, when, and with what kind of oversight we use the new tools, not whether they are used.
For example, I'd rather use a calculator to do calculations than ask an LLM to do it. I'd rather use LanguageTool for grammar than ask an LLM to do it. I'd rather RTFM than have an LLM summarize it.
I didn't read any credible arguments suggesting that was caused by vibe coding. They had their PyPI publishing credentials stolen thanks to an attack against a CI tool they were using.
Plus the linked article for the Amazon outage is https://d3security.com/blog/amazon-lost-6-million-orders-vib... which appears to be some other vendor promoting their product without providing any details on what happened at Amazon.
Barely anything on the site makes sense if you look at it closely.
We call that "slop", the last time I checked.
-> [Endor Labs] https://www.endorlabs.com/learn/teampcp-isnt-done
-> On March 24, 2026, Endor Labs identified that litellm versions 1.82.7 and 1.82.8 on PyPI contain malicious code not present in the upstream GitHub repository. litellm is a widely used open source library with over 95 million monthly downloads. It lets developers route requests across LLM providers through a single API.
Edit: it appears the traditional content is free. What is paid is an AI interview pack, which is basically the same content plus some LLM tokens spent presenting it. Those could be cheap Haiku tokens. Also it isn't a subscription, it's a one-time purchase of packs. My bad.
The specific point is that you cannot prompt your way to reliable software (AKA vibe coding), just as you cannot reach that goal by gluing together Stack Overflow snippets without understanding them.
Sort of like showing off self-driving car crashes. You can spend all day listing the crashes and showing people how it has problems, but if it's statistically safer than the average driver it would save thousands of lives per year to deploy it anyway even if it's not perfect.
New code will still need to be written though.
It's not reasonable to suggest that AI is only going to repeat older patterns that have been trodden before, or 'things that don't matter'.
AI will be writing most new code, by far.
Without even getting into complicated arguments about 'creativity' - the AI is an encyclopedia of best practices, and can think a couple of steps ahead for most things you'll ever want to do.
Like pro chess players thinking they're going to beat the algo with some kind of fancy human creativity.
Developers' roles are changing, very fundamentally. You're now half a layer of abstraction above the code, and you're not going to write it better than AI (in most cases) any more than a human will be better at sawing wood than the power tools. And yet, carpenters still exist.
But you’re also never going to convince the people who still only run vi on the Linux console, without Xorg…
So basically Reddit.
How much abysmal code and how many products have we all shipped? Exploitative, clumsy, dangerous, vulnerable? What was our excuse?
I find the entire anti-vibe coding movement to be terribly tacky and judgmental.
We have an incredible tool that could 10-100x productivity. We should be using it to fix all of the terrible software we've made over the past 20 years. Instead there are three camps: people building stuff, people hyping AI, and people shaming the first two.
Sad, really.
> We should be using it to fix all of the terrible software we've made over the past 20 years. Instead there are three camps: people building stuff, people hyping AI, and people shaming the first two.
This seems like an odd take. The pros who are using and hyping AI are not fixing all of the crap we put out over the last 20 years. They are hitting the gas on the amount of crap being shipped.
I don't think anyone except the most die-hard AI lovers truly believes it is producing high quality work on balance. It is absolutely producing more output than we've ever seen before, but worse.
Even if it is capable of producing high quality work, you have to realize that most people using it are not capable of getting it to produce work of that quality. Nor do they seem to really care to.
Think about the 90s PC revolution, which opened up developer opportunities. There were commercial devs and open source devs. The open source devs decided to put the new resources and tools to use to evangelize computing, and in many ways won.
We have new tools now, and can put them to good use. Moaning from the sidelines is a losing strategy.
I don't think the general arc of computing since the 90s has been good for humanity.
As a detractor, that's my goal. I want to undermine this garbage technology that is actively making life worse for the majority of people while enriching a vanishingly small segment of humans at our expense.
How often does software fail in production with human-written code? How many times has a production failure been avoided because an LLM didn't make a typo or mistake that a human would have?
This is pushing an agenda. It's not measuring anything meaningful.
Personally, I don't care that much about org incentives (even though they obviously matter for what OP posted) but more about what it does to my thinking. For me, actually writing code is what slows my brain down, helps me understand the problem, and helps me generate new ideas. As soon as I hand off implementation to an LLM (even if I first write a spec or model it in TLA+) my understanding drops off pretty quickly.