I guess a large part of that is that 1-2 years ago the whole process was much more non-deterministic, and actually getting a sensible result was much harder.
Sculding??
Rice by any other name??
One might say it's spec strumming.
The question is whether you can have it all. Can you get faster results and still grow your skills? Can we 10x the collective mind's knowledge with the use of AI, or do we need to spend a lot of time learning the old way™ to move the industry forward?
Also, nobody needs to justify what tools they are using. If there is pressure to justify them, we are doing something wrong.
There's actually a wealth of literature on defining levels of software automation (such as: https://doi.org/10.1016/j.apergo.2015.09.013).
Everyone seems to want to invent a new word for 'programming with AI' because 'vibe coding' seems to have come to equate to 'being rubbish and writing AI slop'.
...buuuut, it doesn't really matter what you call it, does it?
If the result is slop, no amount of branding is going to make it not slop.
People are not stupid. When I say "I vibe coded this shit" I do not mean, "I used good engineering practices to...". I mean... I was lazy and slapped out some stupid thing that sort of worked.
/shrug
When AI assisted programming is generally good enough not to be called slop, we will simply call it 'programming'.
Until then, it's slop.
There is programming, and there is vibe coding. People know what they mean.
We don't need new words.
This includes standard testing strategies, but also much more general processes.
I think of it as steering a probability distribution.
At least to me, this makes it clear where “vibe coding” sits … someone who doesn’t know how to express precise verification or feedback loops is going to get “the mean of all software”
People need to stop apologizing for their work product because of the tools they use. Just make the work product better and you don't have to apologize or waste people's time.
Especially given that you have these tools to make cleanup easier (in theory)!
I feel like this wording isn't great when there are many impactful open source programmers who have explicitly stated that they don't want their code used to train these models, and who licensed their work in a world where LLMs didn't exist. It wasn't their "gift"; it was unwillingly taken from them.
> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.
I've seen LLMs generate code that I have immediately recognized as being copied from a book or technical blog post I've read before (e.g. exact same semantics, very similar comment structure and variable names). Even if not legally required, crediting where you got ideas and code from is the least you can do. LLMs, meanwhile, just launder code as completely your own.
LLMs are doing this on an industrial scale.
Yes, but you can also ask the developer (whether on Libera IRC or, if it's a FOSS project, at any FOSS talk) which books and blogs they followed for code patterns and inspiration, and just talk to them.
I do feel like some aspects of this are gonna get eaten away by the black box if we do spec-development imo.
That’s been the fate of many creators since the dawn of time. Kafka explicitly stated that he wanted his works to be burned after his death. So when you’re reading about Gregor’s awkward interactions with his sister, you’re literally consuming the private thoughts of a stranger who stated plainly that he didn’t want them shared with anyone.
Yet people still talk about Kafka’s “contribution to literature” as if it were otherwise, with most never even bothering to ask themselves whether they should be reading that stuff at all.
However, if I was using a more recent/niche/unknown theorem, it would absolutely be considered bad practice not to cite where I got it from.
That belief has no basis at this point; it's been demonstrated not only that AI doesn't improve coding but also that the associated costs are not sustainable.
It's not a gift if it was stolen.
Anyway, in my opinion the code that was generated by the LLM is yours as long as you're responsible for it. When I look at a PR I'm reading the output of a person, independently of the tools that person used.
There's conflict perhaps when the submitter doesn't take full ownership of the code. So I agree with Antirez on that part
Yeah, I had a visceral reaction to that statement.
I spend hours on a spec, working with Claude Code to first generate and iterate on all the requirements, then going over them using self-reviews: first in Claude using Opus 4.5, then in Copilot using GPT-5.2. The self-reviews are prompts to review the spec from all the roles and perspectives the model thinks are appropriate. This self-review process is critical and really polishes the requirements (I normally run 7-8 rounds of self-review).
Once the requirements are polished and any questions are answered by stakeholders, I use Claude Code again to create an extremely detailed and phased implementation plan with full code, again all in the spec (using a new file if the requirements doc is so large it fills the context window). The implementation plan then goes through the same multi-round self-review using two models to polish (again, 7 or 8 rounds), finalized with a review by me.
The result? I can then tell Claude Code to implement the plan, and it is usually done in 20 minutes. I've delivered major features using this process with zero changes in acceptance testing.
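For anyone curious, the review loop itself is mechanically dumb; the polish comes from repetition. A minimal sketch in Python, assuming Claude Code's headless print mode (claude -p); the file name, prompt, and round count here are illustrative, not my exact setup:

    import subprocess

    SPEC = "spec.md"  # hypothetical file name
    REVIEW_PROMPT = (
        f"Review the spec in {SPEC} from every role and perspective you "
        "think is appropriate (architect, QA, security, end user, ...) "
        "and apply the fixes directly to the file."
    )

    # 7-8 rounds; each round the model re-reads its own edits.
    # Headless mode also needs file-edit permissions granted up front.
    for round_no in range(1, 8):
        print(f"self-review round {round_no}")
        subprocess.run(["claude", "-p", REVIEW_PROMPT], check=True)

Swap the command out for the Copilot rounds; the loop itself doesn't change.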
What is funny is that everything old is new again. When I started in industry I worked in defense contracting, working on the project to build the "black box" for the F-22. When I joined the team they were already a year into the spec writing process with zero code produced and they had (iirc) another year on the schedule for the spec. At my third job I found a literal shelf containing multiple binders that laid out the spec for a mainframe hosted publishing application written in the 1970s.
Looking back, I've come to realize the agile movement, which was a backlash against this kind of heavy waterfall process I experienced at the start of my career, was basically an attempt to "vibe code" the overall system design. At least for me, AI-assisted mini-waterfall ("augmented cascade"?) seems like a path back to producing better-quality software that doesn't suffer from the agile "oh, I didn't think of that".
Agile was really pushing to make sure companies could get software live before they died (number 1), and to remedy the anti-pattern that appeared with number 2, where non-technical business people would write the (half-assed) spec and then technical people would be expected to do the monkey work of implementing it.
Mostly I've seen agile as: let's do the same thing 3x when we could have done it once if we'd spent time on specs. The key phrase here is "requirements analysis", and if you're not good at it, either your software sucks or you're going to iterate needlessly and waste massive time, including on bad architecture. You don't iterate the foundation of a house.
I see scenarios where Agile makes sense (scoped, in-house software, skunk works), but just like cloud, JWTs, and several other things, making it the default is often a huge waste of $ for problems you/most don't have.
Talk to the stakeholders. Write the specs. Analyze. Then build. “Waterfall” became like a dirty word. Just because megacorps flubbed it doesn’t mean you switch to flying blind.
Agile's core is the feedback loop. I can't believe people still don't get it. Feedback from reality is always faster than guessing out of thin air.
Waterfall is never great. The only time you need something other than Agile is when lives are at stake; there you need formal specifications and rigorous testing.
SDD allows better output than traditional programming. It is similar to waterfall in the sense that the model helps you to write design docs in hours instead of days and take more into account as a result. But the feedback loop is there and it is still the key part in the process.
It's certainly quicker (and at times, more fun!) to develop this way, that is for certain.
Also I do feel like this is a very substantial leap.
This is sort of like the difference between some and many.
Your editor has some effect on the final result, so crediting or mentioning it doesn't really matter (but people still do mention their editor choices, and I know some git repos with a .vscode directory, which shows the creator used VS Code; I'm unfamiliar with whether the same is true for other editors).
But especially with AI, the difference is that I personally feel like it's doing most of the work. It's literally writing the code that turns into the binary that runs on the machine, all while being a black box.
I don't really know, because it's something I'm conflicted about too, but I just want to speak my mind, even if it may be a little contradictory, on the whole AI distinction thing, which is why I wish to discuss it with ya.
I posted yesterday about how I'd invented a new compression algorithm and used an AI to code it. The top comment was like "You or Claude? ... also ... maybe consider more than just 1-shotting some random idea." This was apparently based on the signal that I had incorrectly added ZIP to the list of tools that use LZW (LZW is a tweak of LZ78, the dictionary-based counterpart, from the same Lempel-Ziv team, of the back-reference variant LZ77, the thing actually used in ZIP). This mistake was apparently a signal that I had no idea what I was doing, was a script kiddie who had just tried to one-shot some crap idea, and had ended up with slop.
This was despite the code working and the results table being accurate. Admittedly the readme was hyped, and that probably set this person off too. But they were so far off in their belief that this was Claude's idea, Claude's solution, and just a one-off, that they not only totally misrepresented me and my work, but also the whole process it would actually take to make something like this.
I feel that perhaps someone making such comments does not have much familiarity with automatic programming. Because here's what actually happened: the path from my idea (intuited in 2013, but beyond my skills to implement easily until I used AI) to working code was about as far from a 'one-shot' as you can get.
The first iteration (basic LZW + unbounded edit scripts + Huffman) was roughly 100x slower. I spent hours guiding the implementation through specific optimization attempts (a sketch of the plain-LZW baseline follows the list):
- BK-trees for lookups (eventually discarded as slow).
- Then moving to arithmetic coding: first coding codes + scripts together, later splitting them.
- Various strategies for pruning/resetting unbounded dictionaries.
- Finally landing on a fixed dict size with a Gray-Code-style nearest neighbor search to cap the exploration.
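For context, the plain-LZW baseline all of this iterates on is only a few lines; a minimal sketch (no edit scripts, no entropy coder, and not my actual code):

    def lzw_compress(data: bytes) -> list[int]:
        # Dictionary starts with all 256 single-byte strings.
        table = {bytes([i]): i for i in range(256)}
        next_code = 256
        w, out = b"", []
        for byte in data:
            wb = w + bytes([byte])
            if wb in table:
                w = wb  # keep extending the current match
            else:
                out.append(table[w])   # emit code for longest match
                table[wb] = next_code  # learn the new string
                next_code += 1
                w = bytes([byte])
        if w:
            out.append(table[w])
        return out

All the interesting work above was in how that dictionary gets bounded, searched, and supplemented with edit scripts.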
The AI suggested some tactical fixes (like capping the Levenshtein table, or splitting edits/codes in arithmetic coding), but the architectural pivots came from me. I had to find the winning path.
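To give a flavor of those tactical fixes: one plausible reading of "capping the Levenshtein table" (a hedged sketch, not the code the AI actually produced) is an edit-distance computation that gives up as soon as every alignment already exceeds a cap:

    def levenshtein_capped(a: bytes, b: bytes, cap: int) -> int | None:
        # Classic DP over a rolling row; None means "more than cap edits".
        if abs(len(a) - len(b)) > cap:
            return None  # length difference alone exceeds the cap
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            if min(cur) > cap:
                return None  # whole row past the cap: abort early
            prev = cur
        return prev[-1] if prev[-1] <= cap else None

This keeps nearest-neighbor probes cheap: most candidate dictionary entries bail out after a couple of rows.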
I stopped when the speed hit 'sit-there-and-watch-it-able' (approx. 15s for 2MB) and the ratio consistently beat LZW (interestingly, for smaller dicts, which makes sense, as the edit scripts make each word more expressive).
That was my bar: Is it real? Does it work? Can it beat LZW? Once it did, I shared it. I was focused on the benchmark accuracy, not the marketing copy. I let the AI write the hyped readme; I didn't really think it mattered. Yes, this person fixated on a small mistake there, and completely misrepresented, or had the wrong model of, what it actually took to produce this.
I believe that kind of misperception must be the result of a lack of familiarity with using these tools in practice. I consider this kind of "disdain from the unserious & inexperienced" to be low-quality, low-effort commentary that essentially equates AI with clueless engineers and slop.
As antirez lays out: the same LLMs produce very different results depending on the human that is guiding the process with their intuition, design, continuous steering, and idea of software.
Maybe some people are just pissed off - maybe their dev skills sucked before AI, and maybe they still suck with AI, and now they are mad at everything good people are doing with AI, and at AI itself?
Idk, man. I just reckon this is the age where you can really make things happen that you couldn't make before, and you should be into it and positive, if you are serious about making stuff. And making stuff is never easy. And it's always about you. A master doesn't blame his tools.
Don't fall into the trap of thinking "if I don't heavily adopt Claude Code and agentic flows today I'll be working at Subway tomorrow." There's an unhealthy AI hype cottage industry right now and you aren't beholden to it. Change comes slowly, is unpredictable, and believe it or not writing Redis and linenoise.c doesn't make someone clairvoyant.
Strongly disagree. This is a huge waste of currently scarce compute/energy both in generating that broken slop and in running it. It's the main driver for the shortages. And it's getting worse.
I would hate a future without personal computing.
What does that even mean? You are a failed novelist who does not have ideas and is now selling out his fellow programmers because he wants to get richer.
I disagree. The code you wrote is a collaboration with the model you used. Framed this way, you are taking credit for the work the model did on your behalf. There is a difference between "I wrote this code entirely by myself" and "I wrote the code with a partner". For me, it is analogous to the author of the score of an opera taking credit for the libretto because they gave the libretto's author the rough narrative arc. If you didn't do it yourself, it isn't yours.
I generally prefer integrated works or at least ones that clearly acknowledge the collaboration and give proper credit.
rvz•1h ago
Disagree.
So when there is a bug / outage / error due to "automatic programming", are you ready to be first in line to accept accountability (the LLM cannot be) when it all goes wrong in production? I do not think even that would be enough, or that this would work in the long term.
No excuses like "I prompted it wrong" or "Claude missed something" or "I didn't check it over because 8 other AI agents said it was 'absolutely right'™".
We will then have lots of issues such as this case study [0], where everything seemingly looked fine at first and all tests passed, but in production the logic had been misinterpreted by the LLM, which used a wrong keyword during a refactor.
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
antirez•1h ago
Absolutely yes. Automatic programming does not mean software developers are no longer accountable for their errors. Also because you can use AP to do way more QA than was possible in the past. If you decide to just add things without a rigorous process, it is your fault.
CraigJPerry•1h ago
I'm accountable for the code I push to production. I have all the power and agency in this scenario, so I am the right person to be accountable for what's in my PR / CL.
9dev•9m ago
That is really about the least confusing part of the story.
sirwitti•53m ago
To me, code created like this smells like technical debt. When bugs appear after 6 months in production - as they do - and you didn't fully understand the code when developing it, how much time, energy, and money will it cost to fix the problem later on?
More often than I'd like, I've had to deal with code where it felt like the developer didn't actually understand what they were writing. Sometimes I was that developer, and it always creates issues.