Unfortunately "Slop" will appear to work enough of the time to fool a Junior.
Also the reason Junior devs get "slop" is because their prompts are "slop". They don't know all the right terminologies for things, nor do they even have the writing/language skills necessary for good prompting.
EDIT: Since everyone is checking my math, I corrected this to 30x, which is what's provable from past experience.
If I'm dealing with a difficult-to-implement algorithm, a whiteboard is a better help than bashing out code.
> The text presents to the wood cutter the alternative either to spend time in sharpening his axe, or expend his strength in using a dull one. Which shall he do? Wisdom is profitable to direct.
Sure, you don't need to sharpen your axe. Given a powerful internal combustion engine, you could drive a tank through the forest and fell many trees in rapid succession. But this strategy doesn't leave you with quality lumber, and leaves a huge mess for whoever comes after you (which may be your future self), and one day there won't be any trees left.
If your job is producing disposable software, be aware that you're using unpaid labour to do so. Some of the programmers who produced that AI's training data are struggling to eat and keep a roof over their heads. Act accordingly.
When I do spend minutes (not seconds) writing prompts, it's because I'm actually typing a "Context File" which describes with full clarity certain aspects of my architecture that are relevant to an Agent task set. This context file might have constraints and rules I want the Agent to follow; so I type it once and reference it from like 10 to 20 prompts perhaps. I also keep the prompt files as an archive for the future, so I can always go back and see what my original thoughts were. Also the context files help me do system documentation later.
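For illustration, a minimal sketch of what such a context file might look like — every module name and rule below is invented for the example, not taken from the comment:

```markdown
<!-- CONTEXT.md — hypothetical example -->
## Architecture
- The API layer lives in `src/api/`; it never touches the database directly.
- All persistence goes through the repository classes in `src/repo/`.

## Rules for the Agent
- Do not add new dependencies without asking.
- Follow the existing error-handling pattern (return Result objects, don't throw).
- Keep each change limited to the files named in the prompt.
```

Each task prompt can then open with a one-line reference ("Follow the constraints in CONTEXT.md") instead of restating the rules every time.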
That doesn't mean it's reviewed, it means I'm accepting it to _BE_ what I go with and ultimately review.
That is an absurd claim.
A 50x gain would literally mean you could get a year's worth of work done in a week. Preposterous.
The bottleneck is still finding good developers, even with the current generation of AI tooling in play.
The amount of pushback I got on this thread tells me most devs simply haven't started using actual Agents yet.
The rule of thumb I've learned is to give an Agent the smallest possible task at a time, so there's zero ambiguity in the prompt, and context window is kept small.
A massive amount of valuable work can result in a few lines of code. Conversely, millions of lines of code can be useless or even have negative value.
An experienced developer can generate tons of great code 30x faster with an Agent, with each function/module still being written using the least amount of code possible.
But you're right, the measure of good code isn't 'N', it's '1/N' (inverse), where N is the number of lines of code to do something. The best code is [almost] always that with the least amount of lines, as long as you haven't sacrificed readability in order to remove lines, which I see a lot of juniors do. Rule of thumb: "Least amount of easily understood LOC". If someone can't look at your code for the first time and tell what it's doing, that's normally an indication it's not good code. Claude [almost] never breaks any of these rules.
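As a hypothetical illustration of "least amount of easily understood LOC" (both functions below are invented examples, not from the thread): the two compute the same word counts, but the terse one hides its intent behind an idiom you have to decode.

```javascript
// Terse: fewer lines, but the reduce/spread idiom obscures what's happening.
const countTerse = ws =>
  ws.reduce((m, w) => ({ ...m, [w]: (m[w] ?? 0) + 1 }), {});

// Clear: a couple more lines, but readable at first glance.
function countClear(words) {
  const counts = {};
  for (const word of words) {
    counts[word] = (counts[word] ?? 0) + 1;
  }
  return counts;
}

console.log(countClear(["a", "b", "a"])); // { a: 2, b: 1 }
```

The second version is arguably the better code by the rule above, even though it is longer: the extra lines buy immediate comprehension rather than cleverness.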
Well it does for me, frequently. An example is here: https://news.ycombinator.com/item?id=44126962
Most things I try to use it for, it has so many problems with its output that at most I get a 50% productivity gain after fixing everything.
I'm already super efficient at editing text with neovim so honestly for some tasks I end up with a productivity loss.
For example, the other day I asked a major LLM to generate a simple markdown viewer with automatic section indentation for me in Node.js. The basic code worked after a few additional prompts from me.
Now I wanted folding. That was also done by the LLM. And then when I tried to add a few additional simple features, things fell apart. There were one or two seemingly simple runtime errors that the LLM was unable to fix after almost 10 tries.
I could fix it if I started digging inside the code, but then the productivity gains would start to slip away.
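For context, a minimal sketch of the kind of tool being described — assuming "automatic section indentation" means indenting each line by the depth of the heading it falls under. This is a toy reading of the task, not the commenter's actual code:

```javascript
// Toy markdown "section indentation" viewer.
// Assumption: body lines get two spaces of indent per heading level,
// and each heading is indented under its parent.
function indentSections(markdown) {
  let depth = 0;
  return markdown
    .split("\n")
    .map(line => {
      const heading = line.match(/^(#+)\s/);
      if (heading) {
        depth = heading[1].length;            // e.g. "##" -> depth 2
        return "  ".repeat(depth - 1) + line; // indent heading under its parent
      }
      return "  ".repeat(depth) + line;       // indent body under current heading
    })
    .join("\n");
}

console.log(indentSections("# Top\ntext\n## Sub\nmore"));
```

Even a sketch this small shows why bolt-on features like folding get hairy: the indentation logic has no real parse tree, so each new feature has to re-derive structure from raw lines.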
I'm using VSCode Github Copilot in "Agent Mode", btw. It's able to navigate around an entire project, understand it, and work on it. You just lean back and watch it open files, edit them, show you in realtime what its thought process is, as it does everything, etc. etc. It's truly like magic.
Any other way of doing development, in 2025, is like being in the stone ages.
Anything beyond that and LLMs require a lot of hand-holding, and frequently regress to boot.
If you write a prompt with perfect specificity as to what you want done, an agent like "Github Copilot+Claude" can work at about the same level as a senior dev. I do it all day long. It writes complex SQL, complex algorithms, etc.
Saying it only does boilerplate well reminds me of my mother who was brainwashed by a PBS TV show into thinking LLMs can only finish sentences they've seen before and cannot reason thru things.
Even if my prompt was ambiguous, the LLM has no excuse for producing code that does not type-check, or crashes in an obvious way when run. The ambiguity should affect what the code tries to do, not its basic quality.
And your use of totalizing phrases like "zero ambiguity" and "perfect specificity" tells me your arguments are somewhat suspect. There's no such thing as "zero" or "perfect" as far as architecting and implementing code goes.
Right, if you're getting that, experienced senior is a pretty wild stretch.
Based on my general experience with software over the last...30 years, most places must only have entry level and junior devs. Somehow despite 30 years of hardware improvement, basic software apps are still as clunky and slow as their '90s counterparts.
Do whatever you want. That’s an option too.
Make a dumb thing, take your hands off the wheel, have fun. It’s your computer.
This is why we need licensing for software developers:
When you're building a service that has actual users, with actual data, and tangible consequences when it fails, "take your hands off the wheel, have fun" is fundamentally dangerous.
Or, to put it differently: It's totally fine for some kids to build a treehouse. They might even get hurt. But, when it comes to dams and bridges, there is a reason why the people who design those need to get a license.
For folks actually working in companies handling various data that doesn't belong to them? Oh god, please no, that's horrible advice.
There's such a wide range of software. There's plenty of space for an amateur to do some creative vibe coding. What's the point of the scolding and hand wringing?
Evergreen tweet: https://knowyourmeme.com/photos/2659979-no-bitch-dats-a-whol...
"Vibe coding" is only a few months old. ChatGPT was released less than three years ago. The singularity is just getting started.
History of computer chess:
- 1957 - early programs that played chess very badly. Excessive optimism
- 1967 - programs that play amateur chess
- 1976 - first tournament win
- 1980s - top programs not much stronger, but now running on PCs.
- 1996 - first win against grandmaster
- 2005 - last loss by a top program against a grandmaster
- 2025 - all the good programs can trounce any human.
LLMs are probably at the 1996 level now.
Don't forget that in 1957, computers' performance was much lower than today's. I wonder how a 1957 approach would fare on today's computers, once you strip out the workarounds designed for those old constraints?
I'm not sure the same can be said about LLMs.
No, it is innovation. The problem is that innovation is often bad.
Like reaching for your phone out of habit the moment you are bored, I don't want to need an LLM any time I am faced with a problem. I want to exercise my own brain. I feel as though my ability to reason without them has already begun to degrade; my mind fogs more these days. I try to curb it by having conversations rather than just asking for solutions.
I don't care that the tool isn't going anywhere, but just like relying on calculators won't make you better at arithmetic, I don't think relying on LLM's will make you a better engineer.
But if you don’t like learning, and only do it because you have to, it will magnify that tendency and provide a way to avoid learning altogether.
We are likely to end up with a large subset of the population basically being meat puppets doing whatever their favorite flavor of LLM tells them to do.
I don't need AI to help me code. What I need AI to do is help me figure out new coding solutions. But all AI seems able to do is regurgitate things that other people have already done that it's ingested from the internet.
I'll ask AI how to do abc, within xyz parameters, with def available and ghi constraints. I typically get back one of two things:
1. A list of 20 steps to achieve abc that somewhere around the middle has a step that's the equivalent of "Then magic happens", or two to three steps that are entirely unrelated to one another or the project at hand.
2. A list of what should be 20 steps that suddenly ends at step 7, leaving the problem only half done.
Most frustrating is when the "AI" says to use $tool/$library, but $tool/$library is not available on the specified platform, or hasn't been updated since 2011 and no longer works. When I tell the AI this, it always responds with, "You are right, that tool is no longer available. Here's a list of even more broken steps you can take to work around it."
So far, for my coding needs, AI seems only able to regurgitate what's already been done and published by others. That's great, but there are search engines for that. I have novel problems, and until AI can actually live up to the "I" part of its name, it is worthless to me.
The real crux of seniority is: processes, knowing the right people and their buttons, politics, being there to fix obscure corner-case production issues, and so on. How can an LLM help me with that? It can't.
For code sweatshops they may be a blessing, for corporations drowning in regulations and internal abysmal labyrinths of their IT, not so much.
But these articles get posted and upvoted cause we developers just eat that shit up (if I’m being honest I do at least, every time I see these kinds of posts I always smirk cause I know what the comments section is gonna be like).