When you write the code, you understand it. When you read the code produced by an agent, you may eventually feel like you understand it, but it's not at the same deep level as if your own brain created it.
I'll keep using new tools, I'll keep writing my own code too. Just venting my frustrations with agentic coding because it's only going to get worse.
> since I didn't write the code, in order to understand it I have to read it. But gaining understanding that way takes longer than writing it myself does.
I remember reading Joel Spolsky's blog 25 years ago, and he wrote something like: "It is harder to read code than to write code." I was quite young at that stage in my programming journey, but I remember being simultaneously surprised and relieved -- to know that reading code was so damn hard! I kept thinking that if I just worked harder at reading code, eventually it would be as easy as writing code. Also:
Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?
— Brian Kernighan: The Elements of Programming Style, 2nd edition, chapter 2
In summary: write simple code. It's easier to read and understand - both by others and by future you, who will have forgotten why you did something.
As for the feeling you get when an LLM one-shots your project start (and does a pretty good job), I have a German word:
Automatisierungskummer
(automation sorrow) • Kummer is emotional heaviness, a mild-to-deep sadness.
It's hard to know what things will look like in 20 years, but people may miss the time when AI cost nothing, or very little, and was less fettered. I think probably not - it would be like being nostalgic for really low-res, low-framerate YouTube videos. But nostalgia is pretty unpredictable, and some people love those old FMV games.
I remember the feeling of realizing that I had terrible taste just like everyone else and I was putting huge amounts of effort into trying to do seamless tiling background images that still looked awful and distracting and ruined the contrast. And also the feeling of having no idea what to talk about or why anyone would care.
Now I have way too much to talk about — so much that I struggle to pick something and actually start writing — and I'm still not sure why anyone would care. But at least I've learned to appreciate plain, solid-colour backgrounds.
Put it into Google and you will see.
Unfortunately this problem precedes AI, and has been worsened by it.
I've seen instances of one-file, in-memory-hashmap proof-of-concept implementations being requested for integration into semi-large evolving codebases, with "it took me 1 day to build this, how long will it take to integrate?" questions.
*as an aside, this reminds me of the classic joke where the client asks for the price list for a developer's services:
I do it: $500
I do it, but you watch: $750
I do it, and you help: $1,000
You do it yourself: $5,000
You start it, and you want me to finish it: $10,000
I guess that, with vibe coding, it is very easy for every client to become like this.
That isn't unique to "clients." It's human nature. Humans don't know what they don't know.
See: various exploits since computers were a thing.
Also the worst kind of tech line manager - typically promoted from individual contributor, but still wanting to argue about architecture, having arrived at their strong opinion within the 7 minutes they perused the design document between meetings.
If you're such a manager, you need to stop. If you're working with one, change teams or change jobs - you cannot win.
“We use Premiere.” Cool. I use Resolve. If we aren’t collaborating on the edit then this is an irrelevant conversation. You want a final product, that’s what you hired me for my dude. If you want me to slot into your existing editing pipeline that’s a totally different discussion.
“Last guy shot on a Red.” Cool. Hire them. Oh right you hired me this time. Interesting! Should we unpack that?
Freelancers: Stand your ground! Stand by your work! Tell clients to trust you!
If you pay me for a 30s highlight, you get a 30s highlight. If you don’t like the highlight itself that’s a different discussion.
2009 anyone? https://theoatmeal.com/comics/design_hell
For example, "Can't we just add a button that does this?"
I never faced or witnessed that in software dev.
I was involved in such an attempt but it never got off the ground.
The result was a piece of shit, but it was his piece of shit and he loved it.
History doesn’t repeat itself, but it definitely rhymes – I can’t wait for the modern versions of this.
We don't have to seek it out, it finds us.
At this point, the level of puffery is on par with claiming a new pair of shoes will turn you into an Olympic athlete.
People are doing this because they're told it works - showing up to run a marathon with zero training because they were told the shoes are enough.
Some people may need to figure out the problem here for themselves.
And continuing to shit and piss and puke in the toilet while they are trying to fix it.
Ask such clients: why are we here? What have previous attempts provided and not provided (because there have been previous attempts), and why do you think they did or didn't have long-term viability?
This is less about coding and more about helping people learn how to think about where and how things can fit in.
It's great to go fast with vibe coding, especially if you like disposable code that you can iterate on. In the hands of a developer it might enable trying more things or getting more done in some ways, but maybe not in all the ways, especially if the client isn't clear.
The client's ability to explain what they want, with good external signals, and how well they know how to ask will often be a huge indicator - long before they try to pull you into their web of spider diagrams, like the webs of spiders that have taken something.
Indeed, [1]
> researchers found that searching symptoms online modestly boosted patients’ ability to accurately diagnose health issues without increasing their anxiety or misleading them to seek care inappropriately [...] the results of this survey study challenge the common belief among clinicians and policy-makers that using the Internet to search for health information is harmful.
[0] https://www.cbc.ca/radio/whitecoat/man-googles-rash-discover...
I have something that about a quarter of a percent of individuals in the US have. A young specialist would know how to treat it based on guidelines, but beyond that there's little benefit in keeping up to date with the latest research unless it's a special interest for them (unlikely).
Good physicians are willing to read what their patients send them and adjust the care accordingly. Prevention in particular is problematic in the US. Informed patients will have better outcomes.
But I bet what happens more often is patients showing up with random unsubstantiated crap they found on Reddit or a content farm, and I can understand health care providers getting worn down by that sort of thing. I have a family member who believed he had Morgellon’s Disease, and talking to him about it was exhausting.
Your family member... mistakenly believed that he had a psychiatric condition involving a mistaken belief?
Does your family member have the sores?
Doctors don't generally have the time or inclination to spend unpaid time doing specialized research for one of their many patients. Competent layman efforts are generally huge wastes of time compared to asking a specialist, but in the absence of a specialist they can still be extremely useful - and specialists don't know everything either. Plus, there aren't always specialists, whether affordable/accessible or sometimes existent at all.
Similarly, it appears that some doctors are willing to accept that they have a limited amount of time to learn about specific topics, and that a research-oriented, intelligent patient very interested in a few topics can easily know more about them. In such a case a conducive mutual learning experience may happen.
One doctor told me that what he is offering is statistical advice, because some diseases may be very rare and so it makes sense to rule out more common diseases first.
Other doctors may become defensive if they have the idea that the doctor has the authority and patients should just accept that.
To me it's more like the board, in some small way, being shaken up, and what I mostly see is an opportunity for consultancies to excel at interfacing with clients who come to them with LLM code and LLM-generated ideas.
I'm not saying we need to dismiss people for using LLMs at all, for better or for worse we live in a world where LLMs are here to stay. The annoying people would have found a way to be annoying even without AI, I'm sure.
A freelance developer (or a doctor) is familiar with working within a particular framework and process flow. For any new feature, you start by generating user stories, work out a high-level architecture, think about how to integrate that into your existing codebase, and then write the code. It's mostly a unidirectional flow.
When the client starts giving you code, it turns into a bidirectional flow. You can't just copy/paste the code and call it done. You have to go in the reverse direction: read the code to parse out what the high level architecture is, which user stories it implements and which it does not. After that you have to go back in the forward direction to actually adapt and integrate the code. The client thinks they've made the developer's job easier, but they've actually doubled the cognitive load. This is stressful and frustrating for the developer.
Charge more and/or set expectations up front.
If I'm working on your project I'm usually dedicated to it 8 hours a day for months.
I do agree this is not new. I had clients with some development experience come up with off-the-cuff suggestions that just waste everyone's time and are really disrespectful (like, how bad at my job do you think I am if you think I didn't try the obvious approach you came up with while listening to the problem?). But AI is going to make this much worse.
I treat my doctor as a subject matter expert/collaborator, which means that if I come to him with (for example) "what if it's lupus?" and he says "it's probably not lupus", I usually let the matter drop.
There is no best practices anymore, no proper process, no meaningful back and forth.
There absolutely is and you need to work with the tools to make sure this happens. Else chaos will ensue.
Been working with these things heavily for development for 6-12 months. You absolutely must code with them.
I was almost expecting to hear that it made the job too easy. This kind of work is perfect for vibe coding. But you should be the one doing it.
Ah yes a supabase backed, hallucinated data model with random shit, using deprecated methods, and a copy paste UI. Zero access control or privacy, 1% of features, no files uploading or playback or calling.
“Can you scale this to 1M users by end of the week? Something similar to WhatsApp or Telegram or Signal”
Sybau mf
What does this mean?
Like a neck tattoo, but the text form.
Me: hey make this, detailed-spec.txt
AI: okidoki (barfs 9k lines in 15 minutes) all done and tested!
Me: (looks at the code - it has feature-sounding names, but all features are stubs, all tests are stubs, and it does not compile)
Me: it does not compile.
AI: Yes, but the code is correct. Now that the project is done, which of these features do you want me to add? (some crazy list)
Me: Please get it to compile.
AI: You are absolutely right! This is an excellent idea! (proceeds to stub and delete most of what it barfed). I feel really satisfied with the progress! It was a real challenge! The code you gave me was very poorly written!
... and so on.
Meaning: is the answer in the field I'm not an expert in actually good, or am I simply being fooled by emoji and nice grammar?
Or you can do like some of the others suggest and eliminate pure vibecoding. Just use it as a back and forth where you understand along the way and make well-reasoned changes. That looks a lot more like real engineering, so it's not surprising the other commenters report better results.
I have not experienced this level of malice and sweet-talking work avoidance from any human. It apologizes like an alcoholic, then proceeds to double down.
Can you force it to produce actually useful code? Yes, by repeatedly yelling at it to please follow the instructions. In the process, it will break, delete, or introduce hard-to-find bugs in the rest of the codebase.
I'm really curious whether anyone actually has this thing working, or whether they simply haven't bothered to read the generated code.
With anything above a toy project, you need to be really good with context window management. Usually this means using subagents and scoping prompts correctly by placing the CLAUDE.md files next to the relevant code. Your main conversation's context window usage should pretty much never be above 50%. Use the /clear command between unrelated tasks. Consider if recurring sequences of tool calls could be unified into a single skill.
Instead of sending instructions to the agent straight away, try planning with it and prompting it to ask you questions about your plan. The planning phase is a good place to give Claude more space to think with "think > think hard > ultrathink". If you are still struggling with the agent not complying, try adding emphasis with "YOU MUST" or "IMPORTANT".
I think, like any tool, it has its pros and cons, and the more you use it the more you figure out how to make the best use of it - and when to give up.
It wasn't super bad at converting the code, but even so it struggled with some of the logic. Luckily, I had it design a test suite to compare the outputs of the old application and the new one. When it couldn't figure out why it was getting different results, it would start generating hex dump comparisons, writing small Python programs, and analyzing the results to figure out where it had gone wrong. It slowly iterated on each difference until it had resolved them: building the code, running the test suite, comparing the results, changing the code, repeat. Some of the issues are likely bugs in the original code (which it fixed), but since I was going for byte-for-byte perfection it had to re-introduce them.
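The core of that iterate-until-identical loop is just locating the first byte where the two programs' outputs diverge. A minimal sketch (the commented harness uses hypothetical binary names, not anything from the original setup):

```python
def first_diff(old: bytes, new: bytes):
    """Return the offset of the first byte where two outputs diverge,
    or None if they are byte-for-byte identical."""
    for i, (a, b) in enumerate(zip(old, new)):
        if a != b:
            return i
    # One output may simply be a truncated prefix of the other.
    if len(old) != len(new):
        return min(len(old), len(new))
    return None

# Hypothetical harness around it:
#   old = subprocess.run(["./old_app", case], capture_output=True).stdout
#   new = subprocess.run(["./new_app", case], capture_output=True).stdout
#   offset = first_diff(old, new)  # hex-dump around `offset`, fix, rebuild, repeat
```

The returned offset is what you'd feed into a hex dump to see the divergence in context, exactly the kind of loop described above.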
The issues you describe I have seen, but not when using the right technology, and not in a while.
I have seen AI agents fall into the exact loop that GP discussed and needed manual intervention to fall out of.
Also blindly having the AI migrate code from "spaghetti C" to "structured C++" sounds more like a recipe for "spaghetti C" to "fettuccine C++".
Sometimes it's hidden data structures and algorithms you want to formalize when doing a large-scale refactor, and I have found that AIs are definitely able to identify them - but it's definitely not their default behaviour, and they fall out of that behaviour pretty quickly if not constantly reminded.
What do you mean? Are you under the impression I'm not even reading the code? The code is actually the most important part because I already have working software but what I want is working software that I can understand and work with better (and so far, the results have been good).
It doesn't really matter what we told it to do; a task is a task. But clearly how each LLM performed that task was very different for me than for the OP.
You migrated code from one of the simplest programming languages to unarguably the most complex programming language in existence. I feel for you; I really do.
How did you ensure that it didn't introduce any of the myriad of footguns that C++ has that aren't present in C?
I mean, we're talking about a language here that has an entire book just for variable initialisation - choose the wrong one for your use-case and you're boned! Just on variable initialisation, how do you know it used the correct form in all of the places?
I had a similar issue with gnuplot. The LLM-suggested scripts frequently had syntax errors. I say: LLMs are awesome when they work; otherwise they are a time suck / net negative.
Was this a local model?
The niche is "the same boring CRUD web app someone made in 2003 but with Tailwind CSS".
If you have a CLI, you can even script this yourself, if you don't trust your tool to actually try to compile and run tests on its own.
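A minimal version of that self-scripted gate, assuming nothing about the tooling: run the build and test commands yourself and only look at the agent's diff if every one exits cleanly. The command lists below are placeholders for your real build/test invocations:

```python
import subprocess

def gate(commands):
    """Run each command in order; accept the change only if every one exits 0."""
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"gate failed at: {' '.join(cmd)}\n{result.stderr}")
            return False
    return True

# e.g. gate([["make"], ["make", "test"]]) before even opening the diff
```

This mirrors the CI analogy: the agent's "PR" doesn't get human eyes until the gate passes.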
It's a bit like a PR on github from someone I do not know: I'm not going to actually look at it until it passes the CI.
What good is AI as a tool if it can't get on the same page as you?
Imagine negotiating with a hammer to get it to drive nails properly
These things suck as tools
I had to rewrite several vibe coded projects from scratch due to this effect. It's useful as a prototyping tool but not a complete productionizing tool.
As a developer who has spent far too much of my career maintaining or upgrading companies' legacy code, my biggest fear with the LLM mania is not that my skills go away, but that they become in so much higher demand in an uncomfortable way: the turnaround time between launch and legacy code becomes much shorter, and the share of management that understands why it is "legacy code"/"tech debt" shrinks, because the code is neither old nor in obviously dead technologies. "Can you fix this legacy application? It was launched two days ago and nobody knows what it does. Management says they need it fixed yesterday, but there's no budget for this. Good luck."
It's a language model with finite context and the ability to use tools. Whatever it can fit into its context, it can usually do pretty well. But it does require guidance and documentation.
Just like working with actual humans that aren't you and don't share your brain:
1) spec your big feature, maybe use an LLM in "plan" mode. Write the plan into a markdown file.
2) split the plan into smaller independent parts, in GitHub issues or beads or whatever
3) have the LLM implement each part in isolation, add automatic tests, commit, reset context
Repeat step 3 until feature is done.
If you just use one long-ass chat and argue with the LLM about architecture decisions in between code changes, it WILL get confused and produce the worst crap you've ever seen.
Being effective with the code to get the same things done is. That requires a new kind of driving for a new kind of vehicle.
Every. single. time. we hit an interface problem he would say “if you don’t understand the error feel free to use ChatGPT”. Dude it’s bare metal embedded software I WROTE the error. Also, telling someone that was hired because of their expertise to chatgpt something is crazy insulting.
We are in an era of empowered idiots. People truly feel that access to this near infinite knowledge base means it is an extension of their capabilities.
Also, is it just me, or has the feeling of victory gone away completely since AI became a thing? I used to sweat and struggle, and finally have my breakthrough - the "I'm invincible!" Boris moment - before the next thing came into my task inbox.
I don't feel that high anymore. I only recently realized this.
Reality check: none of that ever existed, unless either the client mandated it (as a way to tightly regulate output quality from cheaper developers) or the developer mandated it (justifying their much higher prices and value to the customer).
Other than that: average customer buying code from average developer means:
- git was never even considered
- if git was ever used, everything is merged into "master" in huge commits
- no scheduled reviews, they only saw each other when it's time for the next quarterly/monthly payment and the client was shown (but not able to use) some preview of what's done so far
https://www.uceprotect.net/en/index.php?m=7&s=8 -- "pay us to fix a problem that we've caused, and if you have the gall to call it what it is (extortion), then we'll publish your email and be massive dicks about it"
(To be clear, not all spam blacklists are scams - just UCEPROTECTL3 specifically)
What I'm saying is that I can't access this website from my work laptop - it shows me a branded blocked page.
I'm not 100% sure, but I think there is a policy set up in Zscaler blocking access to the domains defined in some sort of blacklist. The reason I assumed it's UCEPROTECTL3 is that it's the only positive result I got from an online blacklist lookup against gmnz.xyz.
And no, I don't feel comfortable sharing my employer.
Not really worth working on any of these projects.
I started wondering if this person was actually a developer here. Maybe just a typo, or maybe a dialect thing, but does anyone actually use "codes" as a plural?
It’s somehow ironic though that his written output could’ve been improved by running it through an AI tool.
I mean, it could've been homogenized by running it through an AI tool. I don't think there's a guarantee that it would've been an improvement. Yes, it probably could've helped refine away phrases that give away a non-native English speaker, but it also would've sanded down and ground away other aspects of the author's personality. Is that an improvement? I'm not so sure.
And if the main complaint is just a few odd words or structures, it's really not that big of a deal to me.
On the other hand - another customer of mine built a few internal tools with vibe code (and yes, he does have a subscription to my low-code service), but when newer requests came in for upgrades, that's where his vibe-coded app started acting up. His candid feedback was: for internal tools, vibe code doesn't work.
As a low-code service provider, we are now providing full-fledged vibe-code tooling on top. Though I don't know how customers who do not wish to code and just want the software will be able to maintain it without needing professionals.
The thing is: I know you might read that and think I'm anti-AI. In this specific situation, at my company: We gave nuclear technology to a bunch of teenagers, then act surprised when they blow up the garage. This is a political/leadership problem; because everything, nine times out of ten, is a political/leadership problem. But the incentives just aren't there yet for generalized understanding of the responsibility it requires to leverage these tools in a product environment that's expected to last years-to-decades. I think it will get there, but along that road will be gallons of blood from products killed, ironically, by their inability to be dynamic and reliable under the weight of the additive-biased purple-tailwind-drenched world of LLM vibeput. But, there's probably an end to that road, and I hope when we get there I can still have an LLM, because its pretty nice to be able to be like "heyo, i copy pasted this JSON but it has javascript single quotes instead of double quotes so its not technically JSON, can you fix that thanks"
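That last quote-fixing chore really is the kind of thing these tools are nice for - and, for what it's worth, it can even be scripted deterministically when the snippet also happens to be a valid Python literal (a sketch under that assumption; JS-style `true`/`false`/`null` would raise and need pre-substitution):

```python
import ast
import json

def fix_quotes(almost_json: str) -> str:
    """Re-emit JS-style single-quoted 'JSON' as real JSON.
    Works when the snippet is also a valid Python literal;
    JS true/false/null are not, and would need replacing first."""
    return json.dumps(ast.literal_eval(almost_json))
```

Using `ast.literal_eval` rather than `eval` keeps this safe on untrusted pasted text, since it only accepts literal expressions.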
The people who think FizzBuzz is a leetcode programmer question are now vibecoding the same trash as always, except now they think they are smart x10 developers for forcing you to review and clean up their trash.
The worst was pushing the tail into the tree. My original code was pretty slow, but every time AI changed more than 4 lines it introduced subtle bugs.
I did not actually think ai would be that useful.
And I am now thinking of specializing in the field: they already know how f*d they are, and they are going to pay a lot (or: they have no other option). Something that looked like a million-dollar idea created for pennies is, 3 months later, an unbearable, already-rotting pile of insanity which no junior human developer or even an AI code assistant is able to extend. But they already have investors or clients who use it.
And for me, with >20 years of coding experience, it is a lot of fun cleaning it up to the state where it is manageable.
For those who have swallowed the AI panacea hook, line, and sinker - those who say it's made them more productive, or that they no longer have to do the boring bits and can focus on the interesting parts of coding - I say: follow your own line of reasoning through. It demonstrates that AI is not yet powerful enough to NOT need to empower you, to NOT need to make you more productive. You're only ALLOWED to do the 'interesting' parts presently because the AI is deficient. Ultimately AI aims to remove the need for any human intermediary altogether. Everything in between is just a stop along the way, so for those it empowers: stop and think a little about the long-term implications. It may be a comfortable position for you right now, financially or socially, but your future self just a few short months from now may be dramatically impacted.
As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".
I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder; the law secretary whose dream job - a dream dreamt from a young age - is being automated away; the journalist whose value has been substituted by a white text box connected to an AI model.
I don't have any ideas as to what should be done or, more importantly, what can be done. Pandora's box has been opened; Humpty Dumpty has fallen and can't be put back together again. AI feels like it has crossed the Rubicon. We must all collectively wait to see where the dust settles.
AI is just the next step, and not even a particularly large leap. We already needed fewer law secretaries due to advances in technology. We killed most journalism two decades ago. Art and music had Photoshop and Auto-Tune. Now we've actually achieved something we've literally been striving for since the dawn of computing - the ability to speak natural language to a computer and have it do what we ask. But it's just one more step.
That pattern is bigger than any one of us and it's not a moral judgment. It's simply part of what technology does and has always done. AI is a continuation of that same trend we've all participated in, whether directly or indirectly. My point is that to stop now and say "look at all these jobs being eliminated by computers" is several decades too late.
I do think there is a qualitative difference in AI as compared to previous automation changes. This qualitative difference and its potential impacts beyond the obvious (job losses) is what is more worrying. The societal impact of AI slop, the impact on human intellectual efforts, pursuits, value and meaning are very concerning.
I wonder about that bit, TBH.
If you're 10x more productive at generating lines of code because you're mostly just reviewing, just how carefully are you reviewing? If you're taking the time to spec out stuff in great detail, then iterate on the many different issues with the LLM code, then finally reviewing when it passes the tests ... how are you getting to 10x and not 2x?
TBH, for those people who really are able to create 10x as much code with the LLM, their employment is actually more precarious than those who aren't doing that - it means your problem domain is so shallow that an LLM can hold both it and the code in a single context window.
Back when I did websites for clients, often after carefully thinking a project through and getting to some final idea on how everything should look, feel, and operate, I presented this optimal concept to clients. Some would start recommending changes and adding their own ideas—which I most often already iterated through earlier during ideation and designing.
It rarely builds good rapport with clients if you start explaining why their ideas for "improvements" are really not that good. Anyway, I would listen to them, nod, and do nothing about their ideas. I would just stick to my concept without wasting time on random client "improvements" - leaving them for the last moment, in case a client insisted on them at the very end.
Funny thing is that clients usually, after more consideration and time, would arrive on their own at the result I had already come to and presented to them - they just needed time to understand that their "improvements" weren't relevant.
Nevertheless, if they insisted on implementing their "improvements" (which almost never happened), I'd do it for an additional price - most often just for them to see that it wasn't a good idea to start with and to get back to what I had already done before.
So, sometimes, ignoring client's ideas really saves a lot of time.
Coding isn’t creative, it isn’t sexy, and almost nobody outside this bubble cares
Most of the world doesn’t care about “good code.” They care about “does it work, is it fast enough, is it cheap enough, and can we ship it before the competitor does?”
Beautiful architecture, perfect tests, elegant abstractions — those things feel deeply rewarding to the person who wrote them, but they’re invisible to users, to executives, and, let’s be honest, to the dating market.
Being able to refactor a monolith into pristine microservices will not make you more attractive on a date. What might is the salary that comes with the title “Senior Engineer at FAANG.” In that sense, many women (not all, but enough) relate to programmers the same way middle managers and VCs do: they’re perfectly happy to extract the economic value you produce while remaining indifferent to the craft itself. The code isn’t the turn-on; the direct deposit is.
That’s brutal to hear if you’ve spent years telling yourself that your intellectual passion is inherently admirable or sexy. It’s not. Outside our tribe it’s just a means to an end — same as accounting, law, or plumbing, just with worse dress code and better catering.
So when AI starts eating the parts of the job we insisted were “creative” and “irreplaceable,” the threat feels existential because the last remaining moat — the romantic story we told ourselves about why this profession is special — collapses. Turns out the scarcity was mostly the paycheck, not the poetry.
I’m not saying the work is meaningless or that system design and taste don’t matter. I’m saying we should stop pretending the act of writing software is inherently sexier or more artistically noble than any other high-paying skilled trade. It never was.
Nonsense. Coding is creative the same way mathematics is.
> Beautiful architecture, perfect tests, elegant abstractions those things feel deeply rewarding to the person who wrote them [...]
Nonsense. Best practices exist to make the code perform well. As a result, every user cares about them, albeit indirectly.
> That’s brutal to hear if you’ve spent years telling yourself that your intellectual passion is inherently admirable or sexy. It’s not.
Nonsense. Intellectual passion is admirable and sexy for many. This is subjective.
This doesn't read like a vibe-coding problem so much as a client-boundaries problem. Surely you could point out that they are paying you for your expertise, and that overriding your best practices with whatever AI churns out makes the job they are paying you to do even harder - and is frankly a little disrespectful ("I know better").