Am I using the tools wrong or are others finding the same thing?
No, your vibe-coding is not more productive, unless your only metrics for productivity are commit counts, PR counts, and deployment counts. I can commit, PR, and deploy crap all day long and "score well" - and this is what people are clinging to with their AI-gen defenses. I'm really sorry to inform you that your experienced "speed-up" is just a trick of the brain (I'm remembering an article, iirc by Gurwinder, but I'm having trouble finding it now). You're actually going slower, and your brain is tricking you into thinking it's faster because while the AI was "coding", you didn't have to, so it feels like more of a win than it actually is, considering the results.
Maybe spending $200/mo or whatever to access the top-of-the-line models will mitigate some of that, but I'd rather come up with the solution and gain the understanding myself, then spend the money on something worthwhile.
In general, I think of it as a better kind of search. The knowledge available on the internet is enormous, and LLMs are pretty good at finding and synthesizing it relative to a prompt. But that's a different task than generating its own ideas. I think of it like a highly efficient secretary. I wouldn't ask my secretary how to solve a problem, but I absolutely would ask if we have any records pertaining to the problem, and perhaps would also ask for a summary of those records.
Like any new tool, there is a learning curve. The curve is rather steep right now, with the horizon changing too quickly. The right tool also matters a great deal; right now you can run a model at home on 32GB of VRAM that's objectively better than GPT-3.5 from 2023 or Grok 2.
> Plus, the time I supposedly save building things gets eaten up debugging, correcting, improving the AI-generated slop.
Those complaining about AI slop are almost certainly complaining about their own lack of prompt-engineering skills.
Let me also explain the proper evolution here.
In 2021, you would go to Stack Overflow, copy some of your code or ask a question, and hope someone helped you eventually. Then you'd get the help and probably paste their code in.
In 2024, you would go to an AI, copy some of your code, ask a question, and the AI would respond quickly. The solution might be bad or buggy, so you reprompt because your first prompt wasn't engineered well. You finally get good code and copy-paste it.
In 2025, why all this copy and paste? Why not use an agentic tool that does the copying and pasting for you? It knows what to read and what to do.
Also in 2025: what if you have the AI also orchestrating one level higher, verifying that it is itself doing a good job?
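To make that concrete, here's a minimal sketch of such a loop in Python. It's illustrative only: call_model is a hypothetical stand-in for whatever LLM API you'd actually wire in, and real agentic tools (Claude Code, etc.) are far more involved than this.

    # Sketch of the loop above: an agent edits files directly, and an
    # orchestrator pass reviews its own output. call_model is hypothetical.
    from pathlib import Path

    def call_model(prompt: str) -> str:
        """Hypothetical LLM call; wire this to your provider of choice."""
        raise NotImplementedError

    def agent_edit(task: str, path: Path) -> None:
        # The agent reads and writes the file itself: no manual copy-paste.
        source = path.read_text()
        revised = call_model(
            f"Task: {task}\n\nCurrent file:\n{source}\n\n"
            "Return the full revised file."
        )
        path.write_text(revised)

    def orchestrate(task: str, path: Path, max_rounds: int = 3) -> bool:
        # One level higher: verify the agent's work and retry on failure.
        for _ in range(max_rounds):
            agent_edit(task, path)
            verdict = call_model(
                f"Does this file accomplish: {task}?\n\n{path.read_text()}\n\n"
                "Answer PASS or FAIL with a reason."
            )
            if verdict.startswith("PASS"):
                return True
            task = f"{task}\nPrevious attempt failed review: {verdict}"
        return False

The outer loop is the whole point of the 2025 step: the model's first answer is never trusted as-is; it gets re-checked and re-prompted automatically instead of by a human copy-pasting between windows.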
If you were the type that would just copy-paste whatever came up first, then yeah, it's just quicker to do it that way.
I did take a look at a Gemini result, but it was different from the search results immediately below it, which didn't leave a lot of confidence that it gets even the simplest things right.
I have decided to be radical about AI and LLMs: I don't like them because they are a waste of time, and I would like them even less if they were the magical world-changing technology people want us to believe. I am at a point in my career where concerns about productivity or how to operate in large-scale tech companies are the least of my problems, while I increasingly appreciate the artistic craft of programming and computers, especially at small scale, to improve our lives rather than accumulate profit. So while I could admit LLMs have their uses, I want to consciously follow a path where human intelligence and well-being are of the utmost concern, and any attempt at creating intelligent machines is tantamount to blasphemy.
Under this philosophy, seeing that all the talk about imminent AGI has led to creating spam and porn at large scale, I can only roll my eyes and remain on the sidelines while we continue down this idiotic path of resource extraction and devaluation of human ingenuity for the profit of the few.
Not unless they drop you first.
I'm far enough in my career to know that avoiding coding assistance or LLM-assisted "search" won't make my life or craft worse in any way. Quite the opposite, in fact.
It feels clever to make comments like yours right now, but in two years, when the order of control flow moves up two more steps and you are no longer needed at all, it'll be frustrating to look back and think, "I wish I hadn't given them money."
No, these things don't actually work if you study human psychology:
* Switching to another work task (what, for like a minute?)
* Playing chess or something (sure, it's better than social media, but still a distraction)
But I do like AI tools that don't interfere with my flow, like GitHub Copilot, or even chatting with Claude / ChatGPT about a task I'm doing.
I wonder if I’ve actually saved time overall, or if, in an uninterrupted flow state, I would have done not just a better but also a quicker job.
So aggravating
oh, claude’s done now. how does this thing work?
From a clean NixOS command-line install, we've got containers and VMs handled. Reverse proxy with Cloudflare tunnels, with all endpoints automatically getting and renewing SSL certs. All the *arr stack tools and other homelab stuff you'd expect. Split-horizon DNS with Unbound and Pi-hole running internally. All of my configurations are backed up in GitHub. I didn't even create the Cloudflare tunnels or the GitHub repos; I had Claude Code handle that via API and CLI tools. The last piece I'm waiting on to tie it all together is my actual data drives, which should be here tomorrow.
Is this a smart thing to do? Absolutely not. Tons of things could go wrong. But NixOS is fairly resilient and rollbacks are easy. I don't actually have anything running on the NAS yet, and I've got my Synology limping along until I finish building this replacement. It's still an open question whether I'll use Claude Code like this to manage the NAS once I've migrated my data and my family has switched over. But I've had a very good experience so far.
These tools are _hard_ to use well. Very hard. Deceptively hard. So hard that smart engineers bounce off of them believing them to be all smoke and hype. But if you study the emerging patterns, experiment, and work through the difficulty and rough edges, you will come out the other side with a new skill that will have you moving faster than you believed possible.
There are people who will think I'm either lying or delusional. It's fine. They just haven't made it through the fog yet. They'll get there eventually.
Both "it's so easy" and "it's hard, but believe me, worth it someday" are completely unconvincing arguments to me. I'd rather just do my job well than spend all my time chasing someone's overhyped fantasy down a rabbit hole.
There are "vibe coding" tools like Lovable that let you build a prototype-level app with no coding experience. They're easy and fun, but I probably wouldn't want a novice (or anyone, really) using them within an enterprise codebase.
Then there are tools like Claude Code which, in the hands of a skilled practitioner, can accelerate real SWE work in enterprise production codebases.
You are of course free to bury your head in the sand and ignore both categories of tools under the same "overhyped fantasy" umbrella, but I think you're doing so at your own professional peril. Just my opinion though.
So far it has been working beautifully for real work. Sometimes the models do drift, but if you are actually paying attention to the responses, you should be able to catch it early.
I'd estimate maybe 20% of devs have actually integrated AI into their daily workflow beyond occasional ChatGPT queries. The other 80% either tried it and bounced off the friction, or are waiting to see which tools actually stick.
Not using AI doesn't mean you're falling behind - it means you're avoiding cargo-culting. The real skill is knowing when it's worth the context-switching cost and when grep + your brain is faster.
“1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things.”
― Douglas Adams
My dislike of GenAI stuff is based on practical, ethical, and economic concerns.
Practical: GenAI output is bland and untrustworthy. It also discourages thought and learning, IMO. Before folks line up to tell me how wonderful it is for their learning: that may be true, but from what I've observed, that's not how the majority uses it. Once upon a time I thought the Internet/Web would be a revolution in learning for people. Fool me once...
Ethical: So many problems here, from the training data sets, to people unleashing scraper bots that have been effectively DDoS'ing sites for going on a year (at least) now. If these are the kind of people who make up the industry building these tools, I want nothing to do with the tools.
Economic: Related to ethics, but somewhat separate. GenAI and other LLM/AI tools could benefit people. I acknowledge, for example, there's real promise in using various related tech to do better medical diagnostics. That would be wonderful. But, the primary motivation of the companies pushing AI right now is to 1) get people hooked on the tools and jack up prices, 2) sell tech that can be used to lower wages or reduce employment, and 3) create another hype technology so they can stuff their pockets, and the coming crash be damned.
Again, what is driving AI/LLMs is not well-intentioned. Ignore that at your own peril. Probably everybody else's peril, too.
Adams no doubt knew people who were aghast at PCs or mobile phones because they were not around when they were younger. I get it. But, well, I wonder how Adams would feel about GenAI tools that spit out "write blah in the style of Douglas Adams" after being trained on all of his work.
And isn’t that the entire point of the quote?
I’m not trying to be dismissive of your point, just to pose a counterpoint.
I don't know what the final form of this will be, how our jobs will be impacted, or how much more productive we really are with the tool. But it's not hype; these tools are here to stay and have changed the way we code. I don't think they will replace coders, but they will make the best programmers more efficient.
As you said, it's easy to lose time with the generated slop, but someone who uses the tools wisely is more efficient.
After falling in love and hacking away with Claude for a few weeks, I'm now in the hangover phase, and barely using any AI at all.
AI works well to build boilerplate code and solve easy problems, while confidently driving full-speed into the wall as soon as complexity increases.
I also noticed that it makes me subtly lazier and dumber. I started thinking like a manager, at a higher-level, believing I could ignore the details. It turns out I cannot, and details came back to bite me quickly.
So, no AI for me right now, but I'm keeping an eye out for the next gens.
It's a shit show.