And why did they make this change? Because it makes users spend more time on the platform. ChatGPT wasn't just one of the fastest-growing web services ever, it's now also speedrunning enshittification.
I think that the solution is to somehow move towards an "intent" layer that sits above the code.
I still use Cursor, but I use AI judiciously so that I don't wreck the project and it only acts as an aid.
It's hard to say without specifics, but simply upgrading from MySQL to PostgreSQL without rewriting the PHP codebase in Go might resolve most of the potential issues.
1) I do a lot of scraping, and Go concurrency + Colly has better performance
2) My DB size is exploding and I have limited budget, and it looks like CH is so much better at compressing data. I recently did a test and for the same table with same exact data, MySQL was using 11GB, ClickHouse 500MB
That's pretty impressive
That's pretty typical best case size for weblogs and other time ordered data where column data correlate with time values. You do have to tweak the schema a bit to get there. (Specifically a "good" sort order, codecs, ZSTD instead of LZ4 compression, etc.)
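To make "tweak the schema" concrete, here is a minimal sketch of the kind of DDL I mean, assuming a made-up weblog table and using the clickhouse-connect Python client; table and column names are illustrative only. The big levers are a sort key that groups similar rows together plus per-column codecs with ZSTD instead of the LZ4 default.

    import clickhouse_connect  # official ClickHouse Python client

    client = clickhouse_connect.get_client(host="localhost")

    # Hypothetical weblog table: time-ordered columns compress dramatically
    # better with a good ORDER BY and Delta + ZSTD codecs.
    client.command("""
        CREATE TABLE weblogs
        (
            ts      DateTime CODEC(Delta, ZSTD),
            site_id UInt32   CODEC(ZSTD),
            path    String   CODEC(ZSTD),
            status  UInt16   CODEC(ZSTD),
            bytes   UInt64   CODEC(Delta, ZSTD)
        )
        ENGINE = MergeTree
        ORDER BY (site_id, ts)
    """)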
Yes, imo, too many people believe current LLMs are capable of doing this well. They aren't. Perhaps soon! But not today, so you shouldn't try to use LLMs to do this for serious projects. Writing that MDN file sounds like a wonderful way to get your own head around the infrastructure and chat with other devs / mentors about it though. That's a great exercise; I'm going to steal it.
Anyway, LLMs are, as we all know, really good text predictors. Asking a text predictor to predict too much will be stretching the predictions too thin: give it one file, one function, and very specific instructions, and an LLM is a great way to increase your productivity and reduce mental overload on menial tasks (e.g., swap out all uses of "Button" in @ExperimentModal.tsx with @components/CustomButton.tsx). I think LLM usage in this sense is basically necessary to stay competitively productive unless you're already a super-genius, in which case you probably don't read HN comments anyway so who cares. For the rest of us mortals, I argue that getting good at using LLM co-pilots is as important as learning the key bindings in your IDE and OS of choice. Peel away another layer of mental friction between you and accomplishing your tasks!
I also use it for building things like app landing pages. I hate web development, and LLMs are pretty good at it because I'd guess that is 90% of their training data related to software development. For that I make larger changes, review them manually, and commit them to git, like any other project. It's crazy to me that people will just go completely off the rails for multiple hours, run into a major issue, and then just start over, when instead you could use a measured approach and always keep forward momentum.
To be uncharitable and cynical for a moment (and talking generally rather than about this specific post), it yields content. It gives people something to talk about. Defining their personality by absolutes, when in reality the world is infinite shades of gradients.
Go "all in" on something and write about how amazing it is. In a month you can write your "why I'm giving up" the thing you went all in on and write about how relieved/better it is. It's such an incredibly tired gimmick.
"Why I dumped SQL for NoSQL and am never looking back" "Why NoSQL failed me"
"Why we at FlakeyCo are all in on this new JavaScript framework!" "Why we dumped that new JavaScript framework"
This same incredibly boring cycle is seen on here over and over and over again, and somehow people fall for it. Like, it's a huge indicator that the writer more than likely has bad judgment and probably shouldn't be the person to listen to about much.
Like most rational people that use decent judgement (rather than feeling I need to go "all in" on something, as if the more I commit the more real the thing I'm committing to is), I leverage LLMs many, many times in my day to day. Yet somehow they have authored approximately zero percent of my actual code, and they're still a spectacular resource.
IF you're a:
* 10 year python dev
* work almost entirely on a very large, complex python code base
* have a pycharm IDE fine tuned over many years to work perfectly on that code base
* have very low tolerance for bugs (stable product, no room for move fast, break things)
THEN: LLMs aren't going to 10x you. An IDE like Cursor will likely make you slower for a very long time until you've learned to use it.
IF you're a:
* 1 year JS (react, nextjs, etc.) dev
* start mostly from scratch on new ideas
* have little prior IDE preference
* have high tolerance for bugs and just want to ship and try stuff
THEN: LLMs will 10x you. An IDE like Cursor will immediately make you way faster.
Having a tool that’s embedded into your workflow and shows you how things can be done based on tons of example codebases could help a junior dev quite a lot to learn, not just to produce.
Based on the classmates I had in college who were paying to get a CS degree, I'd be surprised if many junior devs already working a paid job put much effort into learning rather than producing.
~15 years later, I don't think I'm worse off than my peers who stayed away from all those websites. Doing the right searches is probably as important as being able to read manuals properly today.
no they didn't, no one said that
i know that because i was around then and everyone was doing the same thing
also, maybe there's a difference between searching and collating answers and just copy and pasting a solution _without thinking_ at all
I'm not sure there's much of a skillset to speak of for these tools, beyond model-specific tricks that evaporate after a few updates.
But with aider/claude/bolt/whatever your tool of choice is, I can give it a handful of instructions and get a working page to demo my feature. It’s the difference between me pitching the feature or not, as opposed to pitching it with or without the frontend.
they will make you clueless about what the code does and your code will be unmaintainable.
Basically, use LLMs as a tool for specific things, don't let them do whatever and everything.
I get the point you’re trying to make, LLMs can be a force multiplier for less experienced devs, but the sweeping generalizations don’t hold up. If you’re okay with a higher tolerance for bugs or loose guardrails, sure, LLMs can feel magical. But that doesn’t mean they’re less valuable to experienced developers.
I’ve been writing Python and Java professionally for 15+ years. I’ve lived through JetBrains IDEs, and switching to VS Code took me days. If you’re coming from a heavily customized Vim setup, the adjustment will be harder. I don’t tolerate flaky output, and I work on a mix of greenfield and legacy systems. Yes, greenfield is more LLM-friendly, but I still get plenty of value from LLMs when navigating and extending mature codebases.
What frustrates me is how polarized these conversations are. There are valid insights on both sides, but too many posts frame their take as gospel. The reality is more nuanced: LLMs are a tool, not a revolution, and their value depends on how you integrate them into your workflow, regardless of experience level.
Amen. Seriously. They're tools. Sometimes they work wonderfully. Sometimes, not so much. But I have DEFINITELY found value. And I've been building stuff for over 15 years as well.
I'm not "vibe coding", I don't use Cursor or any of the ai-based IDEs. I just use Claude and Copilot since it's integrated.
Yes, but these lax expectations are what I don't understand.
What other tools in software sometimes work and sometimes don't that you find remotely acceptable? Sure all tools have bugs, but if your compiler had the same failure rate and usability issues as an LLM you'd never use it. Yet for some reason the bar is so low for LLMs. It's insane to me how much people have indulged in the hype koolaid around these tools.
There’s definitely hype out there, but dismissing all AI use as “koolaid” is as lazy as the Medium posts you’re criticizing. It’s not perfect tech, but some of us are integrating it into real production workflows and seeing tangible gains, more code shipped, less fatigue, same standards. If that’s a “low bar,” maybe your expectations have shifted.
It's a really, really simple concept.
If I have a crazy Typescript error, for instance, I can throw it in and get a much better idea of what's happening. Just because that's not perfect, doesn't mean it isn't helpful. Even if it works 90% of the time, it's still better than 0% of the time (Which is where I was at before).
It's like google search without ads and with the ability to compose different resources together. If that's not useful to you, then I don't know what to tell you.
Anyhoo... I find that there are times where you have to really get in there and question the robot's assumptions as they will keep making the same mistake over and over until you truly understand what it is they are actually trying to accomplish. A lot of times the desired goal and their goal are different enough to cause extreme frustration as one tends to think the robot's goal should perfectly align with the prompt. Once it fails a couple times then the interrogation begins since we're not making any further progress, obviously.
Case in point, I have this "Operational Semantics" document, which is correct, and a peg VM, which is tested to be correct, but if you combine the two one of the operators was being compiled incorrectly due to the way backtracking works in the VM. After Claude's many failed attempts we had a long discussion and finally tracked down the problem to be something outside of its creative boundaries and it needed one of those "why don't you do it this way..." moments. Sure, I shouldn't have to do this but that's the reality of the tools and, like they say, "a good craftsman never blames his tools".
In an era where an LLM can hallucinate (present you a defect) with 100% conviction, and vibe coders can ship code of completely unknown quality with 100% conviction, the bar by definition has to have been set lower.
Someone with experience will still bring something more than just LLM-written code to the table, and that bar will stay where it is. The people who don't have experience won't even feel the shortcomings of AI because they won't know what it's getting wrong.
Searching for relevant info on the Internet can take several attempts, and occasionally I end up not finding anything useful.
My ide intellisense tries to guess what identifier I want and put it at the top of the list, sometimes it guesses wrong.
I've heard that the various package repositories will sometimes deliberately refuse to work for a while because of some nonsense called "rate limiting".
Cloud deployments can fail due to resource availability.
Anyway, code generation tools almost always are born unreliable, then improve piecewise into almost reliable, and finally get replaced by something with a mature and robust architecture that is actually reliable. I can't imagine how LLMs could traverse this, but I don't think it's an extraordinary idea.
It's not all or nothing. What you get value out of immediately will vary based on circumstance.
But they drew boundaries with very specific conditions that lead the reader. It’s a common theme in these AI discussions.
Again, I think the high-level premise is correct, as I already said; the delivery falls flat though. Your more junior devs have a larger opportunity to extract value.
I gently suggested that the problem may have not been with his post but with your understanding. Apparently you missed the point again.
Is that not true? That feels sufficiently nuanced and gives a spectrum of utility, not binary one and zero but "10x" on one side and perhaps 1.1x at the other extrema.
The reality is slightly different - "10x" is SLoC, not necessarily good code - but the direction and scale are about right.
* Figuring out how to write small functions (10 lines) in canonical form in a language that I don't have much experience with. This is so I don't end up writing Rust code as if it were Java.
* Writing small shell pipelines that rely on obscure command line arguments, regexes, etc.
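To be concrete about the second bullet, this is the kind of throwaway job I mean (written here as a small Python script rather than a shell one-liner, just to keep the example in one language; the log path and format are made up):

    import re
    from collections import Counter

    # Ten most frequent client IPs in a (hypothetical) access log: the sort of
    # one-off I'd otherwise build as a grep/awk/sort/uniq pipeline.
    ip_re = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3}) ")

    with open("access.log") as fh:
        counts = Counter(m.group(1) for line in fh if (m := ip_re.match(line)))

    for ip, n in counts.most_common(10):
        print(f"{n:8d}  {ip}")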
You have the copilot mode which takes no learning at all which might give you some speedup, especially if you are doing repetitive stuff, it might even 10x+ you.
You have cmdk mode which you need to prompt and seems to be a lobotomized version of chat. I find putting comments and waiting for the copilot mode to kick in better, as then the way we got there is saved.
Then there is agentic editing chat: that is the timewaster you speak of, I believe, but what is there to learn? Sometimes it generates a metric ton of code, including in massive legacy code bases, that helps, and often it just cannot do whatever.
I don't think these cases you make, or at least the second one once it goes beyond the basics, are different. There is nothing to learn except that you need to read all the code, decide what you want in tech detail and ask that of the agentic chat. Anything else fails beyond the basics, and 'learning to use it' will be just that; but if you didn't figure that out after 5 minutes, you definitely never did any 'fine tuned pycharm ide', ever.
It is a tool that customizes code it ingested for your case specifically, if it can. That is it. If it never saw a case, it won't solve it, no matter what you 'learn to use'. And I am fine doing that in public: we use LLMs a lot and I can give you very simple cases that, besides (and often even that doesn't work) typing up the exact code, it will never fix with the current models. It just gets stuck doing meaningless changes with confidence.
I have some grey hair and I've been programming since I was a kid. Using CoPilot autocompletion roughly doubles my productivity while cutting my code quality by 10%.
This happens because I can see issues in autocompleted code far faster than I can type, thanks to years of reading code and reviewing other people's code.
The 10% quality loss happens because my code is no longer lovingly hand-crafted single-author code. It effectively becomes a team project shared by me and the autocomplete. That 10% loss was inevitable as soon as I added another engineer, so it's usually a good tradeoff.
Based on observation, I think my productivity boost is usually high compared to other seniors I've paired with. I see a lot of people who gain maybe 40% from Copilot autocomplete.
But there is no world in which current AI is going to give me a 900% productivity boost when working in areas I know well.
I am also quite happy to ask Deep Research tools to look up the most popular Rust libraries for some feature, and to make me a pretty table of pros and cons to skim. It's usually only 90% accurate, but it cuts my research time.
I do know how to drive Claude Code, and I have gotten it to build a non-trivial web front-end and back-end that isn't complete garbage without writing more than a couple of dozen lines myself. This required the same skill set as working with an over-caffeinated intern with a lot of raw knowledge, but who has never written anything longer than 1,000 lines before. (Who is also a cheating cheater.) Maybe I would use it more if my job was to produce an endless succession of halfway decent 5,000-line prototypes that don't require any deep magic.
Auto-complete plus Deep Research is my sweet spot right now.
However it's great for simple "write a function that does X", which I could do myself but it would take longer, be boring and require several iterations to get it right.
Having said that, blindly copying a few lines of ChatGPT code did lead to automated newsletters being sent out with the wrong content.
I can deliver 5k LoC in a day easily on a greenfield project and 10k if I sweat or there's a lot of boilerplate. I can do code reviews of massive multi-thousand line PRs in a few minutes that are better than most of the ones done by engineers I've worked with throughout a long career, the list just goes on and on. I only manually code stuff if there's a small issue that I see the LLM isn't understanding that I can edit faster than I can run another round of the agent, which isn't often.
LLMs are a force multiplier for everyone, really senior devs just need to learn to use them as well as they've learned to use their current tools. It's like saying that a master archer proves bows are as good as guns because the archer doesn't know how to aim a rifle.
my point is exactly in line with your comment. The tools you get immediate value out of will vary based on circumstance. There's no silver bullet.
The key is that I've always had that prompt/edit/verify loop, and I've always leaned heavily on git to be able to roll back bad AI changes. Those are the skills that let me blow past my peers.
The LLMs learn from examples, but if everyone uses LLMs to generate code, there's no new code to learn new features, libraries or methods from. The next generation of models are just going to be trained on the code generated by their predecessors, with no new inputs.
Being an LLM maximalist basically means freezing development in the present, now and forever.
[1]https://deepmind.google/discover/blog/alphaevolve-a-gemini-p...
Would you ever be able to tell e.g. CoPilot: I need a web framework with these specs, go create that framework for me. Then later have Claude actually use that framework?
github.com/AlDanial/cloc v 2.04  T=0.05 s (666.3 files/s, 187924.3 lines/s)
-------------------------------------------------------------------------------
Language            files      blank    comment       code
-------------------------------------------------------------------------------
Python                 24       1505       1968       5001
Markdown                4         37          0        121
Jinja Template          3         17          2         92
-------------------------------------------------------------------------------
SUM:                   31       1559       1970       5214
-------------------------------------------------------------------------------
Note this project also has 199 test cases.
Initial commit for cred:
commit caff2ce26225542cd4ada8e15246c25176a4dc41
Author: redacted <redacted>
Date:   Thu May 15 11:32:45 2025 +0800
docs: Add README
And when I say easy, I was playing the bass while working on this project for ~3 hours. Yikes. But also lol.
Back then, Javis wasn't built for code, but it was a surprisingly great coding companion. Yes, it only gave you 80% working code, but because you had to get your hands dirty, you actually understood what was happening. It didn't give me 10x, but I'm happy with 2x and a good understanding of what's going on.
Fast-forward to now: Copilot, Cursor, roo code, windsurf and the rest are shockingly good at output, but sometimes the more fluent the AI, the sneakier the bugs. They hand you big chunks of code, and I bet most of us don't have a clear picture of what's going on at ground 0 but just an overall idea. It's just too tempting to blindly "accept all" the changes.
It’s still the old wisdom — good devs are the ones not getting paged at 3am to fix bugs. I'm with the OP. I'm more happy with my 2x than waking up at 3am.
LOL
Much of the time I spend writing code goes not into thinking about the general overview etc but into the code I am about to write itself, and if I actually care about the actual code (eg I am not gonna throw it away anyway by the end of the day) it is about how to make it as concise and understandable to others (incl future me) as possible, what cases to care about, what choices to make so that my code remains maintainable after a few days. It may be about refactoring previous code and all the decisions that go with that. LLM generated code, imo, is too bloated; them putting stuff like asserts is always hit or miss with respect to what they think is important or not. Their comments tend to be completely trivial, instead of stating the intention of stuff, and though I have put some effort into getting them to use a coding style similar to mine, they often fail there too. In such cases, I only use them if the code they write can be isolated enough, eg write a straightforward, auxiliary function here and there that will be called in some places but where it does not matter as much what happens inside. There are just too many decisions at each step that LLMs are not great at resolving ime.
I depend more on LLMs if I care less about maintainability of the code itself and more about getting it done as fast as possible, or if I am just exploring and do not actually care about the code at all. For example, it can be that I am in a rush to get sth done and care about the rest later (granted they can actually do the task, else I am losing time). But when I tried this for my main work, it soon became a mess that would take more time to fix, even if they seemed to speed me up initially. Granted, if my field was different and the languages I was using more popular/represented in training data, I may have found more uses for them, but I still think that after some point it becomes unsustainable to leave decisions to them.
This is a reasonable usage of LLMs up to a certain point, and especially if you're in full control of all the requirements as the dev. If you don't mind missing details related to sales and marketing such as SEO and analytics, I think those are not really "landing pages", but rather just basic web pages.
> I hate web development, and LLMs are pretty good at it because I'd guess that is 90% of their training data related to software development.
Your previous sentence does not support this at all, since web development is a much broader topic than your perception of landing pages. Anything can be a web app, so most things are nowadays.
The worst is the middleground of stacks that are popular enough to be known but not enough for an LLM to know them well. I say worst because in these cases the facade that the LLM understands how to create your product will fall before the software's lifecycle ends (at least, if you're vibe-coding).
For what it's worth, I've mostly been a hobbyist but I'm getting close to graduating with a CS degree. I've avoided using LLMs for classwork because I don't want to rob myself of an education, but I've occasionally used them for personal, weird projects (or tried to at least). I always give up with it because I tend to like trying out niche languages that the LLM will just start to assume work like python (ex: most LLMs struggle with zig in my experience).
there's MCP servers now that should theoretically help with that, but that's its own can of worms.
Most code out there is glue. So there’s a lot of training data on integrating/composing stuff.
If you take this as a whole, you could turn that 30-60 min into 5 min for most dev work.
I have one project that is very complex, and for that one I can't and don't use LLMs.
I've also found it's better if you can get it to generate everything in the one session; if you try other LLMs or sessions it will quickly degrade. That's when you will see duplicate functions and dead-end code.
They are being marketed as virtual assistants that will literally do all the work for you. If they become marketed truthfully, however, people will probably realize that they aren't worth the cost and it's largely more beneficial to search the web and/or crowdsource answers.
When I do use the agent, I inspect its output ruthlessly. The idea that pages of code can be written before being inspected is horrifying to me.
For me LLMs are a game changer for devops (API knowledge is way less important now that it's even been) but I'm still doing copy pasting from ChatGPT, however primitive it may seem.
Fundamentally I don't think it's a good idea to outsource your thinking to a bot unless it's truly better than you at long term decision making. If you're still the decision maker, then you probably want to make the final call as to what the interfaces should look like. I've definitely had good experiences carefully defining object oriented interfaces (eg for interfacing with AWS) and having LLMs fill in the implementation details but I'm not sure that's "vibe coding" per se.
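A rough sketch of what that looks like (class and method names are invented here, not from a real project): I write the narrow interface myself, and the LLM only fills in the implementation, which I then review.

    from abc import ABC, abstractmethod

    class BlobStore(ABC):
        """The part I design myself: a narrow interface the rest of my code depends on."""

        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...

        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class S3BlobStore(BlobStore):
        """The part I delegate: the LLM writes the cloud plumbing, I review it."""

        def __init__(self, bucket: str) -> None:
            self.bucket = bucket

        def put(self, key: str, data: bytes) -> None:
            raise NotImplementedError  # LLM-generated implementation goes here, after review

        def get(self, key: str) -> bytes:
            raise NotImplementedError  # LLM-generated implementation goes here, after review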
But after I got a week into my LLM-led code base, it became clear it was all spaghetti code and progress ground to a halt.
This article is a perfect snapshot of the state of the art. It might improve in the future, but this is where it is in May 2025.
I'm working on a few toy projects and I am using LLM for 90% of it.
The result is 10x faster than if I coded it "by hand", but the architecture is worse and somewhat alien.
I'm still keeping at it, because I'm convinced that LLM driven code is where things are headed, inevitably. These tools are just crazy powerful, but we will have to learn how to use them in a way that does not create a huge mess.
Currently I'm repeatedly prompting it to improve the architecture this way or that way, with mixed results. Maybe better prompt engineering is the answer? Writing down the architecture and guidelines more explicitly?
Imagine how the whole experience will be if the latency was 1/10th of what it is right now and the tools are 10x better.
Just like you're mentioning "maybe better prompt engineering", I feel like we're being conditioned to think "I'm just not using it right" where maybe the tool is just not that good yet.
Yes, very explicit like “if (condition) do (action)” and get more explicit when… oh wait!
It’s an iterative process, not a linear one. And the only huge commits are the scaffolding and the refactorings. It’s more like sculpture than 3d printing, a perpetual refinement of the code instead of adding huge lines of code.
This is the reason I switched to Vim, then Emacs. They allow for fast navigation, and faster editing. And so easy to add your own tool as the code is a repetitive structure. The rare cases I needed to add 10s of lines of code is with a code generator, or copy-pasting from some other file.
This way you're doing the big picture thinking while having the LLM do what's it's good at, generating code within the limits of its context window and ability to reason about larger software design.
I mostly treat the LLM as an overly eager to please junior engineer that types very quickly, who can read the documentation really quickly, but also tends to write too much code and implement features that weren't asked for.
One of the good things is that the code that's generated is so low effort to generate that you can afford to throw away large chunks of it and regenerate it. With LLM assistance, I wrote some code to process a dataset, and when it was too screwy, I just deleted all of it and rewrote it a few times using different approaches until I got something that worked and was performant enough. If I had to type all of that I would've been disappointed at having to start over, and probably more hesitant to do so even if it's the right thing to do.
So I kind of do a top-down design and use the LLM to help me with toil or with unfamiliar things that would require me finding the right documentation, code examples, bug reports, etc, etc... LLMs with web search are great for this kind of toil elimination.
LLMs are useful tools, but they have no notion of hierarchy, of causal relationships, or any other relationship, actually. Any time they seem to have those capabilities in the code they generate, it is merely a coincidence, a very probable coincidence, but still, it is not intentional.
Unfortunately ceo manager types cannot distinguish between bad and good engineers
You can't abdicate your responsibility as a builder to the LLM. You are still responsible for the architecture, for the integrity, for the quality. In the same way you wouldn't abdicate your responsibility if you hired a more junior engineer.
Yup. I've spoken about this on here before. I was a Cursor user for a few months. Whatever efficiency gains I "achieved" were instantly erased in review, as we uncovered all the subtle and not-so-subtle bugs it produced.
Went back to vanilla VSCode and still use copilot but only when I prompt it to do something specific (scaffold a test, write a migration with these columns, etc).
Cursor's tab complete feels like magic at first, but the shine wore off for me.
My favorite thing here watching a co-worker is when Cursor tries to tab complete what he just removed, and sometimes he does it by reflex.
Of course the real question is, is there any reason to be good at coding and writing, if LLMs can just do it instead? Of course it’s hard to sell that we should know arithmetic when calculators are ubiquitous.
Personally, I value being skilled, even if everyone is telling me my skills will be obsolete. I simply think it is inherently good to be a skilled human
For me, the magic of LLMs is that I already get an hour of coding via 30 minutes of prompting and finetuning the generated code. And I know that the ratio will constantly improve as LLMs become better and as I finetune my prompts to define the code style I prefer. I have been coding for pretty much all my life and I never felt more excited about it than I do now.
It would be cool if people shared their prompts and the resulting commits. If someone here is disappointed by the commits LLMs make, I would love to see the prompt, the commit, and which model made it.
if i dont do that it always seems to throw out 3 fresh files ill need to add to make their crazy implementation work.
ive pretty much swapped to using it just for asking for minor syntax stuff i forget. ill take my slower progress in favor of fully grasping everything ive made.
i have one utility that was largely helped by claude in my current project. it drives me nuts, it works but im so terrified of it and its so daunting to change now.
This is the best way to get Gemini to be a really good assistant, unless you want to add System Instructions which precisely describe how it should behave.
Because if you just say it should solve some problem for you, it eagerly will generate a lot of code for you, or add a lot of code to the clean code which you provided.
It returned over 600 lines of code across 3 code blocks, almost all of them commented out for some reason, each with an accompanying essay, and each stuffed with hallucinated and unnecessary helper functions. Apparently Gemini Pro struggles to wrap its weights around recursion more than I do. I just wrote it myself and only needed 26 lines. It's not using tail calls, but hey, my target platform still doesn't support tail call optimization in 2025 anyway.
So someone could come along and say "well don't do that then, duh". But actually a lot of people are doing this, and many of them have far fewer fucks to give than the author and I, and I don't want to inherit their shitty slop in the future.
I don't use LLMs for my main pieces of work exactly due to the issues described by the author of the blogpost.
> I have no concept of Go or Clickhouse best practices.
> One morning, I decide to actually inspect closely what’s all this code that Cursor has been writing. It’s not like I was blindly prompting without looking at the end result, but I was optimizing for speed and I hadn’t actually sat down just to review the code.
> I’m defaulting to coding the first draft of that function on my own.
I feel like he’s learnt the wrong lesson from this. There is a vast gulf between letting an LLM loose without oversight in a language you don’t know and starting from scratch yourself. There’s absolutely nothing wrong with having AI do the first draft. But it actually has to be a first draft, not something you blindly commit without review.
> “Vibe coding”, or whatever “coding with AI without knowing how to code” is called, is as of today a recipe for disaster, if you’re building anything that’s not a quick prototype.
But that’s what vibe coding is. It’s explicitly about quick throwaway prototypes. If you care about the code, you are not vibe coding.
> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. […] It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
— https://x.com/karpathy/status/1886192184808149383
He’s basically saying that vibe coding is a disaster after his experience doing something that is not vibe coding.
You can’t pick up vibe coding and then complain that it’s behaving as described or that it isn’t giving you something that wasn’t promised.
Something I do very much love about LLMs is that I can feed it my server logs and get back an analysis of the latest intrusion attempts, etc. That has taught me so much on its own.
Post-LLM era, you can use a shortcut to get a representation of what something could be. However I see a lot of folks shipping that representation straight to prod. By comparison, the mental models created are weaker and have less integrity. You might be able to feel that something is off, but you lack the faculties to express or explain why.
Use the tool when it makes sense or when someone shows you how to use it more effectively. This is exactly like the calculator "ruining people's ability to do arithmetic" when the vast majority of the population has been innumerate for hundreds of thousands of years up til the IR where suddenly dead white european nobility are cool.
There is nothing fun nor interesting about long division, and the same goes for a lot of software development.
If LLMs don't work for your usecase (yet) then of course you have to stick with the old method, but the "I could have written this script myself, I can feel my brain getting slower" spiel is dreadfully boring.
comparing it to no longer doing the long division portion of a math problem isnt a great 1 to 1 here. long division would be a great metaphor if the user is TRULY only using llms for auto complete of tasks that add 0 complexity to the overall project. if you use it to implement something and dont fully grasp it, you are just creating a weird gap in your overall understanding of the code base.
maybe we are in full agreement and the brunt of your argument is just that if it doesnt fit ur current usecase then dont use it.
i dont think i agree with the conclusion of the article that it is making the non coding population dumber, but i also AGREE that we should not create these gaps in knowledge within our own codebase by just trusting ai, its certainly NOT a calculator and is wrong a lot and regardless if it IS right, that gap is a gap for the coder, and thats an issue.
It may be bad practice, but consider that the median developer does not care at all about the internals of the dependencies that they are using.
They care about the interface and about whether they work or not.
They usually do not care about the implementation.
Code generated by LLM is not that different than pulling in a random npm package or rust crate. We all understand the downsides, but there is a reason that practice is so popular.
[citation needed]
> Code generated by LLM is not that different than pulling in a random npm package or rust crate
It's not random, there's an algorithm for picking "good" packages and it's much simpler than reviewing every single line of LLM code.
Everybody agrees that e.g. `make` and autotools is a pile of garbage. It doesn't matter, it works and people use it.
> It's not random, there's an algorithm for picking "good" packages and it's much simpler than reviewing every single line of LLM code.
But you don't need to review every single line of LLM code just as you don't need to review every single line of dependency code. If it works, it works.
Why does it matter who wrote it?
So I really hope you don't pull in packages randomly. That sounds like a security risk.
Also, good packages tend to have a team of people maintaining them. How is that the same exactly?
It absolutely is, but that is besides the point
> Also, good packages tend have a team of people maintaining it. How is that the same exactly?
They famously do not https://xkcd.com/2347/
But this CEO I just met on LinkedIn?
"we already have the possibility to both improve our productivity and increase our joy. To do that we have to look at what software engineering is. That might be harder than it looks because the opportunity was hidden in plain sight for decades. It starts with rethinking how we make decisions and with eliminating the need for reading code by creating and employing contextual tools."
Context is how AI is a whole new layer of complexity that SWE teams have to maintain.
I'm so confused.
I got more success than I hoped for, but I had to adjust my usage to be effective.
First of all, treat the LLM as a less experienced programmer. Don't trust it blindly but always make code review of its changes. This gives several benefits.
1) It keeps you in touch with the code base, so when need arise you can delve into it without too much trouble
2) You catch errors (sometimes huge ones) right away, and you can have them fixed easily
3) You can catch errors on your specification right away. Sometimes I forget some detail and I realize it only when reviewing, or maybe the LLMs did actually handle it, and I can just tell it to update the documentation
4) You can adjust little by little the guidelines for the LLM, so that it won't repeat the same "mistakes" (wrong technical decisions) again.
In time you get a feeling of what it can and cannot do, where you need to be specific and where you know it will get it right, or where you don't need to go into detail. The time required will be higher than vibe coding, but it decreases over time and is still better than doing it by myself.
There is another important benefit for me in using an LLM. I don't only write code, I do in fact many other things. Calls, writing documentation, discussing requirements etc. Going back to writing code requires a change of mental state and to recall into memory all the required knowledge (like how is the project structured, how to use some apis etc.). If I can do two hours of coding it is ok, but if the change is small, it becomes the part where I spend the majority of time and mental energy.
Or I can ask the LLM to make the changes and review them. Seeing the code already done requires less energy and helps me remember stuff.
This represents a huge security threat too if code is uncritically applied to code bases. We've seen many examples where people try and influence LLM output (eg [1][2]). These attempts have generally varied from crude to laughably bad but they'll get better.
Is it so hard to imagine prompt injection being a serious security threat to a code base?
That aside, I just don't understand being "all in" on LLMs for coding. Is this really productive? How much boilerplate do you actually write? With good knowledge of a language or framework, that tends to be stuff you can spit out really quickly anyway.
[1]: https://www.theguardian.com/technology/2025/may/16/elon-musk...
[2]: https://www.theguardian.com/technology/2025/jan/28/we-tried-...
In the past the quality mattered because maintenance and tech-debt was something that we had to spend time and resources to resolve and it would ultimately slow us down as a result.
But if we have LLMs do we even have "resources" any more? Should we even care if the quality is bad if it is only ever LLMs that touch the code? So long as it works, who cares?
I've heard this positioned in two different ways, from two different directions, but I think they both work as analogies to bring this home:
- do engineers care what machine code a compiler generates, so long as it works? (no, or at least very, very, very rarely does a human look at the machine code output)
- does a CEO care what code their engineers generate, so long as it works? (no)
It's a very, very interesting inflection point.
The knee jerk reaction is "yes we care! of course we care about code quality!" but my intuition is that caring about code quality is based on the assumption that bad code = more human engineer time later (bugs, maintenance, refactoring etc).
If we can use a LLM to effectively get an unlimited number of engineer resources whenever we need them, does code quality matter provided it works? Instead of a team of say 5 engineers and having to pick what to prioritise etc, you can just click a button and get the equivalent of 500 engineers work on your feature for 15 minutes and churn out what you need and it works and everyone is happy, should we care about the quality of the code?
We're not there yet - I think the models we have today kinda work for smaller tasks but are still limited with fairly small context windows even for Gemini (I think we'll need at least a 20x-50x increase in context before any meaningfully complex code can be handled, not just ToDo or CRUD etc), but we'll get there one day (and probably sooner than we think)
At this point in history they aren't good enough to just vibe code complex projects as the author figured out in practice.
They can be very useful for most tasks, even niche ones, but you can't trust them completely.
If I want to learn something new I won’t vibe code it. And if I vibe code I’ll go with tech I have at least some familiarity with so that I can fix the inevitable issues
> Are they throttling the GPUs? Are these tools just impossible to control? What the fuck is going on?
Money and dreams. As everyone knows there’s an obscene amount of money invested in these tools of course. The capitalist class is optimistic their money can finally do the work for them directly instead of having to hire workers.
But more than that, AI is something that’s been alluring to humans forever. I’m not talking about cyberpunk fantasies I’m talking about The Mechanical Turk, Automata in the Middle Ages, Talos[0]. The desire to create an artificial mind is, if not hardwired into us, at least culturally a strong driver for many. We’re at a point where the test of the computer age for determining if we built AI was so utterly destroyed it’s unclear how to go about judging what comes next.
The hype is understandable if you step back and view it through that lens. Maybe we are at an inflection point and just a bit more scaling will bring us the singularity. Maybe we’ve seen a burst of progress that’ll mostly stall from a consumer perspective for another 5-10 years, like all of the major AI moments before.
If you want to use them effectively it’s the same as any tool. Understand what they are good at and where they flounder. Don’t give up your craft, use them to elevate it.
[0]: https://news.stanford.edu/stories/2019/02/ancient-myths-reve...
Not quite a source but it’s a fun read from 2019 about this.
Once the code base structure and opinions are in place, I think LLMs are decent at writing code that are bounded to a single concern and not cross cutting.
LLM generated code bases work initially but so will code written by college kids for an initial set of requirements. Even a staff+ level engineer will struggle in contributing to a code base that is a mess. Things will break randomly. Don’t see how LLMs are any different.
Think about when chat gpt gives you the side-by-side answers and asks you to rate which is "better".
Now consider the consequence of this at scale with different humans with different needs all weighing in on what "better" looks like.
This is probably why LLM generated code tends to have excessive comments. Those comments would probably get it a higher rating but you as a developer may not want that. It also hints at why there's inconsistency in coding styles.
In my opinion, the most important skill for developers today is not in writing the code but in being able to critically evaluate it.
That aside, I wonder if the OP would have had all the same issues with Grok? In several uses I've seen it surprisingly outperform the competition.
I don't think LLMs are to blame here; they are a tool and can be used poorly. However, I do wonder what the long-term effects are on someone who uses them to work on things they are knowledgeable about. Unfortunately this is not explored in the article.
Luckily in this particular case, being able to parallel park unassisted isn’t all that critical in the overall scheme of things, and as soon as I turned that feature off, my parking skills came back pretty quickly.
But the lesson stuck with me, and when LLMs became capable of generating some degree of functioning code, I resolved not to use them for that purpose. I’m not trying to behave like a stodgy old-timer who increasingly resists new tech out of discomfort or unfamiliarity—it’s because I don’t want to lose that skill myself. I use LLMs for plenty of other things like teaching myself interesting new fields of mathematics or as a sounding board for ideas, but for anything where it would (ostensibly) replace some aspect of my critical thinking, I try to avoid them for those purposes.
There, there's your problem. The problem is not LLMs, the problem is people not using their brain. Can't we use LLMs and our brains as well? Both are amazing tools!
Without access to the code, it's challenging to verify the authors' claims and form an independent opinion. In my view, we should be cautious about trusting articles that lack examples or sources. As someone wiser than me once said:
> Articles without sources are merely the opinions of the author.
Even if the idea that an LLM will help you do it is false, perhaps it is still a good idea if it convinces the experienced programmer to go ahead and use SQL for the query, Go for the async, Javascript for the frontend, etc. Right now, few if any companies would let you use the best tool for the job, if that's not one they already use. Perhaps the best use of LLMs is to convince programmers, and their bosses, to use the best tool for each job.
But, after you've gotten past that part, you will probably (like the author of this article) need to throw away the LLM-generated code and write it yourself.
I never write python in my day to day but every from-scratch project I've knocked out with Claude code has been in python because that's what it seems to default to if I don't specify anything
I wonder if this will also mean that new languages (or even algorithms or code patterns) are harder to get adopted, because the mass of existing code (that LLMs learned from) exerts a gravitational force pulling things back down to the status quo.
2. The ratio increases and programmer layoffs begin
3. Bugs appear, AI handles most but not all
4. Bug complexity increases, company hires programmers to fix
5. Programmers can't decipher AI code mess
6. Programmers quit
7. Company unable to maintain their product
> the horror ensues
I have basically given up on "in IDE" AI for now. I simply have a web interface on my 2nd monitor of whatever the "best" LLM (currently Gemini, was Claude) is and copy paste snippets of code back and forth or ask questions.
This way I have to physically copy and paste everything back - or just redo it "my way", which seems to be enough of an overhead that I have to mentally understand it. When it's in an IDE and I just have to tick accept I just end up getting over eager with it, over accepting things and then wishing I hadn't, and spend more time reverting stuff when the penny drops later this is actually a bad way to do things.
It's really more a UX/psychology problem at the moment. I don't think the 'git diff' view of suggested changes that many IDEs use is the right one - I'm psychologically conditioned to review these like pull requests, and it seems my critique of these is just not aligned to critiquing LLM code. 99% of PR reviews I do are finding edge cases and clarifying things. They're not looking at very plausible yet subtly completely wrong code (for the most part).
To give a more concrete example; if someone is doing something incredibly wrong in 'human PRs' they will tend to name the variables wrong because they clearly don't understand the concept, at which point the red flag goes off in my head.
In LLM PRs the variables are named often perfectly - but just don't do what they say they will. This means my 'red flag' doesn't fire as quickly.
He lost me here. Sounds like he tried to change from a boring stack he understood, to Go and Clickhouse because it's cooler.
You're asking for trouble. LLMs are great and getting better, but you can't expect them to handle something like this right now
I gave them a fair shake. However, I do not like them for many reasons. Code quality is one major reason. I have found that after around a month of being forced to use them I felt my skill atrophy at an accelerated rate. It became like a drug where instead of thinking through the solution and coming up with something parsimonious I would just go to the LLM and offload all my thinking. For simple things it worked okay but it’s very easy to get stuck in a loop. I don’t feel any more productive but at my company they’ve used it as justification to increase sprint load significantly.
There has been almost a religious quality associated to LLMs. This seems especially true among the worst quality developers and the non-technical morons at the top. There are significant security concerns that extend beyond simple bad code.
To me we have all the indicators of the maximum of the hype cycle. Go visit LinkedIn for confirmation. Unless the big AI companies begin to build nuclear power it will eventually become too expensive and unprofitable to run these models. They will continue to exist as turbo autocomplete but no further. The transformer model has fundamental limitations and much like neural networks in the 80s it'll become more niche and die everywhere else. Like its cousins WYSIWYG and NoCode, in 30 more years it'll rise again like a phoenix to bring "unemployment" to developers once more. It will be interesting to see who among us was swimming without clothes when the water goes out.
use-it-or-lose-it is the cognitive rule.
I've started a "no Copilot Fridays" rule for myself at $DAYJOB to avoid this specifically happening.
My job has changed from writing code to code reviewing a psychopathic baby.
The argument is that the precision allowed by formal languages for programming, math etc were the key enabler for all of the progress made in information processing.
ie, Vibe-coding with LLMs will make coding into a black-art known only to the shamans who can prompt well.
[1] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
I have been building a gallery for my site with custom zoom/panning UX to be exactly how I want it (which by the way does not already exist as a library). Figuring out the equations for it is simply impossible for an LLM to do.
I wouldn't be surprised if after the LLM hype we go back to site/app builders being the entry level option.
Developing without the knowledge what LLM writes is dangerous. For me having LLM as a tool is like having a few Junior Developers around, that I can advise and work with. If I have a complicated logic that I need to write - I write it. After I wrote it, I can ask LLM to review it, it might be good to find some corner cases in some places. When I need to "move things from one bucket to another", like call API and save to DB - that is a perfect task for LLM, that I can easily review after.
At the same time, LLM is able to write pretty good complicated logic as well for my side projects. I might need to give it a few hints, but the results are amazing.
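For what it's worth, a sketch of what I mean by a "move things from one bucket to another" task (the endpoint and schema below are made up): small, easy to review, and exactly the kind of thing I let the LLM write.

    import sqlite3
    import requests  # third-party: pip install requests

    def sync_users(db_path: str = "users.db") -> int:
        """Call an API and save the rows to a local DB; trivial to review afterwards."""
        rows = requests.get("https://api.example.com/users", timeout=10).json()
        with sqlite3.connect(db_path) as conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
            )
            conn.executemany(
                "INSERT OR REPLACE INTO users (id, name) VALUES (?, ?)",
                [(r["id"], r["name"]) for r in rows],
            )
        return len(rows)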
This reminds me of the day of Dreamweaver and the like. Everybody loved how quickly they could drag and drop UI components on a canvas, and the tool generated HTML code for them. It was great at the beginning, but when something didn't work correctly, you spent hours looking at spaghetti HTML code generated by the tool.
At least, back then, Dreamweaver used deterministic logic to generate the code. Now, you have AI with the capability to hallucinate...
Eric Schmidt gave a TED interview about the subject this week. He predicts the US and China bombing each other's data centers.
in which Windsurf is forging ahead with an agentic LLM product that endeavors to not only replace software engineers but actually take over the entire software engineering PROCESS.
We're at a very interesting point, where investors and corporate interests are crystal clear in their intent to use LLMs to replace as many expensive humans as possible, while the technology available to do so is not there yet. And depending on your own perspective, it's not clear it ever will be, or perhaps it'll eventually be "good enough" for them to downsize us anyway.
I keep thinking about compilers. The old timers had been writing assembly by hand for years. The early C compilers were producing defective garbage output that was so incredibly poor that it was faster to keep coding by hand. Then the compilers got better. And better, and better, and now pretty much nobody inspects the output, let alone questions it.
LLM is a tool. I think we still need to learn how to use it, and in different areas it can do either more or less stuff for you. Personally I don't use it for most everyday coding, but if I have something tedious to write, the LLM is the first place I go for a code draft. That works for me and for the LLM I use.
And I feel more productive. I recommend that everyone gives it a try.
As a tenured SW developer in my company my measurements for success are much more than "how much code can I spit out". There are mentoring, refactoring, code readability/maintainability, and quality that are important to my job. I found that LLM generated code was not hitting the necessary bars for me in these areas (agent or code autocompletion) and so I have stepped back from them. The readability point is extra important to me. Having maintained products with millions of lines of code, I have found that readability is more important than writing a ton of code, and LLMs just don't hit the bar here.
When I am playing with side projects that I don't have the same bar on, sure, I'll have bolt or lovable generate me some code in combination with cursor or windsurf, but these are low stakes and in some ways I just want to get something on paper.
dmazin•7h ago
I still use LLMs heavily. However, I now follow two rules:
* Do not delegate any deep thought to them. For example, when thinking through a difficult design problem, I do it myself.
* Deeply review and modify any code they generate. I go through it line-by-line and edit it thoroughly. I have to do this because I find that much of what they generate is verbose, overly defensive, etc. I don't care if you can fix this through prompting; I take ownership over future maintainability.
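(A made-up illustration of the kind of edit I mean, not code from a real review: the first version is the over-defensive style I often get back, the second is what I keep after the line-by-line pass.)

    # Typical LLM output: defensive checks that add nothing, because callers
    # here always pass a real user object.
    def get_user_email(user):
        if user is None:
            return None
        if not hasattr(user, "email"):
            return None
        email = user.email
        if email is None or email == "":
            return None
        return email

    # What survives my review.
    def get_user_email(user):
        return user.email or None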
"Vibe coding" (not caring about the generated code) gives me a bad feeling. The above approach leaves me with a good feeling. And, to repeat, I am still using them a lot and coding a lot faster because of it.
rco8786•7h ago
This is the issue, right? If you have to do this, are you saving any time?
jgilias•6h ago
dmazin•6h ago
The other thing is that I have the LLM make the modifications I want.
I know how long it takes to get an extremely bad programmer to do what you want, but the LLM is far better than that, so I do come out ahead.
tasuki•6h ago
I find most benefit in writing tests for a yet-inexistent function I need, then giving the LLM the function signature, and having it implement the function. TDD in the age of LLMs is great!
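A tiny example of the shape of it (the function is invented for illustration): I write the signature and the tests first, hand both to the LLM, and keep the tests as the contract.

    # Written by me up front: the signature and the tests are the contract.
    # The LLM's only job is to fill in the body until pytest goes green.

    def slugify(title: str) -> str:
        """Turn a post title into a URL-friendly slug (LLM fills this in)."""
        raise NotImplementedError

    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Rust, Go & Python!") == "rust-go-python"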
chuckadams•6h ago
It takes a good hour or two to draw up the plans, but it's the kind of thing that would take me all day to do, possibly several as my ADHD brain rebels against the tedium. AI can do yeoman's work when it just wings it, and sometimes I have just pointed it at a task and it did it in one shot, but they work best when they have detailed plans. Plus it's really satisfying to be able to point at the plan doc and literally just say "make it so".