What are the productivity gains? Obviously, it varies. The quality of the tool's output depends on numerous criteria, including which programming language is being used and what problem is being solved. The fact that person A gets a 10x productivity increase on their project does not mean that person B will also get a 10x productivity increase on theirs, no matter how well they use the tool.
But again, tool usage itself is variable. Person A themselves might get a 10x boost one time, and 8x another time, and 4x another time, and 2x another time.
Run the same prompt ten times: all ten outputs might be valid. All ten will almost certainly be different -- though even that is not guaranteed.
The OP referred to the notion of there being no manual; we have to figure out how to use the tool ourselves.
A traditional programming tool manual would explain that you can provide input X and expect output Y. Do this, and that will happen. It is not so clear-cut with AI tools, because they are -- by default, in popular configurations -- nondeterministic.
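A toy sketch of where that nondeterminism comes from (illustrative only, not any vendor's actual decoding code): the model scores candidate next tokens, and at any temperature above zero the decoder samples from that distribution, so identical prompts can produce different outputs across runs.

    // Softmax with temperature, then sample. Temperature 0 degenerates to greedy
    // (deterministic) decoding; anything above 0 lets repeated runs diverge.
    function softmax(logits: number[], temperature: number): number[] {
      const scaled = logits.map((l) => l / temperature);
      const max = Math.max(...scaled);                      // subtract max for numerical stability
      const exps = scaled.map((l) => Math.exp(l - max));
      const sum = exps.reduce((a, b) => a + b, 0);
      return exps.map((e) => e / sum);
    }

    function sampleToken(tokens: string[], logits: number[], temperature: number): string {
      if (temperature === 0) {
        return tokens[logits.indexOf(Math.max(...logits))]; // greedy: always the same pick
      }
      const probs = softmax(logits, temperature);
      let r = Math.random();
      for (let i = 0; i < tokens.length; i++) {
        r -= probs[i];
        if (r <= 0) return tokens[i];
      }
      return tokens[tokens.length - 1];
    }

    const tokens = ["X", "Y", "Z"];
    const logits = [2.0, 1.5, 0.5];
    console.log(sampleToken(tokens, logits, 0));   // always "X"
    console.log(sampleToken(tokens, logits, 0.8)); // varies run to run

Same inputs, potentially different outputs, by design -- which is exactly the property a traditional manual cannot write down as "do this, and that will happen."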
Of course, we maybe never get there :)
A star trek replicator for software.
Obviously we are nowhere near that, and we may never arrive. But this is the big bet.
That's a very interesting way to put it.
Actually, even the post itself reads like cognitive dissonance with a dash of the usual "if it's not working for you then you are using it wrong" defence.
To use an analogy, it would be like spending all your time before a battle making sure your knife is sharp when your opponent has a tank.
I also like to think that Einstein would be smart enough to explain things from a common point of understanding if you did drop him 2000 years in the past (assuming he also possesses the scientific knowledge humanity accrued in that 2000 year gap). So, your analogy doesn't really make a lot of sense here. I also doubt he'd be able to prove his theories with the technology of the past but that's a different matter.
If we did have AGI models, they would be able to solve our hardest problems (assuming a generous definition of AGI) even if we didn't immediately understand exactly how they got there. We already have a lot of complex systems that most people don't fully understand but can certainly verify the quality of. The whole "too smart for people to understand that they're too smart" is just a tired trope.
you are, for sure.
The mirage is alluring.
I think LLMs are very well marketed but I don't think they're very good at writing code and I don't think they've gotten better at it!
I think they are useful as an augmentation, but largely valueless for directly outputting code. Who knows if that will change. It's still made me more productive as a dev despite not oneshotting entire files. It's just not industry-changing, at least yet.
And since it's way, way less wrong than Sonnet 4, it might also improve my whole team's velocity.
I won't lie, AI coding has been a net negative for the 'lazy devs' on my team who don't delve into their own generated code (by 'lazy devs' here I mean the subset of devs who do the work but often don't bother to truly understand the logic behind what they used/did; they are very good coworkers, add value, and are not really lazy, but I don't see a better term for them).
Slop-oriented programming
> coming up with the right projects and producing a vertically differentiated product to what already exists is.
Agreed, but not all engineers are involved with this aspect of the business, and the concern applies to them.
Using tools before their manual exists is the oldest human trick, not the newest.
Visual Studio Code is a different thing... and claims to be open source, but by intent and approach really is closer to source available.
Wild how much you can get for free now. Amazing free IDEs. Every major LLM provider offers excellent free plans if you are on a zero budget.
$10/mo GitHub Copilot is an absurd deal that has to be a loss in terms of pure compute cost.
AI is here to stay, and the only thing that can stop it at this stage is a Butlerian jihad.
The framing allows the rest of us to get ourselves off the hook. "We didn't have a choice! It was INEVITABLE!"
And so, we have chosen.
> "We didn't have a choice! It was INEVITABLE!"
There is no "we". You can call it the tragedy of the commons, or Moloch, or whatever you want, but I don't see how you can convince every single developer and financial sponsor on the planet to stop using and developing this (clearly very useful) tech. And as long as you can't, it's socially inevitable.
If you want a practice run, see if you can stop everyone in the world from smoking tobacco, which is so much more clearly detrimental. If you manage that, you might have a small chance at stopping implementation of AI.
> see if you can stop everyone in the world from smoking tobacco
This is a logical fallacy, I think; nobody needs to stop tobacco full stop, but we have been extremely successful at making it less and less incentivized/used over time, which is the goal [1].
[1] https://www.lung.org/research/trends-in-lung-disease/tobacco...
I repeatedly rewrite prompts, restate the same constraints, and write detailed acceptance criteria, yet still end up with broken or non-functional code. It's very frustrating, to say the least. Yesterday alone I spent about $200 on generations that now require significant manual rewrites just to make them work.
At that point, the gains are questionable. My biggest success is having the model take the first design pass in my app and then taking it from there, but the hundreds if not thousands of lines of code it generates are so messy that it's insanely painful to refactor the mess afterwards.
It’s very easy to spend $100s per dev per day.
You're absolutely right! Limited token allowance for $200/month is actually unlimited tokens when paying for extra from a cash reserve which is also unlimited, of course.
When paying for Claude Max even at $200/month there are limits - you have a limit to the number of tokens you can use per five hour period, and if you run out of that you may have to wait an hour for the reset.
You COULD instead use an API key and avoid that limit and reset, but that would end up costing you significantly more since the $200/month plan represents such a big discount on API costs.
As of a few weeks ago there's a third option: pay for the $200/month plan but allow it to charge you extra for tokens when you reach those limits. That gives you the discount but means your work isn't interrupted.
Extra Usage for Paid Claude Plans: https://support.claude.com/en/articles/12429409-extra-usage-...
What I don't fully understand is how you can characterize that as "not limited" with a straight face; then again, I can't see your face so maybe you weren't straight faced as you wrote it in the first place.
Hopefully you could see my well meaning smile with the "absolutely right" opening, but apparently that's no longer common so I can understand your confusion as https://absolutelyright.lol/ indicates Opus 4.5 has had it RLHF'd away.
That's why I said "not limited" as opposed to "unlimited" - a subtle difference in word choice, I'll give you that.
Yeah, the pain of cleaning up even small messes is real too. I had some failing tests and type errors that I thought I'd fix later using only AI prompts. As the codebase grew, the failing TypeScript issues grew too. At some point it was 5,000+ type errors and a countless number of failing unit tests, then more and more. I tried to fix it with AI, since fixing it the old way was no longer feasible. I ended up discarding the whole project when it was around 500k lines of code.
I use Claude Code and Cursor. What I do:
- use statically typed languages: TypeScript, Go, Rust, Python w/ types
- Set up linters. For TS I have a bunch of custom lint rules (authored by AI) for common feedback that I've given. (https://github.com/shepherdjerred/monorepo/tree/main/package...) See the sketch after this list.
- For Cursor, lots of feedback on my desired style. https://github.com/shepherdjerred/scout-for-lol/tree/main/.c...
- Heavy usage of plan mode. Tell AI something like "make at least 20 searches to online documentation", support every claim with a reference, etc. Tell AI "make a task for every little thing you'll implement"
- Have the AI write tests, particularly the more expensive ones like integration and end-to-end, so you have an easy way to verify functionality.
- Set up Claude Code GHA to automatically review PRs. Give the review feedback to the agent that implemented it, either by copy-pasting or by telling the agent "fetch review comments and fix them".
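To make the custom-lint-rule idea concrete, here's a minimal sketch of the kind of rule I mean (the rule name and the logger convention are invented for illustration, not taken from the linked repo):

    // Encodes one piece of recurring review feedback ("use our logger, not
    // console.log") so the agent gets corrected by the linter instead of by me.
    import type { Rule } from "eslint";

    const noRawConsole: Rule.RuleModule = {
      meta: {
        type: "suggestion",
        docs: { description: "Use the project logger instead of console.log" },
        schema: [],
      },
      create(context) {
        return {
          // Visit every call expression and flag console.log(...) calls.
          CallExpression(node) {
            const callee = node.callee;
            if (
              callee.type === "MemberExpression" &&
              callee.object.type === "Identifier" &&
              callee.object.name === "console" &&
              callee.property.type === "Identifier" &&
              callee.property.name === "log"
            ) {
              context.report({ node, message: "Use logger.info() instead of console.log()." });
            }
          },
        };
      },
    };

    export default noRawConsole;

The rule itself is trivial; the value is that every piece of feedback I'd otherwise repeat in review becomes something the agent gets told automatically, every run.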
Some examples of what I've made:
- Many features for https://scout-for-lol.com/, a League of Legends bot for Discord
- A program to generate TypeScript types for Helm charts (https://github.com/shepherdjerred/homelab/tree/main/src/helm...)
- A program to summarize all of the dependency updates for my Homelab (https://github.com/shepherdjerred/homelab/tree/main/src/deps...)
- A program to manage multiple instances of CLI agents like Claude Code (https://github.com/shepherdjerred/monorepo/tree/main/package...)
- A Discord AI bot in the style of my friends (https://github.com/shepherdjerred/monorepo/tree/main/package...)
Lol sometimes I have to spend two turns convincing Claude to use its goddamn search and look up the damn doc instead of trying to shoot from the hip for the fifth time. ChatGPT at least has forced search mode.
I'm lucky enough that my workplace also uses Cursor + Claude Code, so my experience directly transfers. I most often use Cursor for day-to-day work. Claude has been great as a research assistant when analyzing how data flows between multiple repos. As an example I'm writing a design doc for a new feature and Claude has been helping me with the investigation. My workflow is more or less to say: "here are my repos, here is the DB schema, here are previous design docs, now how does system X work, what would happen if I did Y, etc."
AI is still fallible so you _do_ of course have to do lots of checking and validation which can be boring, but much easier if you add a prompt like "support every claim you make with a concrete reference".
When it comes to implementation, I generally give it smaller, more concrete pieces to work with. e.g. for a personal project I would say something like "here is everything I want to do, make a plan, do part 1, then do part 2, example: https://github.com/shepherdjerred/scout-for-lol/tree/227e784...)
At work, I tend to give it PR-sized units of work. e.g. something very well-scoped and defined. My workflow is: prompt, make a PR on GitHub, add comments on GitHub, tell Cursor "I left comments on your PR, address them", repeat. Essentially I treat AI as a coworker submitting code to me.
I don't really know that I can quantify the productivity gain... I can say that I am _much_ more motivated in the last few months because AI removes so much friction. I think it's backed up by my commit history since June/July, which is when I started using Cursor heavily: https://github.com/shepherdjerred
It’s _always_ easier to add more code than it is to fix broken code.
Claude is extremely verbose when it generates code, but this is something that should take a practicing software engineer an hour or so to write with a lot less code than Claude.
I like all the LLM coding tools, they're constantly getting better, but I remain convinced that all the people claiming massive productivity improvements are just not good software engineers.
I think the tools are finally at the point where they are generally a help, rather than a net waste of time for good engineers, but it's still marginal atm.
It's so frustrating, it regularly makes me want to just quit the profession. Which is why I still just write most code by hand.
* LLMs are just matrix multiplication.
* SQL is just algebra, which has matrix multiplication as part of it.
* Therefore SQL is AI.
* Now who is ready to invest a billion dollars in our AI SaaS company?
Or it’s just that astronaut-with-a-gun meme: “Wait, AI is just SQL?” … “Always has been.”
Instead of stuffing the context with DDL I suggest:
1. Reorganize your data warehouse. It needs to be easy to find the correct data. Make sure you use clear ELT layers, meaningful schemas, and per-model documentation. This is a ton of work, but if done right the payoff is massive.
2. I built a tool for myself to pull our warehouse into a graph for fuzzy search+dependency chain analysis. In the spring I made an MCP server for it and Claude uses that tool incredibly well for almost all queries. I haven't actually used the GUI or scripts since I built the MCP.
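The dependency-chain part is simpler than it sounds; roughly this shape (a toy sketch with invented table names, not my actual implementation):

    // Tables as nodes, "reads from" edges, and an upstream dependency-chain
    // query of the kind a warehouse tool (or MCP server) could expose.
    const upstream: Record<string, string[]> = {
      "mart.daily_revenue": ["staging.orders", "staging.refunds"],
      "staging.orders": ["raw.shop_orders"],
      "staging.refunds": ["raw.shop_refunds"],
    };

    // Walk the graph and collect every table a given model depends on.
    function dependencyChain(table: string, seen = new Set<string>()): string[] {
      for (const dep of upstream[table] ?? []) {
        if (!seen.has(dep)) {
          seen.add(dep);
          dependencyChain(dep, seen);
        }
      }
      return [...seen];
    }

    console.log(dependencyChain("mart.daily_revenue"));
    // ["staging.orders", "raw.shop_orders", "staging.refunds", "raw.shop_refunds"]

Give the model a lookup like that instead of raw DDL and it spends its context on the query, not on guessing which table is the right one.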
Claude and Devstral are the best models I've used for SQL. I cannot get Gemini to write decent modern SQL -- not even the Gemini data science/engineer agents in Google Cloud. I occasionally try the paid models through the API and still haven't been impressed.
Same. SOTA models crush every SQL question I give them.
I understand we are all in different camps for a multitude of reasons:
- The jouissance of rote coding and abstraction
- The tree of knowledge specifically in programming, and which branches and nodes we each currently sit at in our understanding
- Technical paradigms that humans used to argue about have now shifted to obvious answers for agentic harnesses (think something like TDD; I barely used it as a style because I've mostly worked in startups building apps and found the cost of my labour not worth it, but agentic harness loops absolutely excel at it)
- The geography and size of the markets we work in
- The complexity of the subject matter / domain expertise
- The cost-prohibitive nature of token-based programming (not everyone can afford it, and the big fish seemingly have quite the advantage going forward)
- Agentic coding has proven it can build UIs very easily, and depending on experience, it can build very many things easily. It excels where it has feedback loops such as linting or simple JavaScript errors, which are observability problems in my opinion. Once it can do full-stack observability (APM, system, network), its ability to reason about and correct problems on the fly for any complex system looks within easy reach from my purview.
- At the human-nature level, some individuals prefer to think in 0s and 1s, some in words, some in between, and so on. What type of communication do agentic setups prefer?
With some of that above intuition (which is easily up for debate), I've decided to lean 100% into agentic coding. I think it will be absolutely everywhere, obviously with humans in the loop, but I don't think humans will need to review the pull requests. I am personally treating it as an existential threat to my career after having seen enough of what it's capable of (with some imagination and a bit of a gambling spirit, as us mere mortals surely can't predict the future).
With my gambit, I'm not choosing to exit the tech scene; instead I'm optimistically investing my mental prowess into figuring out where "humans in the loop" will be positioned. Currently I'm looking into CI-level tooling: the known pieces being code quality and the various software testing paradigms. Emerging evals, in my mind, will keep evolving and will do a lot more than test our ideas of model intelligence and chatbot responses.
---
A more practical rant: if you are building a recommendation engine for A and B, the engine could have X modules that each return a score, which when combined make up the final decision between A and B. Forgive me, but let's just use dating as an example. A product manager would say we need a new module to calculate relevance between A and B based on their food preferences. An agentic harness can easily code that module and create the tests for it. The product manager could ask an LLM to make a list of 1000 reasons why two people might be suitable for dating. The agent could easily go away and code and test all those modules and probably maintain technical consistency, but drift from the company's philosophical business model. I am looking into building "semantic linting" for codebases: how can the agent maintain the code so it aligns with the company's business model? And if for whatever reason those 1000 modules need to be refactored, how can the agent keep that alignment? Essentially I'm trying to make a feedback loop between the company's needs and the code itself, to stop the agent and the business from drifting in either direction, and to allow automatic feedback loops for the agent to fix the drift. In short, I think there will be new tools invented that us humans will be mastering, as per Karpathy's point.
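To sketch what I mean by a scoring module plus a "semantic lint" over it (all names here are hypothetical):

    // Each module scores a pair of profiles; the "semantic lint" pass checks
    // that every module declares which business principle it implements.
    interface Profile {
      foodPreferences: string[];
    }

    interface CompatibilityModule {
      name: string;
      // Which part of the company's matching philosophy this module encodes.
      businessPrinciple: string;
      score(a: Profile, b: Profile): number; // 0..1
    }

    const foodOverlap: CompatibilityModule = {
      name: "food-preference-overlap",
      businessPrinciple: "shared everyday habits matter more than rare interests",
      score(a, b) {
        const shared = a.foodPreferences.filter((f) => b.foodPreferences.includes(f));
        const total = new Set([...a.foodPreferences, ...b.foodPreferences]).size;
        return total === 0 ? 0 : shared.length / total;
      },
    };

    // The "semantic lint": reject generated modules that don't tie back to a principle.
    function semanticLint(modules: CompatibilityModule[]): string[] {
      return modules
        .filter((m) => m.businessPrinciple.trim().length === 0)
        .map((m) => `${m.name}: missing businessPrinciple`);
    }

    console.log(semanticLint([foodOverlap])); // []

The check itself is trivial, but that's the point of the feedback loop: the agent can generate the 1000th module, and something mechanical still verifies that each one ties back to a stated business principle.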
Is there someone already mastering “agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering” ?
And do they have a blog?
Why, the other rats in front of you in the race, of course!
As the pithy, if cheesy, expression goes: read not the times; read the eternities. People who spend so much time frantically chasing superficial ephemera like this are people without any sense of life's purpose. They're cogs in some hellish consumerist machine.
Does anyone have a better way to do this other than spinning up a cloud VM to run goose or claude or whatever poorly isolated agent tool?
I have since added a sandbox around my ~/dev/ folder using sandbox-exec on macOS. It is a pain to configure properly, but at least I know where the sandbox is controlled.
[1] https://code.claude.com/docs/en/sandboxing#configure-sandbox...
[2] https://github.com/Piebald-AI/claude-code-system-prompts/blo...
"These things are more destructive than your average toddler, so you need to have a fence in place kind of like that one in Jurassic Park, except you need to make sure it absolutely positively cannot be shut off, but all this effort is worthwhile, because, kind of like civets, some of the artifacts they shit out while they are running amok appear to have some value."
1. Create a new Git worktree
2. Create a Docker container w/ bind mount
3. Provide an interface for easily switching between your active worktrees/containers.
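Roughly, steps 1 and 2 look like this (a sketch; paths, branch, and image are placeholders, and it leaves out the proxy wiring described next):

    // Create an isolated git worktree, then drop the agent into a throwaway
    // container that can only see that worktree via a bind mount.
    import { execSync } from "node:child_process";
    import { mkdtempSync } from "node:fs";
    import { tmpdir } from "node:os";
    import { join } from "node:path";

    function launchAgentSandbox(repoPath: string, branch: string, image = "node:22"): void {
      // 1. New worktree so the agent can't touch the main checkout.
      const worktree = mkdtempSync(join(tmpdir(), "agent-"));
      execSync(`git -C ${repoPath} worktree add ${worktree} -b ${branch}`, { stdio: "inherit" });

      // 2. Disposable container that only sees the worktree.
      execSync(
        `docker run --rm -it -v ${worktree}:/workspace -w /workspace ${image} bash`,
        { stdio: "inherit" }
      );
    }

    launchAgentSandbox("/path/to/repo", "agent/feature-x");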
For credentials, I have an HTTP/HTTPS mitm [1] that runs on the host with creds, so there are zero secrets in the container.
The end goal is to be able to manage, say, 5-10 Claude instances at a time. I want something like Claude Code for Web, but self-hosted.
[0]: https://github.com/shepherdjerred/monorepo/tree/main/package...
This agentic arms race by C-suite know-nothings feels less like leverage and more like denial. We took a stochastic text generator, noticed it lies confidently and wipes entire databases and hard drives, and responded by wrapping it in managers, sub-agents, memories, tools, permissions, workflows, and orchestration layers so we don’t have to look directly at the fact that it still doesn’t understand anything.
Now we’re expected to maintain a mental model not just of our system, but of a swarm of half-reliable interns talking to each other in a language that isn’t executable, reproducible, or stable.
Work now feels duller than dishwater, enough to have forced me to career pivot for 2026.
For me, I'm planning to ride out this industry for another couple years building cash until I can't stand it, then pivot to driving a city bus.
You seem to be counting on Waymo not obsoleting that occupation. ;)
What we really need is a lot more housing. So construction work is a safer pivot. But construction work is difficult and dangerous and not something everyone can do. Also, society will collapse (apparently) if we ever make housing affordable, so maybe the powers-that-be won't allow an increase in construction work, even if there are plenty of construction workers.
Who knows... interesting times.
I cannot count the times that I've had essentially this conversation:
"If x happens, then y, and z, it will crash here."
"What are the odds of that happening?"
"If you can even ask that question, the probability that it will occur at a customer site somewhere sometime approaches one."
It's completely crazy. I've had variants on the conversation from hardware designers, too. One time, I was asked to torture a UART, since we had shipped a broken one. (I normally build stuff, but I am your go-to whitebox tester, because I home in on things that look suspicious rather than shying away from them.) When I was asked the inevitable "Could that really happen in a customer system?" after creating a synthetic scenario where the UART and DMA together failed, my response was:
"I don't know. You have two choices. Either fix it where the test passes, or prove that no customer could ever inadvertently recreate the test conditions."
He fixed it, but not without a lot of grumbling.
They then turned the thing on, it ran for several seconds, encountered the error, and crashed.
Oh, that's right, the CPU can do millions of things a second.
Something I keep in the back of my mind when thinking about the odds in programming. You need to do extra leg work to make sure that you're measuring things in a way that's practical.
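The back-of-the-envelope version, with made-up but plausible numbers:

    // A one-in-ten-million race, exercised a million times a day per machine
    // across a thousand customer machines, is effectively guaranteed to fire.
    const p = 1e-7;                         // chance per opportunity
    const opportunitiesPerDay = 1e6 * 1000; // per-machine rate times fleet size
    const days = 30;
    const pAtLeastOnce = 1 - Math.pow(1 - p, opportunitiesPerDay * days);
    console.log(pAtLeastOnce);              // ~1, indistinguishable from certainty

"Unlikely per attempt" stops meaning much once the attempt count has nine or ten zeros on it.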
The phrasing that usually makes it click for them is: "Yes, this is an unlikely bug, but if this bug were to happen, how long would it take you to figure out that this is the problem and fix it?"
In most cases these are extremely subtle issues that the juniors immediately realize would be nightmares to debug and could easily eat up days of hair-pulling work, while someone non-technical above them, waiting for the solution, rapidly loses patience.
The best senior devs I've worked with over my career have all shared an uncanny knack for seeing a problem months before it impacts production. While they are frequently ignored, in those cases more often than not they get an apology a few months down the line when exactly what they predicted would happen, happens.
And this is the reason I spent most of the latter part of my career in chip companies.
Because tapeouts are _expensive_, both in dollar cost, and in lost opportunity cost if the chip comes back broken.
So any successful chip company knows to pay attention to potential problems. And the messenger never gets shot.
I think that for some people it is harder to reason about determinism because it is similar to correctness, and correctness can, in many scenarios, be something you trade off -- for example, in relation to scaling and speed you will often trade off correctness.
If you do not think clearly about the difference between determinism and other, similar properties like (real-time) correctness, which you might be willing to trade off, you might think that trading off determinism is just more of the same.
Note: I'm against trading off determinism, but I am willing to think there might be a reason to trade it off, just I worry that people are not actually thinking through what it is they're trading when they do it.
When you order home delivery, you don’t care about by whom and how. Only the end result matters. And we’ve ensured that reliability is good enough that failures are accidents, not a common occurrence.
Code generation is not reliable enough to have the same quasi deterministic label.
I think this insight was really the thing that made me understand the limitations of LLMs a lot better. Some people say when it produces things that are incorrect or fabricated it is "hallucinating", but the truth is that everything it produces is a hallucination, and the fact it's sometimes correct is incidental.
Therefore it cannot necessarily discern between two statements that are practically identical in the eyes of humans. This doesn't make the technology useless, but it's clearly not some AGI nonsense.
The parent comment was making the case that humans are as non-deterministic as the LLM is, and I was explaining why that is not true.
If I use a library, I know it will do the same thing from the same inputs, every time. If I don't understand something about its behavior, then I can look to the documentation. Some are better about this, some are crap. But a good library will continue doing what I want years or decades later.
An LLM can't decide between one sentence and the next what to do.
It's wild that you think programmers are some kind of caste that makes any decisions.
The average programmer is already being pushed into doing a lot of things they're unhappy about in their day jobs.
Crappy designs, stupid products, tracking, privacy violation, security issues, slowness on customer machines, terrible tooling, crappy dependencies, horrible culture, pointless nitpicks in code reviews.
Half of HN is gonna defend one thing above or the other because $$$.
What's one more thing?
It’s like jacking off, once in a while won’t hurt and may even be beneficial. But if you do it constantly you’re gonna have a problem.
The ubiquitous adoption of LLMs for generating code is mostly a sign of bad abstraction or the absence of abstraction, not the excess of abstraction.
And choosing/making the right abstraction is kind of the name of the game, right? So it's not abstraction per se that's a problem.
If we wanted safety, stability, performance, and polish, the impact of LLMs would be more limited. They have a tendency to pile up code on top of code.
I think the new tech is just accelerating an already existing problem. Most tech products are already rotting; take a look at Windows or iOS.
I wonder what it will take for a significant turning point in this mentality.
The ROI of doing this is weak because of how long it takes an expensive human. But if you could clean it up more cheaply, the ROI strengthens considerably- and there’s a lot of it.
I'm now incentivized to use fewer abstractions.
Why do we code with React? It's because synchronizing state between a UI and a data model is difficult and it's easy to make mistakes, so it's worth paying the React complexity/page-weight tax in order for a "better developer experience" that allows us to build working, reliable software with less typing of code into a text editor.
If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.
How often have you dropped in a big complex library like Moment.js just because you needed to convert a time from one format to another, and it would take too long to hand-write that one feature (and add tests for it to make sure it's robust)? With an LLM that's a single prompt and a couple of minutes of wait.
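For the sake of illustration, the hand-written version of that one feature is often just a screenful (a sketch; the format and the edge cases are exactly the parts you'd still want tests for):

    // Convert an ISO-8601 timestamp into "DD/MM/YYYY HH:mm" without pulling in
    // a date library. A real version would want tests around time zones and
    // invalid input -- which is the work the LLM can also draft for you.
    function formatTimestamp(iso: string): string {
      const d = new Date(iso);
      if (Number.isNaN(d.getTime())) {
        throw new Error(`Invalid ISO timestamp: ${iso}`);
      }
      const pad = (n: number) => String(n).padStart(2, "0");
      return `${pad(d.getDate())}/${pad(d.getMonth() + 1)}/${d.getFullYear()} ` +
             `${pad(d.getHours())}:${pad(d.getMinutes())}`;
    }

    console.log(formatTimestamp("2025-12-25T09:30:00Z")); // e.g. "25/12/2025 09:30" (local time)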
Using LLMs to build black-box abstraction layers is a choice. We can choose to have them build FEWER abstraction layers for us instead.
I'd rather have LLMs build on top of proven, battle-tested production libraries than keep writing their own from scratch. You're going to fill up context with all of its re-invented wheels when it already knows how to use common options.
Not to mention that testing things like this is hard. And why waste time (and context and complexity) for humans and LLMs trying to do something hard like state syncing when you can focus on something else?
This can often be a very solid bet, but it can also occasionally backfire if the library you chose falls out of date and is no longer maintained.
For this reason I lean towards fewer dependencies, and have a high bar for when a dependency is worth adding to a project.
I prefer a dozen well vetted dependencies to hundreds of smaller ones that each solve a problem that I could have solved effectively without them.
In JS, the DOM and time zones are some of the most messed up foundations you’re building on top of ime. (The DOM is amazing for documents but not designed for web apps.)
I think we really need to be careful about adding dependencies that we’re maintaining ourselves, especially when you factor in employee churn and existing options. Unless it’s the differentiator for the business you’re building, my advice to engineers is to strongly consider other options and have a case for why they don’t fit.
AI can play into the engineering blind spot of building it ourselves because it’s fun. But engineering as a discipline requires restraint.
If you're building something simple like a contact form React may not be the right choice. If you're building something like Trello that calculation is different.
Likewise, I wouldn't want Moment for https://tools.simonwillison.net/california-clock-change but I might want it for something that needs its more advanced features.
...is a loaded question, with a complex and nuanced answer. Especially when you continue:
> it's worth paying the React complexity/page-weight tax
All right; then why do we code in React when a smaller alternative, such as Preact, exists, which solves the same problem, but for a much lower page-weight tax?
Why do we code in React when a mechanism to synchronize data with tiny UI fragments through signals exists, as exemplified by Solid?
Why do people use React to code things where data doesn't even change, or changes so little that to sync it with the UI does not present any challenge whatsoever, such as blogs or landing pages?
I don't think the question 'why do we code with React?' has a simple and satisfactory answer anymore. I am sure marketing and educational practices play a large role in it.
My cynical answer is that most web developers who learned their craft in the last decade learned frontend React-first, and a lot of them genuinely don't have experience working without it.
Which means hiring for a React team is easier. Which means learning React makes you more employable.
That's not cynical, that's the reality.
I do a lot of interviews and mentor juniors, and I can 100% confirm that.
And funny enough, React-only devs were a bigger problem 5 years ago.
Today the problem is developers who can *only* use Next.js. A lot can't use Vite+React or plain React, or whatever.
And about 50% of Ruby developers I interviewed from 2022-2024 were unable to code a FizzBuzz in Ruby without launching a whole Rails project.
> Today the problem is developers who can only use Next.js. A lot can't use Vite+React or plain React, or whatever.
Do you want to hire such developers?
My job during the hiring process is to filter them.
But that's me. Other companies might be interested.
I often choose to work on non-cookie-cutter products, so it's better to have developers with more curiosity to ask questions, like yourself asked above.
If you can do that, then you can probably understand how everything else works.
Let me help you with a context where LLMs actually shine and are a blessing. I think it is also the same for Karpathy, who comes from research.
In any research, replicating a paper is a wildly difficult task. It takes 6-24 months of dedicated work across an entire team to replicate a good research paper.
Now, there is a reason we want to do it: sometimes the solution actually lies in the research. Most research code is experimental and garbage anyway.
For each of us working in research, an LLM is a blessing because of the rapid prototyping it provides.
Then there are research engineers whose role is to apply research to production code. We as research engineers really don't care about the popular library. As long as something does the job, we will just roll with it.
The reason is simple because there is nothing out there that solved the problem.
As we move further from research, the tools we build will find all sort of issues and we improve on them.
Idk about what people think about webdev, but this has been my perspective in SWE in general.
Most of the webdevs here who are coping, telling themselves their React skill still matters, are quite delusional, because they have never traversed the stack down to the foundation. It doesn't matter how you render the document as long as you render it.
Every abstraction originates from research and some small proof of concept. You might reinvent an abstraction, but when the cost of reinventing it is essentially zero, you are stifling your own learning because you are choosing to exploit rather than to explore.
There is a balance and good engineers know it. Perhaps all of the people who ganged up on you never approached their work this way.
I'm incentivised to use abstractions that are harder to learn, but execute faster or more safely once compiled. E.g. more Rust, Lean.
> If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.
LLMs benefit from abstractions the same way as we do.
LLMs currently copy our approaches to solving problems and copy all the problems those approaches bring.
Letting LLMs skip all the abstractions is about as likely to succeed as genetic programming is to be efficient.
For example, writing more vanilla JS instead of React, you're just reinventing the necessary abstractions more verbosely and with a higher risk of duplicate code or mismatching abstractions.
In a recent interview with Bret Weinstein, a former professor of evolutionary biology, he proposed that one property of evolution that makes the story of one species evolving into another more likely is that it's not just random permutations of single genes; it's also permutations to counter variables encoded as telomeres and possibly microsatellites.
https://podcasts.happyscribe.com/the-joe-rogan-experience/24...
Bret compares this to flipping random bits in a program to make it work better vs. tweaking variables randomly in a high-level language. Mutating parameters at a high-level for something that already works is more likely to result in something else that works than mutating parameters at a low level.
So I believe LLMs benefit from high abstractions, like us.
We just need good ones; and good ones for us might not be the same as good ones for LLMs.
Right, but I'm also getting pages that load faster and don't require a build step, making them more convenient to hack on. I'm enjoying that trade-off a lot.
And yeah, you can't beat the iteration speed.
I feel like there are dozens of us.
I've had plenty of junior devs justify massive code bases of random scripts and 100+ line functions with the same logic. There's a reason senior devs almost always push back on this when it's encountered.
Everything hinges on that "if". But you're baking a tautology into your reasoning: "if LLMs can do everything we need them to, we can use LLMs for everything we need".
The reason we stop junior devs from going down this path is because experience teaches us that things will break and when they do, it will incur a world of pain.
So "LLM as abstraction" might be a possible future, but it assumes LLMs are significantly more capable than a junior dev at managing a growing mess of complex code.
This is clearly not the case with simplistic LLM usage today. "Ah! But you need agents and memory and context management, etc!" But all of these are abstractions. This is what I believe the parent comment is really pointing out.
If AI could do what we originally hoped it could (follow simple instructions to solve complex tasks), we'd be great, and I would agree with your argument. But we are very clearly not in that world. Especially since Karpathy can't even keep up with the sophisticated machinery necessary to properly orchestrate these tools. All of the people decrying "you're not doing it right!" are emphatically proving that LLMs cannot perform these tasks at the level we need them to.
How much of this is still true, and how much exaggerated, in our environment today where the cost of making things is near 0?
I think “Evolution” would say that the cost of producing is near 0, so the possibility of creating what we want is high. The cost of trying again is low, so mistakes and pain aren’t super costly. For really high-stakes situations (which most situations are not), bring the expert human into the loop until the expert better than that human is an AI.
The current dependency hell that is modern development shows just how wide the openings are for supply-chain attacks, and seemingly every other week we get a new RCE.
I'd rather have 100 loosely coupled scripts peer-reviewed by half a dozen LLM agents.
Or maybe you can use AI to vendor dependencies, review existing dependencies and updates. Never tried that, maybe that is better than the current approach, which is just trusting the upstream most of the time until something breaks.
Will it be potentially more fragile and less featured? Sure, but it also will not bring in a thousand packages of dependencies.
Ignoring for a second that they actually already are, it doesn't matter, because the cost of rewriting the mess drops by an order of magnitude with each frontier model release. You won't need good code because you'll be throwing everything away all the time.
I think, though, that for small systems and small parts of systems LLMs do move the repair-replace line in the replace direction, especially if the tests are good.
I'm saying that a key component of the dependency calculation has changed.
It used to be that one of the most influential facts affecting your decision to add a new library was the cost of writing the subset of code that you needed yourself. If writing that code and the accompanying tests represented more than an hour of work, a library was usually a better investment.
If the code and tests take a few minutes those calculations can look very different.
Making these decisions effectively and responsibly is one of the key characteristics of a senior engineer, which is why it's so interesting that all of those years of intuition are being disrupted.
The code we are producing remains the same. The difference is that a senior developer may have written that function + tests in several hours, at a cost of thousands of dollars. Now that same senior developer can produce exactly the same code at a time cost of less than $100.
Personally I love abstraction when it means "generalize these routines to a simple and elegant version". Even if it's harder to understand than a single instance it is worth the investment and gives far better understanding of the code and what it's doing.
But there's also abstraction in the sense of making things less understandable or more complex, and I think LLMs operate this way. It takes a long time to understand the code -- not because any single line is harder to understand, but because the lines need to be understood in context.
I think part of this is people misunderstanding elegance. It doesn't mean aesthetically pleasing, but doing something in a simple and efficient way. Yes, write it rough the first round, but we should also strive for elegance. It seems more like we are just trying to get the first rough draft out and move on to the next thing.
I did that with an HTML generation project to switch from Python strings to Jinja templates just the other day: https://github.com/simonw/claude-code-transcripts/pull/2
The new normal isn't like that. Rewriting an existing, cleanly implemented Vanilla JavaScript project (with tests) in React is the kind of rote task you can throw at a coding agent like Claude Code and come back the next morning and expect most (and occasionally all) of the work to be done.
That's definitely not something that goes over well on anything other than an incredibly trivial project.
> The new normal isn't like that. Rewriting an existing, cleanly implemented Vanilla JavaScript project (with tests) in React is the kind of rote task you can throw at a coding agent like Claude Code and come back the next morning and expect most (and occasionally all) of the work to be done.
... meant that person would do it in a clandestine fashion rather than it being an agreed-upon task beforehand? Is this how you operate?
> And everyone else's work has to be completely put on hold
On a big enough team, getting everyone to a stopping point where they can wait for you to do your big-bang refactor of the entire code base - even if it is only a day later - is still really disruptive.
The last time I went through something like this, we did it really carefully, migrating a page at a time from a multi page application to a SPA. Even that required ensuring that whichever page transitioned didn't have other people working on it, let alone the whole code base.
Again, I simply don't buy that you're going to be able to AI your way through such a radical transition on anything other than a trivial application with a small or tiny team.
This doesn't mean this at all
In an AI heavy project it's not unusual to have many speculative refactors kicked off and then you come back to see what it is like.
Wondering whether you can do a Rust SIMD-optimized version of that NumPy code you have? Try it! You don't even need to waste review time on it because you have heavy test coverage and can see if it is worth looking at.
He is right. The game has changed. We can now refactor using an agent and have it done by morning. The cost of architectural mistakes is minimal and if it gets out of hand, you refactor and take a nap anyway.
What’s interesting is now it’s about intent. The prompts and specs you write, the documents you keep that outline your intended solution, and you let the agent go. You do research. Agent does code. I’ve seen this at scale.
Do you care to make any concrete predictions on when most developers will embrace this new normal as part of their day to day routine? One year? Five?
And how much of this is just another iteration of the wheel of reincarnation[0]? Maybe we're looking at a future where we return to the monoculture, library-dense supply chain that we use today, but the libraries are made by swarms of AI agents and the programmer/user is responsible for guiding other AI agents to create the business logic?
I do think there's been a bit of a shift in the last two months, with GPT 5.1 and 5.2 Codex and Opus 4.5.
We have models that can reliably follow complex instructions over multiple hour projects now - that's completely new. Those of us at the cutting edge are still coming to terms with the consequences of this (as illustrated by this Karpathy tweet).
I don't trust my predictions myself, but I think the next few months are going to see some big changes in terms of what mainstream developers understand these tools as being capable of.
At some companies, most developers already are using it in their day to day. IME, the more senior the developer is, the more likely they are to be heavily using LLMs to write all/most of their code these days. Talking to friends and former coworkers at startups and Big Tech (and my own coworkers, and of course my own experience), this isn't a "someday" thing.
People who work at more conservative companies, the kind that don't already have enterprise Cursor/Anthropic/OpenAI agreements, and are maybe still cautiously evaluating Copilot... maybe not so much.
> Opus 4.5 is categorically a much better model from benchmarks and personal experience than Opus 4.1 & Sonnet models. The reason you're seeing a lot of people wax about O4.5 is that it was a real step change in reliable performance. It crossed for me a critical threshold in being able to solve problems by approaching things in systematic ways.
That feels more like chasing than a clear line of improvement. It reads very differently from something like "my habits have changed quite a bit since reading The Art of Computer Programming". They're categorically different.
Why do you use the word "chasing" to describe this? I don't understand. Maybe you should try it and compare it to earlier models to see what people mean.
> Why do you use the word "chasing" to describe this?
I think you'll get the answer to this if you read my comment and your response to understand why you didn't address mine. Btw, I have tried it. It's annoying that people think the problem is not trying. It was getting old when GPT 3.5 came out. Let's update the argument...
Some of these improvements have been minor, some of them have been big enough to feel like step changes. Sonnet 3.7 + Claude Code (they came out at the same time) was a big step change; Opus 4.5 similarly feels like a big step change.
(If you don't trust vibes, METR's task completion benchmark shows huge improvements, too.)
If you're sincerely trying these models out with the intention of seeing if you can make them work for you, and doing all the things you should do in those cases, then even if you're getting negative results somehow, you need to keep trying, because there will come a point where the negative turns positive for you.
If you're someone who's been using them productively for a while now, you need to keep changing how you use them, because what used to work is no longer optimal.
So does the comment I critiqued in the sibling comment to yours. I don't know why it's so hard to believe we just haven't tried. I have a Claude subscription. I'm an ML researcher myself. Trust me, I do try.
But that last part also makes me keenly aware of their limitations and failures. Frankly I don't trust experts who aren't critiquing their field. Leave the selling points to the marketing team. The engineer and researcher's job is to be critical. To find problems. I mean how the hell do you solve problems if you're unable to identify them lol. Let the marketing team lead development direction instead? Sounds like a bad way to solve problems
> benchmark shows huge improvements
Benchmarks are often difficult to interpret. It is really problematic that they got incorporated into marketing. If you don't understand what a benchmark measures and, more importantly, what it doesn't measure, then I promise you that you're misunderstanding what those numbers mean. For METR, I think they say a lot right here (emphasis my own) that reinforces my point:
> Current frontier AIs are vastly better than humans at text prediction and knowledge tasks. They outperform experts on most *exam-style problems* for a fraction of the cost. ... And yet the best AI agents are not currently able to carry out substantive projects by themselves or directly substitute for human labor. *They are unable to reliably handle even relatively low-skill*, computer-based work like remote executive assistance. It is clear that capabilities are increasing very rapidly in some sense, but it is unclear how this corresponds to real-world impact.
So make sure you're really careful to understand what is being measured, what improvement actually means, and where the bounds are. It's great that they include longer tasks, but also notice the biases and distribution in the human workers. This is important for evaluating properly.
Also remember what exactly I quoted. For a long time we've all known that being good at leetcode doesn't make one a good engineer. But it's an easy thing to test and the test correlates with other skills that are likely to be learned to be good at those tests (despite being able to metric hack). We're talking about massive compression machines. That pattern match. Pattern matching tends to get much more difficult as task time increases but this is not a necessary condition.
Treat every benchmark adversarially. If you can't figure out how to metric-hack it, then you don't know what the benchmark is measuring (and just because you know what could hack it doesn't mean you understand it, or that that's what is being measured).
It seems to be "people keep saying the models are good"?
That's true. They are.
And the reason people keep saying it is because the frontier of what they do keeps getting pushed back.
Actual, working, useful code completion in the GPT 4 days? Amazing! It could automatically write entire functions for me!
The ability to write whole classes and utility programs in the Claude 3.5 days? Amazing! This is like having a junior programmer!
And now, with Opus 4.5 or Codex Max or Gemini 3 Pro we can write substantial programs one-shot from a single prompt and they work. Amazing!
But now we are beginning to see that programming in 6 months' time might look very different from now, because these AI systems code very differently from us. That's exactly the point.
So what is it you are arguing against?
I think you said you didn't like that people are saying the same thing, but in this post it seems more complicated?
People have been doing this parlor trick with various "substantial" programs [1] since GPT 3. And no, the models aren't better today, unless you're talking about being better at the same kinds of programs.
[1] If I have to see one more half-baked demo of a running game or a flight sim...
Can you expand on that? It doesn't match my experience at all.
I think "substantial" is doing a lot of heavy lifting in the sentence I quoted. For example, I’m not going to argue that aspects of the process haven’t improved, or that Claude 4.5 isn't better than GPT 4 at coding, but I still can’t trust any of the things to work on any modestly complex codebase without close supervision, and that is what I understood the broad argument to be about. It's completely irrelevant to me if they slay the benchmarks or make killer one-shot N-body demos, and it's marginally relevant that they have better context windows or now hallucinate 10% less often (in that they're more useful as tools, which I don't dispute at all), but if you want to claim that they're suddenly super-capable robot engineers that I can throw at any "substantial" problem, you have to bring evidence, because that's a claim that defies my day-to-day experience. They're just constantly so full of shit, and that hasn't changed, at all.
FWIW, this line of argument usually turns into a motte-and-bailey fallacy, where someone makes an outrageous claim (e.g. "models have recently gained the ability to operate independently as a senior engineer!"), and when challenged on the hyperbole, retreats to a more reasonable position ("Claude 4.5 is clearly better than GPT 3!"), but with the speculative caveat that "we don't know where things will be in N years". I'm not interested in that kind of speculation.
I think they represent a meaningful step change in what models can build. For me they are the moment we went from building relatively trivial things unassisted to building quite large and complex system that take multiple hours, often still triggered by a single prompt.
Some personal examples from the past few weeks.
- A spec-compliant HTML5 parsing library by Codex 5.2: https://simonwillison.net/2025/Dec/15/porting-justhtml/
- A CLI-based transcript export and publishing tool by Opus 4.5: https://simonwillison.net/2025/Dec/25/claude-code-transcript...
- A full JavaScript interpreter in dependency-free Python (!): https://github.com/simonw/micro-javascript - and here's that transcript published using the above-mentioned tool: https://static.simonwillison.net/static/2025/claude-code-mic...
- A WebAssembly runtime in Python which I haven't yet published
The above projects all took multiple prompts, but were still mostly built by prompting Claude Code for web on my iPhone in between Christmas family things.
I have a single-prompt one:
- A Datasette plugin that integrates Cloudflare's CAPTCHA system: https://github.com/simonw/datasette-turnstile - transcript: https://gistpreview.github.io/?2d9190335938762f170b0c0eb6060...
I'm not confident any of these projects would have worked with the coding agents and models we had four months ago. There is no chance they would've worked with the models available in January 2025.
Setting aside the fact that your examples are mostly “replicate this existing thing in language X” [2], again, I’m not saying that the models haven’t gotten better at crapping out code, or that they’re not useful tools. I use them every day. They're great tools, when someone actually intelligent is using them. I also freely concede that they're better tools than a year ago.
The devil is (as always) in the details: how many prompts did it take? what exactly did you have to prompt for? how closely did you look at the code? how closely did you test the end result? Remember that I can, with some amount of prompting, generate perfectly acceptable code for a complex, real-world app, using only GPT 4. But even the newest models generate absolute bullshit on a fairly regular basis. So telling me that you did something complex with an unspecified amount of additional prompting is fine, but not particularly responsive to the original claim.
[1] Copilot, with a liberal sprinkling of ChatGPT in the web UI. Please don’t engage in “you’re holding it wrong” or "you didn't use the right model" with me - I use enough frontier models on a regular basis to have a good sense of their common failings and happy paths. Also, I am trying to do something other than experiment with models, so if I have to switch environments every day, I’m not doing it. If I have to pay for multiple $200 memberships, I’m not doing it. If they require an exact setup to make them “work”, I am unlikely to do it. Finally, if your entire argument here hinges on a point release of a specific model in the last six weeks…yeah. Not gonna take that seriously, because it's the same exact argument, every six weeks. </caveats>
[2] Nothing really wrong with this -- most programming is an iterative exercise of replicating pre-existing things with minor tweaks -- but we're pretty far into the bailey now, I think. The original argument was that you can one-shot a complex application. Now we're in "I can replicate a large pre-existing thing with repeated hand-holding". Fine, and completely within my own envelope for model performance, but not really the original claim.
For me it is something I can describe in a single casual prompt.
For example I wrote a fully working version of https://tools.nicklothian.com/llm_comparator.html in a single prompt. I refined it and added features with more prompts, but it worked from the start.
Think: SaaS application that solves some domain-specific problem in corporate accounting, versus "in-browser spreadsheet", or "first-person shooter video game with AI, multi-player support, editable levels, networking and high-resolution 3D graphics" vs "flappy bird clone".
When you're working on a product of this size, you're probably solving problems like the ones cited by simonw multiple times a week, if not daily.
I think you'd get close on something like Lovable but that's not really one shot either.
In that case my Vibe-Prolog project would count: https://github.com/nlothian/Vibe-Prolog/
- It's 45K of python code
- It isn't a duplicate of another program (indeed, the reason it isn't finished is because it is stuck between ISO Prolog and SWI Prolog and I need to think about how to resolve this, but I don't know enough Prolog!)
- Not a *single* line of code is hand written.
Ironically this doesn't really prove that the current frontier models are better, because large amounts of code were written with non-frontier models (you can sort of get an idea of what models were used from the labels on https://github.com/nlothian/Vibe-Prolog/pulls?q=is%3Apr+is%3...). But - importantly - this project is what convinced me that the frontier models are much better than the previous generation. There were numerous times I tried the same thing in a non-frontier model which couldn't do it, and then I'd try it in Claude, Codex or Gemini and it would succeed.
Copilot style autocomplete or chatting with a model directly is an entirely different experience from letting the model spend half an hour writing code, running that code and iterating on the result uninterrupted.
Here's an example where I sent a prompt at 2:38pm and it churned away for 7 minutes (executing 17 bash commands), then I gave it another prompt and it churned for half an hour and shipped 7 commits with 160 passing tests: https://static.simonwillison.net/static/2025/claude-code-mic...
I completed most of that project on my phone.
edit: I wrote a different response here, then I realized we might be talking about different things.
Are you asking if I let the agents use tools without my prior approval? I do that for a certain subset of tools (e.g. run tests, do requests, run queries, certain shell commands, even use the browser if possible), but I do not let the agents do branch merges, deploys, etc. I find that the best models are just barely good enough to produce a bad first draft of a multi-file feature (e.g. adding an entirely new controller+view to a web app), and I would never ever consider YOLOing their output to production unless I didn't care at all. I try to get to tests passing clean before even looking at the code.
Also, I am happy to let Copilot burn tokens in this manner and will regularly do it for refactors or initial drafts of new features, I'm honestly not sure if the juice is worth the squeeze -- I still typically have to spend substantial time reworking whatever they create, and the revision time required scales with the amount of time they spend spinning. If I had to pay per token, I'd be much more circumspect about this approach.
Letting it burn tokens on running tests and refactors (but not letting it merge branches or deploy) is the thing that feels like a huge leap forward to me. We are talking about the same set of capabilities.
I mainly feed it clear tasks like "keep going until all these tests pass", but I do keep an eye on it and occasionally tell it to keep going.
"AI, I don't like paying for my SAP license, make me a clone with just the features I need".
- Models keep getting better[0]
- Models since GPT 3 are able to replace junior developers
It's true that both of these can be true at the same time, but they are still in contention. We're not seeing agents ready to replace mid-level engineers, and quite frankly I've yet to see a model actually ready to replace juniors. Possibly low-end interns, but the major utility of interns is to trial-run employment. Frankly, it still seems like interns and juniors are advancing faster than these models in the type of skills that matter for companies (not to mention that institutional knowledge is quite valuable). But there are interns who started when GPT 3.5 came out that are seniors now. The problem is we've been promised that these employees would be replaced[1] any day now, yet that's not happening.
People forget: it is harder to advance when you're already skilled. It's not hard to go from non-programmer to junior level. It's hard to go from junior to senior, and even harder to advance to staff. The difficulty only increases. This is true for most skills, and this is where there's a lot of naivety. We can be advancing faster while the actual capabilities begin to crawl forward rather than leap.
[0] Implication is not just at coding test style questions but also in more general coding development.
[1] Which has another problem in the pipeline. If you don't have junior devs and are unable to replace both mid and seniors by the time that a junior would advance to a senior then you have built a bubble. There's a lot of big bets being made that this will happen yet the evidence is not pointing that way.
The answer is: Exactly what we are saying. This is also why people keep suggesting that you need to try them out with a more open mind, or with different techniques: Because we know with absolute first-person iron-clad certainty what is possible, and if you don't think it's possible, you're missing something.
Even IF llms don't get any better there is a mountain of lemons left to squeeze in their current state.
Most of which are irrelevant to my project. It's easier to maintain a few hundred lines of self written code than to carry the react-kitchen-sink around for all eternity.
They're not being disrupted. This is exactly why some people don't trust LLMs to re-invent wheels. It doesn't matter if it can one-shot some code and tests - what matters is that some problems require experience to know what exactly is needed to solve that problem. Libraries enable this experience and knowledge to centralize.
When considering whether inventing something in-house is a good idea vs using a library, "up front dev cost" factors relatively little to me.
If it's true, the market will soon reward it. Being able to competently write good code cheaper will be rewarded. People don't employ programmers because they care about them; they are employed to produce output. If someone can use LLMs to produce more output for less $$, they will quickly make the people that don't understand the technology less competitive in the workplace.
That's a trap: for those without experience in both business and engineering, it's not obvious how to estimate or later calculate this $$. The trap is in the cost of changes and the fix budget when things break. And things will break. Often. Also, the requirements will change often; that's normal (our world is not static). So the cost has a tendency to change (guess in which direction). The thoughtless copy-paste and rewrite-everything approach is nice, but the cost climbs steeply with time. Those who don't know this will walk into the trap and lose the business.
It will not just hurt, it will kill a business.
the people are telling you “you are not doing it right!” - that’s it, there is nothing to interpret in addition to this basic sentence
Hyperbole. It's also very often a "world of pain" with a lot of senior code.
Indeed, part of becoming a senior developer is learning why you should avoid left-pad but accept date-fns.
We’re still in the early stages of operationalising LLMs. This is like mobile apps in 2010 or SPA web dev in 2014. People are throwing a lot of stuff at the wall and there’s going be a ton of churn and chaos before we figure out how to use it and it settles down a bit. I used to joke that I didn’t like taking vacations because the entire front end stack will have been chucked out and replaced with something new by the time I get back, but it’s pretty stable now.
Also I find it odd you’d characterise the current LLM progress as somehow being below where we hoped it would be. A few years back, people would have said you were absolutely nuts if you’d have predicted how good these models would become. Very few people (apart from those trying to sell you something) were exclaiming we’d be imminently entering a world where you enter an idea and out comes a complex solution without any further guidance or refining. When the AI can do that, we can just tell it to improve itself in a loop and AGI is just some GPU cycles away. Most people still expect - and hope - that’s a little way off yet.
That doesn’t mean the relative cost of abstracting and inlining hasn’t changed dramatically or that these tools aren’t incredibly useful when you figure out how to hold them.
Or you could just do what most people always do and wait for the trailblazers to either get burnt or figure out what works, and then jump on the bandwagon when it stabilises - but accept that when it does stabilise, you’ll be a few years behind those who have been picking shrapnel out of their hands for the last few years.
I'm instructing my agents to do old-school boring form POSTs, SSR templates, and vanilla JS / CSS.
I previously shifted away from this to abstractions because typing all the boilerplate was tedious.
But now that I'm not typing, the tedious but simple approach is great for the agent writing the code, and great for the people doing code reviews.
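For what it's worth, this is roughly the "boring" shape I mean -- an Express-style sketch where the route, field names, and handler are all invented for illustration, not anything from a real project:

```typescript
// Minimal sketch of the boring approach: plain form POST + server-rendered HTML,
// no client-side framework. Express and the /subscribe route are assumptions.
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false })); // parse classic form POSTs

app.get("/subscribe", (_req, res) => {
  // SSR: return plain HTML with a plain form; no JS needed on the client.
  res.send(`
    <form method="post" action="/subscribe">
      <input name="email" type="email" required>
      <button type="submit">Subscribe</button>
    </form>`);
});

app.post("/subscribe", (req, res) => {
  const email = String(req.body.email ?? "");
  // (real code would escape `email` before echoing it back)
  res.send(`<p>Thanks, ${email} is on the list.</p>`);
});

app.listen(3000);
```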
... or decides to redesign the API you were using.
The role of abstractions *IS* to reduce (i.e. "compress") the need for a test suite, because they give you a simple model to understand and reason about.
Makes upgrading dependencies so much less painful!
However, the quality of this code is fucking terrible, no one is reading what they push deeply, and these models don't have enough 'sense' to make really robust and effective test suites. Even if they did, a comprehensive test suite is not the solution to poorly designed code, it's a band aid -- and an expensive one at scale.
Most likely we will see some disasters happening in the next few years due to this mode of software development, and only then will people understand to use these agents as tools and not replacements.
...Or maybe we'll get AGI and it will fix/maintain the trash going out there today.
Are you able to efficiently verify that the test suite is testing what it should be testing? (I would not count "manually reviewing all the test code" as efficient if you have a similar amount of test code to actual code.)
Sometimes a change to the code under test means that a (perhaps unavoidably brittle) test needs to be changed. In this case, the LLM should change the test to match the behaviour of the code under test. Other times, a change to the code under test represents a bug that a failing test should catch -- in this case, the LLM should fix the code under test, and leave the test unchanged. How do you have confidence that the LLM chooses the right path in each case?
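To make the brittle-vs-behavioural distinction concrete, here is a tiny illustrative sketch -- formatPrice and the assertions are invented for the example, not taken from anyone's codebase:

```typescript
// Illustrative only: two kinds of failing test after a change to formatPrice().
import assert from "node:assert";

function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// Brittle test: pinned to an exact string that may legitimately change
// (e.g. switching to a locale-aware formatter). If the spec changed,
// the right move is to update this test.
assert.strictEqual(formatPrice(199), "$1.99");

// Behavioural test: encodes an invariant (no negative amount rendered for 0).
// If this starts failing, the code under test is almost certainly the bug,
// and the test should stay put.
assert.ok(!formatPrice(0).includes("-"));
```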
We seem to have created a large bureaucracy for software development, where telling a computer how to execute an app involves keeping a lot of cogs in a big complicated machine happy. But why use the automation to just roll the cogs? Why not just simplify/streamline? Does an LLM need to worry about using the latest and greatest abstractions? I have to assume this has been tried already...
i.e. competition. If there were only one AI company, they would probably not release anything close to their most capable version to the public, à la Google pre-ChatGPT.
If (say) the code generation technology of Anthropic is so good, why be in the business of selling access to AI systems? Why not instead conquer every other software industry overnight?
Have Claude churn out the best office application suite ever. Have Claude make the best operating system ever. Have Claude make the best photo editing software, music production software, 3D rendering software, DNA analysis software, banking software, etc.
Why be merely the best AI software company when you can be the best at all software everywhere for all time?
Getting sick and tired of people talking about their productivity gains when not much is actually happening out there in terms of real value creation.
I’m not a SWE either, FYI, so I have no vested interest.
If I just wanted the equivalent of Lodash's _.intersection() method, I get it. The requirements are pretty straightforward and I can verify the LLM code & tests myself. One less dependency is great. But with time, I know I don't know enough to verify the LLM's output.
Similar to encryption libraries, it's a common recommendation to leave time-based code to developers who live and breathe those black boxes. I trust the community to verify the correctness of those concepts, something I can't do myself with LLM output.
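The _.intersection() case really is the kind of thing that's small enough to verify by eye; a hand-rolled sketch (not Lodash's actual implementation, just an illustrative stand-in) might look like:

```typescript
// Illustrative stand-in for Lodash's _.intersection(): values from the first
// array that also appear in every other array, with duplicates dropped.
function intersection<T>(first: T[], ...rest: T[][]): T[] {
  const sets = rest.map((arr) => new Set(arr));
  const seen = new Set<T>();
  return first.filter((value) => {
    if (seen.has(value)) return false; // keep the result unique, like Lodash
    seen.add(value);
    return sets.every((set) => set.has(value));
  });
}

// intersection([2, 1, 2], [2, 3]) -> [2]
```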
Can you manually wade through thousands of functions and fix the issue?
for simple stuff, sure, React was ALWAYS inefficient. Even Javascript/client-side logic is still overkill a lot of the times except for that pesky "user expectations" thing.
for any codebase that's long-lived and complex, combinatorics tells us it'll be near-impossible to have good + fast test coverage on all that.
part of the reason people don't roll their own is because being able to assume that the library won't have major bugs leads to an incredible reduction in necessary test surface, and generally people have found it a safe-enough assumption.
throwing that out and trying to just cover the necessary stuff instead - because you're also throwing out your ability to quickly recognize risky changes since you aren't familiar with all the code - has a high chance of painting you into messy corners.
"just hire a thousand low-skilled people and force them to write tests" had more problems as a hiring plan then just "people are expensive."
> I'm now incentivized to use less abstractions.
I'd argue it's a different category of abstraction. But this is a highly non-trivial problem. How do you even possibly manually verify that the test suite is complete and tests all possible corner cases (of which there are so many, because synchronizing state is a hard problem)?
At least React solves this problem in a non-stochastic, deterministic manner. What can be a good reason to replace something like React, which works deterministically, with LLM-assisted code that is generated stochastically, where there's no easy way to manually verify whether the implementation or the test suite is correct and complete?
I've come to realize fighting this is useless; people will do this, it's going to create large fuck-ups, and there will be heaps of money to be made on the cleanup jobs.
I think overall for engineering this is going to be a net positive.
I also think that any company creating a reverse-centaur workforce of blind and dumb half baked devs ritualistically shaking chicken bones at their pay-as-you-go automaton has effectively outsourced their core business to OpenAI/MS while paying for the privilege. And, on the twenty year timeline as service and capital costs create crunches, those mega corps will literally be sitting on whole copies of internal business schematics and critical code of their subservient customers…
They say things, they do other things. Trusting Microsoft not to eat your sector through abusive partner programs and licensing entanglements backed with government capture? Surely the LLMs can explain how that has gone historically and how smart that is going forward.
I’m starting to think actually knowing how to write code might end up being a superpower with so many people completely lost to the stochastic parrots. I’m already getting inbounds from friends and acquaintances that need “help” with their generated shit, gonna start asking for money for it.
And if I have to closely supervise every single change, I don't believe my development process will be any better. If not worse.
Let alone new engineers who join the team and all of a sudden have to deal with a unique solution layer which doesn't exist anywhere else.
My code is constantly shrinking, becoming better quality, more performant, more best-practice on a daily basis. And I'm learning like crazy. I'm constantly looking up changes it recommends to see why and what the reasons are behind them.
It can be a big damned dummy too, though. Just today it was proposing a massive server-side script to workaround an issue with my app I was deploying, when the actual solution was to just make a simple one-line change to the app. ("You're absolutely right!")
I'm worried that there is a tendency in LLM-generated code to avoid even local abstractions, such as putting common code into separate local functions, or even using records/structures. You end up with code that is best maintained with an LLM, which is good for the LLM provider and their future revenue. But we humans, as reviewers and ultimate long-term maintainers, benefit from those minor abstractions.
Unless you’re writing literal memory instructions then you’re operating on between 4 and 10 levels of abstraction already as an engineer
It has never been tractable for humans to program a series of switches without an incredible number of abstractions
The vast majority of programmers never understood how computers work to begin with
Jensen is someone I trust to understand the business side and some of those lower technical layers, so I'm not too concerned.
It sounds crazy to say this, but I've been thinking about this myself. Not for the immediate future (eg 2026), but somewhere later.
No profession collectively made such a decision. Programming was always split into many, many subcultures, each with their own (mutually incompatible across the whole profession) ideas of what makes a good program.
So, I guess rather some programmers inside some part of a Silicon Valley echo chamber in which you also live made such a decision.
[1] Don't Call Yourself A Programmer, And Other Career Advice:
https://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-pr...
> the solution to uncertainty is to pile abstraction on top of abstraction until no one can explain what’s actually happening anymore.
I've usually found complaints about abstraction in programming odd because, frankly, all we do is abstraction. It often seems to be used to mean /I/ don't understand, therefore we should do something more complicated and with many more lines of code that's less flexible. But this usage? I'm fully on board. Too much abstraction is when it's incomprehensible. To whom is the next question (my usual complaint is that the bar should not be set at junior level), and I think you're right to point out that the "who" here is everyone.
We're killing a whole side of creativity and elegance while only slightly aiding another side. There's utility to this, but also a cost.
I think what frustrates me most about CS is that as a community we tend to go all in on something. We went all in on VR, then crypto, and now AI. We should be trying new things, but it feels more like we take these sides as if they're objective, and anyone not hopping on the hype train is an idiot or a luddite. The way the whole industry jumps to these things feels more like FOMO than intelligent strategy. Like making a sparkling water company an "AI first" company... it's like we love solutions looking for problems.
We need to have a scrum with 3 agents each from the top 4 AI vendors, with each agent adhering to instructions given by a different programmer.
It's kind of like Robot Wars, except the damage is less physical and more costly.
It sounds ridiculous, and it's easy to say, but spending time walking and thinking will improve your decisions and priorities in a way that no productivity hack will.
I only actually did slow down for a while because I had to for the well-being of my family. Sure feels important to not always be on top of everyone else’s business.
A couple weeks ago, under a freshly made account "llmslave", you said it's already replacing devs and the field is cooked, and anyone who doesn't see that lacks the skills to adopt AI [1]
I pointed out that given your name and low quality comments, you were likely an LLM run account. As SOON as I made that comment, you abandoned the account and have now made a duplicate llmslave2 account, with a different opinion
Are you doing an experiment or something?
Edit: Corrected since/for. :-)
('since' takes time_point - 'for' takes time_duration)
I'm increasingly seeing that this is the real threat of AI. I've personally known people who have started to strain relationships with friends and family because they sincerely believe they are evolving into something new. While not as dramatic, the normalization of the use of "AI as therapist" is equally concerning. I know tons of people that rely on LLMs to guide them in difficult family decisions, career decisions, etc on an almost daily basis. If I'm honest, I myself have had times where I've leaned into this too much. I've also had times where AI starts telling me how clever I am, but thankfully a lifetime of low self worth signals warning flags in my brain when I hear this stuff! For most people, there is real temptation to buy into the praise.
Seeing Karpathy claim he can't keep up was shocking. It also immediately raises the question to anyone with a clear head: "Wait, if even Karpathy cannot use these tools effectively... just what is so useful about AI?" Isn't the entire point of AI that I can merely describe my problem and have a solution in a fraction of the time?
The fact that so many true believers in AI seem to forever be just a few more tricks away from really unleashing this power, starts to make it feel very much like magical thinking on a huge scale.
The real danger of AI is that we're entering into an era of mass hallucination across multiple fields and areas of human activity.
0. https://www.wsj.com/tech/ai/ai-chatbot-psychosis-link-1abf9d...
Cryptoboys did it first, please recognize their innovation ty
AI psychosis is getting lost in the sauce and becoming too intimate with your ChatGPT instance, or believing it's something it's not.
Skepticism, or a fear of being outside the core loop is the exact opposite, and that's what Karpathy is talking about here. If anything, this kind of post is an indicator that you're absolutely NOT in AI psychosis.
If you really think Karpathy is psychotic you should explain why, but I don't think anything in the Tweet suggests that. My read of his tweet is that there is a lot of churn and new concepts in the software engineering industry, and that doesn't seem like a very psychotic thing to say.
Looks like AI companies spend enough on marketing budgets to create the illusion that AI makes development better.
Let's wait one more year, and perhaps everyone who didn't fall victim to these "slimming pills" for developers' brains will be glad about the choice they made.
With Claude, all it took to fix all of that drudge was a single sentence. In the last two weeks, I implemented several big features, fixed long standing issues and did migrations to new major versions of library dependencies that I wouldn’t have tackled at all on my own—I do this for fun after all, and updating Zod isn’t fun. Claude just does it for me, while I focus on high-level feature descriptions.
I’m still validating and tweaking my workflow, but if I can keep up that pace and transfer it to other projects, I just got several times more effective.
As a creator of an open-source platform myself, I find trusting a semi-random word generator in front of users unreliable.
Moreover, I believe it creates a bad habit. I've seen developers forget how to read documentation and instead trust AI, and of course, as a result AI makes mistakes that are hard to debug or provokes security issues that are easy to overlook.
I know this sounds like a luddite talking, but I'm still not convinced that AI in its current state can be reliable in any way. However, because of engineers like you, AI is learning to make better choices, and that might change in the future.
Yeah, this sounds interesting, and matches my experience a bit. I was trying out AI over Christmas because people I know are talking about it. I asked it to implement something (refactoring for better performance) that I think should be simple; it did that, it looked amazing, and all tests passed too! When I looked into the implementation, the AI got the shape right, but the internals were more complicated than needed and were wrong. Nonetheless it got me started on fixing things, and it got fixed quite quickly.
The performance of the model in this case is not great, perhaps it is also because I am new to this and don't know how to prompt it properly. But at least it is interesting.
Calling tools like Claude Code a "semi-random word generator" is certainly a choice, and I suspect it won't age well.
I think AI coding should not be permitted in the first two years of training in CS. One should have to learn the basics of reading quality documentation, creating quality code and documentation, learning how the different pieces of software work together, and learning how to work with others.
LLMs are great for people with some idea of what they're doing, and need "someone else" to pair program with. I agree it will cripple the architectural thinking of new learners if they never learn how to think about code on their own.
In that year, AI will get better. Will you?
Answering your question: no matter how much I personally degrade or improve, I will not be able to produce anything even remotely comparable to the negative impact that AI brings to humanity these days.
1) AI is basically useless, a mere semi-random word generator. 2) And it is so powerful that it is going to hurt (or even destroy) humanity.
This is called "having your cake, and letting it eat you too".
You're inserting "destroy humanity" when OP is suggesting the problem is offloading all thinking to an unreliable tool (I don't entirely agree with their position but it's defensible and not as you stated).
There are basically no conditions under which one party can or will reach a legitimate common ground with the other. Sucks, but that's HN nowadays.
My input is: water, nutrition, a bit of electricity, and beliefs and the output is a fairly complex logical system like software. AI's input is billions of dollars, hundreds of thousands of people's lives spent in screen time daily, gigawatts of electricity, and still produces very questionable results.
To answer your question in other words: if you spent the same amount of resources on human intelligence, it might bring much more impressive results in one year. However, taking into account the resources already paid into these AI technologies, humanity is unlikely to have a chance to buy out of this new 'dependency'.
If AI tools don't amplify and magnify your own intelligence, it's not their fault.
If the advances turn out to be illusory, on the other hand, they'll be unwound soon enough. We generally don't stick with expensive technology that doesn't work. At the same time, fortunately, we also don't generally wait for your approval before trying new things.
Homeopathy is still around…
With LLMs, the destruction is less immediate and overt, but chatbots do provable harm to people, and can be manipulated to warp our sense of reality.
https://en.wikipedia.org/wiki/Chatbot_psychosis
People are having romantic relationships with their chatbots and committing suicide because of them. That is harm.
Let's ask your friendly local Ukrainian refugee about that.
> People are having romantic relationships with their chatbots and committing suicide because of them. That is harm.
So the only permissible technologies are those suitable for use by children and the mentally disturbed. I see.
You understand “basically useless” does not mean “entirely useless”, right? That’s why the word “basically” is there.
I know Ukrainian people. I know Ukrainian people who are in attacked cities right now. They are friendly, and all of them would understand my point.
> So the only permissible technologies are those suitable for use by children and the mentally disturbed. I see.
That is a bad-faith argument. HN rules ask you not to do that and to steelman instead. It is obvious that is not what I said; "permissible" isn't part of the argument at all. And if you think one needs to be "mentally disturbed" to be affected, you are high on arrogance and low on empathy and information. There are numerous stories of sane people becoming affected.
https://archive.ph/2025.09.24-025805/https://www.nytimes.com...
You're right, I don't have much empathy for bullshit pop-psych as an instrument of motivated reasoning. If ChatGPT can convince you to kill yourself, you weren't mentally healthy to begin with, and something else would have eventually had the same effect on you. Either that, or you were an unsupervised child, victimized not by a chatbot but by your parents. A tragedy either way, but good faith requires us to place the blame where it's actually due.
I'll ask you again not to engage in bad faith.
> If ChatGPT can convince you to kill yourself, you weren't mentally healthy to begin with, and something else would have eventually had the same effect on you.
That is false.
https://en.wikipedia.org/wiki/Suicide_barrier#Efficacy
> Research has shown suicidal thinking is often short-lived. Those who attempted suicide from the Golden Gate Bridge and were stopped in the process by a person did not go on to die by suicide by some other means. There are also a variety of examples that show restricting means of suicide have been associated with the overall reduction of it.
You started by comparing ChatGPT to thermonuclear weapons, implying that it's a useless thing yet also an existential threat to humanity. State your position and desired outcome. You're all over the place here.
I believe they include the costs of free ChatGPT users in that $2B. Worth it considering the conversion rate they are getting (5-6% in Oct 2024[1]).
[1] https://www.cnet.com/tech/services-and-software/openai-cfo-p...
"AI" is literally models trained to make you think it's intelligent. That's it. It's like the ultimate "algorithm" or addiction machine. It's trained to make you think it's amazing and magical and therefore you think it's amazing and magical.
"It's trained to make you think it's amazing and magical and therefore you think it's amazing and magical."
is the dark pattern underlying the entire LLM hype cycle IMO.
What's the difference? I try to make people think I'm intelligent all the time.
I spent about a minute composing the prompt for this task, and then went for a cup of coffee. When I got back the task was done. I spot-checked the summaries and they were excellent.
I thought this was amazing and magical at the time. Am I wrong? Or is it simply the AI making me think this result was amazing and magical?
You just spot-checked it, so how can you be sure how accurate it is? Was it 80% accurate? 90%? 99%? And how does the domain influence the accuracy requirements?
Now that you have unlocked this secret, you're cursed forever. They look at the machine and say: hey, look, the machine is just like me! You're left confused for the best part of 3 years and then you start realizing it was true all along...they are..very much similar to the machine. For a moment we were not surprised by how capable the machine was at reasoning. And then it dawned on us, the machine had human level intelligence and cognition from the beginning, just from a slightly different perspective.
Sounds fever dreamish. Thank you sincerely (not) for creating it!
1) These tools obviously improved significantly over the past 12 months. They can churn out code that makes sense in the context of the codebase, meaning there is more grounding to the codebase they are working on as opposed to codebases they have been trained on.
2) On the surface they are pretty good at solving known problems. You are not going to make them write a well-optimized renderer or an RL algorithm, but they can write run-of-the-mill business logic better _and_ faster than I can -- if you optimize for both speed of production and quality.
3) Out of the box, their personality is to just solve the problem in front of them as quickly as possible and move on. This leads them to make suboptimal decisions (e.g. solving a deadlock by sleeping for 2 seconds, CC Opus 4.5 just last night). This personality can be altered with appropriate guidance. For example, a shortcut I use is to append "idiomatic" to my request -- "come up with an idiomatic solution" or "is that the most idiomatic solution we can think of." Similarly, when writing or reviewing tests, I use "intent of the function under test", which makes the model output a better solution or better code.
4) These models, esp. Opus 4.5 and GPT 5.2, are remarkable bug hunters. I can point at a symptom and they come away with the bug. I then ask them to explain to me why the bug happens, and I follow the code to see if it's true. I have not come across a bad bug yet. They can find deadlocks and starvations; you then have to guide them to a good fix (see #3).
5) Code quality is not sufficient to create product quality, but it is often necessary to sustain it. The sustainability window is shorter nowadays. Therefore, more than ever, the quality of the code matters. I can see Claude Code slowly degrading in quality every single day--and I use it every single day for many hours. As much as it pains me to say this, compared to Opencode, Amp, and Toad I can feel the "slop" in Claude Code. I would love to study the codebases of these tools over time to measure their quality--I know it's possible for all but Claude Code.
6) I used to worry I don't have a good mental model of the software I build. Much like journaling, there is something to be said for how the process of writing/making gives you a very precise mental model. However, I have been trying to let that go and use the model as a tool to query and develop the mental model post facto. It's not the same, but I think it is going to be the new norm. We need tooling in this space.
7) Despite your own experiences with these tools, it is imperative that they be in your toolbox. If you have abstained from them thus far, perhaps the best way to get them incorporated is to start using them to attend to your toil.
8) You can still handcraft code. There is too much fun, beauty and pleasure in it to deny yourself. Don't expect this to be your job. This is your passion.
Why is it imperative? Whenever I read comments like this I just think the author is cynically drumming up hype because of the looming AI bubble collapse.
If you truly believe AI is simply going to collapse and disappear, you are deep in some serious cope and are going to be unpleasantly surprised.
In terms of bubbles: Bubbles are economic concepts and they will burst but the underlying technology find its market. There are plenty of good open source models and open source projects like OpenCode/Toad that support them. We can use those without contributing (too much) to the bubble.
Imagine someone in the 90s saying "if you don't master the web NOW you will be forever behind!" and yet 20 years later kids who weren't even born then are building web apps and frameworks.
Waiting for it to all shake out and "mastering" it then is still a strategy. The only thing you'll sacrifice is an AI funding lottery ticket.
Unless you're gunning for a top position as a vibe coder, this whole concept of "falling behind" is just pure FOMO.
Unless you're in web dev, because it seems like that's one of the few domains where AI actually works pretty well today.
???
The classic line (which I've quoted a few times here) by Charles Mackay from 1841 comes to mind:
"Men, it has been well said, think in herds; it will be seen that they go mad in herds, while they only recover their senses slowly, and one by one.
"[...] In reading The History of Nations, we find that, like individuals, they have their whims and their peculiarities, their seasons of excitement and recklessness, when they care not what they do. We find that whole communities suddenly fix their minds upon one object and go mad in its pursuit; that millions of people become simultaneously impressed with one delusion, and run after it, till their attention is caught by some new folly more captivating than the first."
— Extraordinary Popular Delusions and the Madness of Crowds
And the rest of my field. Automated tools do part of the work. AI probably does some too, but not enough of actually verifying findings and then properly explaining the context and implications.
Earlier this year the ecosystem was still a mess I didn't have time to untangle. Now things are relatively streamlined and simple. Arguably stable, even.
I feel behind, sure, but I also don't think people on the bleeding edge are getting that much more utility that it's worth sinking dozens or hundreds of my very limited hours into understanding.
Besides, I'm a C programmer. I'll always be several decades behind the trend. I'm fine with that.
It's high dose copium. Please keep the good times rolling! Buy my books! Sub to my stack!
Meanwhile, with local models, local RAG, and shell scripts, I am wandering 3D immersive worlds via a GPU accelerated presentation layer I vibe coded with a single 24GB GPU. Natural language driven Unreal engines are viable outputs today given local only code gen.
Karpathy and the SV VC world thought this would be the next big thing to pump for a decade plus, like web pages and SaaS. But the world is smarter and more adept at catching on that it is just state management in a typical machine. The semantics are well known and do not need re-invention.
The hilarity at an entire industry unintentionally training their replacements.
I've tried at every new model release (that can run on my 24GB card) and everything is still entirely useless.
I'm not writing web stuff though.
what drugs are you using?
Sure, I can write code manually, but in my case I’m working full time on my own SaaS and I am absolutely faster and more effective with AI. It’s not even close. And the gains are so extreme that I can’t justify writing beautiful hand-crafted artisanal code anymore. It turns out that code that’s “good enough” will do, and that’s all I can afford right now.
But long-term, I don’t know that I want to do that work, especially for some corporation. It feels like the difference between being a master furniture craftsman, and then going work in an IKEA factory.
AI supported coding is like four wheel drive: it will get you stuck but in harder places. The people that use these tools to reach above the level of their actual understanding are creating some very expensive problems. If you're an expert level coder and you use AI to speed up the drudgework you can get good mileage out of them, but if you're a junior pretending to be a senior you're about to cost your employer a lot of $ hiring an actual senior.
And for good reason - the ill disciplined human body optimises for short term benefits. The disciplined body recognises the flaw in this and thinks much broader.
And it's actually not well paid, because the client now has the expectation that almost everything is already done; you just have to fix a few things, and you even have AI at your disposal, so the expectation is that you just write a better magic prompt.
I think it's often actually faster and cheaper to start from scratch, or at least rewrite a whole module (of course still with AI, just with better vibe engineering rather than vibe coding).
It's similar with house renovation - often it's just cheaper and faster to tear the whole building down rather than fix it.
I'm just very curious where we are at the moment in this profession.
However, it was just piling feature after feature without taking time to refactor. The client most likely made a few different attempts at adding some specific feature or fixing something, and there was a lot of dead code that was never used. This dead code actually confused the AI, which often tried to modify parts of the code that had been abandoned.
There were no tests at all. No performance tests. And part of my job was to improve performance (CV/AI model inference) and robustness (crashes, memory leaks).
I think AI is fine and useful, but what's bad about such a vibe-coded project, if somebody hands it over to you, is that you have no clue which parts of the code were written/designed properly on a good foundation, if the previous developer didn't test extensively and didn't refactor continuously. Even worse if you cannot talk to the previous developer responsible for the project.
Second, do you actually want to do that work? I don’t. I spent years working as a freelancer and I cleaned up a lot of shitty code from other freelancers. Not really what I want to spend my 50s doing.
What burden are you talking about? Using LLMs isn't that hard, we have done harder things before.
Sure, there will be people who refuse to "let go" and want to keep doing things the way they like them, but hey! I've been productive with vim (now neovim) for 25 years and I work with engineers who haven't mastered their IDEs at the same level. Not even close!
Sure, they have never been "burdened" by knowing other editors before those IDEs existed, but claiming that I would have a harder time using any of those because I've mastered other tools before is ridiculous.
I'm very happy being decades behind the curve here. C's slowness is perfect for me.
I took this approach when the Kubernetes hype hit and it never limited my prospects.
As long as more software developers are needed your logic obviously holds, it is irrelevant whether you are a master. There are enough jobs for "good enough". But what if "good enough" is no longer a viable economic niche? Since that niche is now entirely occupied by LLMs.
The actually productive programmers, who wrote the stack that powers the economy before and after 2023 need not listen to these cheap commercials.
To be fair, which open source project can really claim that it is "finished", and what does "finished" even mean?
The only projects that I can truly call "finished" are those that I have laid to rest because they have been superseded by newer technologies, not because they have achieved completeness, because there is always more to do.
this is because SWEs love bloat and any good idea eventually needs to balloon into some ever-growing monstrosity :)
That's a bummer if true. Is there a reliable source that lays that decision at Karpathy's feet?
https://www.teslarati.com/tesla-ai-director-hiring-autopilot...
He gave a glowing recommendation for camera-only FSD in 2021:
https://thenextweb.com/news/tesla-ai-chief-explains-self-dri...
Then he left Tesla in 2022. So yes, you could argue that it was all Elon's fault and he just followed for 5 years. We won't know with 100% certainty, I'd find it odd to stay 5 years if you think it doesn't work.
What a weird, dumb call that was. "I don't always tackle the toughest engineering problems where lawsuits and lives are at stake, but when I do, I chug a few beers first and tie one hand behind my back."
That's why I've never understood HN's continuing infatuation with him. He failed to deliver FSD to Tesla, and arguably even sent them down a R&D dead end, and he doesn't seem to have played a significant role in the generative AI revolution, only joining OpenAI after they developed ChatGPT. Yet when his talks or blog posts get posted here, they're met with almost uniformly positive comments, often many.
He reminds me of Sam Altman, where for a while, pointing out that pg's emperor was naked, that his first big "success" was a startup, Loopt, that devolved into a seedy, gaunt gay hookup app, slowly wasting away, that only got acquired thanks to face-saving VC string-pulling, and that that "success" was the springboard of all that followed (YC presidency, feeling out a gubernatorial campaign, OpenAI CEO)--that would get you swiftly flagged.
If you don't understand AWS, you can't vibe code a terraform codebase that creates complex infrastructure, etc.
This sounds unbearable. It doesn't sound like software development, it sounds like spending a thousand hours tinkering with your vim config. It reminds me of the insane patchwork of sprawl you often get in DevOps - but now brought to your local machine.
I honestly don't see the upside, or how it's supposed to make any programmer worth their weight in salt 10x better.
It doesn't. The only people I've seen claim such speedups are either not generally fluent in programming or stand to benefit financially from reinforcing this meme.
And in any case, you are moving goalposts. OP said he had never seen anyone serious claim that they got productivity gains from AI. When I claim that, you say “well it’s not the next level of abstraction for all SWE”. Obviously - I never claimed that?
Your site is case in point of why LLMs demo well but kind of fall apart in the real world. It's pretty good at fitting lego blocks together based on a ton of work other people have put into React and node or the SSE library you used, etc. But that's not what Karpathy is saying, he's saying "the hottest programming language is english".
That's bonkers. In my experience it can actually slow you down as much as speed you up, and when you try to do more complicated things it falls apart.
This is what the parent said.
> some simple code for your personal website
This is your (reductive) characterization of their work. That's fine, but please keep in mind that that's your inference, not what the parent said.
I think what you're working on has a huge impact on AI's usability. If you're working on things that are simple conceptually and simple to implement, AI will do very well (including handling edge cases). If it's a hard concept, but simple execution, you can use AI to only do the execution and still get a pretty good speed boost, but not transformational. If it's a hard concept and a hard execution (as my latest project has been), then AI is really just not very good at it.
Then figure out which one of the two you are. Years of experience have never equated competence.
Of course I wouldn't use an LLM to #yolo some Next.js monstrosity with a flavor-of-the-week ORM and random Tailwind. I have, however, had it build numerous parts of my apps after telling it all about the mise targets and tests and architecture of the code that I came up with up front. In a way it vindicates my approach to software engineering because it's able to use the tools available to it to (reasonably) ensure correctness before it says it's done.
Disclaimer because I sound pessimistic: I do use a lot of AI to write code.
I do feel behind on the usage of it.
Just the other day ChatGPT implemented something that would have taken me a week of research to figure out: in 10 minutes. What do you call that speedup? It's a lot more than 10x.
On other days I barely touch AI because I can write easy code faster than I can write prompts for easy code, though the autocomplete definitely helps me type faster.
The "10x" is just a placeholder for averaging over a series of stochastic exponents. It's a way of saying "somewhere between 1 and infinity"
Can you share what exactly this was? Perhaps I don't do anything exciting or challenging, but personally this hasn't happened to me so I find it hard to imagine what this could be.
Instead of AI companies talking about their products, I think the thing to really sell it for me would be an 8 hour long video of an extremely proficient programmer using AI to build something that would have taken them a very long time if they were unassisted.
I'm a half-decent developer with 40 years experience. AI regularly gives me somewhere in the range of 10-100X speed-up of development. I don't benefit from a meme, I do benefit from better code delivered faster.
Sometimes AI is a piece of crap and I work at 0.5X for an hour flogging a dead horse. But those are rarer these days.
Can you share what exactly this was (that got you the 10-100x speedup)? Perhaps I don't do anything exciting or challenging, but personally this hasn't happened to me so I find it hard to imagine what this could be.
Instead of AI companies talking about their products, I think the thing to really sell it for me would be an 8 hour long video of an extremely proficient programmer using AI to build something that would have taken them a very long time if they were unassisted.
> I can vibe code at any scale.
That's the thing - I know what 'vibe coding' is because that's pretty much how I use AI, as an exploratory tool or interactive documentation or a search engine for topics I want surface level information about.
It does not make me a 10x-100x more efficient. It's a toy and a learning tool. It could be replaced or removed and I wouldn't miss it that much.
Clearly I am missing something. I care about quality software, so if it's making someone 100x more productive but they're producing the same subpar nonsense they would anyway, then I am not interested. Hence I want to see a really proficient programmer use it, be 10x+ more productive, and have a quality product at the end. That's what I want to see demonstrated.
Everything else I’ve used has been over engineered and far less impactful. What I just said above is already what many of us do anyway.
The point is, you can get lots of quality work out of this team if you learn to manage them well.
If that sounds like a “complete and utter nightmare”, then don’t use AI. Hopefully you can keep up without it in the long run.
Mass production, however, won’t stop; it barely started literally a couple of months ago, and it’s the slowest and worst it’ll ever be.
Are you complaining about code formatters or auto-fix linters? What about codegen based on API specs? A code agent can do all of those and more. It can do all the boring parts while I get to focus on the interesting bits. It’s great.
Here’s another fantastic use case: have an agent gen the code, think about its prototype, delete, and then rewrite it. I did that on a project with huge success: https://github.com/neurosnap/zmx
I can't see the original post because my browser settings break Twitter (I also haven't liked much of Karpathy's output), but I agree. I call this style of software development 'meeting-based programming,' because that seems to be the mental model that the designers of the tools are pursuing. This probably explains, in part, why c-suite/MBA types are so excited about the tools: meetings are how they think and work.
In a way LLMs/chatbots and 'agents' are just the latest phase of a trend that the internet has been encouraging for decades: the elimination of mental privacy. I don't mean 'privacy' in an everyday sense -- i.e. things I keep to myself and don't share. I mean 'privacy' in a more basic sense: private experience -- sitting by oneself; having a mental space that doesn't include anybody else; simply spending time with one's own thoughts.
The internet encourages us to direct our thoughts and questions outward: look things up; find out what others have said; go to wikipedia; etc. This is, I think, horribly corrosive to the very essence of being a thinking, sentient being. It's also unsurprising, I guess. Humans are social animals. We're going to find ourselves easily seduced by anything that lets us replace private experience with social experience. I suppose it was only a matter of time until someone did this with programming tools, too.
(FYI: you can easily bypass the awful logged out view by replacing x.com with xcancel.com, I use a URL Autoredirector rule to do it automatically in Chromium browsers)
Before LLM programming, this was at least 30-50% of my time spent programming, fixing one config and build issue after another. Now I can spend way more time thinking about more interesting things.
No decades of research and massive allocation of resources over the last few years as well as very intentional decision making by tech leadership to develop this specific technology.
Nope, it just mysteriously dropped from the sky one day.
He's working on a more formal educational framework/service of some kind, which will presumably not be free, but what he's already posted is some of the most effective CS pedagogy I've ever encountered (and personally benefited from.)
But I think if I had started learning today instead of a year ago, I'd get up to speed in more like 6 months instead of a year. A lot of stuff I learned a year ago is not really necessary anymore, but furthermore, there's just a lot more information out there about how to use these from people who have been learning it on their own.
I just don't think people who have ignored it up until now are really that far behind.
>Good question, it's basically entirely hand-written (with tab autocomplete). I tried to use claude/codex agents a few times but they just didn't work well enough at all and net unhelpful, possibly the repo is too far off the data distribution.
And a lot of the tooling he mentioned in OP seems like self-imposed unnecessarily complexity/churn. For the longest time you could say the same about frontend, that you're so behind if you're not adopting {tailwind, react, nodejs, angular, svelte, vue}.
At the end of the day, for the things that an LLM does well, you can achieve roughly the same quality of results by "manually" pasting in relevant code context and asking your question. In cases where this doesn't work, I'm not convinced that wrapping it in an agentic harness will give you that much better results.
Most bespoke agent harnesses are obsoleted by the time of the next model release anyway, the two paradigms that seem to reliably work are "manual" LLM invocation and LLM with access to CLI.
Exactly! If people have 'never felt this far behind' and the LLM's are that good. Ask the LLM to teach you.
Like so many articles on 'prompt engineers', this ("never felt this far behind") take too is laughable. Programmers, having learnt how to program (writing algorithms, understanding data structures, reading source code and API docs), are now completely incapable of using a text box to input prompts? Nor can they learn how to do it quickly enough! And it's somehow more difficult than what they have routinely been doing? LOL
Douglas Adams on age and relating to technology:
"1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you’re thirty-five is against the natural order of things."
From 'The Salmon of Doubt' (2002)
He's not against the technology, I think he's just feeling like there's a lot of potential that he's not quite grasping yet.
Big picture it’s about emotional intelligence and if you are losing your shit you’re going to flail around. I think you should pick up some near-frontier tools and use them to improve your usual process, always keeping your feet on the ground. “Vibe coding” was always about getting you and keeping you over your head. Resist it!
Maybe Devs should handle copilots as Swiss prana-bindu their shots
(Therefore gun laws at a longer timescale)
Of course we have to ask aeb if he has ever run into someone who trips (only, of course) while hunting ;) have you?
given that the 3 hares seem to currently lack a signification, I'd be up for squatting? Or would Paul prefer 3 fennecs? Should anyone wish to oppose us, as Bigwig said: "silflay hraka, u embleer rah"
a slightly more pragmatic story for shunya as better mousetrap: just as we now routinely have our calculations done for us in binary, but record results in decimal (in PDF invoices, say), ancient romans (among other cultures) would have someone do their calculations on a counting board (https://en.wikipedia.org/wiki/Counting_board), but recorded (only the non-zero) results in roman numerals.
(these days we can spot the algebraists via a shibboleth: they start their papers and books with section/chapter 0)
> "Men are like numbers: they acquire value only from their position." —NB
https://www.neatorama.com/2012/05/18/10-facts-you-might-not-...
Re boney quote, that's one heuristic for HN mods
TIL Mozilla would have done better channelling the Finnic fennec (Vs rebranding "pinko"). Globe-wrappin Oxygen Auroras it wasn't.
Haploid fox
https://en.wikipedia.org/wiki/Inari_%C5%8Ckami#:~:text=The%2...
LLM forward development has a lot of things going on, and it really isn't clear yet what is going be the common standard in a few years time in terms of dev ux, async tools, ci/cd tools, in production and offline workflows, etc.
it's an easy time to hop down a wrong path, picking subpar tools or not experimenting further, but if you just wait, the people who try the right tools are going to be way ahead on making products for their customers.
No it’s absolutely not. But I thought it’d be fun to offer Adams’ brilliant hyperbole for an affectionate ribbing of Karpathy. Both of them are great communicators of ideas.
Case in point: fax machines are still an important part of business communication in Germany, and many IT projects are genuinely amateurish garbage — because the underlying mindset is "everything should stay exactly as it is."
This is particularly visible in the 45+ generation. It mostly doesn't apply to programmers, since they tend to find new things interesting. But in the rest of society, the effects are painful to watch: if nothing changes, nothing improves.
And then there's mobile infrastructure. It's not even a technical problem — it's purely political. The networks simply don't get expanded. It's honestly embarrassing how far behind Germany is compared to the rest of Europe.
Something like the PDFs produced from sent(1) under Unix or MagicPoint presentations are far less fancy, and they let you produce effective, no-bullshit, ACTUAL product-based presentations. But then half of the sales people and managers would actually be useless (as they are) and they would be kicked out fast. And don't get me started on nepotism...
I am sure Karpathy can and does leverage AI as well as or better than you. Probably I do also, and I am 48.
Two years ago I was a human USB cable: copy, paste, pray. IDE <-> chat window, piece by piece. Now the loop is tighter. The distance is shorter.
There’s still hand-holding. Still judgment. Still cleanup. But the shift is real.
We’ve come a long way. And we’re not done.
It's death, though, to be excessively reading tweets and blogs about this stuff; this will have you exhausted before you even try a real project, and comparing yourself to other people's claims, which are sometimes lies, often delusional and ungrounded, and almost always self-serving. Insofar as someone is getting things done with any consistency, they are practicing basic PM: treating feelings of exhaustion, ungroundedness and especially going in circles as a sign to regroup, slow down and focus on the end you have in mind.
If the point really is to research tools, then what you do is break down that work into attainable chunks, the way you break down any other kind of work.
I've yet to see examples of folks using this in a team of 4+ folks working together in a production env with users, and just using AI for their regular development.
Claude code creator only using claude code doesn't count. That's more like dog-fooding.
It's not only that the code quality is bad; to be fair, in most projects it is.
The biggest problem is every single component of the stack uses different conventions and names for everything.
When nobody looks at the code, naming things becomes harder until everything is <generic name>.
At first it kind of depressed me, but now I realised that actually writing code is only part of my day job, the rest is integrating infrastructure and managing people and enabling them to do their job as well, and if I can do the coding/integration part faster and give them better tools more quickly, that's a huge win.
This means I can spend more time at the beach and on my physical and mental well being as well. I was stubborn and skeptical a year ago, but now I'm just really enjoying the process of learning new things.
He knows the tools, he's efficient with them and yet he just now understands how much he's unable to harness at this point that makes him feel left behind.
Looking forward to seeing what comes out of him climbing that slope.
I haven't used agents much for coding, but I noticed that when I do have something created with the slightest complexity, it's never perfect and I have to go back and change it. This is mostly fine, but when large chunks of code are created, I don't have much context for editing things manually.
It's like waking up in a new house that you've never seen before. Sure I recognize the type of rooms, the furniture, the outlets, appliances, plumbing, and so on when I see them; but my sense of orientation is strained.
This is my main issue at the moment.
Every time, unless my initial request was perfectly outlined in unambiguous pseudocode. It's just too easy to write ambiguous requests.
Unambiguous but human-readable pseudocode is what I strive for now, though I will often ask AI to help edit the pseudocode to remove ambiguities prior to generating code.
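To make that concrete, here is a hypothetical sketch of the workflow: the comment block is the kind of unambiguous, human-readable pseudocode I mean (tightened up with the AI's help first), and the function below it is the kind of generated output I then review. The task, names, and requirements are invented for illustration.

    # PSEUDOCODE (what I hand the assistant, after removing ambiguities):
    #   given a list of orders (each a dict with "status" and "total"),
    #   return the sum of "total" over orders whose status is exactly "paid";
    #   silently skip orders that are not dicts or are missing either key.

    def paid_total(orders):
        """Sum the 'total' of orders whose status is exactly 'paid'."""
        result = 0.0
        for order in orders:
            if not isinstance(order, dict):
                continue  # skip malformed entries rather than raising
            if order.get("status") == "paid" and "total" in order:
                result += order["total"]
        return result

    if __name__ == "__main__":
        sample = [{"status": "paid", "total": 10.0},
                  {"status": "open", "total": 5.0},
                  {"status": "paid"}]          # missing "total" -> skipped
        print(paid_total(sample))              # 10.0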
All of the stuff he feels he is falling behind on? Almost completely irrelevant in our domain.
I empathize with his sense that if we could just provide the right context and development harness to an AI model, we could be *that* much more productive, but it might just be misplaced hope. Claude Code and Cursor are probably not that far from the current frontier for LLM development environments.
I've seen that these tools have different uses for different devs. I know on my current team, each of us devs works very differently to one another, and we make significant allowances to accommodate for one another's different styles. Certain tasks always go to certain devs; one dev is like a steel trap, another is the chaos explorer, another's a beginner, another has great big-picture perspective, etc. (not sure why but there's even space for myself ;)
In the same way, different devs use these powerful tools in very different ways. So don't imagine you're falling behind, because the only useful benchmark is yourself. And don't imagine you can wait for consensus: you'll still need to identify your personal relationship to the tools.
Most of all, don't be discouraged. Even if you never embrace these tools, there will remain space for your skills and your style of approaching our shared work.
Give it another 10 years and I'm sure this will all become clearer...
I am not [yet] ready to just let an agent write a whole app or server for me, but I am increasingly letting them write a whole function for me.
They are also great “bug finders.” I can just feed some code, describe the symptoms, and ask for an observation. I often get great suggestions, including things like finding typos and copy/pasta problems.
I find that just this limited application has significantly increased my development velocity, and, I believe, the quality of my work.
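To make the "copy/pasta" point concrete, here is a hypothetical example of the kind of bug I mean -- the function and the bug are invented, but it is exactly the pattern these tools tend to spot when you paste the code and describe the symptom ("the y bounds are always wrong"):

    def bounding_box(points):
        """Return (min_x, max_x, min_y, max_y) for a list of (x, y) tuples."""
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        min_x, max_x = min(xs), max(xs)
        min_y, max_y = min(xs), max(xs)  # BUG: copied from the line above, should use ys
        return min_x, max_x, min_y, max_y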
In the backend, we're mostly just pushing data around from one place to another. Not much changes, there's only a few ways to really do that. Your data structures change, but ultimately the work is the same. You don't even really need an LLM at all, or super complex frameworks and ORMs, etc.
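A toy sketch of what I mean by pushing data around (the stores, table, and columns are made up; plain SQL via the standard library, no ORM):

    import sqlite3

    # Read rows from one place, lightly reshape them, write them somewhere else.
    src = sqlite3.connect(":memory:")
    dst = sqlite3.connect(":memory:")

    src.execute("CREATE TABLE events (id INTEGER, payload TEXT)")
    src.executemany("INSERT INTO events VALUES (?, ?)", [(1, "a"), (2, "b")])

    dst.execute("CREATE TABLE events (id INTEGER, payload TEXT, imported_at TEXT)")
    rows = src.execute("SELECT id, payload FROM events").fetchall()
    dst.executemany("INSERT INTO events VALUES (?, ?, datetime('now'))", rows)
    dst.commit()
    print(dst.execute("SELECT * FROM events").fetchall())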
We don't need you.
The end goal is to get rid of all frontends anyway, just have apps that you interact with through LLM prompts. A more advanced command line.
"Vibe programming" is less than a year old. What is programming going to look like in a few years?
He is brilliant no doubt, but not in that field.
FWIW though I think his predicted worldview will render it very difficult to acquire this skill, as people grow reliant on gen AI for programming rudiments.
Complete bullshit. Beginning programmers writing good and idiomatic Python isn't "bottom of the barrel", or did you think I was recommending his videos to 20-year seasoned pros to improve their coding?
Some people on this site need to check their arrogance and humble themselves a bit before opening their mouths.
It's interesting that some months ago, when his nanochat project came out, the HN anti-AI crowd celebrated him for saying "I tried to use claude/codex agents a few times but they just didn't work well enough at all and net unhelpful, possibly the repo is too far off the data distribution".
But now that it is working for him, he's suddenly not an expert...
You can’t, in an honest argument, lump different strangers into a group you invented to accuse them of duplicity or hypocrisy.
Or maybe he didn't lie then but is lying now?
Coding agents are eating up programming from the lowest end, starting with pressing buttons on the keyboard to type the code in: completion was literally their first application. I don't think it will go all the way to the top, though; the essential part of the profession will remain until true AGI.
Metaphorically, think how integrated chips didn't replace electrical engineering, just changed which production tools and components engineers deal with and how.
Obviously we are all adapting to changes, but if he or anyone else is panicking about being behind, that can only be because they've never been in too deep.
Anyway:
> agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations,
give me extreme Emacs 'setup' feelings: I was at a meetup in HK recently where someone was advocating this, and it was just depressing; hours spent on stuff that changes daily, while my vanilla Claude Code with the Playwright MCP runs circles around it, even after it has been set up. It is just not better at all, and I'll stay unconvinced until someone can show that it is actually an improvement, with the caveat that when it is an improvement at t(1), it doesn't need a complete overhaul at t(n), where n is a few days or weeks, just because the hype machine says so. This measured against a vanilla CC with no added tooling except maybe the Playwright MCP.
People just want to scam themselves into feeling useful: if the AI does the work, then you find some way of feeling busy by adding and fine-tuning stuff so you can feel useful.
AI code is the Canadian girlfriend of programming.
If only more people understood what quadratic attention means in the real world.
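To put rough numbers on it: vanilla self-attention builds an n-by-n score matrix, so that term grows with the square of the context length. A back-of-the-envelope sketch (the layer and head counts are invented; only the scaling matters):

    # The score matrix alone is n_tokens x n_tokens per head per layer,
    # so this term grows roughly 4x every time the context length doubles.
    def attention_score_entries(n_tokens, n_layers=32, n_heads=32):
        return n_tokens * n_tokens * n_layers * n_heads

    for n in (4_000, 16_000, 64_000, 256_000):
        print(f"{n:>8} tokens -> {attention_score_entries(n):,} score entries")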
This confirms the AI bubble for me, and that it is now entirely FUD-driven. "Don't fall behind" should only apply to technologies where you have to put in active effort to learn, because it takes years to hone and master the craft. AI is supposed to remove that "active effort" part, to get you up to speed with the latest and to bridge the gap between those "who know" and those "who do not". The fact that you need to say "roll up your sleeves to not fall behind" confirms we are not in that situation yet.
In other words, it is the same old learning curve that everyone has to cross, EXCEPT this time it is probabilistic instead of linear/exponential. It is quite literally a slightly-better-than-a-coin-toss situation as to whether you learn the right way or not.
For me personally, we are truly in that zone of zero active effort and total replacement when AI can hit 100% on ALL METRICS consistently, every single time, even on fresh datasets with challenging questions NOT SEEN/TRAINED ON by the model. Even better if it can come up with novel discoveries, to remove any doubt. The chances of achieving that with current tech are 0%.
You're not doing it wrong; the tools just aren't all they're cracked up to be. They are just annoyingly good enough to get you to waste a load of time trying to get them to do what it looks like they should be able to do.
And a failure to clarify the project you're currently working on and the actual results feels decidedly like a propaganda issue.
Take all the digs at my skills you want. I'd rather not be a bald-faced liar.
He is also great at explaining AI related concepts to the masses.
However, his takes on software engineering show someone who hasn't spent a significant amount of time doing production-grade software engineering, and that is perfectly fine and completely normal given his background.
But that also means that we should not take his software engineering opinions as gospel.
> Building @EurekaLabsAI. Previously Director of AI @ Tesla, founding team @ OpenAI
Maybe I am too ignorant and don't see what I am missing. And I am still writing code and enjoying it.
Just the terminology of agents, vibe coding, prompt engineering, etc. is weirdly off-putting to me.
This churn will continue until something moderately productive and easily adoptable comes out. FOMO will strike all of us from time to time. Some of us will even try out the latest and greatest and see if it sticks.
Some companies will mandate arbitrary code-generation standards because "it's the basis of their success", and it will polarize their talent pool. Later, it will be impossible to determine whether they were (not) successful "in spite of" or "because of" such wild decisions.
That being said, Welch's grape juice hasn't put Napa Valley out of business. Human taste is still the subjective filter that LLMs can only imitate, not replace.
I view LLM assisted coding (on the sliding scale from vibe coding to fancy auto complete) similar to how Ableton and other DAW software have empowered good musicians that might not have made it otherwise due to lack of connections or money, but the music industry hasn’t collapsed completely.
Can you do some code reviews while you're running?
But now you've got me thinking. Has anyone studied whether the programmers who are more enamored of AI are also into RPGs?