Doesn't really seem fair, I'm gonna be an old white man some day, ain't really that much I can do about it... (Well, I suppose sex changes are a thing now, but really?)
Alternatively, you could take a crack at deconstructing whiteness. Depending on how young you currently are, you might be able to make a dent by the time you're an old man. That's trickier though, because it involves serious social reform. Or if sociology isn't your deal, maybe you could become a biologist and cure old age?
It comes as no surprise to me that the guy who has bad opinions about software architecture has worse opinions about vibe coding.
https://blog.cleancoder.com/uncle-bob/2021/11/28/Spacewar.ht...
I use Claude code and codex daily. They have become an integral part of my workflow.
There is no task that takes me a day that they can complete in five minutes.
Even with the lightning-fast progress being made, it looks like LLMs are a decade or more away from being that good.
If AI can do your job for you, you should be the first to know. Just try it and see!
It's always gonna be a multi-shot process. And it can already write code that's good enough; that's no longer the bottleneck.
Further, Qwen 27B is such an incredible masterpiece for coding, and it can run on consumer hardware today. Anthropic/OpenAI are gonna give up on coding models very soon. There's not gonna be any money in it when you can run your own local model for significantly cheaper.
Qwen 27B is not SOTA, but the value is insane. You can basically use it for small tasks and then route harder problems to Opus or Sonnet, and boom, you've saved a lot of money.
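That routing idea boils down to something like the sketch below. It's a minimal illustration, not a real integration: the endpoint URLs, model names, and the complexity heuristic are all placeholder assumptions.

```python
# Hypothetical local-first routing: cheap tasks go to a local model, hard
# ones to a hosted frontier model. URLs, model IDs, and the heuristic are
# placeholders, not real services.
import requests

LOCAL_URL = "http://localhost:8080/v1/completions"    # e.g. a local Qwen server
REMOTE_URL = "https://api.example.com/v1/completions"  # Opus/Sonnet-class model

def looks_hard(task: str) -> bool:
    # Naive stand-in heuristic: long or architectural prompts escalate,
    # everything else stays on the local model.
    return len(task) > 2000 or "refactor" in task or "architecture" in task

def complete(task: str) -> str:
    url = REMOTE_URL if looks_hard(task) else LOCAL_URL
    resp = requests.post(url, json={"prompt": task, "max_tokens": 1024})
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]
```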
Five minutes is pushing it, but 15 minutes? Absolutely.
I also like to use LLMs for background work on iterative tasks, but the way some people talk about work in the days before LLMs makes me realize how we're arriving at these claims that LLMs make us 10X more productive. If it took someone all day to do a few minutes of active work, then I could see how LLMs would feel like a 10X or 50X productivity unlocker simply by not shutting down and doing nothing at the first sign of a pause.
For me, there may be one thing I do every few months that AI is really good at.
The overwhelming majority of the work I do, LLM tooling is just ok at. Definitely faster overall, but with lots of human planning, hand holding and course correction.
I would estimate LLMs make me, on average, 50% more productive, which is huge! But from my experience I cannot believe anyone is experiencing an 8h-to-5m productivity boost overall.
To me, the reason for the lack of amazing productivity gains is that we have done nothing to speed up figuring out what to build, and nothing to speed up getting code from pull request into production; in a lot of companies, code review is already saturated.
Also, agents are good at figuring out problems for themselves: I can ask one to set up a CI/CD pipeline, give it GitHub access, and it will just try things until it succeeds (sketched below).
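The "try things until it succeeds" loop is roughly the toy sketch below. ask_agent is a hypothetical stand-in that here just cycles through canned commands; a real agent would generate the next attempt from the goal and the previous error output.

```python
# Toy version of an agent's retry loop. ask_agent() is a hypothetical
# stand-in; a real agent would derive the next command from the goal and
# the previous failure output.
import itertools
import subprocess

_canned = itertools.cycle(["false", "false", "true"])  # simulated attempts

def ask_agent(goal: str, feedback: str) -> str:
    return next(_canned)

def run_until_success(goal: str, max_attempts: int = 10) -> str:
    feedback = ""
    for _ in range(max_attempts):
        cmd = ask_agent(goal, feedback)
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode == 0:
            return cmd  # e.g. the pipeline config that finally worked
        feedback = result.stderr  # feed the failure into the next attempt
    raise RuntimeError("agent gave up")

print(run_until_success("set up the CI/CD pipeline"))
```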
The results are always so ridiculously different.
Well... yes! It's not the same as running a program through a compiler 100k times and getting the same binary, it's... different: https://www.lelanthran.com/chap15/content.html
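A toy way to see the difference (this is an illustration with a fake three-token "model", not a real LLM): compilation is a pure function of its input, while generation samples from a distribution, so identical runs can diverge.

```python
# Toy contrast: a compiler is deterministic in its input; an LLM samples
# from a probability distribution over tokens, so identical prompts can
# produce different outputs. The "model" here is obviously fake.
import hashlib
import random

def compile_source(src: str) -> str:
    # Deterministic: same input, same "binary", all 100k times.
    return hashlib.sha256(src.encode()).hexdigest()

def sample_completion(prompt: str) -> str:
    # Stochastic: fixed weights, but each draw can differ.
    tokens = ["foo()", "bar()", "baz()"]
    return prompt + " " + random.choices(tokens, weights=[5, 3, 2])[0]

src = "int main() { return 0; }"
assert compile_source(src) == compile_source(src)       # always identical
print({sample_completion("call:") for _ in range(10)})  # usually several variants
```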
In any case, the one time AI works perfectly, it saves me hours of coding. So the potential is there...
I agree to some extent with regards to writing new code. One area where I have been consistently impressed is asking it to put together a plausible explanation of how something weird has happened. I have been blown away, multiple times, by Codex's and Claude's ability to take a prompt like "When I did X, I expected Y to happen but instead observed Z. Put together an explanation for how that could happen, including the individual lines of code that can lead to ending up in that state."
In one notable case, it traced through a pretty complex sensor fusion -> computational geometry problem and identified a particular calculation far upstream that could go negative in certain circumstances, which would lead to a function far downstream generating a polygon with incorrect winding order (clockwise instead of CCW).
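For context on that failure mode (this is a generic sketch, not the actual codebase): winding order is usually derived from the sign of the polygon's area via the shoelace formula, so a single sign flip upstream silently reverses it downstream.

```python
# Generic sketch of the winding-order bug class described above (not the
# actual codebase): the shoelace formula yields a signed area, so an
# upstream sign flip reverses the apparent winding order downstream.
def signed_area(points: list[tuple[float, float]]) -> float:
    # Shoelace formula: positive for counter-clockwise, negative for clockwise.
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return area / 2.0

def is_ccw(points: list[tuple[float, float]]) -> bool:
    return signed_area(points) > 0

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert is_ccw(square)                      # CCW, as intended
assert not is_ccw(list(reversed(square)))  # one reversal flips the winding
```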
In another, it identified a variable that was being initialized to 0 instead of initialized to (a specific runtime value that it should’ve been initialized to during a state transition). The downstream effect, minutes later, would be pathological behaviour that would happen exactly once per boot.
In both cases I was provided with a specific causal chain of events with individual source files and line numbers so that I could verify the plausibility of the explanation myself.
I don't mean to completely dismiss their utility. I realized recently that I was having more fun coding than I ever remember. It is a strange feeling to go along with the vibe out there that software developers are becoming obsolete.
But I found myself laughing at the style; just ranting about software like a cartoon villain in his bathrobe. No fucks given.
As for AI-written code, I wouldn't fly on a plane controlled by AI-designed and AI-tested code, but much of development is busy work, not problem solving or design. AI excels at turning a protocol spec into a parser, for example; I'll take that any day. AI also excels at finding stuff: non-code, thesis-level ideas for algorithms and, at about the same level, what's been shown not to work when solving a non-deterministic problem.
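The spec-to-parser case looks something like the sketch below. The framing format is made up for illustration (1-byte type, 2-byte big-endian length, then payload), the kind of thing you'd otherwise transcribe by hand from a spec.

```python
# Hypothetical "spec to parser" busywork: parse a made-up framing format of
# 1-byte type, 2-byte big-endian length, then payload, repeated to the end.
import struct

def parse_messages(data: bytes) -> list[tuple[int, bytes]]:
    messages, offset = [], 0
    while offset < len(data):
        if len(data) - offset < 3:
            raise ValueError("truncated header")
        msg_type, length = struct.unpack_from(">BH", data, offset)
        offset += 3
        if len(data) - offset < length:
            raise ValueError("truncated payload")
        messages.append((msg_type, data[offset:offset + length]))
        offset += length
    return messages

print(parse_messages(b"\x01\x00\x05hello"))  # [(1, b'hello')]
```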
If we're lucky, AI will fill in after exposing who is only doing busy work and who is creating.
Also, his prediction assumes that AI will be able to learn from its own code going forward. Will it also create its own new programming languages and tools?
But it's a funny rant.
https://x.com/stevesi/status/2050325415793951124
Here's how history rhymes with this logic. The development of compilers vs. writing assembly language was not without a very similar "controversy": are the new tools more efficient or less efficient?
The first compilers were measured relative to hand-tuned assembly language efficiency. The existing world of compute was very much "compute bound" and inefficient code was being chased out of every system.
The introduction of the first compilers generally delivered code "within 10-30%" of the efficiency of standard professional assembly. This "benchmark" was enough for almost a generation of Fortran programmers to dismiss the capabilities of compilers.
Also worth noting, early compilers (all through the 1980s) routinely had bugs that generated incorrect code. Debugging a compiler is a nightmare (personal experience). This only provided more "ammo."
With the arrival of COBOL the debate started to shift. COBOL generated decidedly "bloated" code, so there was no way to win the efficiency argument. But what people started to realize was that a "modern" programming language made it possible to deliver vastly more software and for many more people to work on the same code (ASM is notorious for being challenging for multiple engineers working on the same portion of code). So the metric slowly started to move from "as good as hand-tuned assembler" to "able to write bigger, more sophisticated code in less time with more people." Computers gained timesharing, more memory, and faster CPUs, which made the efficiency argument far less compelling (only to repeat with the first 8K or 64K PCs).
This entire transition is capped off with a description in Fred Brooks' "Mythical Man-Month," one of the seminal books in the field of programming and a standard-issue book sitting in my office waiting for me on my first day at Microsoft. (See the full book free here: https://web.eecs.umich.edu/~weimerw/2018-481/readings/mythic...)
It is very early. I was not a programmer when the above happened, though I did join the professional ranks while many still held these beliefs. For example, I interned writing COBOL on mainframes while PCs were using C and Pascal, which were buggy and viewed as inefficient on processor- and space-constrained machines.
The debate would continue with C++, garbage collection, interpreted vs. compiled (Visual Basic), and more. As a fairly consistent observation over decades, every new tool is at first viewed by experienced programmers through a lens of what got worse, while new programmers use the tool and operate in a new context (e.g. "more software" or "bigger projects"). The excerpt below shows this debate as captured in 1972.
Incorrect. They had bugs that generated incorrect code. They didn't routinely have bugs that generated incorrect code :-/
And the bugs they had were reproducible.