I couldn’t have created a more perfect artifact of the AI effect than what this poster wrote.
You may disagree. But disagree realistically. Have a valid reason why you disagree instead of attacking their competence.
Or if, after you’ve thought about it, you can’t find adequate reasoning other than “Google engineers are total shit”… then maybe consider that your opinion is incorrect.
The article says the prompt was 3 paragraphs, so why is it not shared? Without that it's just a talking head presenting a nothingburger. Put up or shut up.
Not, “All Google engineers are clearly incompetent, even though Google has one of the highest hiring bars.”
There are lots of distributed agent orchestrators out there. Rarely will any of them do exactly what you want. The hard part is usually defining the problem to be solved.
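To make that last point concrete, here is a minimal, purely hypothetical sketch (not any particular framework's API) of what "defining the problem" can mean in practice: an objective plus a machine-checkable acceptance test, which is exactly the part no off-the-shelf orchestrator can supply for you.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class TaskSpec:
        goal: str                                        # what the agents should achieve
        constraints: list[str] = field(default_factory=list)
        accept: Callable[[str], bool] = lambda _report: False  # how we know it's done

    # Hypothetical example problem; the hard work is writing `accept` honestly.
    spec = TaskSpec(
        goal="Deduplicate customer records across the billing and CRM exports",
        constraints=["never delete a record, only merge", "keep an audit log"],
        accept=lambda report: "0 duplicates remaining" in report,
    )
    print(spec.accept("merged 42 pairs, 0 duplicates remaining"))  # True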
The funny thing is: the abilities that people thought of as clever at the time (doing maths) can be done with a $1 component these days. The thing Clever Hans actually did (reading the audience) is something we spend billions on, and in some circles it's still up for debate whether anything can do it.
see also: Moravec's Paradox [2]
It's time to sell Google stocks.
Every time someone says “AI built in an hour what took us a year,” what they really mean is that humans spent a year doing the hard thinking and the AI merely regurgitated it at silicon speed. Which is, of course, completely different from productivity.
Also, if it truly took your team a year, that probably says more about your process than about AI. But not in a way that threatens my worldview. In a different way. A safer way.
Let’s be clear: writing the code is the easy part. The real work is the meetings, the alignment, the architectural debates, the Jira grooming, the moral struggle of choosing snake_case vs camelCase. Claude didn’t do any of that. Therefore it didn’t actually do anything.
I, personally, have spent years cultivating intuition, judgment, and taste. These are things that cannot be automated, except apparently by a probabilistic text model that keeps outperforming me in domains I insist are “subtle.”
Sure, the output works. Sure, it passes tests. Sure, it replaces months of effort. But it doesn’t understand what it’s doing. Unlike me, who definitely understands everything I copy from Stack Overflow.
Also, I tried AI last year and it hallucinated once, so I’ve concluded the entire field has plateaued permanently. Technology famously never improves after an early bad demo.
Anyway, I remain unconcerned. If AI really were that powerful, it would have already made me irrelevant, and since I still have a job, this must all be hype. QED.
Now if you’ll excuse me, I need to spend the afternoon explaining why a tool that just invalidated a year of human labor is “just autocomplete.”
Exactly. I am using AI to produce tons of good code and I love it. But the AI makes silly oversights or has gaps in logic that someone with hands-on experience would spot right away.
I’m debugging some web server stuff for hours and the AI never asks me for the logs or --verbose output, which is insane. Instead it comes up with hypothetical causes for the problem and then confidently states a solution.
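For what it's worth, the workflow you want is easy to force by hand until the tools learn to ask: collect the evidence yourself and tell the model to reason only from it. A rough sketch, where `ask_model` is a hypothetical stand-in for whatever LLM client you use:

    import subprocess

    def collect_evidence(cmd: list[str]) -> str:
        """Run the failing command with --verbose and capture everything."""
        result = subprocess.run(cmd + ["--verbose"], capture_output=True, text=True)
        return (f"exit code: {result.returncode}\n"
                f"stdout:\n{result.stdout}\n"
                f"stderr:\n{result.stderr}")

    def ask_model(prompt: str) -> str:
        raise NotImplementedError  # hypothetical: swap in your actual LLM client call

    evidence = collect_evidence(["curl", "http://localhost:8080/health"])
    prompt = ("Diagnose this web server failure using ONLY the evidence below. "
              "If the evidence is insufficient, list the exact logs to collect next.\n\n"
              + evidence)
    print(ask_model(prompt))

The prompt's last clause is the point: it gives the model permission to say "I need more data" instead of confabulating a cause.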
This is silly. They didn’t spend a year trying to crank out the right code; they spent a year getting aligned, disagreeing on approach, getting people to commit, and waiting on other teams to provide necessary integrations or platform support, etc. Claude didn’t help with any of that; it just did the code part, which wasn’t the problem in the first place.
It is different. You suggested that rakyll told Claude to simply implement a solution that her team already put the legwork into designing. I'm saying that it sounds like Claude produced the solution independently based on a description of the problem. Those two are completely different and if you can't see that, I'm not sure what to say.
> having Claude produce it doesn't help!
Sure. Also, it could be a coincidence that it came to the correct solution; we can’t discount that possibility.
The description that came out of the year of project refinement that had already occurred?
(I've personally never applied to somewhere that advertised that it would call me a 'developer', because that just sounds like a very boring factory-line 'churn out the code as described in this work ticket' role to me, that I don't want and didn't get a professional degree for, even before all this LLM/'agentic' possibility.)
This mirrors our experience with our big customers. Minor changes require countless meetings with a dozen people, nobody takes charge, everything has to be repeated at least three times because people rotate while this circus goes on, and so on.
In the end we finally get the people who know what's up involved and it results in a brief meeting and like an hour of writing code and testing.
One of our strengths is that we've all been here so long. We know these large customers way better than they do, because we still have the dev and the project manager who worked on the last thing, even if that was 6-7 years ago. We've literally had a large Fortune 500 customer call us and ask how their own systems worked, systems we don't even integrate with directly. And we could tell them, since we had retained that knowledge from a project some years ago.
So yeah, the code is almost never the problem.
I'll be impressed when a big service like Google Photos can run 100% with an LLM oncall, without paging anyone or any human intervention. FWIW, this has always been my benchmark.
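To pin the benchmark down, here is a minimal sketch of the shape such a system would have (all names hypothetical): the model may only act through vetted runbooks, and anything it can't classify becomes a page. The benchmark is then simply how rarely `page_human` fires.

    def classify_alert(alert: str) -> str:
        """Hypothetical LLM call: map an alert to a known runbook key, or 'unknown'."""
        raise NotImplementedError

    RUNBOOKS = {
        "disk_full": lambda: print("rotating logs, expanding volume..."),
        "bad_deploy": lambda: print("rolling back to last good release..."),
    }

    def page_human(alert: str) -> None:
        print(f"PAGE: {alert}")  # 100% LLM oncall means this line (almost) never runs

    def handle(alert: str) -> None:
        action = RUNBOOKS.get(classify_alert(alert))
        if action is None:
            page_human(alert)   # unknown situation: escalate instead of improvising
        else:
            action()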
> AI tools in all processes from planning to coding to wrapping up
what i'd ideally like to do: send an ai to all the meetings i have to attend so i can get back to coding

I think the key is integration. There is a lot of nitty-gritty work in cleaning the corporate data, training the AI on it, and tweaking it to suit different "roles". But if someone can make it work, it's a lot of $$$.
> AI to summarize meetings
we tried that a little while ago, to summarize a few meetings into a timeline of events; turns out the thing invented several milestones and events that don't exist, and summaries for meetings that never happened... sigh
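One cheap guard against exactly this failure, sketched below under the assumption that you ask the model for structured output: require every extracted event to carry a verbatim quote from the transcript, and drop anything whose quote can't actually be found in the source text.

    import json

    def validate_events(transcript: str, model_output: str) -> list[dict]:
        """Keep only events whose supporting quote really occurs in the transcript."""
        events = json.loads(model_output)  # expected: [{"date":..., "event":..., "quote":...}]
        return [e for e in events if e.get("quote") and e["quote"] in transcript]

    transcript = "2024-03-01: Alice said the migration finished on Feb 28."
    model_output = json.dumps([
        {"date": "2024-02-28", "event": "migration finished",
         "quote": "the migration finished on Feb 28"},
        {"date": "2024-04-01", "event": "launch party",   # invented by the model
         "quote": "we celebrated the launch"},
    ])
    print(validate_events(transcript, model_output))  # only the grounded event survives

It doesn't stop the model from inventing things, but it stops the inventions from reaching the timeline.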
> so AI is still not ready for that

it really isn't, at least for the tools we are using...

Of course it's the latter; this article is silly clickbait. AI won't help you convince a peer to go with your design, won't clear your work with legal, yada yada.
AI boosters are heralding this as a new age of programming, but they conveniently ignore that the prompt in these hyperbolic articles (or GASP the source code!) is never shared, because then we would be able to try and replicate it and call out this bullshit. There would be no more hiding behind "oh you are not using the correct model".
Shame on all these "reporters" as well who never ask "Okay can we see the code? Can we see the prompt, since it's just 3 paragraphs?"
In the Xitter thread she literally says it’s a toy version and needs to be iterated on.
How much longer are we going to collectively pretend this is so revolutionary and worth trillions of dollars of investment?
I think things will just start shifting as capabilities really do get better, with step changes where “I’m not even going to try to get Opus to do X because I know it’s going to suck” moves to “oh wow, it’s actually helpful” to “I don’t really even need to be that involved”.
Talent will migrate toward places where engineering labor is the bottleneck to better output, and away from places where output is capped regardless of engineering labor. I don’t see the apocalyptic view as accurate at all; I think the cost per unit of output is going to drop, which will open up new markets where it isn’t possible to justify the engineering expense today.
For engineering, it’s extremely easy not only to acquire tech debt but to basically run yourself into the ground when Opus cannot do a task but doesn’t know it cannot do the task (one mechanical guardrail is sketched below).
BUT: what I will say is that to me and to others the trajectory has been really impressive over the last 1.5 years, so I don’t view the optimism of “we’re nearly there!” as wishful or magical thinking. I can definitely believe that by the end of 2026 we’ll have agents that break through the walls they can’t climb over as of today, just based on the rate of improvement we’ve seen so far and the knowledge that we still have a lot of headroom with the current stack.
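On the tech-debt point above, the guardrail that works is mechanical, not trust-based: never keep an agent's change unless an objective check passes. A rough sketch, assuming a git repo with a pytest suite; `run_agent_task` is a hypothetical stand-in for invoking your coding agent:

    import subprocess

    def run(cmd: list[str]) -> bool:
        return subprocess.run(cmd).returncode == 0

    def run_agent_task(description: str) -> None:
        raise NotImplementedError  # hypothetical: invoke Claude Code / your agent here

    def gated_attempt(description: str) -> bool:
        run_agent_task(description)            # agent edits the working tree
        if run(["pytest", "-q"]):              # objective check the agent can't talk past
            return run(["git", "commit", "-am", f"agent: {description}"])
        run(["git", "checkout", "--", "."])    # discard tracked changes instead of keeping debt
        return False                           # (untracked new files would need cleanup too)

The point is that the agent never gets to grade its own work: a failing suite means the attempt is thrown away, so an agent that "doesn't know it can't do the task" can't silently pile up broken code.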
Prime example was a quick set of scripts to run some commonly performed tasks. The AI version took a couple of days to get the patterns right, but it worked, and I was ready to not have to think about it anymore. A solid POC, I thought. Well, along comes the old hat saying it's not the way he would've done it. Says he's going to take a crack at it over the weekend. A month later he still won't approve my PR, still isn't done with his better version, and the code he has checked in is wayyyy more complicated, using self-coded routines rather than preexisting libraries (or even forked versions of them) because they are "security risks". Then there are the things he straight up replicated from what I produced, with new file and variable names. I'm still using what I put together since his is still not done, but in the meantime I'm talking with the powers that be about how we balance time, cost, and quality.
Which is no shade on Claude Code, but given everything CC has already done, this seems pretty normal.
rolph•1d ago
people learned, explored concepts, discovered lateral associations, developed collective action, and consolidated future solidarity.
claude just output some code.
falcor84•1d ago
P.S. EDIT: The big question will soon become - how technical do you need to be to build a system, because most of those learnings, concepts and associations are surely at the domain level. Or phrased differently: to what extent will future software development shift from hands-on engineering to hands-off technical guidance? Perhaps the future developer role would be much more similar to today's TPM (Technical Program Manager)?
gtirloni•1d ago
It's awesome to be amazed by some cool new technology but let's not be naive.
andrekandre•1d ago
i wonder, is coding really the bottleneck in most cases?