“Autofishers” are large boats with nets that bring in fish in vast quantities, which you then buy at a wholesale market or a supermarket a bit later, or which they flash-freeze and sell to you over the next 6-9 months.
Yet there’s still a thriving industry selling fishing gear. Because people like to fish. And because you can rarely buy fish as fresh as what you catch yourself.
Again, it’s not a great analogy, but I dunno. I doubt AGI, if it does come, will end up working the way people think it will.
The moment of realisation happens for a lot of normoid business people when they see claude make a DCF spreadsheet or search emails
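For anyone who hasn't built one, a DCF just discounts projected cash flows back to today; the spreadsheet is the arithmetic below plus formatting. A rough Python sketch, where the cash flows and the 10% discount rate are made-up illustration values, not anything Claude specifically produces:

```python
# Minimal DCF sketch: discount each projected cash flow back to present value.
# The cash flows and the 10% discount rate are made-up illustration numbers.
cash_flows = [100, 110, 121, 133, 146]   # projected cash flow per year
discount_rate = 0.10

npv = sum(cf / (1 + discount_rate) ** (year + 1)
          for year, cf in enumerate(cash_flows))
print(f"NPV: {npv:,.2f}")   # ~454.22 with these inputs
```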
claude is also smart because it visually shows the user its work as it resizes the columns, changes colours, etc. Seeing the computer do things makes the normoid SEE the AI, despite it being much slower
Replace excel and office stuff with an ai model entirely and then people will pay attention.
iterating over work in excel and seeing it update correctly is exactly what people want. If they get it working in MS Word it will pick up even faster.
If the average office worker can get the benefit of AI by installing an add-on into the same office software they have been using since 2000 (the entire professional career of anyone under the age of 45), then they will do so. It's also really easy to sell to companies because they don't have to redesign their teams or software stack, or even train people that much. The board can easily agree to budget $20 a head for Claude Pro.
the other thing normies like is that they can put in huge legacy spreadsheets and have it find all the errors
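A chunk of that error-finding doesn't even need the model; the obvious breakage can be surfaced mechanically first. A rough sketch with openpyxl, where the filename is a placeholder and data_only=True means you're reading the values Excel last cached rather than the formulas:

```python
# Rough sketch: flag cells whose cached value is an Excel error, using openpyxl.
# "legacy.xlsx" is a placeholder filename; data_only=True reads cached results,
# so the file needs to have been saved by Excel after its last recalculation.
from openpyxl import load_workbook

ERRORS = {"#REF!", "#DIV/0!", "#VALUE!", "#NAME?", "#N/A", "#NULL!", "#NUM!"}

wb = load_workbook("legacy.xlsx", data_only=True)
for ws in wb.worksheets:
    for row in ws.iter_rows():
        for cell in row:
            if isinstance(cell.value, str) and cell.value in ERRORS:
                print(f"{ws.title}!{cell.coordinate}: {cell.value}")
```

The harder errors are the formulas that compute without complaint but point at the wrong range, which is where having a model read the sheet actually helps.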
Microsoft365 has 400 million paid seats
For my team at least, the productivity boost is difficult to quantify objectively. Our products and services still have tons of issues that AI isn't going to solve magically.
It's pretty clear that AI is allowing us to move faster on some tasks, but it's also detrimental for other things. We're going to learn how to use these tools more efficiently, but right now, I'm not convinced about the productivity gain.
What improvements have you noticed over that time?
It seems like the models coming out in the last several weeks are dramatically superior to those mid-last year. Does that match your experience?
FWIW Gemini inside Google apps is just as bad.
filling in pdf documents is effectively the job of millions of people around the world
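And it's work that has been scriptable for years, model or no model. A hedged sketch with pypdf, where the filename and field names are hypothetical and would have to match whatever the real form exposes (reader.get_fields() lists them):

```python
# Sketch: fill the text fields of an AcroForm PDF with pypdf.
# "form.pdf" and the field names here are hypothetical placeholders.
from pypdf import PdfReader, PdfWriter

reader = PdfReader("form.pdf")
writer = PdfWriter()
writer.append(reader)  # copy all pages, keeping the form fields

writer.update_page_form_field_values(
    writer.pages[0],
    {"applicant_name": "Jane Doe", "date": "2025-01-01"},
)

with open("form_filled.pdf", "wb") as f:
    writer.write(f)
```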
Do they also make you write your own performance review and set your own objectives?
This is basically the same story I have heard both at my own place of employment and from a number of friends. There is a "need" for AI usage, even if the value proposition is undefined (or, as I would expect, non-existent) for most businesses.
https://newsletter.semianalysis.com/p/claude-code-is-the-inf...
Keep licking those boots.
You rebutted by claiming 4% of open source contributions are AI generated.
GP countered (somewhat indirectly) by arguing that contribution counts don’t indicate quality, and thus weren’t sufficient to qualify as “amazing AI-generated open source projects.”
Personally, I agree. The presence of AI contributions is not sufficient to demonstrate “amazing AI-generated open-source projects.” To demonstrate that, you’d need to point to specific projects that were largely generated by AI.
The only big AI-generated projects I’ve heard of are Steve Yegge’s GasTown and Beads, and by all accounts those are complete slop, to the point that Beads has a community dedicated to teaching people how to uninstall it. (Just hearsay. I haven’t looked into them myself.)
So at this point, I’d say the burden of proof is on you, as the original goalposts have not been met.
Edit: Or, at least, I don’t think 4% is enough to demonstrate the level of productivity GP was asking for.
Also small businesses aren't going to publish blog posts saying "we saved $500 on graphic design this week!"
Once you do have a billion-dollar product, protecting it requires spending time, money, and people to keep it running, because building a new one is a lot more effort than keeping an existing one from melting.
But I like your and OP's analogy. Also, the productivity claims are coming from the guys in main memory or even disk, far removed from where the crunching is taking place. At those latency magnitudes, even riding a turtle would appear like a huge productivity gain.
Other white collar business/bullshit job (ala Graeber) work is meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate those thoughts, thinking about market positioning, etc.
Maybe tools like Cowork can help to find files, identify tickets, pull in information, write Excel formulas, etc.
What’s different about coding is that no one actually cares about the code as output from a business standpoint. The code is just the end destination for business processes that have already been decided. I think, for that reason, that code is uniquely well adapted to LLM takeover.
But I’m not so sure about other white-collar jobs. If anything, AI tooling just makes everyone move faster. But an LLM automating a new feature release and drafting a press release and hopping on a sales call to sell the product is (IMO) further off than turning a detailed prompt into a fully functional codebase autonomously.
this software (which I am not related to or promoting) is better at investment planning and tax planning than over 90% of RIAs in the US. It will automate RIAs to the same degree that trading software automated stockbroking. This will reduce the average RIA fee from 1% per year to 0.20% or even 0.10% per year, just like mutual fund fees dropped in the early 00s
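Back-of-the-envelope on what that fee gap compounds to, using made-up inputs ($500k starting balance, 6% gross return, 30 years), just to show the scale:

```python
# Illustrative only: all inputs here are assumptions, not claims about any firm.
principal, gross_return, years = 500_000, 0.06, 30

for fee in (0.01, 0.002):          # 1.00% vs 0.20% annual advisory fee
    final = principal * (1 + gross_return - fee) ** years
    print(f"fee {fee:.2%}: ${final:,.0f}")
```

With those assumptions the gap works out to roughly half a million dollars over the 30 years, which is why the fee level matters more than it looks.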
more expensive silly companies will exist, but the cheap ones get the scale. S&P 500 index funds have over $1 trillion across the top 3 providers. Cathie Wood has like $6-7 billion.
BNY Mellon is the custodian of $50 trillion of investment assets. Robinhood has $324bn.
silly companies get the headlines though
That use case is definitely delegated to LLMs by many people. That said, I don't think it translates into linear productivity gains. Most white collar work isn't so fast-paced that if you save an hour making slides, you're going to reap some big productivity benefit. What are you going to do, make five more decks about the same thing? Respond to every email twice? Or just pat yourself on the back and browse Reddit for a while?
It doesn't help that these LLM-generated slides probably contain inaccuracies or other weirdness that someone else will need to fix down the line, so your gains are another person's loss.
But if you get deep into an enterprise, you'll find there are so many irreducible complexities (as Stephen Wolfram might coin them), that you really need a fully agentically empowered worker — meaning a human — to make progress. AI is not there yet.
If you weren’t doing much of that before, I’d struggle to think of how you were doing much engineering at all, save for some niche, extremely technical roles where many of those questions were already answered. Even then, I’d expect you’re having those kinds of discussions, just more efficiently and with other engineers.
huh? maybe I'm in the minority, but the thinking:coding ratio has always been 80:20: spend a ton of time thinking and drawing, then write once, debug a bit, and it works
this hasn't really changed with LLM coding either, except that for the same amount of thinking, you get more code output
So I’m not even in the “it’s useless” camp, but it’s frankly only situationally useful outside of new greenfield stuff. Maybe that is the problem?
Which is that information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970's or 1980's despite all the computerization. It wasn't until the mid-to-late 1990's that information technology finally started to show clear benefit to the economy overall.
The reason is that investing in IT was very expensive, there were lots of wasted efforts, and it took a long time for the benefits to outweigh the costs across the entire economy.
And so we should expect AI to look the same -- it's helping lots of people, but it's also costing an extraordinary amount of money, and right now the gains for the people it helps are at least outweighed by the people wasting time with it and by its expense. But we should recognize that it's very early days, and that productivity will rise and costs will come down over time as we learn to integrate it with best practices.
The thing to note is that verifying whether something got done is hard, and takes time in the same ballpark as doing the work itself.
If people are serious about AI productivity, let's start by addressing how we can verify program correctness quickly. Everything else is just a Ferrari between two red traffic lights.
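One cheap, partial answer on that front is property-based testing: it doesn't prove correctness, but it mechanically hammers invariants with generated inputs, which is much faster than re-reading the whole diff. A sketch with the hypothesis library, where my_sort stands in for whatever the model produced:

```python
# Sketch of property-based checking with hypothesis; my_sort is a stand-in
# for model-generated code, and the asserts are invariants a correct sort must hold.
from hypothesis import given, strategies as st

def my_sort(xs):
    return sorted(xs)   # pretend this implementation came out of the LLM

@given(st.lists(st.integers()))
def test_sort_properties(xs):
    out = my_sort(xs)
    assert all(a <= b for a, b in zip(out, out[1:]))   # output is non-decreasing
    assert sorted(xs) == sorted(out)                   # same elements, same counts
```

Run it under pytest and hypothesis throws hundreds of generated lists at the function, shrinking any failure to a minimal counterexample.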
A Claude subscription is 20 bucks per worker if using personal accounts billed to the company, which is not very far from common office tools like slack. Onboarding a worker to Claude or ChatGPT is ridiculously easy compared to teaching a 1970’s manual office worker to use an early computer.
Larger implementations like automating customer service might be more costly, but I think there are enough short term supposed benefits that something should be showing there.
but at that point you could go for a bigger one and split it amongst headcount
Claude Code has rate limits for a reason: I expect they are carefully designed to ensure that the average user doesn't end up losing Anthropic money, and that even extreme heavy users don't cause big enough losses for it to be a problem.
Everything I've heard makes me believe the margins on inference are quite high. The AI labs lose money because of the R&D and training costs, not because they're giving electricity and server operational costs away for free.
Not recognizing the essential role of sales seemed to be a common mistake.
As measured by whom? The same managers who demanded we all return to the office 5 days a week because the only way they can measure productivity is butts in seats?
Is it fair to say that Wall Street is betting America's collective pensions on AI...
CEOs are now on the downside of the hype curve.
They went from “Get me some of that AI!” after first hearing about it, to “Why are we not seeing any savings? Shut this boondoggle down!” now that we're a few years into the bubble, the business math isn't working, and they only see burning piles of cash.
Figure A6 on page 45: Current and expected AI adoption by industry
Figure A11 on page 51: Realised and expected impacts of AI on employment by industry
Figure A12 on page 52: Realised and expected impacts of AI on productivity by industry
These seem to roughly line up with my expectations: the more customer-facing or physical-product-heavy your industry is, the lower the usage and impact of AI (e.g. construction, retail).
A little bit surprising is "Accom & Food" being 4th highest for productivity impact in A12. I wonder how they are using it.
However, there's another factor. The J-curve for IT happened in a different era. No matter when you jumped on the bandwagon, things just kept getting faster, easier, and cheaper. Moore's law was relentless. The exponential growth phase of the J-curve for AI, if there is one, is going to be heavily damped by the enshittification phase of the winning AI companies. They are currently incurring massive debt in order to gain an edge on their competition. Whatever companies are left standing in a couple of years are going to have to raise the funds to service and pay back that debt. The investment required to compete in AI is so massive that cheaper competition may not arise, and a small number of winners (or a single winner) could put anyone dependent on AI into a financial bind. Will growth really be exponential if this happens and the benefits aren't clearly worth it?
The best possible outcome may be for the bubble to pop, the current batch of AI companies to go bankrupt, and for AI capability to be built back better and cheaper as computation becomes cheaper.
Unfortunately I think most of the stuff they make will be shit, but they will build it very productively.
In the past 6 months, I've gone from Copilot to Cursor to Conductor. It's really the shift to Conductor that convinced me that I crossed into a new reality of software work. It is now possible to code at a scale dramatically higher than before.
This has not yet translated into shipping at a far higher rate. There are still big friction points and bottlenecks. Some will need to be resolved with technology, others will need organizational solutions.
But this is crystal clear to me: there is a clear path to companies getting software value to the end customer much more rapidly.
I would compare the ongoing revolution to the advent of the Web for software delivery. When features didn't have to be scheduled for release in physical shipments, it unlocked radically different approaches to product development, most clearly illustrated in The Agile Manifesto. You could also do real-time experiments to optimize product outcomes.
I'm not here to say that this is all going to be OK. It won't be for a lot of people. Some companies are going to make tremendous mistakes and generate tremendous waste. Many of the concerns around GenAI are deadly serious.
But I also have zero doubt that the companies that most effectively embrace the new possibilities are going to run circles around their competition.
It's a weird feeling when people argue against me in this, because I've seen too much. It's like arguing with flat-earthers. I've never personally circumnavigated Antarctica, but me being wrong would invalidate so many facts my frame of reality depends on.
To me, the question isn't about the capabilities of the technology. It's whether we actually want the future it unlocks. That's the discussion I wish we were having. Even if it's hard for me to see what choice there is. Capitalism and geopolitical competition are incredible forces to reckon with, and AI is being driven hard by both.
Until the handoff tax is lower than the cost of just doing it yourself, the ROI isn't going to be there for most engineering workflows.