But because the economics don't translate the way VCs claim. When you replace a $50,000 employee with AI, you don't capture $50,000 in software revenue. You capture $5,000 if you're lucky.
So you are saying AI does replace labour.
A great example is the current Tata disaster in the UK with M&S.
Some C-level MBAs get a couple of lunches together, or a round of golf, exchange a bit of give and take, discounts for the next gig, business as usual.
Have you seen how valuable companies like Tata are, despite such examples?
I’d really love to be replaced by AI. At that point I can take a few months of paid gardening leave before they are forced to rehire me.
I'm envisioning a blog post on LinkedIn in the future:
> "How Claude Code ruined my million dollar business"
> You must be paying your software engineers around $100,000 yearly.
> Now that vibecoding is out there, when was the last time you committed to pay $100,000 to Lovable or Replit or Claude?
I think the author is attacking a bit of a strawman. Yes, people won't pay human prices for AI services.
But the opportunity is in democratization (becoming the dominant platform) and bundling (taking over more and more of the lifecycle).
Your customers individually spend less, but you get more customers, and each customer spends a little extra for better results.
To respond to the analogy: not everyone had $100,000 to build their SaaS before. Now everyone who has a $100 budget can buy Lovable, Replit and Claude subscriptions. You only need 1,000 customers to match what you made before.
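Back-of-the-envelope, since the arithmetic is the whole point (a sketch; every figure comes from the analogy above, none from real pricing):

    # unit economics of the analogy above -- illustrative figures only
    old_revenue = 100_000              # one customer paying human prices for a build
    new_budget = 100                   # what each small customer can now spend
    customers_needed = old_revenue // new_budget
    print(customers_needed)            # 1000 customers to match the old revenue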
Every philosopher eventually came to the same realization: we don't have access to the world as it is. We have access to a model of the world that predicts and is predicted by our senses. Insofar as there is a correlation between the two, at whatever fidelity we can muster, we are fated to direct access only to a simulacrum.
For the most part the two agree, but we have a serious flaw: our model inevitably influences our interpretation of our senses. This sometimes gets us into trouble, when aspects of the model become self-reinforcing by framing sense input in ways that amplify the very part of the model that confers the frame. For example, you live in a very different world if you search for, and find, confirmation of your cynicism.
Arguing over metaphysical ontology is like kids fighting about which food (their favorite) is the best. It confuses subjectivity with objectivity. It might appear radical, but all frames are subjective, even ones shared by the majority of others.
Sure, Schopenhauer's philosophy is the mirror of his own nature, but there is no escape hatch. There is no externality, no objective perch to rest on, not even one shared by others. That's not to say that all subjectivities are equally useful for navigating the world: some models work better than others for prediction, control, and survival. But we should be clear that useful does not equate to truth; all models are wrong, some are useful.
JC, I read the rest. The author doesn’t seem to grasp how profit actually works. Price and value are not welded together: you can sell something for more or less than the value it generates. Using his own example, if the AI and the human salesperson do the same work, their value is identical, independent of what each costs or commands in the market.
He seems wedded to a kind of market value realism, and from this shaky premise, he arrives at some bizarre conclusions.
To me this is like drawing a circuit diagram on a piece of paper and trying to convince someone that "really, there is electricity flowing through it."
Models are relations between signifiers. There exists a transformation between the signified relations and the relations of the signifiers, but they are, in fact, two separate categories, and the transformation isn't bijective, i.e. it doesn't form an isomorphism.
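A toy version of that claim, as a sketch (the world-states and the map are invented):

    # a non-injective map from world-states to signifiers: two distinct
    # states collapse onto one word, so no inverse exists and the map
    # cannot be an isomorphism
    signify = {"photon_650nm": "red", "photon_700nm": "red"}

    preimages = {}
    for state, sign in signify.items():
        preimages.setdefault(sign, []).append(state)

    print(preimages["red"])  # ['photon_650nm', 'photon_700nm'] -- information lost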
OK, yes, all models (and people) are wrong. I'll also allow that usefulness is not the same as verisimilitude (truthiness). But there is externality, even though nobody can, as you say, "perch" on it: it is important that there is an objective reality to approach more closely, however uncertainly.
We will never access the signified, only the signifier. When we believe that signifiers exist externally, we are engaging in a suspension of epistemic honesty, and I get why we do it: it makes talking about and engaging with the world infinitely easier. But we shouldn't ever believe our own trick. That's reverting to a pre-operational version of cognition.
Sure, if you train an LLM enough on gamefaqs.org, it will be able to answer my question as accurately as an SQL query, and there are a lot of jobs that are just looking up answers that already exist, but these systems are never going to replace engineering teams. Now, I have definitely seen some novel ideas come out of LLMs, especially from earlier models like GPT-3, when hallucinations were more common and prompts weren't normalized into templates, but now we have "mixtures" of "experts" that really keep LLMs from being general intelligences.
We do not need AGI to cause massive damage to software engineering jobs. A lot of existing work is glue code, which AI can do pretty well. You don't need 'novel' solutions to problems to have useful AI. They don't need to prove P = NP.
Any nontrivial business application will be roughly 60% glue, API, interface/model definition, and CRUD UI code, which LLMs are already quite good at.
They're also good at writing tests, with the caveat that a human reviews them.
They're pretty decent at emitting documentation from pure code, too.
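To make "glue" concrete, here is the sort of boilerplate in question, as a minimal hypothetical sketch (the framework choice and the Item model are invented for illustration):

    # model definition plus two CRUD routes -- the kind of code LLMs emit reliably
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Item(BaseModel):
        id: int
        name: str

    items: dict[int, Item] = {}  # in-memory store, just for the sketch

    @app.post("/items")
    def create_item(item: Item) -> Item:
        items[item.id] = item    # the "create" of CRUD
        return item

    @app.get("/items/{item_id}")
    def read_item(item_id: int) -> Item:
        return items[item_id]    # the "read" of CRUD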
The only way these models don't result in mass unemployment in this industry is if the amount of work required expands to fill the gap. Which is certainly possible! The Jevons Paradox of software development.
tuatoru•5h ago
What the article is really about is the idea that all of the money now paid in wages will somehow be paid to AI companies as AI replaces humans, and why that idea is muddle-headed.
It points out that businesses think of AI as software and will pay software-level money for AI, not wage-level money. It finishes with the rhetorical question: are you paying $100k/year to an AI company for each coder you no longer need?
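To put illustrative numbers on that question (the subscription price is made up; only the $100k figure comes from the article):

    # wage-level vs software-level money -- a rough capture ratio
    salary = 100_000         # the per-coder figure from the article's question
    ai_bill = 200 * 12       # a hypothetical $200/month AI subscription
    print(ai_bill / salary)  # 0.024 -> the vendor captures ~2% of the wage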
blibble•5h ago
Employee salaries are high because your competitors can't spawn 50,000 employees into existence by pushing a button.
Competition in the industry will destroy its own margins, and then its own customer base, very quickly,
soon after followed by the economies of the countries they're present in.
The whole thing is a capitalism self-destruct button for entire economies.
Revisional_Sin•5h ago
Is anyone actually claiming this?
satyrnein•3h ago
But that means AI just generated a $90k consumer surplus (a $100k salary replaced by, say, $10k of AI spend), which, on a societal level, is huge!