Isn't this a "startup blueprint" for tech companies? Uber, Airbnb, Amazon, etc ... More importantly, AI dominance is more important given the reward?
Isn't this a "startup blueprint" for tech companies? Uber, Airbnb, Amazon, etc ... More importantly, AI dominance is more important given the reward?
They have barely even monetized users. I think it's possible the bubble pops and OpenAI still continues to win.
So much of this article is copium pretending the world is not radically changing. Even if progress stops today, massive numbers of jobs are being and will be replaced. I wish it weren't true, but what I wish has no bearing on reality.
> AI hallucinations are one of the best bits of PR ever. The term reframes critical errors to anthropomorphise the machine, as that is essentially what an AI hallucination is: the machine getting it significantly and repeatedly wrong. Both MIT and METR found that the effort and cost required to look for, identify, and rectify these errors was almost always significantly larger than the effort the AI reduced.
> In other words, for AI (specifically generative AI) to be even remotely useful in the real world and have a hope in hell of generating revenue by augmenting workers at scale, let alone replacing them like it has promised to, it needs to cut “hallucinations” down to basically zero.
As someone who uses Claude 4.5 in Cursor every workday, this rings extremely hollow. I catch myself thinking daily, “I would never have had time to do this before.”
Have an idea for a script? You don't have to lose a day building it. Want to explore a feature? Make a worktree and let the agent go. It's fundamentally changed my workflow for the better and I don't want to go back, hallucinations and all.
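To make the worktree step concrete, here's a minimal sketch of the kind of disposable setup I mean. The repo path and branch name are hypothetical, and the script just wraps the standard `git worktree add` command so the agent gets its own checkout to play in.

```python
import subprocess
from pathlib import Path

def make_scratch_worktree(repo: Path, branch: str) -> Path:
    """Create a disposable git worktree so an agent can experiment
    without touching the main checkout."""
    dest = repo.parent / f"{repo.name}-{branch.replace('/', '-')}"
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(dest)],
        check=True,
    )
    return dest

if __name__ == "__main__":
    # Hypothetical repo path and branch name; adjust to your own layout.
    path = make_scratch_worktree(Path("~/code/myproject").expanduser(), "scratch/feature-x")
    print(f"Point the agent at {path}; clean up with `git worktree remove` when done.")
```

If the experiment pans out, merge the branch; if not, remove the worktree and nothing in your main checkout has changed.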
The overall goal isn't to get wealthy, but for the wealthy to get a foot in the door and gain influence at the core of the automated economic-military-industrial-scientific system that is going to replace money-based economics.
It is about autonomous robot armies.
aurareturn•3mo ago
Furthermore, losing $8b in the first half to buy GPUs isn't a big deal when you're growing 3-4x and there are investors lining up to give you money.
The rest of the article is mostly exaggerated AI doomer opinions of the kind that are regularly dispelled here in the HN comments. For example, the author cites the MIT AI report snippet saying that 95% of companies are failing at agentic AI, but the actual report is far more positive about AI's impact on the workforce.[1]
These doomer articles always fail to grasp two things:
1. Major Silicon Valley companies have always lost a huge amount of money before becoming profitable. OpenAI is just the next one, at a bigger scale (because tech is far bigger in 2025 than it used to be). Despite countless examples of tech companies losing a lot of money early on before becoming hugely profitable later, people still get hung up on the fact that OpenAI isn't profitable in 2025.
2. They always assume that AI is as good as it gets right now, with little to no improvement coming. But we're still on an exponential curve.[2] (A rough sketch of what that trend implies, if it holds, follows the links below.)
[0]https://finance.yahoo.com/news/openai-cfo-we-will-more-than-...
[1]https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...
[2]https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
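For concreteness, here's a back-of-the-envelope sketch of what the trend in [2] implies if taken at face value. METR's headline figure is a task-horizon doubling time of roughly seven months; the one-hour starting horizon and the targets below are illustrative assumptions, not numbers from the report.

```python
from math import log2

# Assumption from [2]: the length of tasks frontier models can complete
# (at ~50% success) has been doubling roughly every 7 months.
DOUBLING_MONTHS = 7.0
HORIZON_NOW_MIN = 60.0  # illustrative: assume a ~1-hour task horizon today

def horizon_after(months: float) -> float:
    """Projected task horizon in minutes if the doubling trend continues."""
    return HORIZON_NOW_MIN * 2 ** (months / DOUBLING_MONTHS)

def months_until(target_min: float) -> float:
    """Months until the horizon reaches target_min, under the same assumption."""
    return DOUBLING_MONTHS * log2(target_min / HORIZON_NOW_MIN)

for m in (12, 24, 36):
    print(f"{m:>2} months out: ~{horizon_after(m) / 60:.1f}-hour tasks")
print(f"~1 work week (2400 min): ~{months_until(2400):.0f} months away, if the trend holds")
```

Whether the curve actually stays exponential is exactly what the article and its critics disagree about; the point is only that "no improvement coming" and "the trend in [2] continues" paint very different pictures.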
techblueberry•3mo ago
Also, the article, like all the doomer articles I read, addresses your points directly.
tim333•3mo ago
I just watched an interview and he seems to be leaning the other way, i.e. that self-improving AI is about to kick off: https://www.youtube.com/watch?v=JfE1Wun9xkk&t=810s That would be a step beyond the normal Moore's-law-style exponential.