This article even smells ... generative.
Each of those is a wildly different conclusion, and each requires wildly different data to support it.
If we want a feature, we can write a two-sentence prompt and get that feature. But the technical debt is going to grow exponentially, and I haven't seen a shred of focus on preventing that inevitable outcome.
It really is about "How do you know your startup is 'working' (i.e. doing the right things to be successful) in this AI era".
There may be loads I don't like about a16z, but this article contained lots of interesting insights and actual data. You don't have to agree with their conclusions to get tons of value out of the information they present.
Given the cost of training a SOTA model, it’s not clear these companies have sustainable businesses. If your primary expense is AWS you can always shift to your own hardware once you hit sufficient scale. If you’re Cursor, how big do you need to get to eliminate your 3rd party API dependency?
There is also enough competition in the core model space that these apps don't need to have an Achilles heel by being reliant on a single vendor. E.g. I think Cursor was smart to let you "bring your own API key".
Put another way, if I make $100M in annual revenue, but am paying out $110M to the API I wrap, it’s not nearly as compelling a business as that top-line $100M number makes it out to be.
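The arithmetic here is worth making explicit. A toy sketch (using the hypothetical $100M/$110M numbers from the comment, plus an assumed 20% COGS for a previous-generation SaaS) of why top-line revenue is misleading for an API wrapper:

```python
# Toy unit-economics sketch. Numbers are the hypothetical ones from the
# comment above; the SaaS comparison figure is an illustrative assumption.
def gross_margin(revenue: float, cogs: float) -> float:
    """Gross margin as a fraction of revenue (negative = losing money on each sale)."""
    return (revenue - cogs) / revenue

wrapper = gross_margin(100e6, 110e6)  # API wrapper: pays more to the model vendor than it earns
saas = gross_margin(100e6, 20e6)      # previous-gen SaaS: delivery cost is a small slice

print(f"wrapper margin: {wrapper:.0%}")  # -10%
print(f"saas margin:    {saas:.0%}")     # 80%
```

Same $100M top line, but one business gets worse with every additional customer until the vendor's pricing (or its own model) changes the cost structure.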
In the previous generation of startups, expenses were mostly dominated by headcount, and the cost of actually delivering the service tended to be small. The story was “keep growing revenue, and if you need to show a profit, stop hiring.”
An AI startup built on other people’s models has to hope that the foundational models end up being fungible commodities, otherwise any margins you might gain will get squeezed out by your LLM provider. Alternatively, you can train your own model.
I don’t know what Cursor’s userbase looks like. If everyone is paying for Pro but using their own API key, that’s obviously a high margin business.
On one hand, it means you can "fail faster". That is, if you're a startup employee and you don't see "hockey stick" growth that looks crazy impressive at the end of year 1, you should know that the chances of your equity being worth more than a token are basically zero.

Starting around the dot-com boom, I worked at numerous startups, and at some of them we were still chugging along in years 3-4 with the hope that our "semi-OK, decent growth" would turn vertical any day now. I've seen numerous startups that started in the 2015-2020 timeframe (so existed for 5-10 years) that didn't outright fail, but whose common stock got wiped out in an acquisition. That's more a consequence of the rise in interest rates and a difficult fundraising environment, but it's really rough to plug along at a company for 5-10 years, think you're doing OK, and then your stock is worth nothing.

So from a startup founder/employee perspective, you get signal faster and don't have to waste time.
Simultaneously, though, it seems like any idea that would take a decent amount of upfront investment and time would be hella difficult to get funded, and I think that's unfortunate.
This literally just happened to me after working at the company for almost 4 years, and yeah it really sucked. Exec team were super excited about the acquisition though of course…
What stands out to me right now is just how loud the expectations around AI have become, especially among non-technical folks. It’s not just “Bitcoin hype” loud, it’s bordering on “AI will solve everything” levels of noise. For those of us who’ve been around a bit longer (sorry, younger HN crowd), the current buzz feels reminiscent of Y2K or the first dot-com wave.
Back then, I was early in my career, but I vividly remember the headlines, the overpromises, and the sheer volume of attention. The difference now is, there’s a lot more substance under the surface. The tools are genuinely useful, and the adoption curve feels more practical, even inevitable. That’s what makes me think AI might become to this era what the smartphone was to the last, not just a novelty, but an everyday dependency.
That said, I’ve also learned a lot from voices here on HN, especially when it comes to the financial realities behind the tech. If there’s one throughline in many of these discussions, it’s that financial viability, not just hype or innovation, is what ultimately determines whether this all collapses or truly transforms the world.
Just my 2 cents.
In the meantime, the usual suspects are gonna make a whole lotta money.
I think discussions about AI hype miss a critical factor: there are two groups of people getting swept up in hype. One are the Investors[0]. The other are the Beneficiaries of the technology[1]. AI is over-hyped for the former, but not for the latter.
If AI hype is anything like dotcom boom - or like telecom, or building up railways in the US - well, it sucks for the Investors. For them, the hype is getting dangerous - if it's a bubble and it bursts, plenty of them will lose money, and many companies will fold.
But I'm not in that group, so I don't care.
For me, one of the Beneficiaries, the hype seems totally warranted. The capability is there, the possibilities are enormous, pace of advancement is staggering, and achieving them is realistic. If it takes a few years longer than the Investor group thinks - that's fine with us; it's only a problem for them.
--
[0] - In a broad sense, to include both people funding it and people making big investments around the expectations - whether regular investments, or company strategy, or career plans.
[1] - People using it for work and personally, researchers, etc.; also people with defined hopes for the technology; also ultimately everyone who benefits from it when it matures (and possibly builds on top of it).
It also matters for the Beneficiaries, because price comes into the equation: the longer it takes, the more expensive it will be.
We are paying the early-Uber prices at the moment, but that's likely not sustainable (or not enough), and we'll see price hikes as soon as vendor lock-in is sufficiently set in.
> We are paying the early-Uber prices at the moment, but that's likely not sustainable (or not enough), and we'll see price hikes as soon as vendor lock-in is sufficiently set in.
Not so: there are many open-weights models close to the Pareto frontier just waiting for cheaper RAM. Low-end models I can already run on my laptop.
We only get lock-in if some vendor manages to create an architecture that is both significantly better and secret. And not merely secret against employees moving around, sharing ideas, or anonymously leaking things: the labs are known to use AI as part of model development, and the models themselves have already been observed attempting to leak their own weights in various circumstances.
I do see the argument, though: delays and the resulting prices are in a positive feedback loop. But that's only a problem for Beneficiaries if it keeps the new thing economically unviable indefinitely. Otherwise, the market will find a way (and hype helps!).
EDIT:
Vendor lock-in is always annoying to Beneficiaries, but I wouldn't worry too much now. GenAI is not Uber: it actually is sustainable at the current and future quality level. Even if major AI services are subsidized with investor money to some extent (I don't have current numbers on that), it's just the usual, boring case of throwing money at a thing to accelerate its growth and capture more market than competitors.
Uber's case is special in that the business is fundamentally unsustainable, so it actually amounts to market destruction - burning stupid amounts of investor money let them break into existing markets and gut all local incumbents, but as that money runs out, both passengers and drivers see costs skyrocket while quality of service sinks rapidly sinks, and there is no going back - incumbents all died out or transformed into Uber-like thing, which destroyed the structural efficiencies in the market that built up over decades.
As Beneficiaries of technological advancement, all we got from Uber was the ability to order a taxi with an app. Doesn't feel like it was worth it.
But GenAI is not Uber.
I’ve lost count of how many times I’ve had to explain, again and again “No, AI can’t do that… and no, it’s definitely not drawing up your architectural building plans.” Well, not yet, anyway.
This. It's bordering on mass madness. I am taking 2-4 calls a week from "two guys from ..." with mad ideas and unrealistic expectations of what it takes to build and maintain an AI product. I've seen it with early internet rush, Web 2.0, and crypto before.
The post was more about the hype and attention surrounding AI, which can feel mentally exhausting at times, mostly because of how fast everything is moving. Not a complaint, really. If anything, that might be a good sign. I totally get why people are excited, it just takes effort to stay grounded in the middle of it all.
Appreciate the comment! Hopefully next time I’ll be jumping in with war stories instead of sideline takes.
In the meantime, I try to enjoy the freely available LLMs for quick summaries on technical topics before the inevitable enshittification ruins them forever.
The current AI companies are burning money with an exit strategy of replacing office workers with robots. If/when that doesn't happen, they'll have to jack up prices and figure out another business model. Uber had the two-sided market and network effects for a true enshittification play -- riders and drivers are both trapped -- but LLM companies haven't figured that part out yet. Do they go for ads once they have enough users and brand recognition? Hoard GPUs and training data (maybe through licensing deals) to create a moat?
Anyways, it's fun while it lasts.
What I will say is that AI will either fundamentally transform the economy for all of humanity, or it will fizzle out. I really don't see any in-between.
My dotcom comparison wasn’t really about the tech, more about the noise and hype. Feels like that same kind of frenzy, but now the tech’s actually capable of doing something big. The financial viability is still a big question though. Thanks for the comment.
and that hasn't changed
The fact that we have people on HN (of all places) who are convinced we now do have this sort of tech, when it's simply not true, is really sad.
https://news.ycombinator.com/item?id=44208831
TL;DR: there are two groups of people mixed up in the hype: the people investing in it, and people using it. AI may indeed be overhyped for the former. It's not overhyped for the latter.
Makes me think of how railways were built across the US. AFAIK, the first generation of investors generally lost big. They funded a huge, capital-expensive infrastructure project, and didn't get a return on it in time. But even as they lost, the work they funded remained - subsequent waves of businesses built on top of it and became profitable, the society benefited, and the country was transformed. The only losers to this "bubble" were the first-movers and their backers.
So when someone wonders if AI is overhyped, I'd ask them: what's your stake in this? Are you an investor hoping for quick returns, or are you someone who stands to benefit from the technology existing?
Just don't do the reverse - privatization seems to be lethal to hope just as much as it is to infrastructure operations and maintenance. Just like railways.
That is some of the best wisdom on HN. Beating your competition's AI model at whatever goalposts you think are important means nothing until you have positive cash flow. All hype must encounter reality and survive, not only to make a sale, but to then go and do it very consistently.
The main claim in the post: Their portfolio companies have shown an improved rate of accumulating revenue ever since LLMs took off.
Weakest part of the post: no attempt at explaining how or why an LLM affects these numbers. They allude to 'shipping speed' and 'product iteration', but how an LLM helps these functions is left unexplored.
There's an implied deductive argument that an LLM can write some code, so obviously shipping speed is faster, so obviously revenue comes faster. But the argument is never examined for magnitude of effect, or defended against examples where shipping faster or using LLMs doesn't equal faster revenue.
Also, nothing about sampling bias, size or spread.
Overall: Probably meant as a confidence boost to the sleep-deprived founders out there. But teaches nothing.
The post insists $2-4 million ARR in 1 year is the new norm. My guess is it's meant for their own investors, and to get founders to undervalue their achievements (or learn to get creative with what ARR means).
A client will go around saying they need an app or a thing, and they reach a no-code company that promises it can deliver a working MVP at sprint 0 (~2 weeks). They charge a lot of $$$ and promise what would be construed as 24/7 support. If they get the contract, the "consultants" are worked to the bone in the beginning because managers need to hit their marks, make their C-suite happy, and ultimately try to keep the client as a long-term paying customer.
This isn't anything out of the ordinary but I just had to rant that no-code is BS but it's still a flourishing market.
Tech company: we made you a tool that allows you to increase energy flows into your closed systems AND decrease their entropy!
Software engineers: thank you for increasing the demand for manual entropy management and getting our pay to new heights while proclaiming our demise as a class.
> Startups are working faster than ever, and both businesses and consumers are demonstrating high willingness to pay for new products.
...I feel like the focus was more "offering (generative) AI features" than "built with AI", as in: startups being AI-forward for their ICPs are building businesses faster than the incumbents, who are still trying to figure out how to wedge AI into their tech-debt-laden product landscape.
Did these portfolio companies just get the UX right? That would mean their entire portfolio, across industries, just happens to be filled with exceptional product designers who all simultaneously cracked the UX for their respective industries.
A simpler inquiry:
What if the cause of this revenue growth is not on the startup/VC side but on the buyer side? Why are more companies rushing out there willing to spend more than before?
Here we have some candidates to consider: FOMO, labor efficiency, excitement.
Now, we have economics (behavioral and regular) to form a causal narrative; we have loss aversion and job cuts as incentives.
So, based on this post, can we say this is a better time to start a startup than before? I don't think so.
Can we say that if you do start one with the same odds of success as before, grow it to a VC-happy size, and have AI somewhere in the offering, you'll probably have more revenue than companies of a similar size from a few years ago? Yes, I think we can say that.
In other terms, as your revenue scales, your OpEx scales too. This breaks the old playbook of just growing revenue until you break out, since your margin is now capped by compute costs.
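The OpEx-scaling point can be sketched numerically. A toy model (all numbers invented for illustration) contrasting a headcount-dominated SaaS, whose costs stay roughly flat as usage grows, with an LLM wrapper, whose inference bill grows with every request:

```python
# Illustrative cost curves; the payroll, token, and price figures are
# made-up assumptions, not real vendor pricing.
def saas_cost(requests: int, fixed_payroll: float = 2e6) -> float:
    """Previous-gen SaaS: costs dominated by a fixed payroll, near-zero marginal cost."""
    return fixed_payroll + 0.0001 * requests

def wrapper_cost(requests: int, tokens_per_req: int = 3000,
                 price_per_1k_tokens: float = 0.02) -> float:
    """LLM wrapper: every request pays the upstream API, so cost scales linearly."""
    return requests * tokens_per_req / 1000 * price_per_1k_tokens

for n in (1_000_000, 10_000_000, 100_000_000):
    print(f"{n:>11,} requests  saas: ${saas_cost(n):>12,.0f}  wrapper: ${wrapper_cost(n):>12,.0f}")
```

At low volume the wrapper looks cheap, which is exactly why early revenue numbers flatter it; by 100M requests (under these assumed prices) the inference bill has blown past the fixed-payroll business, and it keeps growing with revenue.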
The other issue is that I've been burning compute across Perplexity, Grok, Gemini, Claude, and DeepSeek. I pay nothing for these, and they are good enough. It is easy to grow revenue to $1M when you are burning $2M of compute.
But whether it's short-sighted for the investors or not, I think the takeaway for founders is "investors now expect you to make more revenue faster, and B2C applications are more interesting than before".
scubbo•8mo ago
> we believe there’s never been a better time to build an application-layer software company.
Nothing could be a clearer indication that the primary desirable quality in a founder is the conviction that, against all odds, you are better than everyone else.
paulddraper•8mo ago
Which is why I said what I said.
nico•8mo ago
I have seen how application processes for technical roles went, in less than a year, from considering AI use cheating to now requiring AI to do the take-home or finish a live coding task.