You can get the engineers from those very hardware companies to do bespoke optimizations for your specific high load use cases. That's something a startup would struggle to match.
That doesn't seem to be the case to me. I guess the author wants to do everything on his own terms and maybe companies aren't interested in that.
The author could also be correct. Investors tend to be herd animals, and if you're not buying into the same tech as everyone else, your proposal is higher risk. It might very well be easier to say to an investor that you're going to buy a million Nvidia GPUs and stuff them in a datacenter in Texas like everyone else.
I'm interested in the one company that does take the bet on infrastructure optimization. If that works, then a lot of people are going to lose a lot of money really quickly.
But if they succeed with agentic reasoning models (we are absolutely not there yet) then I think meritocracy will be replaced with assetocracy. The better the model, the more expensive it will be and the better the software will be.
I don’t worry about it myself, but I do worry for my kids. I'm not even sure what to teach them anymore to have a shot at early retirement (and they keep raising the retirement age too).
It does not matter what your income is if you cannot budget and save.
Risk too is sort of a red herring. Just buy in whenever it dips, and you are set. Diversify just enough to dilute the aggregate risk, and it practically disappears.
Saving isn't even possible on a low income, only on a medium to high one. The lesson to learn is to avoid wasteful, excessive spending that only benefits you in the moment.
That's an interesting neologism, but the existing term for "rule by whoever controls the most expensive assets" is "capitalism"
Capitalism certainly favors those with the most... capital, but there are quite a few other factors. Market fit, efficiency, etc. The Dutch East India Company had the most assets, yes, but also the best ships and a killer (literally) business model.
The notion of a sector where success is determined almost entirely by who can stockpile the most assets (GPUs in this case) is an unusual situation and probably merits its own term.
OTOH garage-startup acquisitions are acquihires.
This eliminates the need for more specialized models and the associated engineering and optimizations for their infrastructure needs.
Neither the hyperscalers nor NVDA are safe from uncertainty.
Spending a lot (on capex or opex) certainly is not providing any kind of signaling benefit at this level. It's the opposite, because obviously every single financial analyst in the market is worried about the rapid increase in capex. The companies involved are cutting everything else to the bone to make sure they can still make those (necessary) investments without degrading their top-line numbers too much. Or in some cases actively working to hide the debt they're financing this with from their books.
Even if we imagined that the author's conspiracy theory were true, there would still be massive incentives for optimization because everyone is bottlenecked on compute despite expanding it as fast as is physically possible. Like, are we supposed to believe that nobody would run larger training runs if the compute was there? That they're intentionally choosing to be inefficient, and as a result having to rate-limit their paying customers? Of course not.
The reality is that any serious ML operation will have teams trying to make it more efficient, at all levels. If the author's services are not wanted, there are a few more obvious options than the outright moronic theory of intentional inefficiency. In this case most likely that their product is an on-edge speech to text model, which is not at all relevant to what is driving the capex.
It's not providing any benefit now but there's still signalling going on, and it absolutely provided benefit at the beginning of this cycle of economy-shattering fuckwittery.
>I see hundreds of billions of dollars being spent on hardware
>I don’t see are people waving large checks at ML infrastructure engineers like me
Which seemed like a valid question mark until you look at the GitHub: <1B-parameter, Raspberry Pi-class edge speech models. That's not the game the hyperscalers are playing.
I don't think we can conclude much of anything about the datacenter build out from that
The hyperscalers are playing the game hyperscalers are playing - and only them. Where do they expect to find talent, then? If the logic is that you need to work at a hyperscaler to work at a hyperscaler, no wonder they won't find any talent. That would be like NASA only hiring astronauts to send to space if they already had experience being in space.
But also, the companies are buying up this infrastructure because whoever controls the infrastructure also controls the industry in around five years' time.
Source? Satya Nadella seems to disagree with your statement (at least as I understand both): https://uk.finance.yahoo.com/news/microsoft-ceo-satya-nadell...
"Ah yes we invested $13B into OpenAI but it's a bubble"
CEOs of big public companies lie to their shareholders all the time. It would be fantastic if they could be held accountable for those lies, but AFAIK when the SEC has tried, they always weasel out of it by saying things like "well, from what I knew at the time, it was true" or "if you interpret it this (ridiculous) way it was true". It's very, very hard to prove malicious intent—that is, prove what was going on in the CEO's head when they said it—with something like this beyond doubt, and that's effectively what's required.
Actually curious, when was the last time we saw a CEO of a company as big as Microsoft caught lying to shareholders?
The one instance I can recall offhand of a big company committing fraud was Enron, which resulted in execs going to jail. More recent cases involved CEOs of companies that were not large, and they also ended up in prison, e.g. Nikola. (Sure, the guy's out again, but that was done, uh, outside the usual process of justice.)
How probable is it that all these competitors are colluding on the same story while burning what would have otherwise been very healthy profits?
Occam's razor and all that.
Economic and social predictions beyond 2 years are sketchy at best.
I don’t follow what this means
The phrase "the second mouse gets the cheese" means that it can be beneficial to let others take the initial risk, as the first to act might trigger a negative outcome, leaving the opportunity for the second person to succeed without the same danger. I
The take is that small incremental improvements to the hardware and software at that scale would imply massive returns, yet there isn't much work being offered for that use case.
However, clueless people who don't know how to optimize probably don't know where to spend money on optimization, either. So maybe it's just not a great fit for outsourcing, especially in a realm where there's no standard of correctness to measure the results of the supposedly "optimized" training against. And Warden seems to be pitching outsourcing rather than trying to get acquihired.
> the optimal amount to spend on software optimization is at least a substantial fraction of your hardware budget
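A rough way to see why that rule of thumb can hold (every number below is made up, it's just the break-even logic):

```python
# Hypothetical break-even sketch: optimization spend pays for itself as long as
# the efficiency gain it buys is worth more than it costs.

hardware_budget = 1_000_000_000   # $1B/year on GPUs and datacenters (made-up figure)
optimization_spend = 50_000_000   # $50M/year on engineers doing perf work (made-up figure)
efficiency_gain = 0.10            # assume 10% more useful work out of the same hardware

# Value of the gain: the hardware you no longer need to buy to get the same output.
hardware_saved = hardware_budget * efficiency_gain

roi = hardware_saved / optimization_spend
print(f"Hardware avoided: ${hardware_saved:,.0f}, ROI on optimization: {roi:.1f}x")
# With these assumptions, every optimization dollar returns ~2x in avoided hardware,
# which is why the quoted rule of thumb scales optimization spend with the hardware budget.
```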
This has been a banging-head-against-wall sort of struggle every place I've worked on software, without AI even coming into the picture. At one startup they were spending millions of dollars on AWS and complaining loudly to us about the AWS spend, and yet... god forbid the engineering team devote any resources to optimization passes instead of rolling out more poorly considered features and hiring more engineers, because the existing engineers are struggling to be productive because everything is so unoptimized, and also because they have to spend a bunch of their time interviewing and training new hires.
Recent-generation LLMs do seem to have some significant efficiency gains. And routers to decide if you really need all of their power on a given question. And Google is building their custom TPUs. So I'm not sure if I buy the idea that everyone ignores efficiency.
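For what it's worth, the routing idea is simple enough to sketch in a few lines. This is a toy, assumed version (the model names and the length/keyword heuristic are invented; real routers generally use a trained classifier):

```python
# Toy router: send cheap/simple prompts to a small model, hard ones to the big model.
# Purely illustrative -- production routers typically use a learned scoring model,
# not a length-and-keyword heuristic like this.

def route(prompt: str) -> str:
    hard_markers = ("prove", "derive", "step by step", "debug", "refactor")
    looks_hard = len(prompt) > 500 or any(m in prompt.lower() for m in hard_markers)
    return "big-expensive-model" if looks_hard else "small-cheap-model"

print(route("What's the capital of France?"))                      # small-cheap-model
print(route("Debug this segfault and refactor the allocator, step by step."))  # big-expensive-model
```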
1. Token prices keep plummeting even as models are getting stronger.
2. Most models are being offered for free at a significant loss, so reducing costs would be critical to maintain some path to sustainability.
3. Every hyperscaler has been consistently saying for the past several quarters that they are severely constrained on capacity and in fact have billions in booked backlogs. That is, if they had more capacity they would actually be making even more billions.
I can totally imagine the smaller players renting these cloud resources for their private model uses to be rather inefficient (which is where the 50% utilization number comes from), probably because they are prioritizing time-to-market over other aspects. But I would wager that resource efficiency, at least for inference, is absolutely a top priority for all the big players.
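Back-of-the-envelope on why a 50% utilization figure matters so much (the $/GPU-hour number is hypothetical):

```python
# Effective cost per useful GPU-hour at different utilization levels.
# The cluster costs roughly the same whether the GPUs are busy or idle, so
# halving utilization roughly doubles the cost of every token you actually serve.

hourly_cost_per_gpu = 3.00   # made-up blended $/GPU-hour (capex + power + ops)

for utilization in (0.50, 0.70, 0.90):
    effective_cost = hourly_cost_per_gpu / utilization
    print(f"{utilization:.0%} utilization -> ${effective_cost:.2f} per useful GPU-hour")
# 50% -> $6.00 vs 90% -> $3.33: a big player's inference fleet has every incentive
# to close that gap, which is why efficiency plausibly is a top priority for them.
```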
> So if I understand correctly, they spend more even though they could optimize and spend less
This is what I understand as well: we could utilise the hardware better today and make things more efficient, but instead we are focusing on making more. TBH I think both need to happen; money should be spent to make better, more performant hardware, and at the same time we should squeeze any performance we can out of what we already have.
Optimization isn't even being considered, because it's the total amount spent on hardware that is the goal, not the output from the hardware.
Right now it seems investment is primarily based on vibes, media hype, and total spend on hardware and infrastructure.
For example, I could see him saying not to waste tokens on "please" simply because he thinks that is a stupid way to use the LLM. I.e. a judgement on anyone that would say please, not a concern over token use in his data centers.