Edit:
- I wonder what's stopping Nvidia from releasing an AI phone
- An LLM competitor service (Hey, how about you guys make your own chips?)
- They are already releasing an AI PC
- Their own self driving cars
- Their own robots
If you mess with them, why won't they just compete with you?
Just wanted to say one more thing: Warren Buffett famously said he regretted not investing in both Google and Apple. I think something like this is happening again, especially as there are lulls that the mainstream public perceives but enthusiasts don't. To maintain the hyperbole: if you are not a full believer as a developer, then you are simply out of your mind.
I've tried explaining that one or two AI data center clients for Nvidia dwarfs the entire gaming GPU market, but he just doesn't get it.
> I've tried explaining that one or two AI data center clients for Nvidia dwarfs the entire gaming GPU market, but he just doesn't get it.
I have a feeling that the different judgements come from the fact that your coworker thinks the AI bubble will soon burst - thus, in his judgement, the AI data center sector of Nvidia is insanely overvalued and will collapse. What will "save" Nvidia then will be the gaming GPUs. Thus, in his opinion, this is the sector that matters most for Nvidia, since it will become Nvidia's lifeline when (not "if"! - in your coworker's judgement) things go wrong in AI.
You, on the other hand, believe AI data centers are here to stay (which is a bold assumption: it could happen that AI will move more to the edge), and that no big competition will arise for Nvidia in "big AI ASICs" (another bold assumption). Your judgment is based on these two strong assumptions about the future, while your coworker's is based on different (possibly similarly bold) assumptions.
Currently. :-)
You're silly if you think otherwise.
Before the current AI hype, except for some rather specialized applications, people had rather little use for GPU acceleration (GPGPU) in data centers.
One or two large datacenter contracts will make more money than all consumer hardware for companies like Nvidia. The margins are completely different.
The market is so tiny that their capex investments into AI stuff would leave them with massive debt that the gaming revenue couldn't support, and they would have to go through bankruptcy.
The scale of the orders that datacenters put in is insane. It's literally 1000:1 vs consumer stuff.
While building a single server I could easily handle $2-3 million worth of hardware. We'd roll out 50 of those in a week. That was just my shift, and we had several.
Gamers and consumers simply don't understand the scale that datacenters work at.
EDIT: a brief search says that last year Apple sold 300M iOS devices vs 20M macOS devices.
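The scale claim above can be sketched as back-of-the-envelope arithmetic. All figures are the thread's own rough numbers (rack value, rollout rate), and the shift count and consumer GPU price are assumptions for illustration, not Nvidia data:

```python
# Rough sketch of the datacenter-vs-consumer scale claim above.
RACK_VALUE_USD = 2_500_000   # "$2-3 million worth of hardware" per server (midpoint)
RACKS_PER_WEEK = 50          # "We'd roll out 50 of those in a week" (one shift)
SHIFTS = 3                   # "that was just my shift and we had several" (assumed)

weekly_hw_value = RACK_VALUE_USD * RACKS_PER_WEEK * SHIFTS

# Compare against a high-end consumer GPU (price assumed, ~$2,000):
CONSUMER_GPU_USD = 2_000
equivalent_consumer_gpus = weekly_hw_value // CONSUMER_GPU_USD

print(f"~${weekly_hw_value / 1e6:.0f}M of hardware per week")
print(f"= roughly {equivalent_consumer_gpus:,} consumer GPUs per week")
```

Under these assumed numbers, a single facility moves hardware worth hundreds of thousands of consumer GPUs every week, which is the "1000:1" intuition in rough numeric form.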
Did you know that Nvidia has a gaming cloud running which might become the largest in the world at some point?
In 10-20 years, Nvidia might make more revenue from gaming cloud than they do today with gaming HW.
And that buys a lot of loyalty. Which translates to productivity.
B2C is a hellish headache with marginal returns if you are not B2C-first, and the amount of investment needed to be B2C-competent just isn't worth it when there are alternative options to invest in.
> An LLM competitor service (Hey, how about you guys make your own chips?)
Already exists: AI Foundry.
> They are already releasing an AI PC
It's just an OEM rehash
> Their own self driving cars
Not worth the headache, and they would also lose customers like Google or Amazon due to competitive pressure
------
Cannot reply: releasing their own "Nvidia Car" means they will lose their existing automotive partners, because those partners will not spend on a competitor. Same reason Walmart stipulates EVERY tech vendor must decouple from AWS when selling to them.
I'm curious to know more about this if you (or anyone else) can elaborate on it.
What constitutes a tech vendor? Are you talking about Walmart buying PCs from Dell, from buying/renting a SaaS from someone, from IT-consulting coming in to do a one-time service for them (even if that service takes years)?
You're not talking about stuff like "Apple wants to sell iPhones to Walmart customers", I assume - yes?
WRT your edit: The answer to all of this is that it's very hard and requires a huge amount of investment to produce good vertical solutions in each of these spaces. You cannot build a good AI phone without first building a good phone. You cannot build a self-driving car without starting with a good car, etc. For robots, I'll point you to someone using Nvidia chips: the Matic is a complete ground-up rethink of how robot vacuums should work. It's taken them 7 years to get to early adopter phase.
More like you cannot build a self-driving car without starting with a good phone. See Huawei.
Agreed, and for all the "price crash" I still can't just whip out my credit card and purchase an hour or two on an H100/B100.
It's still multi-year contracts and "Contact Sales".
It's a low margin business and would hurt the balance sheet more than the completely irrelevant revenue from a project like that.
I've been investing in semi for decades and what strikes me about this recent cycle is that so many don't seem to understand that semi is a highly cyclical business that is prone to commoditization waves and inventory/capacity overbuild.
And speaking as a trader, instead of reinforcing your firmly held base case, I'd strongly consider painting out the bear cases. Look at the roadmaps of the hyperscalers that are designing their own chips for internal use, etc. And never use the word faith when it comes to markets.
You could easily see NVIDIA's margins get chopped down and see the multiple re-rate lower from here. Actually, I'd argue the name is well on its way down this path already.
It's almost guaranteed to happen sooner or later. Semi down cycles are usually brutal for semi equities.
That's not to say it isn't a great company. It's certainly not a Buffett name though.
The cyclical stuff was the argument made for semis during the 2010s when no one gave a shit about semis really. I think the game changed, but again, I do operate on faith, or in investor terms, conviction. The main evidence for why the game has changed to me (well, other than AI being the most incredible piece of tech we ever built) is mostly that there are companies that have no business making chip hardware now interested in making chip hardware. That's not usually part of the cycle.
One is often unrealistic and the latter is a lot more common. And one really should consider the latter in long-term investments.
Anyway, maybe Marvell should focus more on the consumer side, since hobbyists seem to be building crazy AI rigs and likely need drives, at the very least, for models. It sort of seems like hobbyists are devouring any worthy GPU that gets produced.
Look at it this way: if you have an OS/SW stack for all the industries you mention, then who is your competitor? Not the participants in those industries. Nvidia can partner with any automotive company but won't compete with any of them as long as they don't build cars. But imagine the potential of every self-driving car being built using Nvidia AI.
Think about the potential of every robot built using Nvidia AI.
Think about the potential of any AI service using Nvidia AI.
See, Nvidia isn't directly competing in the end-user market but instead focuses on B2B. Nvidia can also create many different revenue streams from one customer.
For example, an automotive customer:
- Nvidia HW in the car for AI
- Nvidia data center on-prem/cloud for DriveSim
- Nvidia Omniverse for car design and manufacturing simulation
- Nvidia Isaac for robotics/logistics in the manufacturing plant
- Nvidia Cosmos + GR00T for robots inside the plant
- Nvidia edge devices inside any robot in the plant
- Nvidia NeMo data center on-prem/cloud for AI models / LLMs for internal use
And what will be the advantage? Nvidia can make it more and more seamless to operate across all Nvidia solutions. For example, you can do an update to your robots in Cosmos, simulate it in Omniverse, and with one click update your Nvidia-driven real robots. The alternative is having three solutions from three different vendors with no interface between them.
People have no idea what Nvidia is actually creating. Nvidia has more SW engineers than HW engineers, and even Nvidia employees call Nvidia an AI SW company. They publish so many libs and lots of other SW stuff that it's sometimes hard to keep up. Just look at all the RTX goodies for gamers which Nvidia is developing. And they are all free - well, except that you need Nvidia HW for them. Nvidia will apply the same model to ALL industries in the world. And here they discuss CSPs being an issue for Nvidia while Jensen focuses on building a Mega Corp whose potential TAM is in every industry in the world :)
Gaming is only 7% of Nvidia's revenue.
They are losing their biggest customers to custom in-house silicon, and smaller orders will have to compete with a market being flooded by superfluous hardware from companies that went bust, either because the AI bubble shrank or because they weren't able to compete with the big fish.
That's not a trend yet. We're about to enter an era where most media is generated. Demand is only going to go up, and margins may not matter if volume goes up.
> The open question is long-term (>6yrs) durability. Hyperscalers (Google, Microsoft, Amazon, and Meta) are aggressively consolidating AI demand to become the dominant consumers of AI accelerators, while developing competitive, highly credible chip efforts.
Hyperscalers aren't the only players building large GPU farms. There are large foundation model companies doing it too, and there are also new clouds that offer compute outside of the hyperscaler offerings (CoreWeave, Lambda, and dozens of others). Granted, these may be a drop in the bucket and hyperscalers may still win this trend.
But to your point, Disney is using GenAI in their new live action Moana film. Presumably that'll do lots of sales.
> Disney is using genAI in their new live action Moana film
If this is your bar then sure, but I think the interesting question is when/if we cross into a regime where mostly self-managed genAI is in true competition with the media you consume day to day - not something that's hundreds of thousands of person-hours being enhanced by genAI. I don't think there's any chance we see a total collapse of demand for this stuff, but I think the jury is still out on how valuable it truly is.
I'm sure it's non zero and I expect demand will continue to rise, but it may plateau or slow sooner than people think, I just don't think we can clearly say yet.
At some point one of these Nvidia doomers will be right but there is a long line of them who failed miserably.
The article explains that Nvidia's biggest customers (50% of datacenter revenue) are switching to their own hardware.
Google runs AI / HPC workloads on their own hardware and has been doing that for more than a decade. Google Gemini was trained on TPUs developed in house. It does not run on Nvidia hardware.
https://www.tomshardware.com/tech-industry/artificial-intell...
https://finance.yahoo.com/news/apple-might-ai-game-1-1951003...
Before that, in a Wired article from 10 years ago about Siri and AI, one of the Apple higher-ups was quoted bragging about having one of the baddest GPU farms around (paraphrasing).
But their internal workloads and their frontier model (Gemini) runs on TPUs.
How does Google pay for TPUs internally? By Google Search and Google Cloud of course. Google Search uses TPUs, Google Cloud however has way more non-TPUs instances.
What people forget: nobody wants to switch from a CUDA dependency to a SW dependency on Google/AWS/Azure. CUDA at least allows me to use it in consumer HW, in pro HW, in the cloud, AND in an on-prem data center.
I'm really looking forward to Fortune 500 companies sending all their internal company data to Google to structure it to train custom AI models. Yeah, that will never happen. What happens instead is that Fortune 500 companies will build up AI expertise to build their own custom AI models, and they will think hard about whether they want the AI training compute internally or in a cloud. Nvidia has a huge business of building data centers on-premises which people totally overlook. NO CSP will ever compete there because it's against their primary business model. A Reliance India contract from 2023 alone is a delivery of 2 million GPUs over a few years. That's probably more than Nvidia's total revenue last year, and that is one large corp in India only.
You can argue they won’t, but the “enterprises won’t put sensitive data in Cloud” ship sailed years ago.
Their internal company data is already on cloud servers. They’re not going to waste money on doing it all in house. The executives will buy the AI service from Google/Azure/AWS, where the company data is already hosted, avoid the costs and risk of doing it in house, and collect their bonus.
The article seems focused more on stock price and where to bet, than the market for GPUs or generic hardware vendors.
There's a kernel of truth in it, but if I was McDonalds I'd care a lot more about what KFC is doing than the market trends of cast iron pans.
90% of Nvidia's revenue is from datacenters.
If the datacenters stop buying Nvidia's products, and use their own hardware instead, then Nvidia loses 90% of its revenue.
These 90% will be a flash in the pan, the same way COVID revenue was for mask sellers. Sure, it feels bad from Nvidia's perspective, but we can also understand that the AI boom would not have kept Nvidia skyrocketing infinitely anyway.
It's a huge shift, but not something Nvidia can act on, nor something they had to care about 5 years ago, nor a primary concern 5 years from now. On the long scale, it's almost as if the big fireworks that pushed Nvidia to its current valuation just disappear and they're back to selling GPUs to OEMs and consumers primarily.
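The revenue-mix argument in this exchange can be made concrete with the thread's own round numbers (90% datacenter, 7% gaming); this is an illustrative normalization, not Nvidia's actual financials:

```python
# Sketch of the revenue-mix argument, using the thread's round figures.
total = 100.0                 # normalize total revenue to 100
datacenter = 0.90 * total     # "90% of Nvidia's revenue is from datacenters"
gaming = 0.07 * total         # "Gaming is only 7% of Nvidia's revenue"
other = total - datacenter - gaming

# If datacenter demand evaporated entirely:
remaining = total - datacenter
shrink_factor = total / remaining

print(f"remaining revenue: {remaining:.0f}% of today's")
print(f"shrink factor: {shrink_factor:.0f}x")
```

A 10x revenue contraction is the scenario both sides are implicitly arguing about: one side treats it as a bubble bursting, the other as a risk that never materializes.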
It's kind of like how people may make fun of YouTube for seeing TikTok as a competitor. But you dig deeper and realize why they decided to get into short-form content.
Nope; if hypothetically 100% left Nvidia, whether for their own hardware or to stop using GPUs at all, it'd be easy to say Nvidia would be the last one left in the market
Gaming is not entirely Nvidia's turf either. AMD has 75% of the console market (Xbox and Playstation) and 30% of the PC market.
Looking at these numbers[0] for the console market:
> Sony has sold 61.94M units of PS5 and Microsoft has sold 30.14M units of Xbox Series X|S and Nintendo Switch sold 143.49M units.
Nintendo (Nvidia chip) sold 1.5x the units of Sony and Microsoft combined. Given the Switch's success the numbers look reliable to me, and the Switch 2 is of course also Nvidia-based, so I wouldn't bet against it selling well.
[0] https://hookedontech.com/switch-vs-ps5-vs-series-x-console-g...
https://www.ign.com/articles/ps5-has-best-holiday-ever-overa...
In terms of revenue, AMD has the vast majority of the console market.
Yes the Switch sold more units, but that is a smaller SOC with much lower revenue.
Revenue and profit is what matters, not the number of units sold.
Check the price of the PS5 Pro vs Switch.
AMD has dominated the console market since Xbox One / PS4.
Nvidia could be earning $30 - $50 per Switch, while AMD earns $120 - $200 per PS5/PS5 Pro.
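The per-chip claim above can be turned into a rough revenue comparison. Unit counts come from the thread; the per-chip revenue figures are the commenter's own guesses ($30-$50 per Switch, $120-$200 per PS5-class console), taken here at their midpoints:

```python
# Rough per-unit revenue comparison for the console SoC argument above.
# Unit counts from the thread; per-chip figures are the commenter's guesses.
switch_units = 143_490_000    # Nintendo Switch (Nvidia Tegra)
ps5_units = 61_940_000        # PS5 (AMD SoC)
xbox_units = 30_140_000       # Xbox Series X|S (AMD SoC)

nvidia_per_chip = 40          # midpoint of the guessed $30-$50
amd_per_chip = 160            # midpoint of the guessed $120-$200

nvidia_rev = switch_units * nvidia_per_chip
amd_rev = (ps5_units + xbox_units) * amd_per_chip

print(f"Nvidia (Switch):  ~${nvidia_rev / 1e9:.1f}B")
print(f"AMD (PS5 + Xbox): ~${amd_rev / 1e9:.1f}B")
```

Even with the Switch outselling the AMD consoles 1.5:1 in units, the assumed per-chip figures put AMD at roughly 2.5x Nvidia's console revenue, which is the "revenue, not units" point being made.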
Just the hypothetical: "if X% of your customers leave, but don't go to competitors, won't you keep your relative market position?"
NVIDIA is very strategic about building product to avoid commodification -- both by building out network effects where software is tied to their proprietary sdk libraries, and by always focusing on being at the cutting edge of product.
Both these things can be true: a large company should try to build their own hardware to reduce supplier risk, and a large company should be open to suppliers that have better product that delivers business value.
So far, these large companies' internal hardware has been useful internally but never a complete replacement for NVIDIA, which keeps staying at the cutting edge of new capabilities.
NVIDIA already faced existential risk when Intel was commodifying all the dedicated motherboard components in the late 90s, 2000s, (like sound cards etc), so they're hyper-aware of this.
don't know much about the rest tho
Google uses their own hardware for AI/HPC. Nvidia hardware is offered on Google Cloud Platform to external customers who demand it.
The AI datacenter explosion has only occurred within the past two years. We are talking about plans that will take years to implement. The hyperscalers are trying to cut out Nvidia as soon as possible.
The company that can allow collaboration up the value chain, as only Google can in this space right now, is going to win. The author went over similar pushes by Meta, AWS, and Microsoft.
Now that their eyes are on the prize, and the prize is so staggeringly big, I'm convinced of the threat to their moat.
Fair Disclosure: I am very neutral when it comes to FLOPS/W/$ and the generality of those FLOPS. Given inference and training, the advantage is slipping.
Not ideal for them but hardly a death blow
I figure that regardless of tariffs, competition, and other fluctuations, the demand for computing power will endlessly trend upwards, and as much as we can produce will be consumed.
Good luck to you.
Personally, I have found NVIDIA to be one of the most hyped stocks I have ever seen and it feels weird to take a position that it is under-hyped. That said, people aren't capable of beating the market consistently so I would never invest based on this, or any intuition or information I had.
Others talk about chips when Nvidia thought about interconnects 8 years ago. Today, competitors try to catch up on this while Nvidia talks about One Giant GPU.
The next step will be scale up and then scale out.
Nvidia is always ahead because what the article fails to see is that where CSPs are today is where Nvidia was in the last decade. Nvidia has a working ecosystem for everyone which they can now fine tune with actual customers.
alephnerd•9mo ago
This is what will help protect Nvidia now that DC and cluster spend is cooling.
They own the ecosystem thanks to CUDA, Infiniband, NGC, NVLink, and other key tools. Now they should add additional applications (the AI Foundry is a good way to do that), or forays into adjacent spaces like white-labeled cluster management.
Working on building custom designs and consulting on custom GPU projects would be helpful as well by helping monetize their existing design practice during slower markets.
Of course, Nvidia is starting to do both: Nvidia AI Foundry for the former, and for the latter they are starting a GPU architecture and design consulting practice, as announced at GTC and under McKinney
scrlk•9mo ago
Apart from Nintendo, who has successfully partnered with Nvidia? Apple, Microsoft and Sony have all been burnt in the past.
alephnerd•9mo ago
Nvidia has started formalizing that last year [0], but it's a new muscle for them.
[0] - https://www.reuters.com/technology/nvidia-chases-30-billion-...
voidspark•9mo ago
No they do not. The article explains that Google, Amazon, Microsoft, and Meta are developing their own hardware and software for AI/HPC.
Google Gemini was not trained using CUDA or Nvidia hardware.
liuliu•9mo ago
Chinese CSPs are the only ones that can develop their own hardware / software for AI / HPC.
voidspark•9mo ago
https://www.reuters.com/technology/artificial-intelligence/m...
https://azure.microsoft.com/en-us/blog/azure-maia-for-the-er...
https://aws.amazon.com/ai/machine-learning/trainium/
liuliu•9mo ago
Meta will not be able to produce a chip that can run GenAI workload in the next 2 years.
Microsoft is doing a side-quest, and they haven't even proved themselves with their FPGA adventure and ARM server adventure.
Amazon is legit - they have done well on ARM servers - but Trainium is TBD, and how much they will pull back in a recession, given Jassy is a numbers guy, will be a question mark.
No need to discuss, we can just see this in 2 years, everything will be crystal clear.
solidasparagus•9mo ago
I suspect the closed nature of the ecosystem will preclude them from winning as much as they could.
bee_rider•9mo ago
There’s maybe some wiggle room, in that these AI distributed systems might not (?) look like HPC/scientific-computing systems - maybe they don’t need Infiniband-style low latency. So these other funky networks might work.
But like, Nvidia has the good nodes and the good network. That’s a rough combination to compete against.