Stock number go up
The public-company valuations of quickly depreciating chip hoarders selling expensive fever dreams to enterprises are gonna pop, though.
Spending 3-7 USD for 20 cents in return, and 95% project failure rates quarter after quarter, aren't gonna go unnoticed on Wall St.
Career finance professionals are calling it a bubble, not because of some newfound deep technological expertise, but because public cos like FAANG et al. are engaging in typical bubble-like behavior: shifting capex off their balance sheets into SPACs co-financed by private equity.
This is not a consumer-debt bubble; it's gonna be a private-market bubble.
But as all bubbles go, someone's gonna be left holding the bag, with society covering the fallout.
It'll be a rate hike, it'll be some Fortune X00 enterprises cutting their no-ROI AI bleed, or it'll be an AI fanboy like Oracle over-leveraging itself, watching its credit default swaps go "Boom!", and getting cut off from financing.
...and again, this is assuming AI capability stops growing exponentially in the widest possible sense (today, the 50%-task-completion time horizon doubles roughly every 7 months).
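To make the compounding concrete, here's a minimal Python sketch of what a fixed ~7-month doubling time implies; the 1-hour starting horizon and the specific month marks are assumptions purely for illustration:

    # Rough extrapolation of the "50%-task-completion time horizon" claim:
    # if the horizon doubles every ~7 months, horizon(t) = h0 * 2**(t / 7).
    # The starting horizon h0 = 1 hour is an assumed, illustrative value.
    def horizon_hours(months_from_now, h0_hours=1.0, doubling_months=7.0):
        return h0_hours * 2 ** (months_from_now / doubling_months)

    for months in (0, 12, 24, 36):
        print(f"{months:2d} months: ~{horizon_hours(months):.1f} h")
    # -> 1.0 h, ~3.3 h, ~10.8 h, ~35.3 h, if the trend were to hold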
The other thing is that enterprise sales are ridiculously slow. If Intel wants corporate customers to buy these things, it has to announce them roughly a year ahead, so those customers can buy them next year when they upgrade hardware.
Samples of new products also have to go out to third-party developers and reviewers ahead of time so that third-party support is ready on launch day. And that stuff is going to leak to competitors anyway, so there's little point in not making it public.
First, they're not even an also-ran in the AI compute space. Nobody is looking to them for roadmap ideas. Intel has no credibility here, and no customer is going to go to Nvidia and demand that they match Intel.
Second, what exactly would the competitors react to? The only concrete technical detail is that the cards will hopefully launch in 2027 and have 160GB of memory.
The cost of doing this is really low, and the value of getting into the pipeline of people planning to buy data-center GPUs in 2027, early enough to matter, is high.
Not release anything?
There'll be a decent market for comparatively "lower-power / good-enough" local AI. Check out Alex Ziskind's analysis of the B50 Pro [0]. Intel has an entire line-up of cheap GPUs that perform admirably for local use cases.
This guy is building a rack on B580s, and a driver update alone pushed his rig from 30 t/s to 90 t/s [1] (see the timing sketch after the links).
0: https://www.youtube.com/watch?v=KBbJy-jhsAA
1: https://old.reddit.com/r/LocalLLaMA/comments/1o1k5rc/new_int...
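Figures like that 30 t/s to 90 t/s jump come from simple wall-clock measurements. Here's a minimal sketch of such a harness, where generate() is a hypothetical stand-in for whatever streaming call your local runtime exposes (llama.cpp, IPEX-LLM, vLLM, ...); only the timing logic is the point:

    import time

    def tokens_per_second(generate, prompt, max_tokens=256):
        # `generate` is a hypothetical streaming callable, not a specific library API.
        start = time.perf_counter()
        n_tokens = 0
        for _token in generate(prompt, max_tokens=max_tokens):
            n_tokens += 1
        return n_tokens / (time.perf_counter() - start)

    # e.g. print(tokens_per_second(my_model.stream, "Explain PCIe bifurcation"))

Run the same prompt before and after a driver update and the t/s delta falls straight out.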
Yeah, even RTXs are limited in this space due to a lack of tensor cores. It's a race to integrate more cores and faster memory buses. My suspicion is this is more of a me-too product announcement so they can play partner to their business opportunities and keep greasing the wheels.
Semiconductors are like container ships: extremely slow and hard to steer. You plan today the products you'll release in 2030.
Local inference is an interesting proposition because, in real life today, the Nvidia H-series and AMD MI300 clusters operated by OpenAI and Anthropic run in batching mode, which slows users down as they're forced to wait for enough similarly sized queries to arrive. For local inference, no such waiting is required, so you could potentially see lower latency.
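To illustrate the trade-off, here's a toy Python sketch of why server-side batching costs individual users latency; every number (arrival gap, batch size, timeout, forward-pass time) is invented for illustration, not measured:

    # Toy comparison of batched vs. local serving latency (all values assumed).
    ARRIVAL_GAP_MS = 50      # assumed gap between incoming requests
    BATCH_SIZE = 8           # server waits for 8 requests...
    BATCH_TIMEOUT_MS = 200   # ...or until this timeout fires
    FORWARD_PASS_MS = 100    # assumed cost of one forward pass

    def batched_latency(position_in_batch):
        # A request waits for the batch to fill (capped by the timeout), then runs.
        fill_wait = min((BATCH_SIZE - 1 - position_in_batch) * ARRIVAL_GAP_MS,
                        BATCH_TIMEOUT_MS)
        return fill_wait + FORWARD_PASS_MS

    def local_latency():
        # Locally there's no co-batching queue, just the forward pass.
        return FORWARD_PASS_MS

    print(batched_latency(0))  # 300 ms: first arrival waits out the timeout
    print(batched_latency(7))  # 100 ms: last arrival waits for nothing
    print(local_latency())     # 100 ms

The server gets more tokens per GPU-second out of the batch; the individual user pays the queueing wait.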
How is this better?
Not, as I assume you mean, vector graphics like SVG and renderers like Skia.
https://www.linkedin.com/posts/storagereview_storagereview-a...
I assume that hasn't changed.