Edit: It kind of looks like there's no silicon anywhere near production yet. Probably vaporware.
Also, the 3D graphic of their chip on a circuit board is missing some obvious support pieces, so it's clearly not from a CAD model.
Lots of chip startups start as this kind of vaporware, but very few of them obfuscate their chip timelines and anticipated release dates this much. Five years to tape out is a bit long, but not unreasonable.
The latest news post on their website seems indicative enough for me, give or take a quarter or two:
> VSORA is now preparing for full-scale deployment, with development boards, reference designs, and servers expected in early 2026.
https://vsora.com/vsora-announces-tape-out-of-game-changing-...
They seem to have partners too, who describe working together with a Taiwanese company.
You never know; they could have gotten others to fall for their illusions too, it's not unheard of. But considering how long something like this takes to bring to market, the claim that dev boards are months rather than years away at least gives me reason to wait until then before judging them too harshly.
So far, they just talk about it.
The bottleneck for inference right now isn't just raw FLOPS or even memory bandwidth—it's the compiler stack. The graveyard of AI hardware startups is filled with chips that beat NVIDIA on specs but couldn't run a standard PyTorch graph without segfaulting or requiring six months of manual kernel tuning.
Until I see a dev board and a working graph compiler that accepts ONNX out of the box, this is just a very expensive CGI render.
That seems like not much compared to the hundreds of billions of dollars US companies are currently investing in their AI stacks? OpenAI pays thousands of engineers and researchers full time.
Indeed, no mention of PyTorch on their website... honestly, it looks a bit scammy as well.
The outcome is that most custom chips end up not being sold on the open market; instead their manufacturers run them themselves and sell LLM-as-a-service. E.g. Cerebras, SambaNova, and you could count Google's TPUs there too.
The specs look impressive. It is always good to have competition.
They announced tapeout in October with planned dev boards next year. Vaporware is when things don’t appear, not when they are on their way (it takes some time for hardware).
It’s also strategically important for Europe to have its own supply. The current and last US administration have both threatened to limit supply of AI chips to European countries, and China would do the same (as they have shown with Nexperia).
And of course you need the software stack with it. They will have thought of that.
https://vsora.com/vsora-announces-tape-out-of-game-changing-...
Multiple independent sources confirmed the tape-out: EE Times: https://www.eetimes.eu/vsora-tapes-out-ai-inference-chip-for...
L’Informaticien: https://www.linformaticien.com/magazine/infra/64028-vsora-me...
Solutions Numériques: https://www.solutions-numeriques.com/vsora-franchit-un-cap-a...
There’s also an industrial manufacturing partnership with GUC: https://www.design-reuse.com/news/202529700-vsora-and-guc-pa...
Strategically, having a European AI inference chip matters. The US has already threatened export limits to Europe, and China has shown similar behavior (e.g., Nexperia). Building local supply is important.
Calling this vaporware makes no sense: tape-out + published roadmap = real, not slides.
I agree that the comments here are surprisingly superficial in their complaints. I guess it's the typical bike-shedding: people don't have technical points to nitpick or the experience to judge the actual product, so from their US-based point of view they find something else to hook onto. Even when there are facts like concrete partnerships making it clear this isn't vaporware, they just have to say something.
Where do you see the negativity?
I don't believe labeling healthy skepticism and criticism as negativity, in order to farm artificial sympathy in retaliation, does anyone any good.
Humans have pattern recognition capabilities for a reason, and if a company is triggering that in them, then it's best to express why (probably because they've seen this MO before and got burned) instead of just cheerleading the unknown with fake positivity.
First comment: "Looks expensive, I'm guessing"
Second comment: "Probably vaporware"
6th comment: "They haven't disclosed any release date, Lots of chip startups start as this kind of vaporware" (they did literally just enter fabrication it seems)
10th comment: "So far, they just talk about it."
Maybe it looks different now, 14 hours after the submission was made, but initially, yesterday, most of the comments were unfounded (and poorly researched) criticism.
Since it seems like a French company, I can think of various European customers who would be interested in using their hardware, or even investing in this company. For starters, government, defense industry, IT (including infosec). European defense industry is a goldmine right now, the sky is the limit.
Care to detail? Like I'm sure defense stocks and some arms manufacturing is up, but where I live I don't see the tech jobs market being boosted by defense spending.
It all goes slowww. But it is moving forward. You would need tough screening regardless; defense is very picky about its contractors. But I can give a hint where the money comes from: 3.5% of GDP has to go to defense, and 1.5% to defense-related infrastructure. Many countries were at about 2% before (creeping a bit higher due to the geopolitical changes in early 2022).
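To make the jump concrete, here's a back-of-the-envelope sketch. The 2%, 3.5%, and 1.5% figures are the ones mentioned above; the GDP number is purely illustrative, not any real country's.

```python
# Illustrative only: percentages from the comment above, GDP figure made up.
gdp_bn = 1000.0                        # hypothetical GDP, in billions

old_spend = 0.02 * gdp_bn              # ~2% of GDP before
new_spend = (0.035 + 0.015) * gdp_bn   # 3.5% defense + 1.5% defense infrastructure

extra = new_spend - old_spend          # additional annual spending
ratio = new_spend / old_spend          # growth factor of the budget

print(round(extra, 1))                 # extra billions per year
print(round(ratio, 2))                 # old budget multiplied by this factor
```

Even at these made-up numbers, the target roughly 2.5x's the pre-2022 budget, which is why people expect a lot of money to flow, just slowly.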
Also, think of funding for European tech companies. Governments are getting rid of Microsoft 365 in favor of Nextcloud, and especially OpenCloud (also European) and openDesk. It goes slowly, partly because of regulations and because each country goes with its own local contractor (an expert who speaks the native language).
The victim is going to be our health care system, and general social security system. Because we were able to afford this thanks to NATO, and NATO is now a paper tiger.
We're also not known for our VC culture, but here goes: https://www.cursor.tue.nl/nieuws/2025/november/week-4/surf-z... SURF has financed many FOSS projects in the past.
Desire to invest in the USA has gone down because of Trump, and that money will flow partly to China, partly internally. In the short term, the USA has incredible soft power over the EU, but in the long term, not so much anymore.
Then there's no trickle-down happening.
>Desire to invest in USA has gone down because of Trump
Not true. Nokia just announced the closing of its Infinera Munich HQ and the relocation of those activities to the US.
Talk is cheap, show me the stats.
These kinds of things (cheaper-than-NVIDIA cards that can produce a lot of tokens or run large models cheaply) are absolutely necessary to scale text models economically.
Without things like these (those Euclyd things, those Groq things, etc.), no one will be able to offer up big models at prices where people will actually use them, so the lack of things like this actually cripples training of big models too.
If the price/token graph is right, this would mean 2.5x more tokens for the same money, which presumably means actually using multiple prompts to refine something before producing the output, or otherwise producing really long non-output sequences while preparing the output. This also fits really well with the Chinese progress in LLM RL for maths. I suspect all that stuff is totally general and can be applied to non-maths things too.
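A quick sketch of what that ratio buys you. Only the 2.5x figure comes from the graph; the budget and baseline price below are made-up numbers for illustration.

```python
# Illustrative only: 2.5x price/token ratio from the linked graph;
# budget and baseline price are hypothetical.
budget_usd = 100.0
baseline_price_per_mtok = 2.00                      # made-up $/million tokens
cheaper_price_per_mtok = baseline_price_per_mtok / 2.5

tokens_baseline = budget_usd / baseline_price_per_mtok  # millions of tokens
tokens_cheaper = budget_usd / cheaper_price_per_mtok

print(round(tokens_baseline, 1))                    # Mtok at baseline price
print(round(tokens_cheaper, 1))                     # Mtok at the cheaper price
print(round(tokens_cheaper / tokens_baseline, 2))   # the 2.5x budget in tokens
```

The point is that the extra tokens don't have to show up as more output; they can be spent on drafts, self-critique, and other long non-output sequences at the same per-request cost.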
How do you tell the difference? Wait infinitely long and see if it appears?
If those things are true in ~6 months, then I'll join the crowd here who are overly pessimistic at this moment, but until then, as most of the time, I'll give them benefit of the doubt.
I want to believe: let's see that software stack working effectively.
https://www.opensourceforu.com/2025/11/ainekko-turns-esperan...
I was just trying to explain how "obvious" issues like that happen in the first place, not to excuse it, but to explain the likely background behind it.
> To streamline development and shorten time-to-market, VSORA embraces industry standards: our toolchain is built on LLVM and supports common frameworks like ONNX and PyTorch, minimizing integration effort and customer cost.
Most start-ups innovate on the compute side, whereas the technology needed for state-of-the-art communications is not common, and very low-level: plenty of analog concerns. The domain is dominated by NVIDIA and Broadcom today.
This is why digital start-ups tend to focus on inference. They innovate on the purely digital part, which is compute, and tend to use off-the-shelf IP for communications, which is not a differentiator and is likely behind the leaders.
But in most cases, coupling a computation engine marketed for inference with state-of-the-art communications would (in theory) open the way to training too. It's just that doing both together is a very high barrier. It's more practical to start with compute and, if successful there, use that success to improve the comms part in a second stage. All the more so because everyone expects inference to be the biggest market anyway. So AI start-ups focus on inference first.
It doesn't have to compete on price 1:1. Ever since Trump took office, the Europeans woke up on their dependence on USA who they no longer regard as a reliable partner. This counts for defense industry, but also for critical infrastructure, including IT. The European alternatives are expected to cost something.
Hope they can figure out the software, but what I'm seeing isn't super promising.
Did they generate their website with their own chips or on Nvidia hardware?
From their web page, Euclyd is a "many small cores" accelerator. Building good compilation toolchains that get efficient results out of these is a hard problem; see the many comments on compilers for AI in this thread.
Vsora's approach is much more macroscopic, and differentiated: by this I mean I don't know of anything quite like it. No sea of small cores, but several beefier units. They're programmable, but don't look like a CPU: the HW/SW interface is at a higher level. A very hand-wavy analogy with storage would be block devices vs object storage, maybe. I'm sure more details will surface when real HW arrives.
It sounds nice, but how much is it?