It could be interesting to see how much they try to loss-lead to get market share at the low end.
Must be the most moronic decision ever.
And it's not like 20/20 hindsight either, because every hardware enthusiast at the time knew Intel was having troubles and worried that TSMC (and Samsung) were going to be the only fabs producing leading-edge lithographies.
These nm values are really bullshit anyway, but the tech node that was supposed to be Intel’s 7nm, which ended up being called “Intel 4” (because they branded some 10nm tech as Intel 7), only came out in like 2023. Given that GlobalFoundries was always behind Intel, suddenly leapfrogging them by 2-3 years would be quite a feat.
> These nm values are really bullshit anyway, but the tech node that was supposed to be Intel’s 7nm, which ended up being called “Intel 4” (because they branded some 10nm tech as Intel 7), only came out in like 2023. Given that GlobalFoundries was always behind Intel, suddenly leapfrogging them by 2-3 years would be quite a feat.
This is a very weak argument. Intel was ahead of everyone; now everyone is ahead of Intel. Remember TSMC's blunder processes like 20nm? How they turned around after that? Or how GloFo always had mediocre processes but finally hit the nail on the head with their 14/12nm? The fab business has always had companies leapfrogging each other; it turns out the worst sin is not trying. GloFo's greedy investors chose to bury the business for their short-term profits.
First, nobody knew if even TSMC was going to succeed at bringing a 7nm process to market. 02018 was maybe the height of the "Moore's Law is over" belief. There was a lot of debate about whether planar semiconductor scaling had finally reached the limit of practical feasibility, although clearly it was still two orders of magnitude from the single-atom physical limit, which had been reached by Xie's lab in 02002. Like Intel, SMIC didn't reach 7nm until 02023 (with the HiSilicon processor for Huawei's Mate60 cellphone) despite having the full backing of the world's most technically productive country, and when they did, it was a shocking surprise in international relations with the US.
Second, even if GF had brought 7nm to market, there was no guarantee it would be profitable. The most profitable companies in a market are not always the most technically advanced; often the pioneers die with arrows in their backs. If you can make 7nm chips in volume, but the price for them is so high that almost everyone sticks with 12nm processes (maybe from your competitors), you can still lose money on the R&D. Moore's Law as originally stated in "Cramming" was about how the minimum price per transistor kept moving to smaller and smaller transistors, and historically that has been an immensely strong impetus to move to smaller processes, but it's clearly weakened in recent years, with many successful semiconductor products like high-end FPGAs still shipping on very old process nodes. (Leaving aside analog, which is a huge market that doesn't benefit from smaller feature size.)
Third, we don't know what the situation inside GF was, but maybe GF's CEO did. Maybe they'd just lost all their most important talent to TSMC or Samsung, so their 7nm project was doomed. Maybe their management politics were internally dysfunctional in a way that blocked progress on 7nm, even if it hadn't been canceled. There's no guarantee that GF would have been successful at mass production of 7nm chips even in a technical sense, no matter how much money they spent on it.
In the end it seems like GF lost the bet pretty badly. But that doesn't necessarily imply that it was the wrong bet. Just, probably.
They had previously signed a contract with IBM to produce silicon at these more advanced nodes, which they could not honor, and there was legal action between them.
https://www.anandtech.com/show/13277/globalfoundries-stops-a...
https://newsroom.ibm.com/2025-01-02-GlobalFoundries-and-IBM-...
In any case, I thought at the time, and still think, that GF was probably correct that they would not be able to compete at the leading edge and make money at it. Remember, AMD and IBM separated their fabs out for a reason, and not having the scale necessary to compete was probably a big part of that. AMD has succeeded on TSMC, and IBM seems to be doing OK on Samsung. Most chips are not at the leading edge and don't need to be, so most fabs don't need to be leading edge to serve customers. There are all kinds of applications where a more mature, better-characterized process is better, whether for harsh environments, mixed-signal applications, or just low-volume parts where $20M of tooling cost is not worth it.
Do you have any evidence, besides GF's own PR/IR department, that the process ever actually worked in volume? Because from my point of view, how they ended things looks exactly like how I would spin away a multibillion-dollar investment in a failed process.
Name a company making chips with EUV that is not TSMC, Samsung, or Intel.
https://www.eetimes.com/samsung-globalfoundries-prep-14nm-pr...
"Samsung expects to be in production late this year with a 14 nm FinFET process it has developed. GlobalFoundries has licensed the process and will have it in production early next year."
GlobalFoundries licensed 14nm from Samsung. How do you know GlobalFoundries is capable of 7nm?
By that logic, you can't do 14nm either, but that shouldn't stop you from licensing 14nm and producing millions of wafers. I'm waiting for news on your new fab.
My guess is that the guys in Abu Dhabi did not want to make the investments needed to bring 7nm into production. They lost a huge opportunity because of that. At the time, it probably looked like the right financial decision to them, even though practically everyone affected downstream thought it was myopic.
Pursuing 7nm would have likely bankrupted GloFo.
Keep in mind that your iPhone has only a few chips built on <10nm technology. The rest, even the memory, uses much larger ground rules.
The automobile industry showed us that there is demand for older nodes.
GloFo is leading edge for anyone without EUV.
Is this the very beginning of a market consolidation?
For most places that kind of high-cost work doesn't make much sense when their product isn't "a CPU", and they also typically have to buy other IP anyway like memory controllers or I/O blocks -- so buying a CPU core isn't that strange in the grand scheme.
https://en.wikipedia.org/wiki/Delay_slot
I'm surprised by how many other architectures use it.
Stanford MIPS was extremely influential, which was undoubtedly a major factor in many RISC architectures copying the delay-slot feature, including SPARC, the PA-RISC, and the i860. But the delay slot really only simplifies a particular narrow range of microarchitectures, those with almost exactly the same pipeline structure as the original. If you want to lengthen the pipeline, either you have to add the interlocks back in, or you have to add extra delay slots, breaking binary compatibility. So delay slots fell out of favor fairly quickly in the 80s. Maybe they were never a good tradeoff.
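To make the tradeoff concrete, here's a tiny toy interpreter (my own sketch of a hypothetical ISA, not MIPS or any real machine) of a single branch delay slot: the instruction sitting right after the branch always executes, and only then does control transfer to the branch target.

    # Toy model of a one-instruction branch delay slot (hypothetical ISA).
    # 'add n' bumps an accumulator; 'br t' branches to index t, but only
    # after the following (delay-slot) instruction has executed.
    def run(program, pc=0, steps=100):
        acc = 0
        delayed_target = None         # branch target latched while the slot runs
        for _ in range(steps):
            if pc >= len(program):
                break
            op, arg = program[pc]
            if op == "add":
                acc += arg
            elif op == "br":
                delayed_target = arg  # takes effect only after the next instruction
            pc += 1
            if delayed_target is not None and op != "br":
                pc = delayed_target   # the delay slot has now run; redirect fetch
                delayed_target = None
        return acc

    # The 'add 1' in the delay slot runs even though the branch is taken,
    # while the 'add 1000' after it is skipped: the result is 101, not 100 or 1101.
    prog = [("br", 3), ("add", 1), ("add", 1000), ("add", 100)]
    assert run(prog) == 101

That "the next instruction always runs anyway" contract is exactly what falls out for free of the original pipeline shape, and exactly what stops being free once the pipeline changes.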
One of the main things pushing people to RISC in the 80s was virtual memory, specifically, the necessity of being able to restart a faulted instruction after a page fault. (See Mashey's masterful explanation of why this doomed the VAX in https://yarchive.net/comp/vax.html.) RISC architectures generally didn't have multiple memory accesses or multiple writes per instruction (ARM being a notable exception), so all the information you needed to restart the failed instruction successfully was in the saved program counter.
But delay slots pose a problem here! Suppose the faulting instruction is the delay-slot instruction following a branch. The next instruction to execute after resuming that one could either be the instruction that was branched to, or the instruction at the address after the delay-slot instruction, depending on whether the branch was taken or not. That means you need to either take the fault before the branch, or the fault handler needs to save at least the branch-taken bit. I've never programmed a page-fault handler for MIPS, the SPARC, PA-RISC, or the i860, so I don't know how they handle this, but it seems like it implies extra implementation complexity of precisely the kind Hennessy was trying to weasel out of.
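To spell out the bookkeeping (a minimal sketch of my own, not how MIPS, SPARC, PA-RISC, or the i860 actually handle it): the saved PC of the faulting instruction alone isn't enough, because the continuation depends on the branch outcome, so whatever state the hardware exposes has to include an "in a delay slot" indication plus either the taken bit or enough context to recompute it.

    # Hypothetical resume logic after a page fault, word-addressed for simplicity.
    # The faulting instruction is re-executed first; this computes where to go next.
    def resume_pc(fault_pc, in_delay_slot, branch_taken, branch_target):
        if not in_delay_slot:
            return fault_pc + 1       # ordinary case: just fall through
        # Delay-slot case: control goes to the branch target if the branch was
        # taken, otherwise to the instruction after the slot.
        return branch_target if branch_taken else fault_pc + 1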
The WP page also mentions that MIPS had load delay slots, where the datum you loaded wasn't available in the very next instruction. I'm reminded that the Tera MTA actually had a variable number of load delay slots, specified in a field in the load instruction, to let the compiler give the memory reference as many instructions as it could to come back from RAM over the packet-switching network. (The CPU would then stall your thread if the load took longer than the allotted number of instructions, but the idea was that a compiler that prefetched enough stuff into your thread's huge register set could make such stalls very rare.)
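As a back-of-the-envelope illustration (my own toy model, not the actual MTA semantics or numbers): if the load's field says how many independent instructions the compiler managed to schedule before the first use, the thread only stalls for whatever part of the memory round trip those instructions didn't cover.

    # Toy latency model: stall at first use = uncovered part of the memory latency.
    def stall_cycles(mem_latency, independent_insns_scheduled):
        return max(0, mem_latency - independent_insns_scheduled)

    assert stall_cycles(mem_latency=40, independent_insns_scheduled=0) == 40   # naive schedule
    assert stall_cycles(mem_latency=40, independent_insns_scheduled=40) == 0   # latency fully hidden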
https://www.jwhitham.org/2016/02/risc-instruction-sets-i-hav...
alephnerd•4h ago
MIPS has also hitched its horse to RISC-V now, and I am seeing a critical mass of talent and capital forming in that space.
kragen•3h ago
AFAIK MIPS still hasn't shipped a high-end processor competitive with the XuanTie 910 that article is about. And I think the billions of RISC-V microcontroller cores that have shipped already (10 billion as of 02022 according to https://wccftech.com/x86-arm-rival-risc-v-architecture-ships...) are also mostly not from MIPS.
kstrauser•58m ago
Frankly, something about that leading 0 makes me grit my teeth and stop reading. I can't explain why it affects me like that. Perhaps I'm the only one it affects that way, although threads like this seem to pop up whenever they post, so I don't think so. If HN had a mute button, I'd probably use it just because it annoys me that much.
MalbertKerman•1h ago
I really don't.
acdha•1h ago
https://longnow.org/ideas/long-now-years-five-digit-dates-an...
dcminter•1h ago
At least the Long Now Foundation stuff comes with that context built-in.
https://longnow.org/
hulitu•1h ago
The last high-end MIPS was in the SGI days, 30 years ago.
Findecanor•39m ago
I think the C910 looks better on paper than it performs in practice. I hope that isn't the case for MIPS.
Findecanor•28m ago
That is a frustrating pattern in the RISC-V world: many companies boast of having x-wide cores with y SPECint numbers, but nothing has been independently verified.
ajb•1h ago
Lots of companies had their own MIPS implementation but still might use an implementation from MIPS-the-company, because even if you have your own team, you probably don't want to implement every core size you might need. But then for some reason lots of them switched to ARM within a few years (in some cases getting an architecture licence and keeping their CPU team).
It seems like RV has a more stable structure, as the foundation doesn't licence cores, so even if one or two of the implementors die it won't necessarily reflect on the viability of the ecosystem.