While my PoV is US-centered, I feel that other nations should largely optimize for the same things as much as possible. Many of today's issues stem from too much centralization of commercial/corporatist power rather than from fostering competition. This shouldn't mean the absence of a baseline of reasonable regulation; it's about optimizing toward what is best for the most people.
Now apply that to weapons systems in conflict against an enemy that DOES have modern production that you (no longer) have... it's a recipe for disaster/enslavement/death.
China, though largely hamstrung, is already well ahead of your hypothetical 2005 tech breakpoint.
Beyond all this, it's not even a matter of just being slower, it's a matter of what's even practical... You couldn't viably create a lot of websites that actually exist today on 2005-era technology. The performance and memory headroom just weren't there yet. Not that a lot of things weren't possible... I remember Windows 2000 pretty fondly, and you could do a LOT if you had 4-8x the RAM most people were buying.
How do you maintain this production with a sudden influx of ballistic missiles at the production facility - or a complete naval blockade of all food calories to your country?
If society as a whole reverted to 2005, we would be fine.
In 2004 Iraq, we had guided missiles, night vision, explosives, satellites. What advantages would 3nm transistors give the enemy in combat?
See Ukraine drone warfare... there's a lot going on there that is more than just miniaturized motors, etc. A lot of it is the efficient power use of the semiconductors in those drones, the image processors attached to the cameras, etc., which I suspect relies on newer processes.
If you took today's software and tried running it on a memory constrained, slow, 2005 era system, you'd be in for some pain.
Why else does everything now seem to be wrappers for wrappers? What if the bloat was, subconsciously or otherwise, the point?
Electron, as bad as it can be, has allowed for a level of cross-platform applications in practice that has never existed before... It's bloated on several levels.
Most of that ease in delivering software that works well enough, and doing so quickly, wouldn't be possible without the improvements in technology.
See tangentially related topic from yesterday: https://news.ycombinator.com/item?id=46362927
Another approach was Transmeta's, where the target ISA was microcoded and therefore handled in "software".
"Apple created a chip which is not an X86! Its awesome! And the best thing about it is ... it does TSO does like an X86! Isn't that great?"
I think the last time I ran amd64 on my mac was months ago, a game.
There's also the CFINV instruction (architectural, part of FEAT_FLAGM), which helps with emulating the x86-64 CMP instruction.
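A minimal sketch of how that helps (my example, assuming an AArch64 target with FEAT_FLAGM and GCC/Clang inline asm): x86's CMP sets CF on borrow, while AArch64's SUBS sets C when there is *no* borrow, so an emulator can follow SUBS with CFINV to get x86-compatible flags at the cost of one extra instruction:

```c
#include <stdint.h>

// Hedged sketch, not from the comment: returns what x86 `CMP a, b`
// would leave in CF (1 iff a < b unsigned).
static inline uint64_t x86_style_cmp_cf(uint64_t a, uint64_t b) {
    uint64_t cf;
    __asm__("subs xzr, %1, %2\n\t"  // a - b, set NZCV, discard the result
            "cfinv\n\t"             // invert C: now C matches x86 CF
            "cset %0, cs"           // materialize the flag in a register
            : "=r"(cf)
            : "r"(a), "r"(b)
            : "cc");
    return cf;
}
```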
Customer needs don't really matter in cases where monopolist (ab)uses the law to kill competition. That's the MAIN reason why monopolies are problematic.
The licensing deals that legitimized AMD's unlicensed clones came later.
> The processor was reverse-engineered by Ashawna Hailey, Kim Hailey and Jay Kumar. The Haileys photographed a pre-production sample Intel 8080 on their last day in Xerox, and developed a schematic and logic diagrams from the ~400 images.
Definitely read that wrong the first time I skimmed the article
- Mario Puzo, The Godfather
Those worked in 4-bit slices, and you could use them as LEGO blocks to build your own design (e.g. 8, 12 or 16 bits) with far fewer parts than using standard TTL gates (or ECL NANDs, if you were Seymour Cray).
The 1980 Mick & Brick book Bit-slice Microprocessor Design later gathered together some "application notes" - the cookbooks/crib sheets that semiconductor companies wrote and provided to get buyers/engineers started beyond the spec sheets.
In 1975 AMD introduced both its NMOS 8080 clone and the bipolar bit-slice 2900 family.
I do not know which of these two AMD products was launched first, but in any case there was at most a few months' difference between them, so it cannot be said that AMD "was already in the CPU market". The launch of both products had been prepared at a time when AMD was not yet in the CPU market, and Intel had entered both the NMOS CPU market and the market for bipolar bit-slice component sets earlier than AMD.
While the Intel 8080 was copied by AMD, the AMD 2900 family was much better than the Intel 3000 family, so it was used in a lot of PDP-11 clones and competitors.
For example, the registers+ALU component of the Intel 3000 implemented only a 2-bit slice and a few ALU operations, while the registers+ALU component of the AMD 2900 implemented a 4-bit slice and many more ALU operations.
Moral: Awesome productivity happens when IP doesn't get in the way.
I remember when the Am386-40MHz came out in the early 90s. Everyone was freaking out as if we were breaking the sound barrier. There was a company, Twinhead(?), that came out with 386-40MHz motherboards with buses so overclocked that most video cards would fry. Only the mono Hercules cards could survive. We thought our servers were the shizzle.
I was interested in this and followed the links to the original interview at: https://web.archive.org/web/20131111155525/http://silicongen... which was interesting:
> "Xerox being more of a theoretical company than a practical one let us spend a whole year taking apart all of the different microprocessors on the market at that time and reverse engineering them back to schematic. And the final thing that I did as a project was to, we had gotten a pre-production sample of the Intel 8080 and this was just as Kim and I were leaving the company. On the last day I took the part in and shot ten rolls of color film on the Leica that was attached to the lights microscope and then they gave us the exit interview and we went on our way. And so that summer we got a big piece of cardboard from the, a refrigerator came in and made this mosaic of the 8080. It was about 300 or 400 pictures altogether and we pieced it together, traced out all the logic and the transistors and everything and then decided to go to, go up North to Silicon Valley and see if there was anybody up there that wanted to know about that kind of technology. And I went to AMI and they said oh, we're interested, you come on as a consultant, but nobody seemed to be able to take the project seriously. And then I went over to a little company called Advanced Micro Devices and they wanted to, they thought they'd like to get into it because they had just developed an N-channel process and this was '73. And I asked them if they wanted to get into the microprocessor business because I had schematics and logic diagrams to the Intel 8080 and they said yes."
From today's perspective, just shopping a design lifted directly from Intel CPU die shots around to valley semi companies sounds quite remarkable, but it was a very different time then.
The difference with the 386, I think, is that AFAIK the second-sourced 8086 and 286 CPUs from non-Intel manufacturers still made use of licensed Intel designs. The 386 (and later) had to be reverse engineered again and AMD designed their own implementation. That also meant AMD was a bit late to the game (the Am386 came out in 1991 while the 80386 had already been released in 1985) but, on the other hand, they were able to achieve better performance.
It is, yes. I meant to mention that detail!
> The 386 (and later) had to be reverse engineered … That also meant AMD was a bit late to the game
There were also legal matters that delayed the release of their chips. Intel tried to claim breach of copyright over the 80386 name[1] and so forth, to try to stymie the competition.
> they were able to achieve better performance.
A lot of that came from clocking them faster. I had an SX running at 40MHz. IIRC they were lower power at the same clock than Intel parts and able to run at 3.3V, which made them popular in laptops of the time. That, and they were cheaper! Intel came out with a 3.3V model with better cache support to compete with this.
--------
[1] This failed, which is part of why the i386 (and later i486, and number-free names like Pentium) branding started (though only in part - starting to market directly to consumers rather than just OEMs was a significant factor in that too).
>AMD said Friday that its “independently derived” 486 microprocessor borrowed some microcode from Intel’s earlier 386 chip.
Borrowed, hehe. It ended up in a 1995 settlement where AMD fully admitted copying and agreed to pay a $58M penalty in exchange for an official license to the 386 & 486 microcode and the infamous patent 338 (MMU). Intel really wanted a legal win confirming the validity of patent 338 so it could threaten other competitors. 338 is what prevented the sale of the UMC Green 486 in the USA. Cyrix bypassed the issue by manufacturing at SGS and TI, who had full Intel licenses: https://law.justia.com/cases/federal/district-courts/FSupp/8...
>were able to achieve better performance
Every single Am386 instruction executes in the same cycle count as its Intel counterpart; the only difference is the official ability to run at 40MHz.
Then there was the big licensing deal for the Intel 8088 and its successors, which IBM forced on Intel in order to have a second source for the critical components of the IBM PC.
IP is one of those things you invent once you made it to the top.
https://www.amazon.com/Kicking-Away-Ladder-Development-Persp...
The US industrial revolution was from Samuel Slater memorizing detailed plans of British textile mills and their machines and bringing them here.
Instant 20% speed boost replacing the IBM PC's 8088 with the NEC V20 chip.
Bought a sleeve of them cheap and went around to all the PCs swapping them in.
The only problem was that software relying on the clock speed for timing ran too fast.
Apparently by ripping off their military customers.
>says Wikipedia.
Why is that a primary source?
ksec•1mo ago
holowoodman•1mo ago
And x86 isn't that nice to begin with; if you're doing something incompatible anyway, you might as well start from scratch and create a new, homogeneous, well-designed, modern ISA.
fooker•1mo ago
So it would be faster and more efficient when sticking to the new subset, and Nx slower when using the emulation path.
kimixa•1mo ago
fooker•1mo ago
Most architectures other than x86 have fixed-size machine instructions now, making decoding fast and predictable.
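A toy sketch of why (my illustration; x86_len below is a fake stand-in, not a real length decoder):

```c
#include <stddef.h>
#include <stdint.h>

// Fixed-size ISA: slot i's start is pc + 4*i, independent of slot i-1,
// so hardware can compute all the boundaries at once.
static void boundaries_fixed(uint64_t pc, uint64_t out[], size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = pc + 4 * i;
}

// Hypothetical stand-in for an x86 length decoder (real ones must examine
// prefixes, opcode, ModRM, SIB, displacement, immediate...).
static size_t x86_len(const uint8_t *insn) {
    return (insn[0] & 0x0F) % 15 + 1;  // fake 1..15 byte length
}

// Variable-size ISA: instruction i+1's start depends on fully decoding
// the length of instruction i first -- an inherently serial chain.
static void boundaries_x86(const uint8_t *code, const uint8_t *out[],
                           size_t n) {
    for (size_t i = 0; i < n; i++) {
        out[i] = code;
        code += x86_len(code);
    }
}
```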
ksec•1mo ago
i.e. software compiled for x86 should continue to work on x86. The value of backward compatibility is kept by both Intel and AMD. If the market wants something in between, they now have an option.
I know this isn't a sexy idea, because HN and most tech people like something shiny and new. But I have always liked the idea of extracting value from "old and tried" solutions.
Scoundreller•1mo ago
But thankfully I could install an old bin and lock it out from updating.
Intel’s Software Development Emulator might run the newest bin, but it's anyone's guess how slow it would be.
In other circumstances, the AVX extensions aren't actually needed, but the app is compiled to fail if they're not present: https://www.reddit.com/r/pcgaming/comments/pix02j/hotfix_for...
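The usual way to avoid that failure mode is runtime feature detection rather than hard-failing at startup; a minimal sketch (a generic pattern, assuming GCC/Clang on x86 - not necessarily what that particular hotfix did):

```c
#include <stdio.h>

int main(void) {
    // __builtin_cpu_supports queries CPUID at runtime (GCC/Clang, x86).
    if (__builtin_cpu_supports("avx2"))
        puts("dispatching to the AVX2 code path");
    else
        puts("dispatching to the SSE2/scalar fallback");
    return 0;
}
```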
inkyoto•1mo ago
This isn't an issue in any way. Vendors have routinely removed rarely used instructions from hardware and simulated them in software for decades, as part of ongoing ISA revision.
Unimplemented instruction opcodes cause a CPU trap, and the missing instruction(s) are then emulated in the kernel's emulation layer.
In fact, this is what was frequently done on «budget» 80[34]86 systems that lacked an FPU – the FPU was emulated. It was slow as a dog, but it worked.
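A minimal user-space sketch of that trap-and-emulate flow (my example, assuming Linux/x86-64 with glibc; a kernel does the same in its illegal-instruction handler, and UD2 stands in here for a "missing" instruction):

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <ucontext.h>

// SIGILL handler: decode the faulting opcode, emulate its effect on the
// saved register state, then advance the PC past it and resume.
static void on_sigill(int sig, siginfo_t *si, void *ctx) {
    (void)sig; (void)si;
    ucontext_t *uc = ctx;
    unsigned char *pc = (unsigned char *)uc->uc_mcontext.gregs[REG_RIP];
    if (pc[0] == 0x0F && pc[1] == 0x0B) {     // UD2, our "missing" insn
        // A real emulator would compute results into uc_mcontext here.
        uc->uc_mcontext.gregs[REG_RIP] += 2;  // skip it and resume
    }
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_sigaction = on_sigill;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGILL, &sa, NULL);
    __asm__ volatile("ud2");                  // traps into on_sigill
    puts("resumed after the emulated instruction");
    return 0;
}
```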
lloydatkinson•1mo ago
tester756•1mo ago
>AMD and Intel Celebrate First Anniversary of x86 Ecosystem Advisory Group Driving the Future of x86 Computing
Oct 13, 2025
Standardizing x86 features
Key technical milestones include:
wtallis•1mo ago
tester756•1mo ago
As of today it has resulted in more features, but who knows what changes it will bring tomorrow?
Calling the x86 clean-up initiative dead/cancelled is not quite fair, since this group is still working.
lloydatkinson•1mo ago
https://www.tomshardware.com/pc-components/cpus/intel-termin...
tester756•1mo ago
lloydatkinson•1mo ago
tester756•1mo ago
The article I linked is from 2025.
ksec•1mo ago
userbinator•1mo ago
But it's fortunate that they realised the main attraction of x86 is backwards compatibility, so attempting to do away with that would lead to even less market share.
fulafel•1mo ago
tracker1•1mo ago
fulafel•1mo ago
cmrdporcupine•1mo ago
I suspect we'll see somebody -- a phone manufacturer or similar device maker -- make a major transition from ARM etc. to RISC-V in the next 10 years that we won't even notice.
fulafel•1mo ago
tracker1•1mo ago
My biggest issue was the number of broken apps in Docker on ARM-based Macs, and even then I was mostly able to work around it without much trouble.
fulafel•1mo ago
fweimer•1mo ago
These days, even fairly low-level system software is surprisingly portable. Entire GNU/Linux distributions are developed this way, for the majority of architectures they support.
fweimer•1mo ago
Some distributions like Debian or Fedora will make newer features (such as AVX/VEX) mandatory only after the patents expire, if ever. So a new entrant could implement the original x86-64 ISA (maybe with some obvious extensions like 128-bit atomics) in that time frame and preempt the patent-based lockout due to ISA evolution. If there was a viable AMD/Intel alternative that only implements the baseline ISA, those distributions would never switch away from it.
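As an aside on those 128-bit atomics (my sketch, not fweimer's): in C11 they surface as a 16-byte compare-and-swap, classically used for a pointer+tag pair to dodge ABA; GCC/Clang emit CMPXCHG16B with -mcx16, otherwise libatomic takes over:

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    void *ptr;          // payload pointer
    unsigned long tag;  // generation counter, bumped on every swap
} pair;                 // 16 bytes on x86-64

static _Atomic pair head;

// Succeeds only if nobody else swung the pair in between.
bool swing(pair expected, pair desired) {
    return atomic_compare_exchange_strong(&head, &expected, desired);
}
```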
It's just not easy to build high-performance CPUs, regardless of ISA.
IshKebab•1mo ago
tracker1•1mo ago
izacus•1mo ago
So this is kind of a useless question, because in such a timespan anything can happen. 20 years ago computers had somewhere around 512MB of RAM and a single core, with a CRT on the desk.
fulafel•1mo ago
Obliterating x86 in that time would take quite a lot more than ARM's current trajectory. It's had 40 years to try by now, and the technical advantage window (the power efficiency advantage) has closed.
IshKebab•1mo ago
I was thinking more like if it falls to 10% of desktop/laptop/server market share, which is still waaaaaay more than the nearly-dead architectures you listed.
fulafel•1mo ago
Things that have < 10% market share:
- macOS
- all car manufacturers except Toyota
Things that history considers obliterated:
- The city of Pompeii
- districts of Hiroshima within the bomb's blast radius
Keyframe•1mo ago
A mature gallery of software that would have to be ported from TSO to a weak memory model is a soft moat. So is the mature dominance of AVX/SIMD vs NEON/SVE. x86-64 is a duopoly and a stable target vs the fragmented landscape of ARM. ARM's whole spiel is performance per watt, a scale-out type of thing vs scale-up. In that sense the market has kind of already moved. With ARM, if you start pushing for a sustained high-throughput, high-performance, 5GHz+ envelope, all the advantages so far are gone in favor of x86.
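To make the TSO-porting point concrete, a minimal sketch (my example, in C11): the plain-store version of this pattern happens to stay ordered under x86's TSO but can reorder under ARM's weak model; release/acquire atomics fix it portably and compile to plain loads/stores on x86 anyway:

```c
#include <stdatomic.h>

int data;                 // plain payload
atomic_int ready;         // publication flag

void producer(void) {
    data = 42;
    // release: the store to data cannot be reordered after this store
    atomic_store_explicit(&ready, 1, memory_order_release);
}

int consumer(void) {
    // acquire: pairs with the release; seeing ready==1 guarantees data==42
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;
    return data;
}
```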
What might be interesting is if, say, AMD adds an ARM frontend decoder to Zen. In one of Jim Keller's interviews that was shared here, he said it wouldn't be that big of a deal to turn such a CPU into an ARM-decoding one. That'd be interesting to see.
philistine•1mo ago
Laptops. Apple already owned the high-margin laptop market before they switched to ARM. With phones, tablets, laptops above $1k, and all the other doodads all running ARM, it's not that x86 will simply disappear. Of course not. But the investments simply aren't comparable anymore, with ARM being an order of magnitude more common. x86 is very slowly losing steam, with its chips generally behind in performance per watt. And it's not because of any specific problem or mistake. It's just that it no longer makes economic sense.
zzzoom•1mo ago
IshKebab•1mo ago
zzzoom•1mo ago
IshKebab•1mo ago
I like RISC-V (it's my job and I'm very involved in the community) but even now it isn't ready for laptops/desktop class applications. RVA23 is really the first profile that comes close and that was only ratified very recently. But beyond that there are a load of other things that are very much work in progress around the periphery that you need on a laptop. ACPI, UEFI, etc. If you know RISC-V, what does mconfigptr point to? Nothing yet!
Anyway the question was why would anyone switch from one proprietary ISA to another, as if nobody would - despite the very obvious proof that yes they absolutely would.
tester756•1mo ago
Lunar Lake shows that x86 is capable of reaching that energy efficiency.
Panther Lake, which will be released in around 30 days, is expected to show a significant improvement over Lunar Lake.
So... why switch to ARM if you get similar performance and energy efficiency?
fweimer•1mo ago
tester756•1mo ago
AMD and Intel Celebrate First Anniversary of x86 Ecosystem Advisory Group Driving the Future of x86 Computing
Standardizing x86 features
Key technical milestones include: