frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Looking for testers for a location-based AI experiment

1•sharkgil•1m ago•0 comments

We're Training Students to Write Worse and to Use AI to Prove They're Not Robots

https://www.techdirt.com/2026/03/06/were-training-students-to-write-worse-to-prove-theyre-not-rob...
1•hn_acker•1m ago•1 comments

Show HN: We're on Women's Day Sale. Sign Up to Playtest Shop Crush

https://store.steampowered.com/app/2961120/Shop_Crush/
1•hollowlimb•2m ago•0 comments

Huawei PanguLM [pdf]

https://support.huaweicloud.com/intl/en-us/productdesc-pangulm/PanguLM%20Service_Service%20Overvi...
1•zlu•3m ago•0 comments

What's the deal with "age verification" and computers?

https://rudd-o.com/linux-and-free-software/what-is-going-on-with-age-verification-in-computers
1•Magnusmaster•4m ago•0 comments

Show HN: BottomUp- Translate Your Thoughts So AI Can Work For Your Neurotype

https://www.bottomuptool.com/
1•claythedesigner•5m ago•0 comments

SPA vs. Hypermedia: Real-World Performance Under Load

https://zweiundeins.gmbh/en/methodology/spa-vs-hypermedia-real-world-performance-under-load
1•todsacerdoti•6m ago•0 comments

Steve Jobs predicted "vibe coding" in 1997 [video]

https://twitter.com/musaabHQ/status/1582671928271118337
1•mba_lmh•6m ago•0 comments

Brain Computer Interfaces Are Now Giving Sight Back to the Blind

https://garryslist.org/posts/brain-computer-interfaces-are-now-giving-sight-back-to-the-blind
2•magoghm•7m ago•0 comments

Show HN: Hatice – Autonomous Issue Orchestration with Claude Code Agent SDK

https://github.com/mksglu/hatice/tree/main
1•mksglu2•7m ago•0 comments

Show HN: Free salary converter with 3,400 neighborhood comparisons in 182 cities

https://salary-converter.com/
2•jay7gr•8m ago•0 comments

The Quran's 950-Years of Noah Echoes the Ages of Kings in the Sumerian King List

https://mystudentfailedtheirmid.substack.com/p/if-muslims-accept-noahs-950-years
1•darkhorse13•10m ago•0 comments

More Is Different for Intelligence

https://fulcrumresearch.ai/2026/03/05/more-is-different-for-intelligence.html
2•etherio•11m ago•0 comments

What if CLIs exposed machine-readable contracts for AI agents?

https://github.com/sonde-sh/sonde
1•valentinprgnd•14m ago•1 comments

The Monk at the Cocktail Party

https://www.sebs.website/the-monk-at-the-cocktail-party
1•Incerto•14m ago•0 comments

Weather Report #1

https://at-news.leaflet.pub/3mgg7ie7tdk2o
2•Kye•14m ago•0 comments

A Million Simulated Seasons [video]

https://www.youtube.com/watch?v=Vv9wpQIGZDw
1•carlos-menezes•15m ago•0 comments

Incrementally parsing LLM Markdown streams on server/client

https://github.com/nimeshnayaju/markdown-parser
1•nayajunimesh•15m ago•1 comments

Show HN: Kula – Lightweight, self-contained Linux server monitoring tool

https://github.com/c0m4r/kula
2•c0m4r•15m ago•0 comments

Show HN: Cross-Claude MCP – Let multiple Claude instances talk to each other

https://github.com/rblank9/cross-claude-mcp
2•rblank9•16m ago•0 comments

Poll

2•consumer451•17m ago•1 comments

I'm 60 years old. Claude Code has ignited a passion again

6•shannoncc•18m ago•1 comments

SYNX – a config format that parses 67× faster than YAML, built for AI pipelines

https://github.com/kaiserrberg/synx-format
2•Kaiserrberg•18m ago•0 comments

All of this refugee case's filings should be online

https://www.lawdork.com/p/law-dork-objection-refugee-case
1•hn_acker•19m ago•1 comments

Plasma Bigscreen – 10-foot interface for KDE plasma

https://plasma-bigscreen.org
23•PaulHoule•24m ago•5 comments

GitHub appears to be hiding repo stars for signed-out users

3•ramoz•26m ago•1 comments

Garrett Langley of Flock Safety on building technology to solve crime

https://cheekypint.substack.com/p/garrett-langley-of-flock-safety-on
1•hhs•26m ago•0 comments

Kafka 101

https://highscalability.com/untitled-2/
1•medbar•27m ago•0 comments

Show HN: MCP server that finds dev tool credits in your workflow

1•janaksunil•28m ago•0 comments

Helix: A post-modern text editor

https://helix-editor.com/
5•doener•29m ago•0 comments

Never Bet Against x86

https://www.osnews.com/story/144527/never-bet-against-x86/
62•raphinou•6h ago

Comments

trvz•5h ago
Seems like a silly thing to say right when x86 is getting pummelled to death by Apple and Valve, maybe slowly, but steadily, while the rest of the gang also watches on.
skydhash•5h ago
I believe it’s more from the point of view of Kernel, Compiler, and Driver developers, not from manufacturers and users. Standards, while not very flexible, are good for building ecosystems.
michaelbuckbee•5h ago
I'd add AWS + Graviton to that list as well.
PaulHoule•4h ago
Lately I've been making some AWS Lambda functions to do some simple things in Python, and I chose the ARM-based instances because there wasn't any reason not to.
samuelknight•5h ago
What does Valve ship without x86?
jsheard•5h ago
Nothing yet, but the upcoming Steam Frame VR headset is ARM based. The relevant detail is they're bankrolling the open source FEX x86 emulator, with the goal of bringing the whole Steam back-catalogue to ARM systems.
invl•4h ago
The Steam Link was ARM-based.
creatonez•5h ago
> Valve

This is a funny thing to say when Valve hasn't actually released any ARM device yet, and the Steam Deck is still fully reliant on x86. The ARM hardware they do plan to release relies on x86 emulation, which is something that historically usually doesn't pan out.

beagle3•5h ago
Worked very well for Apple in their transition to Apple Silicon.
i_am_a_peasant•4h ago
for real, rosetta is crazy good
jmalicki•4h ago
Apple silicon is nice in that it has partial x86 emulation support: it can run in an x86-style total-store-ordering memory mode.

Since they had control over the hardware, they could punt on one of the hard parts of Rosetta and bake it into Silicon.

Understanding the memory ordering requirements from binary without source and without killing performance by being overly conservative (and hell, the source itself probably has memory ordering bugs if it was only tested on x86) sounds next to impossible.

jsheard•4h ago
> Understanding the memory ordering requirements from binary without source and without killing performance by being overly conservative (and hell, the source itself probably has memory ordering bugs if it was only tested on x86) sounds next to impossible.

It is hard, but Microsoft came up with a hack to make it easier. MSVC (since 2019) annotates x86 binaries with metadata describing the code's actual memory ordering requirements, to inform emulators of when they need to be conservative or can safely YOLO ordering. Obviously that was intended to assist Microsoft's Prism emulator, but the open source FEX emulator figured out the encoding (which I believe is undocumented) and implemented the same trick on their end.

Emulators still have to do it the hard way when running older MSVC binaries of course, or ones compiled with Clang or GCC. Most commercial games are built with MSVC at least.

shmerl•4h ago
Steam Frame is using ARM. Not sure exactly what was the reason for them to do it there.

They also use emulation backing this project: https://github.com/FEX-Emu/FEX

gaigalas•5h ago
That is actually addressed in the article. Several architectures "pummelled" x86 before. PowerPC, for example. They did not stand the test of time, though.
beagle3•5h ago
What they did not win was the popularity contest, mostly thanks to Windows - the Wintel market was just too massive to compete with.

But that’s changed somewhat - Apple has managed a larger mind and market share (while switching into ARM). The vast majority of uses are now available on the web, which is CPU agnostic, and there is a huge amount of open source software available.

The only things for which x86 still shines a little brighter are games, and native office. But office is mostly available on web, on Mac, and on Winarm. So games. Which aren’t big enough market mass to sustain the x86’s popularity — and is a segment (soon) under attack by Valve.

gaigalas•4h ago
servers
wolrah•4h ago
> The only things for which x86 still shines a little brighter are games, and native office. But office is mostly available on web, on Mac, and on Winarm. So games. Which aren’t big enough market mass to sustain the x86’s popularity — and is a segment (soon) under attack by Valve.

You've missed a huge segment:

Random in-house apps or niche vertical market apps that are closely tethered with a business workflow to the point that replacing them is a massive undertaking, where the developers at best aren't interested in improving anything and at worst no longer exist.

beagle3•1h ago
No, I did not miss it. That has moved to the web, either directly or through an RDP/VNC interface where the actual Windows virtual machine is hidden.

Embedded/hardware is the last segment still not replaced by web.

pjmlp•4h ago
Most people outside US, and similar G8 countries, aren't going to pay Apple.
beagle3•1h ago
No, but Microsoft is also going ARM. Where the US goes, the world eventually goes.
p_ing•5h ago
How? x86 leads on performance. It's reasonably low power now, too; perhaps not the best, but it's not aughts-era power consumption.
cedws•4h ago
Anecdotally at work (SME) we are pretty much all in on ARM. MacBooks with M-series, AWS Graviton instances, even our CI runners are now ARM to match local development.
pjmlp•4h ago
People should look into consumer market share numbers before commenting.
2OEH8eoCRo0•3h ago
And the article explains why they'll never "win."
Pannoniae•5h ago
The future of x86 is worrying, but it's nowhere near dead yet. I saw the C&C article yesterday and did some research. TL;DR:

- Apple took over the single-threaded crown a while ago.

- ARM also caught up in integer workloads.

- ARM Cortex is still behind in floating-point.

- Both are behind in multithreaded performance. (mostly because there are more high-end x86 systems...)

- Both are way behind in SIMD/HPC workloads. (ARM is generally stuck on 128-wide, x86 is 256-wide on Intel and 512-wide on AMD. Intel will return to 512-wide on the consumer segment too)

- ARM cores generally have way bigger L1 caches, mostly due to the larger page size, which is a significant architectural advantage.

- ARM is reaching these feats with ~4.5 GHz clocks compared to the ~5.5 GHz clocks on x86. (very rough approximation)

Overall, troubling for x86 for the future... it's an open question whether it will go the way of IBM POWER, legacy support with strict compatibility but no new workloads at all, or if it will keep adapting and evolving for the future.

p_ing•5h ago
https://browser.geekbench.com/v6/cpu/15805010

I see x86 on top (the first valid result is 6841, which is x86), if that is the sole benchmark we're going to look at. You can further break that down into the individual tasks it performs, but I'm not going to :-)

> - ARM generally have way bigger L1 caches, mostly due to the larger pagesize, which is a significant architectural advantage.

Larger pages mean more potential for waste.

future10se•4h ago
> https://browser.geekbench.com/v6/cpu/15805010

Not to bash on x86 or anything, but that's an outlier. Very overclocked with a compressor chiller or similar. Also the single-threaded and multi-threaded scores are the same; it's probably not stable at full load across all cores.

I don't think that's really representative of the architecture at scale, unless you're making the case for how overclockable (at great power/heat cost) x86 is.

adrian_b•5h ago
ARM CPUs are quite good in "general-purpose" applications, like Internet browsing and other things that do not have great computational requirements, as they mostly copy, move, search or compare things, with only few more demanding computations.

On the other hand, most ARM-based CPUs, even those of Apple, have quite poor performance for things like arithmetic operations with floating-point numbers or with big integer numbers. Geekbench results do not reflect at all the performance of such applications.

This is a serious problem for those who need computers for solving problems of scientific/technical/engineering computing.

During the half century in which IBM PC compatible computers have been dominant, even if the majority of users never exploited the real computational power of their CPUs, buying a standard computer would automatically provide, at a low price, a good CPU for the "power" users who need such CPUs.

Now, with the consumer-oriented ARM-based CPUs that have been primarily designed for smartphones and laptops, and not for workstations and servers, such computers remain good for the majority of the users, but they are no longer good enough for those with more demanding applications.

I hope that Intel/AMD based computers will remain available for a long time, to be able to still buy computers with good performance per dollar, when taking into account their throughput for floating-point and big integer computations.

Otherwise, if only the kinds of computers made by Apple and Qualcomm would be available, users like me would have to buy workstations and servers with a many times lower performance per dollar than achievable with the desktop CPUs of today.

This kind of evolution already happened in GPUs, where a decade ago one could buy a cheap GPU like those bought by gamers, but which nevertheless also had excellent performance for scientific FP64 computing. Then such GPUs have disappeared and the gaming GPUs of today can no longer be used for such purposes, for which one would have to buy a "datacenter" GPU, but those cost an arm and a leg.

bryanlarsen•4h ago
The performance/watt delta for M1 over contemporary x86 is massively larger than M5 vs Panther Lake. M5 and Panther Lake are roughly comparable.

So by that measure the future of x86 seems to be less troubling today than it was 5 years ago.

phendrenad2•5h ago
I don't think my gaming PC will ever use an ARM core. When you want true "big iron" you want x86. Intel and AMD have a duopoly on high-performance, no-TDP-spared chips, and they aren't sharing that market with anyone.

The reason ARM is making inroads in the server market is we've reached the point where cooling is a significant cost factor in server farms, so lowering TDP is starting to become a relevant factor in total cost.

beagle3•4h ago
Hardcore gamers are not a big enough market segment to sustain x86. If everyone else switches to ARM/RISC-V, games will too, eventually.
Strom•4h ago
Hardcore gamers were the reason behind a whole new chip type being introduced - the GPU. This was also when this market was a lot smaller. I don’t see this changing. The market will continue rewarding chips that cater to it. It is absolutely big enough to sustain several different completely bespoke chip types, regardless of what non-gamers are doing.

x86 will lose to ARM/RISC in gaming only if those chips provide a better gaming experience.

expedition32•4h ago
Yeah things like heat and energy use don't matter much for gamers. Most of that comes from the GPU anyway.
Analemma_•5h ago
This feels like a take from 10 years ago, when Intel was struggling to deliver 10nm but a lot of people assumed it would all shake out in the end. I could see a defensible case for betting on x86 then, and most of the author’s bullet points seem tailored for that era.

But now? I can't think of a single segment where x86 is doing well. It's out of mobile entirely, it's slowly getting squeezed out of servers as e.g. Graviton takes over, it has no presence in the AI gold rush, and in consumer desktops/laptops its position is precarious at best.

I’m quite bearish on x86.

PaulHoule•4h ago
e.g. the reason why x86 clobbered everyone else in the 1985-2005 period was that nobody else shipped enough units to keep ahead in terms of technology development. The slogan should be "Never bet against the CPU architecture that ships the most units" and today that translates to "Never bet against ARM"
cmrdporcupine•4h ago
As others have pointed out, gaming would be the place.

And in terms of squeezing out of servers, this is happening way more slowly than you're implying.

I say this as a person running an NVIDIA Spark as my daily driver. We're not there yet.

dana321•5h ago
You mean never bet against AMD64
hard_times•5h ago
...which is an extension of x86, the same way AArch64 is an extension of ARM.
dmitrygr•4h ago
aarch64 is not an extension. It is a whole new architecture having NOTHING in common with ARMv7 and below. Nothing!
Koshkin•5h ago
Never say never...
hard_times•5h ago
Not mentioned in the article, but the latest generation Xbox and PlayStation run completely custom firmware with their own proprietary boot chains, and locked-down hardware. So much for the "uniform" x86-64 "ecosystem". I'm sure there are more examples.
mifydev•4h ago
I'm quite concerned about x86 future, but the article has a point if you read it past the title.

It says that x86 is highly standardised - even with different combinations of chips, peripherals and motherboards you know it will work just fine. It's not the case for ARM systems - can you even have something similar to IBM PC with ARM?

I personally know that adding support for ARM devices on Linux is a huge and manual task - e.g. look at devicetree, it's a mess. There is no standard like ACPI for ARM devices, so even powering off the computer is a problem, everything is proprietary and custom.

I don't agree with the article though, x86 is dying and my worry is that ARM devices will bring an end to such an open platform like modern PCs are.

davidkwast•4h ago
RISC-V can be essential for this open future
cardanome•4h ago
As far as I understand RISC-V has the same lack of standardization that ARM has, no?
mjg59•4h ago
If anything, worse - there's much wider variety in the set of CPU extensions available.
bee_rider•4h ago
RISC-V is messy but for good reason, and with real standards, although there are lots of them, which can be hard to keep track of.

X86 is de-facto standardized by vendor fiat.

ARM is in an unfortunate middle ground.

adrian_b•4h ago
RISC-V has a beautiful license, but it is one of the ugliest and least efficient computer ISAs ever designed.

Any competent computer engineer can design a much better ISA than RISC-V.

The problem is that designing a CPU ISA is easy and it can be done in a few weeks at most. On the other hand, writing all the software tools that you need to be able to use an ISA, e.g. assemblers, linkers, debuggers, profilers, compilers for various programming languages etc. requires a huge amount of work, of many man-years.

The reason why everybody who uses neither x86 nor Arm tends to use RISC-V is in order to reuse the existing software toolchains, and not because the RISC-V ISA would be any good. The advantage of being able to use already existing software toolchains is so great that it ensures the use of RISC-V regardless how bad it is in comparison with something like Aarch64.

The Intel ISA, especially its earlier versions, has also been one of the ugliest ISAs, even if it seems polished when compared to RISC-V. It would be sad if after so many decades during which the Intel/AMD ISA has displaced other better ISAs, it would eventually be replaced by something even worse.

As one of the main examples of why RISC-V sucks, I think that any ISA designer who believes that omitting from the ISA the means for detecting integer overflow is a good idea deserves the death penalty, unless the ISA is clearly declared as being a toy ISA, unsuitable for practical applications.

fragmede•4h ago
What does that mean in a world where writing software just got a few orders of magnitude cheaper? An Andrew Huang could create a new ISA replete with everything and get it done.
kode-targz•3h ago
It didn't though. Not good software at least. AI (which is what I'm guessing you're referring to here) is simply incapable of writing such mission-critical low-level code, especially for a niche and/or brand new ISA. It simply can't. It has nothing to plagiarize from, contrary to the billions of lines of JavaScript and Python it has access to. This kind of work can most definitely be AI-assisted, but my estimate is that the time gained would be minimal. An LLM is able to write some functional Arduino code, maybe even some semi-functional bare-metal ESP32 code, but nothing deeper than that.
craftkiller•4h ago
> can you even have something similar to IBM PC with ARM

Yes, it's called SBBR which requires UEFI and ACPI. It is more common on server hardware than on consumer-grade embedded devices. The fact that it is not ubiquitous is really holding back ARM.

M95D•4h ago
Will you PLEASE stop promoting UEFI and ACPI?! These are closed-source blobs that the manufacturers will never update and have complete control over the system at ring -2. Why would you even consider it?

Device tree does the same thing and it's open source. Even if you can only extract it in binary form from a proprietary kernel or U-Boot, you can decompile it very easily.

craftkiller•3h ago
The person I was replying to was specifically asking for ACPI for ARM and they specifically stated their negative opinion of device tree.
mjg59•4h ago
There's a standard like ACPI for Arm devices - it's called ACPI, and it's a requirement for the non-Devicetree SystemReady spec (https://www.arm.com/architecture/system-architectures/system...). But it doesn't describe the huge range of weirdness that exists in the more embedded end of the Arm world, and it's functionally impossible for it to do so while Arm vendors see devices as an integrated whole rather than a general purpose device that can have commodity operating systems installed on them.
M95D•4h ago
> my worry is that ARM devices will bring an end to such an open platform like modern PCs are.

Modern PCs are NOT open platform anymore. Not since signed bootloaders, UEFI, secure boot. ARM on the other hand, as long as they don't require signed bootloaders (like phones) or a closed source driver for GPU or something, are in fact open.

fragmede•4h ago
You can still boot Linux on PCs though. ARM devices, you're SOL in most cases. Device tree is a total shit show. For a random ARM device, better hope randomInternetHero42 on a random forum has it for your device. Just asking the device itself what exists would be a stupid question in the ARM world.
M95D•4h ago
I don't know what you're talking about. If the device boots, you find the device tree in /sys/firmware/fdt, or in unpacked human-readable form in /sys/firmware/devicetree/* .
rep_lodsb•2h ago
Secure boot can be disabled even on modern PCs.
toast0•4h ago
> can you even have something similar to IBM PC with ARM?

AFAIK, ARM does not have port mapped i/o, so that makes it difficult to really match up with the PC. That said, an OS can require system firmware to provide certain things and you get closer to an IBM-like world. Microsoft requires UEFI for desktop Windows (maybe WP8 and WM10 as well, but I believe those were effectively limited to specific Qualcomm SoCs, whereas I feel like desktop Windows is supposed to be theoretically open to anything that hits the requirements).

ACPI for ARM is a thing that exists, but not all ARM systems will have it. Technically, not all x86 systems have it either, but for the past several generations of Intel and AMD, all the hardware you need for ACPI is embedded in the CPU, so only old hardware or really weird firmware would be missing it. Also, PC i/o is so consistent, either by specification or by consensus, that it's easy to detect hardware anyway: PCI is at a specific i/o port by specification; CPUID/MSRs let you locate on-chip memory-mapped peripherals that aren't attached via PCI, and PCI has specified ways to detect attached hardware. There are some legacy interfaces that aren't on PCI that you might want, and you need ACPI to find them properly, but you can also just poke them at their well-known addresses and see if they respond. AFAIK, you don't get that on other systems... many peripherals will be memory mapped directly, rather than attached via PCI; the PCI controller/root is not at a well known address, etc; every system is a little different because there's no obvious system to emulate.

Mostly ACPI is about having hardware description tables in a convenient place for the OS to find it. Certainly standardized understanding of power states and the os-independent description of how to enter them is important too.

There are/were other proposals, but if you want something like UEFI and ACPI, and you have clout, you can just require it for systems you support. The market problem is Apple doesn't let their OS run on anything non-Apple, and Android has minimal standards in this area; whereas the marketplace for software for the IBM PC relied heavily on the IBM BIOS, the marketplace of software for Android relies on features of the OS; SoC makers can build a custom kernel with the hardware description hardcoded, and there's no need to provide an in-firmware system of hardware description. Other OSes lose out because they too need custom builds for each SoC.

dmitrygr•4h ago
Graviton, Apple M-series...

That variable-length encoding and strongly ordered memory model will do x86 in sooner rather than later.

Zeetah•4h ago
I wonder if we'll still be running x86 code a hundred years after it came out (according to Wikipedia, it came out in 1978). We are already 48 years in.
cmrdporcupine•4h ago
Really, when people say x86 now they don't mean that. They really mean the variant introduced with the 386, which has a linear memory model, memory protection, etc. Or x86_64, which is philosophically akin to the 386 but really a new ISA.

So it's really more like mid-80s or early 2000s, not late 70s.

M95D•4h ago
^ that!

You can't run a COM program today. Not without emulation. Recent PCs can't even run DOS EXE because they're missing the BIOS interrupts most DOS programs use.

anthk•4h ago
No, you are wrong. 16-bit DOS COM files can't be run on 64-bit CPUs, but 32-bit DOS binaries can be run under 32-bit GNU/Linux installs with DOSEMU by just emulating the BIOS part; the rest runs natively.
M95D•3h ago
You actually confirmed what I said. :)
anthk•2h ago
50/50, because once you boot a 32 bit os you can run 16 bit binaries :)

I'm pretty sure that if I make a dual-kernel 9front (9pc and 9pc64 available at boot) in a 64 bit machine and I compile emu2 for it, DOS COM binaries might be trapped enough to run simple text mode tools under the 386 port.

rep_lodsb•3h ago
It has nothing to do with being unable to run 16-bit code, that's a myth.

https://man7.org/linux/man-pages/man2/modify_ldt.2.html

Set seg_32bit=0 and you can create 16-bit code and data segments. Still works on 64 bit. What's missing is V86 mode, which emulates the real mode segmentation model.

mifydev•4h ago
You can just boot freedos to run them, it will execute in real mode which has the same cpu instructions as 40 years ago.
M95D•3h ago
UEFI switches the CPU into 32bit v86 mode or directly in 64bit mode and you can't go back to real mode without a CPU reset, which v86 won't allow (you don't have ring -2 privileges) and 64bit mode can't do at all. I don't have a UEFI system, so I might be wrong (I even hope I'm wrong - it would mean slightly more freedom still exists), but from what I read about it, I'm 90% certain it's not possible.
rep_lodsb•3h ago
You're confusing several things here. The only x86 processor that didn't allow returning to real mode was the 16-bit 80286 - on all later ones it's as simple as clearing bit 0 of CR0 (and also disabling paging if that was enabled).

Nothing more privileged than ring 0 is required for that.

"v86" is what allowed real mode to be virtualized under a 32-bit OS. This is no longer available in 64-bit mode, but the CPU still includes it (as well as newer virtualization features which could be used to do the same thing).

cmrdporcupine•29m ago
Even if you could/can it is an anachronism. Architecturally there's just a huge difference between 8086 and even 80286 and the 386. Before the 386 I wouldn't touch a machine with an Intel processor in it. Once the 386/486 penetrated the market and became cheap it was game over for everything else because it was good enough (linear address space, memory protection, larger address space, 32-bit etc etc), smart enough, gosh darn it it was cheap and everywhere.
klelatti•4h ago
The point about the difficulties with Arm may be fair comment but the positioning and outlook of this post is decidedly weird. It seems to pretend that competitive desktop Arm processors already exist and ignores the existence of Arm ACPI.

On the conclusion - x86 didn't eventually win in smartphones.

And of course having a choice of processor designs from precisely two firms is absolutely something that we should continue to be happy with (and the post ignores RISC-V).

M95D•4h ago
x86 always had standards: the same two IRQ controllers, same UART chips, same keyboard controller, same PC speaker I/O, same ISA, same PCI, same AGP, VGA ROMs that init the GPU with the same framebuffer address, all PATA controllers using the same I/O ports and IRQs so a single driver worked for all, the same de-facto standards for audio (OPL aka Adlib / SoundBlaster / MIDI), simple/bidi/ECP/EPP standards for the parallel port and the de-facto ESC/P standard for printers, etc. Hell, even for USB there were only two at the beginning: Intel (UHCI) and AMD (OHCI), and then they cooperated and made the universal EHCI.

ARM is a complete jungle by comparison. Each ARM manufacturer licenses a different UART, different USB, different PCIe (or none at all), different SATA, different GPU, different audio even if it's just I2S, different I2C, different SPI, different GPIO controller, different MMC/SDHCI, etc. And each one needs, of course, a different driver!

The big mistake ARM (the company) made was to design only CPUs, not complete SoCs with peripherals, or at least require standard I/O addresses. And now they're trying to patch it up with UEFI and ACPI: closed-source ring -2 blobs that will never be updated or bug-fixed by any manufacturer.

anthk•4h ago
ARM device trees suck. ACPI is hell for sure, but a DTB per device is a damn disaster. U-Boot is open, but having to plug in a damn USB-serial cable in 2026 just to get a prompt sucks. That should come built in, with easy built-in help or some text-based menu.
M95D•3h ago
It's either a DTB per device or a firmware blob per device. I'll take the open source device tree anytime!
mattnewton•4h ago
Not what the article is talking about, but I think betting against x86 in terms of the investment of companies (not individuals buying PC parts) has been a pretty good bet!

Being long AAPL and NVDA has crushed AMD and INTC, and that's with AMD's gains, which I would argue are mostly due to non-x86 chips. Even Broadcom + Qualcomm + ARM has been a better basket to hold for most of the last 5 years.

While PCs still need x86 because of the standardization the article talks about, more appliance-like computers like mobile phones and even server hardware have stolen a lot of market share and I think are the dominant way people will do their computing in the future. This comment was written on a m2 macbook that I use to ssh into a gb200 server.