frontpage.

Show HN: Strange Attractors

https://blog.shashanktomar.com/posts/strange-attractors
83•shashanktomar•1h ago•10 comments

Futurelock: A subtle risk in async Rust

https://rfd.shared.oxide.computer/rfd/0609
231•bcantrill•7h ago•93 comments

A theoretical way to circumvent Android developer verification

https://enaix.github.io/2025/10/30/developer-verification.html
68•sleirsgoevy•4h ago•41 comments

Introducing architecture variants

https://discourse.ubuntu.com/t/introducing-architecture-variants-amd64v3-now-available-in-ubuntu-...
158•jnsgruk•1d ago•105 comments

The Last PCB You'll Ever Buy [video]

https://www.youtube.com/watch?v=A_IUIyyqw0M
26•surprisetalk•4d ago•9 comments

Addiction Markets

https://www.thebignewsletter.com/p/addiction-markets-abolish-corporate
147•toomuchtodo•6h ago•131 comments

Leaker reveals which Pixels are vulnerable to Cellebrite phone hacking

https://arstechnica.com/gadgets/2025/10/leaker-reveals-which-pixels-are-vulnerable-to-cellebrite-...
175•akyuu•1d ago•90 comments

Use DuckDB-WASM to query TB of data in browser

https://lil.law.harvard.edu/blog/2025/10/24/rethinking-data-discovery-for-libraries-and-digital-h...
131•mlissner•7h ago•34 comments

My Impressions of the MacBook Pro M4

https://michael.stapelberg.ch/posts/2025-10-31-macbook-pro-m4-impressions/
102•secure•14h ago•146 comments

Hacking India's largest automaker: Tata Motors

https://eaton-works.com/2025/10/28/tata-motors-hack/
121•EatonZ•2d ago•42 comments

Perfetto: Swiss army knife for Linux client tracing

https://lalitm.com/perfetto-swiss-army-knife/
87•todsacerdoti•12h ago•9 comments

How We Found 7 TiB of Memory Just Sitting Around

https://render.com/blog/how-we-found-7-tib-of-memory-just-sitting-around
93•anurag•1d ago•21 comments

AI scrapers request commented scripts

https://cryptography.dog/blog/AI-scrapers-request-commented-scripts/
177•ColinWright•8h ago•119 comments

Llamafile Returns

https://blog.mozilla.ai/llamafile-returns/
80•aittalam•2d ago•13 comments

Nix Derivation Madness

https://fzakaria.com/2025/10/29/nix-derivation-madness
149•birdculture•10h ago•52 comments

S.a.r.c.a.s.m: Slightly Annoying Rubik's Cube Automatic Solving Machine

https://github.com/vindar/SARCASM
6•chris_overseas•1h ago•1 comment

Show HN: Pipelex – Declarative language for repeatable AI workflows

https://github.com/Pipelex/pipelex
70•lchoquel•3d ago•15 comments

Active listening: the Swiss Army Knife of communication

https://togetherlondon.com/insights/active-listening-swiss-army-knife
5•lucidplot•4d ago•1 comment

Signs of introspection in large language models

https://www.anthropic.com/research/introspection
98•themgt•1d ago•45 comments

Pangolin (YC S25) Is Hiring a Full Stack Software Engineer (Open-Source)

https://docs.pangolin.net/careers/software-engineer-full-stack
1•miloschwartz•7h ago

Lording it, over: A new history of the modern British aristocracy

https://newcriterion.com/article/lording-it-over/
42•smushy•6d ago•83 comments

Sustainable memristors from shiitake mycelium for high-frequency bioelectronics

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0328965
106•PaulHoule•11h ago•53 comments

Photographing the rare brown hyena stalking a diamond mining ghost town

https://www.bbc.com/future/article/20251014-the-rare-hyena-stalking-a-diamond-mining-ghost-town
7•1659447091•1h ago•0 comments

x86 architecture 1 byte opcodes

https://www.sandpile.org/x86/opc_1.htm
65•eklitzke•6h ago•29 comments

Attention lapses due to sleep deprivation due to flushing fluid from brain

https://news.mit.edu/2025/your-brain-without-sleep-1029
502•gmays•11h ago•247 comments

The cryptography behind electronic passports

https://blog.trailofbits.com/2025/10/31/the-cryptography-behind-electronic-passports/
121•tatersolid•13h ago•78 comments

The 1924 New Mexico regional banking panic

https://nodumbideas.com/p/labor-day-special-the-1924-new-mexico
38•nodumbideas•1w ago•1 comment

Apple reports fourth quarter results

https://www.apple.com/newsroom/2025/10/apple-reports-fourth-quarter-results/
117•mfiguiere•1d ago•158 comments

How to build silos and decrease collaboration on purpose

https://www.rubick.com/how-to-build-silos-and-decrease-collaboration/
109•gpi•5h ago•37 comments

It's the “hardware”, stupid

https://haebom.dev/archive?post=4w67rj24q76nrm5yq8ep
61•haebom•6d ago•105 comments

Introducing architecture variants

https://discourse.ubuntu.com/t/introducing-architecture-variants-amd64v3-now-available-in-ubuntu-25-10/71312
157•jnsgruk•1d ago

Comments

theandrewbailey•1d ago
A reference for x86-64 microarchitecture levels: https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...

x86-64-v3 is AVX2-capable CPUs.
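
(A quick way to check which level a given machine supports, on a reasonably recent glibc: run the dynamic loader directly, e.g. /lib64/ld-linux-x86-64.so.2 --help, which lists the x86-64-v2/v3/v4 hwcaps subdirectories and marks the ones the running CPU supports.)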

smlacy•8h ago
I presume the motivation is performance optimization? It would be more compelling to include some of the benefits in the announcement?
embedding-shape•8h ago
They do mention it in the linked announcement, although not really highlighted, just as a quick mention:

> As a result, we’re very excited to share that in Ubuntu 25.10, some packages are available, on an opt-in basis, in their optimized form for the more modern x86-64-v3 architecture level

> Previous benchmarks we have run (where we rebuilt the entire archive for x86-64-v3) show that most packages show a slight (around 1%) performance improvement and some packages, mostly those that are somewhat numerical in nature, improve more than that.

mobilio•8h ago
The announcement is here: https://discourse.ubuntu.com/t/introducing-architecture-vari...

and the key point: "Previous benchmarks we have run (where we rebuilt the entire archive for x86-64-v3) show that most packages show a slight (around 1%) performance improvement and some packages, mostly those that are somewhat numerical in nature, improve more than that."

juujian•8h ago
Are there any use cases where that 1% is worth any hassle whatsoever?
adgjlsfhk1•8h ago
it's very non-uniform: 99% see no change, but 1% see 1.5-2x better performance.
Insanity•8h ago
I read it as a roughly 1% performance improvement across the board, not that only 1% of packages get a significant improvement.
IAmBroom•8h ago
In a complicated system, a 1% overall benefit might well be because of a 10% improvement in just 10% of the system (or more in a smaller contributor).
2b3a51•8h ago
I'm wondering if 'somewhat numerical in nature' relates to lapack/blas and similar libraries that are actually dependencies of a wide range of desktop applications?
adgjlsfhk1•7h ago
blas and lapack generally do manual multi-versioning by detecting CPU features at runtime. This is more useful one level up the stack, in things like compression/decompression, ODE solvers, and image manipulation: code that still works with big arrays of data but doesn't have a small number of kernels (or as much dev time), so it typically relies on the compiler for auto-vectorization.
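
A minimal sketch of the compiler-assisted version of that idea, using GCC's target_clones attribute (the saxpy function and file name here are purely illustrative): the compiler emits one clone per listed target plus an ifunc resolver, and the dynamic loader picks the best clone once at startup.

  // saxpy.c -- build with: gcc -O3 -c saxpy.c
  // One clone is compiled per target; an ifunc resolver selects the best
  // one at load time based on the running CPU's features.
  __attribute__((target_clones("default", "avx2")))
  void saxpy(float a, const float *x, float *restrict y, int n) {
      for (int i = 0; i < n; i++)
          y[i] = a * x[i] + y[i];    // auto-vectorized differently per clone
  }
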
dehrmann•8h ago
Anything at scale. 1% across FAANG is huge.
Havoc•7h ago
Arguably the same across consumers too. It's just harder to measure than in central datacenters.
notatoad•5h ago
nah, performance benefits are mostly wasted on consumers, because consumer hardware is very infrequently CPU-constrained. in a datacentre, a 1% improvement could actually mean you provision 99 CPUs instead of 100. but on your home computer, a 1% CPU improvement means that your network request completes 0.0001% faster, or your file access happens 0.000001% faster, and then your CPU goes back to being idle.

an unobservable benefit is not a benefit.

wongarsu•7h ago
If every computer built in the last decade gets 1% faster and all we have to pay for that is a bit of one-off engineering effort and a doubling of the storage requirement of the ubuntu mirrors that seems like a huge win

If you aren't convinced by your ubuntu being 1% faster, consider how many servers, VMs and containers run ubuntu. Millions of servers using a fraction of a percent less energy multiplies out to a lot of energy

vladms•6h ago
Don't have a clear opinion, but you have to factor in all the issues that can be due to different versions of software. Think of unexposed bugs in the whole stack (that can include compiler bugs, but also software bugs related to numerical computation or just uninitialized memory). There are enough heisenbugs without worrying that half the servers run on slightly different software.

It's not for nothing that some time ago "write once, run everywhere" was a selling proposition (not that it was actually working in all cases, but definitely working better than alternatives).

sumtechguy•6h ago
That comes out to about 1.5 hours saved per week for many tasks, if you are running full tilt. That seems like an easy enough win.
Aissen•7h ago
You need 100 servers. Now you need to only buy 99. Multiply that by a million, and the economies of scale really matter.
iso1631•7h ago
1% is less than the difference between negotiating with a hangover or not.
gpm•6h ago
What a strange comparison.

If you're negotiating deals worth billions of dollars, or even just millions, I'd strongly suggest not doing so with a hangover.

Pet_Ant•6h ago
> If you're negotiating deals worth billions of dollars, or even just millions, I'd strongly suggest not doing so with a hangover.

...have you met salespeople? Buying lap dances is a legitimate business expense for them. You'd be surprised how much personal rapport matters and facts don't.

In all fairness, I only know about 8 and 9 figure deals, maybe at 10 and 11 salespeople grow ethics...

bregma•5h ago
I strongly suspect ethics are inversely proportional to the size of the deal.
glenstein•4h ago
That's more an indictment of sales culture than a critique of computational efficiency.
squeaky-clean•3h ago
Well sure, because you want the person trying to buy something from you for a million dollars to have a hangover.
tclancy•9m ago
Sounds like someone never read Sun Tzu.

(Not really, I just know somewhere out there is a LinkedInLunatic who has a Business Philosophy based on being hungover.)

PeterStuer•7h ago
A lot of improvements are very incremental. In aggregate, they often compound and are very significant.

If you would only accept 10x improvements, I would argue progress would be very small.

colechristensen•7h ago
Very few people are in the situation where this would matter.

Standard advice: You are not Google.

I'm surprised and disappointed 1% is the best they could come up with, with numbers that small I would expect experimental noise to be much larger than the improvement. If you tell me you've managed a 1% improvement you have to do a lot to convince me you haven't actually made things 5% worse.

noir_lord•3h ago
No but a lot of people are buying a lot of compute from Google, Amazon and Microsoft.

At scale marginal differences do matter and compound.

wat10000•6h ago
It's rarely going to be worth it for an individual user, but it's very useful if you can get it to a lot of users at once. See https://www.folklore.org/Saving_Lives.html

"Well, let's say you can shave 10 seconds off of the boot time. Multiply that by five million users and thats 50 million seconds, every single day. Over a year, that's probably dozens of lifetimes. So if you make it boot ten seconds faster, you've saved a dozen lives. That's really worth it, don't you think?"

I put a lot of effort into chasing wins of that magnitude. Over a huge userbase, something like that has a big positive ROI. These days it also affects important things like heat and battery life.

The other part of this is that the wins add up. Maybe I manage to find 1% every couple of years. Some of my coworkers do too. Now you're starting to make a major difference.

rossjudson•5h ago
Any hyperscaler will take that 1% in a heartbeat.
locknitpicker•5h ago
> Are there any use cases where that 1% is worth any hassle whatsoever?

I don't think this is a valid argument to make. If you were doing the optimization work then you could argue tradeoffs. You are not, Canonical is.

Your decision is which image you want to use, and Canonical is giving you a choice. Do you care about which architecture variant you use? If you do, you can now pick the one that works best for you. Do you want to win an easy 1% performance gain? Now you have that choice.

gwbas1c•5h ago
> some packages, mostly those that are somewhat numerical in nature, improve more than that

Perhaps if you're doing CPU-bound math you might see an improvement?

ilaksh•3h ago
They did say some packages were more. I bet some are 5%, maybe 10 or 15. Maybe more.

Well, one example could be llama.cpp. It's critical for them to use every single extension the CPU has to move more bits at a time. When I installed it I had to compile it.

This might make it more practical to start offering OS packages for things like llama.cpp

I guess people that don't have newer hardware aren't trying to install those packages. But maybe the idea is that packages should not break on certain hardware.

Blender might be another one like that, which really needs the extensions for many things. But maybe you do want to allow it to be used on some oldish hardware anyway, because it still has valid uses on those machines.

godelski•2h ago

  > where that 1% is worth any hassle
You'll need context to answer your question, but yes there are cases.

Let's say you have a process that takes 100hrs to run and costs $1k/hr. You save an hour and $1k every time you run the process. You're going to save quite a bit. You don't just save the time to run the process, you save literal time and everything that that costs (customers, engineering time, support time, etc).

Let's say you have a process that takes 100ns and similarly costs $1k/hr. You now run in 99ns. Running the process 36 million times is going to be insignificant. In this setting even a 50% optimization probably isn't worthwhile (unless you're a high frequency trader or something)

This is where the saying "premature optimization is the root of all evil" comes from! The "premature" part is often disregarded and the rest of the context goes with it. Here's more context to Knuth's quote[0].

  There is no doubt that the holy grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

  Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.
Knuth said: "Get a fucking profiler and make sure that you're optimizing the right thing". He did NOT say "don't optimize".

So yes, there are plenty of times where that optimization will be worthwhile. The percentages don't mean anything without the context. Your job as a programmer is to determine that context. And not just in the scope of your program, but in the scope of the environment you expect a user to be running on. (i.e. their computer probably isn't entirely dedicated to your program)

[0] https://dl.acm.org/doi/10.1145/356635.356640 (alt) https://sci-hub.se/10.1145/356635.356640

ninkendo•7h ago
> show that most packages show a slight (around 1%) performance improvement

This takes me back to arguing with Gentoo users 20 years ago who insisted that compiling everything from source for their machine made everything faster.

The consensus at the time was basically "theoretically, it's possible, but in practice, gcc isn't really doing much with the extra instructions anyway".

Then there's stuff like glibc which has custom assembly versions of things like memcpy/etc, and selects from them at startup. I'm not really sure if that was common 20 years ago but it is now.

It's cool that after 20 years we can finally start using the newer instructions in binary packages, but it definitely seems to not matter all that much, still.

Amadiro•6h ago
It's also because around 20 years ago there was a "reset" when we switched from x86 to x86_64. When AMD introduced x86_64, it made a bunch of the previously optional extensions (SSE up to a certain version, etc.) a mandatory part of x86_64. Gentoo systems could already be optimized before on x86 using those instructions, but now (2004ish) every system using x86_64 was automatically always taking full advantage of all of these instructions*.

Since then we've slowly started accumulating optional extensions again; newer SSE versions, AVX, encryption and virtualization extensions, probably some more newfangled AI stuff I'm not on top of. So very slowly it might have started again to make sense for an approach like Gentoo to exist**.

* usual caveats apply; if the compiler can figure out that using the instruction is useful etc.

** but the same caveats as back then apply. A lot of software can't really take advantage of these new instructions, because newer instructions have been getting increasingly more use-case-specific; and applications that can greatly benefit from them will already have alternative code paths to take advantage of them anyway. Also a lot of the stuff happening in hardware acceleration has moved to GPUs, which have a feature discovery process independent of the CPU instruction set anyway.
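
As a small, hypothetical illustration of that caveat: the same loop built for the baseline and for x86-64-v3 (GCC 11+ understands the level names). Whether a given package actually gains anything depends on whether its hot paths look like this loop.

  // add.c -- identical source, two builds:
  //   gcc -O3 -march=x86-64    -S add.c   # baseline: SSE2, 128-bit vectors
  //   gcc -O3 -march=x86-64-v3 -S add.c   # v3: AVX2, 256-bit vectors (vaddps ymm)
  void add(const float *a, const float *b, float *restrict out, int n) {
      for (int i = 0; i < n; i++)
          out[i] = a[i] + b[i];
  }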

mikepurvis•3h ago
> AVX, encryption and virtualization

I would guess that these are domain-specific enough that they can also mostly be enabled by the relevant libraries employing function multiversioning.

izacus•34m ago
You would guess wrong.
slavik81•2h ago
The llama.cpp package on Debian and Ubuntu is also rather clever in that it's built for x86-64-v1, x86-64-v2, x86-64-v3, and x86-64-v4. It benefits quite dramatically from using the newest instructions, but the library doesn't have dynamic instruction selection itself. Instead, ld.so decides which version of libggml.so to load depending on your hardware capabilities.
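
For anyone unfamiliar with the mechanism described above: that is glibc's hwcaps search path (glibc 2.33+), where ld.so probes per-level subdirectories before the generic one, roughly like this (paths are illustrative, the exact packaged layout may differ):

  /usr/lib/x86_64-linux-gnu/glibc-hwcaps/x86-64-v4/libggml.so   (preferred, if the CPU qualifies)
  /usr/lib/x86_64-linux-gnu/glibc-hwcaps/x86-64-v3/libggml.so
  /usr/lib/x86_64-linux-gnu/glibc-hwcaps/x86-64-v2/libggml.so
  /usr/lib/x86_64-linux-gnu/libggml.so                          (baseline fallback)
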
oivey•6h ago
This should build a lot more incentive for compiler devs to try and use the newer instructions. When everyone uses binaries compiled without support for optional instruction sets, why bother putting much effort into developing for them? It’ll be interesting to see if we start to see more of a delta moving forward.
Seattle3503•1h ago
And application developers to optimize with them in mind?
ploxiln•5h ago
FWIW the cool thing about gentoo was the "use-flags", to enable/disable compile-time features in various packages. Build some apps with GTK or with just the command-line version, with libao or pulse-audio, etc. Nowadays some distro packages have "optional dependencies" and variants like foobar-cli and foobar-gui, but not nearly as comprehensive as Gentoo of course. Learning about some minor custom CFLAGS was just part of the fun (and yeah some "funroll-loops" site was making fun of "gentoo ricers" way back then already).

I used Gentoo a lot, jeez, between 20 and 15 years ago, and the install guide guiding me through partitioning disks, formatting disks, unpacking tarballs, editing config files, and running grub-install etc, was so incredibly valuable to me that I have trouble expressing it.

mpyne•4h ago
I still use Gentoo for that reason, and I wish some of those principles around handling of optional dependencies were more popular in other Linux distros and package ecosystems.

There's lots of software applications out there whose official Docker images or pip wheels or whatever bundle everything under the sun to account for all the optional integrations the application has, and it's difficult to figure out which packages can be easily removed if we're not using the feature and which ones are load-bearing.

zerocrates•48m ago
I started with Debian on CDs, but used Gentoo for years after that. Eventually I admitted that just Ubuntu suited my needs and used up less time keeping it up to date. I do sometimes still pull in a package that brings a million dependencies for stuff I don't want and miss USE flags, though.

I'd agree that the manual Gentoo install process, and those tinkering years in general, gave me experience and familiarity that's come in handy plenty of times when dealing with other distros, troubleshooting, working on servers, and so on.

harha•2h ago
Would it make a difference if you compile the whole system vs. just the programs you want optimized?

As in, are there any common libraries or parts of the system that typically slow things down, or was this more targeting a time when hardware was more limited so improving all would have made things feel faster in general.

suprjami•1h ago
I somehow have the memory that there was an extremely narrow time window where the speedup was tangible and quantifiable for Gentoo, as they were the first distro to ship some very early gcc optimisation. However it's open source software so every other distro soon caught up and became just as fast as Gentoo.
pizlonator•7h ago
That 1% number is interesting but risks missing the point.

I bet you there is some use case of some app or library where this is like a 2x improvement.

dang•6h ago
Thanks - we've merged the comments from https://news.ycombinator.com/item?id=45772579 into this thread, which had that original source.
theandrewbailey•8h ago
A reference for x86-64 microarchitecture levels: https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...

x86-64-v3 is AVX2-capable CPUs.

jsheard•8h ago
> x86-64-v3 is AVX2-capable CPUs.

Which unfortunately extends all the way to Intel's newest client CPUs, since they're still struggling to ship their own AVX512 instructions, which are required for v4. Meanwhile AMD has been on v4 for two generations already.

theandrewbailey•8h ago
At least Intel and AMD have settled on a mutually supported subset of AVX-512 instructions.
wtallis•7h ago
The hard part was getting Intel and Intel to agree on which subset to keep supporting.
cogman10•4h ago
Even on the same chip.

Having a non-uniform instruction set for one package was a baffling decision.

jsheard•4h ago
I think that stemmed from their P-core design being shared between server and client. They needed AVX512 for server so they implemented it in the P-cores, and it worked fine there since their server chips are entirely P-cores or entirely E-cores, but client uses a mixture of both so they had to disable AVX512 to bring the instruction set into sync across both sides.
wtallis•3h ago
Server didn't really have anything to do with it. They were fine shipping AVX 512 in consumer silicon for Cannon Lake (nominally), Ice Lake, Tiger Lake, and most damningly Rocket Lake (backporting an AVX 512-capable core to their 14nm process for the sole purpose of making a consumer desktop chip, so they didn't even have the excuse that they were re-using a CPU core floorplan that was shared with server parts).

It's pretty clear that Alder Lake was simply a rush job, and had to be implemented with the E cores they already had, despite never having planned for heterogeneous cores to be part of their product roadmap.

jiggawatts•4h ago
It’s a manifestation of Conway’s law: https://en.wikipedia.org/wiki/Conway%27s_law

They had two teams designing the two types of cores.

physicsguy•8h ago
This is quite good news, but it's worth remembering that it's a rare piece of software in the modern scientific/numerical world that can be compiled against the versions in distro package managers, as distro versions can lag upstream by months after a release.

If you're doing that sort of work, you also shouldn't use pre-compiled PyPI packages, for the same reason: you leave a ton of performance on the table by not targeting the micro-architecture you're running on.

colechristensen•7h ago
Most of the scientific numerical code I ever used had been in use for decades and would compile on a unix variant released in 1992, much less the distribution version of dependencies that were a year or two behind upstream.
owlbite•6h ago
Very true, but a lot of stuff builds on a few core optimized libraries like BLAS/LAPACK, and picking up a build of those targeted at a modern microarchitecture can give you 10x or more compared to a non-targeted build.

That said, most of those packages will just read the hardware capability from the OS and dispatch an appropriate codepath anyway. You maybe save some code footprint by restricting the number of codepaths it needs to compile.

niwtsol•7h ago
Thanks for sharing this. I'd love to learn more about micro-architectures and instruction sets - would you have any recommendations for books or sources that would be a good starting place?
jeffbee•7h ago
I wonder who downvoted this. The juice you are going to get from building your core applications and libraries to suit your workload are going to be far larger than the small improvements available from microarchitectural targeting. For example on Ubuntu I have some ETL pipelines that need libxml2. Linking it statically into the application cuts the ETL runtime by 30%. Essentially none of the practices of Debian/Ubuntu Linux are what you'd choose for efficiency. Their practices are designed around some pretty old and arguably obsolete ideas about ease of maintenance.
PaulHoule•6h ago
My RSS reader trains a model every week or so and takes 15 minutes total with plain numpy, scikit-learn and all that. Intel MKL can do the same job in about half the time as the default BLAS. So you are looking at a noticeable performance boost, but a zero-bullshit install with uv is worth a lot. If I was interested in improving the model then yeah, I might need to train 200 of them interactively and I'd really feel the difference. Thing is, the model is pretty good as it is, and to make something better I'd have to think long and hard about what 'better' means.
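
(numpy.show_config() reports which BLAS/LAPACK a given NumPy install was built against, which is a quick way to check whether you are on MKL, OpenBLAS, or the reference implementation.)
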
ciaranmca•5h ago
Out of interest, what reader is this? Sounds interesting
PaulHoule•5h ago
I've talked about it a lot here, see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
zipy124•6h ago
Yup, if you're using OpenCV for instance compiling instead of using pre-built binaries can result in 10x or more speed-ups once you take into account avx/threading/math/blas-libraries etc...
oofbey•5h ago
Yup. The irony is that the packages which are difficult to build are the ones that most benefit from custom builds.
zozbot234•7h ago
What are the changes to dpkg and apt? Are they being shared with Debian? Could this be used to address the pesky armel vs. armel+hardfloat vs. armhf issue, or for that matter, the issue of i486 vs. i586 vs. i686 vs. the many varieties of MMX and SSE extensions for 32-bit?

(There is some older text in the Debian Wiki https://wiki.debian.org/ArchitectureVariants but it's not clear if it's directly related to this effort)

Denvercoder9•4h ago
Even if technically possible, it's unlikely this will be used to support any of the variants you mentioned in Debian. Both i386 and armel are effectively dead: i386 is reduced to a partial architecture only for backwards compatibility reasons, and armel has been removed entirely from development of the next release.
zozbot234•3h ago
What you said is correct wrt. official support, but Debian also has an unofficial ports infrastructure that could be repurposed towards enabling Debian for older architecture variants.
bobmcnamara•3h ago
This would allow mixing armel and softvfp ABIs, but not hard float ABIs, at least across compilation unit boundaries (that said, GCC never seems to optimize ABI bottlenecks within a compilation unit anyway)
dfc•7h ago
> you will not be able to transfer your hard-drive/SSD to an older machine that does not support x86-64-v3. Usually, we try to ensure that moving drives between systems like this would work. For 26.04 LTS, we’ll be working on making this experience cleaner, and hopefully provide a method of recovering a system that is in this state.

Does anyone know what the plans are to accomplish this?

dmoreno•6h ago
If I were them I would make sure the v3 instructions are not used until late in the boot process, and provide some apt command that makes sure all installed programs are in the right subarchitecture for the running system, reinstalling as necessary.

But that does not sound like a simple solution for non-technical users.

Anyway, non-technical users moving an installation to another, older computer? That sounds weird.

zer0zzz•7h ago
There was a FatELF project to solve this problem at one point, I thought.
DrNosferatu•6h ago
Link?
mariusor•5h ago
Maybe parent is referring to icculus' FatELF proposal from fifteen years ago? https://icculus.org/fatelf/
stabbles•7h ago
Seems like this is not using glibc's hwcaps (where shared libraries were located in microarch specific subdirs).

To me hwcaps feels like very unfortunate feature creep in glibc now. I don't see why it was ever added, given that it's hard to compile only shared libraries for a specific microarch, and it does not benefit executables. Distros seem to avoid it. All it does is cause unnecessary stat calls when running an executable.

sluongng•7h ago
Nice. This is one of the main reasons why I picked CachyOS recently. Now I can fallback to Ubuntu if CachyOS gets me stuck somewhere.
yohbho•6h ago
CachyOS uses this one percent of performance gains? Since it uses every performance gain, that's unsurprising. But now I wonder how my laptop from 2012 ran CachyOS; they seem to switch based on hardware, not during image download and boot.
topato•6h ago
correct, it just sets the repository in the pacman.conf to either cachyos, -v3, or -v4 during install time based on hardware probe
shmerl•7h ago
Will Debian do it?
bmitch3020•6h ago
https://wiki.debian.org/ArchitectureVariants
shmerl•5h ago
Hm, discussion is from 2023. Did anything come out of it?
bmitch3020•4h ago
I believe it's just discussions right now. If/when something happens, I'm hoping they'll update the wiki.
amelius•7h ago
Can we please have an "apt rollback" function?
riskable•7h ago
If you're using btrfs, you do get that feature: https://moritzmolch.com/blog/2506.html
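
(The usual shape of this is a read-only snapshot taken right before the transaction, e.g. btrfs subvolume snapshot -r / /.snapshots/pre-apt, which you can roll back to or boot from if the upgrade misbehaves; tools like snapper and apt-btrfs-snapshot automate the same idea.)
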
o11c•7h ago
That fundamentally requires a snapshot-capable filesystem, so you need to use a distro designed around such.
amelius•7h ago
Not necessarily. You can use the ptrace() system call to trace a process and store what it reads/writes into a journal, etc.

https://man7.org/linux/man-pages/man2/ptrace.2.html

julian-klode•5h ago
Yes sure

  apt (3.1.7) unstable; urgency=medium

    [ Julian Andres Klode ]
    * test-history: Adjust for as-installed testing

    [ Simon Johnsson ]
    * Add history undo, redo, and rollback features

benatkin•7h ago
There's an unofficial repo for ArchLinux: https://wiki.archlinux.org/title/Unofficial_user_repositorie...

> Description: official repositories compiled with LTO, -march=x86-64-vN and -O3.

Packages: https://status.alhp.dev/

zdw•7h ago
A lot of other 3rd-party software already requires x86-64-v2 or -v3.

I couldn't run something from NPM on an older NAS machine (HP Microserver Gen 7) recently because of this.

justahuman74•6h ago
If this goes well - will they do v4 as well?
jnsgruk•4h ago
Maybe - likely we’ll trade off the added build/test/storage cost of maintaining each variant - so you might not see amd64v4, but possibly amd64v5, depending on how impactful they turn out to be.

The same will apply to different arm64 or riscv64 variants.

skywhopper•6h ago
This sure feels like overkill that leaks massive complexity into a lot more areas than it’s needed in. For the applications that truly need sub-architecture variants, surely different packages or just some sort of meta package indirection would be better for everyone involved.
ElijahLynn•5h ago
I clicked on this article expecting an M series variant for Apple hardware...
westurner•5h ago
"Gentoo x86-64-v3 binary packages available" (2024) https://news.ycombinator.com/item?id=39255458

"Changes/Optimized Binaries for the AMD64 Architecture v2" (2025) https://fedoraproject.org/wiki/Changes/Optimized_Binaries_fo... :

> Note that other distributions use higher microarchitecture levels. For example RHEL 9 uses x86-64-v2 as the baseline, RHEL 10 uses x86-64-v3, and other distros provide optimized variants (OpenSUSE, Arch Linux, Ubuntu).

whalesalad•5h ago
> means to better exploit modern processors without compromising support for older hardware

very odd choice of words. "better utilize/leverage" is perhaps the right thing to say here.

JohnKemeny•5h ago
"exploit": make full use of and derive benefit from
Hasz•5h ago
Getting a 1% across the board general purpose improvement might sound small, but is quite significant. Happy to see Canonical invest more heavily in performance and correctness.

Would love to see which packages benefited the most in terms of percentile gain and install base. You could probably back out a kWh/tons of CO2 saved metric from it.

malkia•5h ago
This is awesome, but... if your process requires deterministic results (speaking about floats/doubles mostly here), then you need to get this straight.
tommica•4h ago
Once they have rebuilt with rust, they get to move away from GPL licenses and get to monetize things.
rock_artist•3h ago
So if I got it right, this is mostly a way to have branches within a specific release for various levels of CPUs and their support of SIMD and other modern opcodes.

And if I have it right, the main advantage should come with the package manager and open-source software, where the compiled binaries would be branched to benefit from and optimize for newer CPU features.

Still, this would be most noticeable for apps that benefit from those features, such as audio DSP or, as mentioned, SSL and crypto.

jeffbee•3h ago
I would expect compression, encryption, and codecs to have the least noticeable benefit because these already do runtime dispatch to routines suited to the CPU where they are running, regardless of the architecture level targeted at compile time.
WhyNotHugo•3h ago
OTOH, you can remove the runtime dispatching logic entirely if you compile separate binaries for each architecture variant.

Especially the binaries for the newest variant, since they can drop the conditionals/branching for all older variants entirely.

jeffbee•3h ago
That's a lot of surgery. These libraries do not all share one way to do it. For example zstd will switch to static BMI2 dispatch if it was targeting Haswell or later at compile time, but other libraries don't have that property and will need defines.
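
A minimal sketch of the pattern under discussion, as hypothetical code rather than any particular library's: runtime dispatch that a per-variant build can compile away (here via an assumed ASSUME_V3 define). The libraries that benefit most already do something like the #else branch by hand, so a per-variant build mostly saves the check and the duplicate code paths.

  // sum.c -- a generic build checks the CPU once; a build with
  //          -O3 -march=x86-64-v3 -DASSUME_V3 drops the check entirely.
  #include <stddef.h>

  __attribute__((target("avx2")))
  static void sum_avx2(const float *x, size_t n, float *out) {
      float s = 0;
      for (size_t i = 0; i < n; i++) s += x[i];   // compiler may vectorize with AVX2
      *out = s;
  }

  static void sum_scalar(const float *x, size_t n, float *out) {
      float s = 0;
      for (size_t i = 0; i < n; i++) s += x[i];
      *out = s;
  }

  void sum(const float *x, size_t n, float *out) {
  #ifdef ASSUME_V3
      sum_avx2(x, n, out);                  // the v3 baseline guarantees AVX2
  #else
      if (__builtin_cpu_supports("avx2"))   // GCC/Clang builtin, checked at runtime
          sum_avx2(x, n, out);
      else
          sum_scalar(x, n, out);
  #endif
  }
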
brucehoult•1h ago
So now they can support RISC-V RVA20 and RVA23 in the same distro?

All the fuss about Ubuntu 25.10 and later being RVA23 only was about nothing?