It's crazy how projects this large and influential can get by on so little cash. Of course a lot of people are donating their very valuable labour to the project, but the ROI from Gentoo is incredible compared to what it costs to do anything in commercial software.
Red Hat also has a nasty habit of pushing their decisions onto the other distributions; e.g.
- systemd
- pulseaudio (this one was more Fedora IIRC)
- Wayland
- Pipewire (which, to be fair, wasn't terrible by the time I tried it)
I guess Debian, SUSE, Canonical, etc. get that email from Red Hat and just go along with it. We'd better make the switch, we don't want our ::checks notes:: competitor mad at us.
Fast-forward to last year: I was working with OpenGL (on Linux) and thought "I'll add sound." Boy... I was smashed by the zoo of APIs, subsystems stacked one on top of another, lousy documentation... Audio, which for me had always been WAY easier than video, was suddenly way more complicated. From the userland POV, last year I also wanted to build a kind of BT speaker with a Raspberry Pi, and that was also a terrible experience.
So, I don't know... maybe I should give PipeWire a try. At the time I was done after fighting with ALSA and PulseAudio; at the first problem, I killed it.
Back in the day when the boxes were on display in brick-and-mortar stores, SuSE was a great way to get up and running with Linux.
It's a rock-solid distro, and if I had a use for enterprise support, I'd probably look into SLES as a pretty serious contender.
The breadth of what they're doing seems unparalleled:
- rolling release (Tumbleweed)
- delayed rolling release (Slowroll), which is pretty unique in and of itself
- point release (Leap)
- both Tumbleweed and Leap in immutable form as well (MicroOS and Leap Micro, respectively)
...and all of the above with a broad choice of desktops, or as server-focused minimal environments with an impressively small footprint and without unreasonable tradeoffs. If you multiply out all of those choices, it turns into quite a hairy ball of combinatorics, but they're doing a decent job supporting it all.
As far as graphical tools for system administration go, YaST is one of the most powerful and they are currently investing in properly replacing it, now that its 20-year history makes for an out-of-date appearance. I tried their new Agama installer just today, and was very pleased with the direction they're taking.
...so I'm not quite sure what you're getting at with your "Back in the day..." I, too, remember the days of going to a brick-and-mortar store to buy Linux as a box set, when the choice was between Red Hat and SuSE. Since then, I think they've lost mindshare because other options became numerous and turned up the loudness, but I think they've been quietly doing a pretty decent job all this time and are still beloved by those who care to pay attention.
SUSE has always been pretty big in Europe but never was that prominent in North America except for IBM mainframes, which Red Hat chipped away at over time. (For a period, SUSE supported some mainframe features that Red Hat didn't--probably in part because some Red Hat engineering leadership was at least privately dismissive of the whole idea of running Linux on mainframes.)
Tbh it feels like NixOS is convenient in large part because of systemd and all the other crap you have to wire together for a usable (read: compatible) Linux desktop. Better to have one fat programming language, runtime, and collection of packages that exposes a single declarative interface.
Much of this issue is caused by the integrate-this-grab-bag-of-tools-someone-made approach to system design, which of course also has upsides. Red Hat seems to be amplifying the downsides, though, by providing the money to make a few mediocre tools absurdly big.
  systemd.services.rclone-photos-sync = {
    serviceConfig.Type = "oneshot";
    path = [ pkgs.rclone ];
    script = ''
      rclone \
        --config ${config.sops.secrets."rclone.conf".path} \
        --bwlimit 20M --transfers 16 \
        sync /mnt/photos/originals/ photos:
    '';
    unitConfig = {
      RequiresMountsFor = "/mnt/photos";
    };
  };

  systemd.timers.rclone-photos-sync = {
    timerConfig = {
      # Every 2 hours.
      OnCalendar = "00/2:00:00";
      # 5 minute jitter.
      RandomizedDelaySec = "5m";
      # Last run is persisted across reboots.
      Persistent = true;
      Unit = "rclone-photos-sync.service";
    };
    partOf = [ "rclone-photos-sync.service" ];
    wantedBy = [ "timers.target" ];
  };
In my view, using Nix to define your systemd services beats copying and symlinking files all over the place :)

I agree the systemd interface is rather simple (just translate a Nix expression to a config file). But NixOS is a behemoth: completely change the way every package is built, introduce a functional programming language and a filesystem standard to somehow merge everything together, then declare approximately every package that has ever existed in this new language, and add a boatload of extra utilities and infra.
An OS is first of all a set of primitives for accomplishing other things. What classic worse-is-better Unix does really well is do just enough to let you get on with whatever those things are. Write some C program to gather simulation data, pipe its output to awk or gnuplot to slice it. Maybe automate some of that workflow with a script or two.
Current tools can do a bit more, and can sometimes do it more nicely or rigorously, but you lose the brutal simplicity of a bunch of tools all communicating with the same conventions and interfaces. Instead you get a bunch of big systems, each with its own conventions and poor interop. You've got systemd and the other Red Hat-isms with their custom formats and bad CLI interfaces. You've got every programming language with its own n package managers. A bunch of useful stuff, sure, but encased in a pile of reinvented infrastructure and conventions.
--config ${config.sops.secrets."rclone.conf".path} \
NixOS lets you build the abstractions you want and mix them with abstractions provided by others, and this single line illustrates the point extremely well, as `sops` is not yet part of NixOS.

Secret management will likely land in NixOS in the future, but in the meantime you can use either https://github.com/Mic92/sops-nix or https://github.com/ryantm/agenix to manage files whose content should not be public.
Other package managers also provide some abstraction over their packages, and you would likely see the same systemd configuration expressed in post-install scripts. Yet the encrypted file for `rclone.conf` would end up as a static path in `/etc`.
You could summarize NixOS as having moved the post-install script logic to before installation, yet this tiny detail gives you the additional ability to mix post-install scripts and assert consistency ahead of making changes to the system.
This, despite the fact that Rocky, Alma, Oracle Enterprise Linux, etc exist because of the hard work and money spent by Red Hat.
And what are those companies doing to fix this issue you claim Red Hat causes? Nothing. Because they like money, especially when all you have to do is rebuild and put your name on other people’s hard work.
And what exactly is incomprehensible? What exactly are they doing to the Linux desktop that makes it so people can't fix their own problems? Isn't the whole selling point of Rocky and Alma, according to most integrators, that it's so easy you don't need Red Hat to support it?
So I'm not being critical. Yes, Red Hat employees do contribute to projects that are most relevant to the desktop even if doing so is not generally really the focus of their day jobs. And, no, other companies almost certainly haven't done more.
To some extent Valve. They have to, since the Steam Deck's desktop experience depends on the "Linux desktop" being a good experience.
It looks like they're second to Intel, at least by the LF's metric. That said, driver code tends to take up a lot of space compared to other areas. Just look at the mass of AMD template garbage here: https://github.com/torvalds/linux/tree/master/drivers/gpu/dr...
One example is virtualization: the virtio stack is maintained by Red Hat (afaik). This is a huge driver behind the “democratization” of virtualization in general, allowing users and small companies to access performant virt without selling a kidney to VMware.
Also, Red Hat contributes to or maintains all of the components involved in OpenShift and OpenStack (one of which is virtio!).
Red Hat primarily contributes code to the kernel and various OSS projects, paid for by the clients on enterprise contracts. A paying client needs something and it gets done. Then the rest of us get to benefit by receiving the code for free. It’s a beautiful model.
If you look at lists of top contributors, Red Hat (along with the usual suspects in enterprise) are consistently at the top.
For example:
- Red Hat Identity Management -> FreeIPA (i.e. Active Directory for Linux)
- Red Hat Satellite -> The Foreman + Katello
- Ansible ... Ansible.
- Red Hat OpenShift -> OKD
- And more I'm not going to list.

Not really comparable to the experiences I have running Keycloak, where the upstream documentation is complete, or FreeIPA, where it's identical to IdM and you can just use the Red Hat docs. Those are both excellent pieces of software we are lucky to have.
It is the Microsoft of the Linux world.
Being the base of ChromeOS makes it highly influential.
ChromeOS market share is >5% in many countries, sometimes around double digits.
Gentoo also runs the backend infra of Sony's Playstation Cloud gaming service
Anecdotal evidence claims it also used to run the NASDAQ.
While other distributions are struggling to bootstrap their package repositories for new ISAs and waiting for build farms to catch up, Gentoo's source-based nature makes it architecture-agnostic by definition. I applaud the RISC-V team for having achieved parity with amd64 for the @system set. This proves that the meta-distribution model is the only scalable way to handle the explosion of hardware diversity we are seeing post-2025. If you are building an embedded platform or working on custom silicon, Gentoo is a top-tier choice. You cross-compile the stage1 and Portage handles the rest.
Since you've been on the ride since '04, I'm curious to hear your thoughts. How do you feel the maintenance burden compares today versus the GCC 3.x era? With the modern binhost fallback and the improvements in portage, I feel like we now spend less time fighting rebuild loops than back then? But I wonder if long time users feel the same.
I'm another one on it since the same era :)
In general stable has become _really_ stable, and unstable is still mostly usable without major hiccups. My maintenance burden is limited nowadays compared to 10y ago - pretty much running `emerge -uDN @world --quiet --keep-going` and fixing issues if any; maybe once a month I get package failures, but I run an llvm+libcxx system and also package tests, so I likely get more issues than the average user on GCC.
For me these days it's not about the speed anymore of course, but really the customization options and the ability to build pretty much anything I need locally. I also really like the fact that ebuilds are basically bash scripts, and if I need to further customize or reproduce something I can literally copy-paste commands from the package manager in my local folder.
The project has successfully implemented a lot of by-default optimizations and best practices, and in general I feel the codebases for system packages have matured to the point where it's odd to run into internal compiler errors, weird dependency issues, whole-world rebuilds, etc. From my point of view it also helped a lot that many compilers began enforcing more modern and stricter C/C++ standards over time, and at the same time we got GitHub, CI workflows, better testing tools, etc.
I run `emerge -e1 @world` maybe once a year just to shake out stuff lurking in the shadows (like stuff compiled with clang 19 vs clang 21), but it's really normally not needed anymore. The configuration stays pretty much untouched unless I want to enable a new USE for a new package I'm installing.
It's been years since I had a build failure, and I even accept ~amd64 keywords on several packages (with GCC).
Anyway, to answer grandparent, I basically never had rebuild loops in 19 years... just emerge -uU world every day or sometimes every week. I have been running the same base system since... let's see:
qlop -tvm|h1
2007-01-18T19:50:33 >>> x11-base/xorg-server-1.1.1-r4: 9m23s
I have never once had to rebuild the whole system from scratch in those 19 years. (I've just rsync'd the rootfs from machine to machine as I upgraded HW and rebuilt gradually because, as many others here have said, for me it wasn't about "perf of everything" or some kind of reproducible system, but "more customization + perf of some things".) The upgrade from monolithic X11 to split X11 was "fun", though. /s

I do engage in all sorts of package.mask / per-package USE / many global USE. I have my own portage/local overlay for things where I disagree with upstream. I even have an automated system to "patch" my disagreements in. E.g., I control how fast I upgrade my LLVM junk so I do it on my own timeline. Mostly I use GCC. I control that, too. Any really slow individual build, basically.
If over the decades, they ever did anything that made it look like crazy amounts of rebuilds would happen, I'd tend to wait a few days/week or so and then figure something out. If some new dependency brings in a mountain of crap, I usually figure out how to block that.
I tried Gentoo around the time that OP started using it, and I also really liked that aspect of it. Most package managers really struggle with this, and when there is configuration, the default is usually "all features enabled". So, when you want to install, say, ffmpeg on Debian, it pulls in a tree of over 250 (!!) dependency packages. Even if you just wanted to use it once to convert a .mp4 container into .mkv.
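To illustrate, trimming a package down in Gentoo looks roughly like this; the flag names below are examples and vary by ebuild version, so check `equery uses media-video/ffmpeg` or packages.gentoo.org before copying anything:

  # /etc/portage/package.use/ffmpeg -- illustrative flag selection, not a recommendation
  # keep encoding and the common codecs, drop GUI/display-related bits
  media-video/ffmpeg encode x264 x265 vpx -X -sdl -vaapi -vdpau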
You are trading off having a system able to handle everything you will throw at it, and having the same binaries as everyone else for, well, basically nothing. You have a supposedly smaller exploitable surface but you have to trust that the Gentoo patches cutting these things out don't introduce new vulnerabilities and don't inadvertently shut off hardening features. You have slightly smaller packages but I'm hard pressed to think of a scenario where it would matter in 2026.
To me, the worse debuggability and the inability to properly communicate with the source project make it a bad idea. I find Arch's pledge to ship only strictly vanilla software much more sensible.
Additionally, Gentoo has become much stricter with USE-flag dependencies, and it also checks whether binaries depend on old libs and doesn't remove those when updating a package, so the "app depends on old libstdc++" problem doesn't happen anymore. It then automatically removes the old libraries once nothing needs them.
I have been running Gentoo since before '04, continuously, and things pretty much just work. I would be willing to put money on spending less time "managing my OS" than most who run other systems such as macOS, Windows, Debian, etc. Sure, my CPU gets to compile a lot, but that's about it.
And yes, "--omg-optimize" was never really the selling point, but rather the USE flags, where there's complete control. Pretty much nothing else comes close, and it is why Gentoo is awesome.
I'd say "the fastest" is a side effect of "allowing one to tune their systems to their utmost liking." -march=native, throw away unused bits and pieces, integrate modules into the kernel, replace bits and pieces with faster -- if more limited -- bits and pieces. And so on.
Other distros don't support RISC-V because nobody has taken the time to bother with it, because the installed hardware base is almost nonexistent.
I can speak for Yocto, which is completely built from source and has a huge variety of BSPs, usually vendor-created.
Id Software provided a Doom 3 Linux client when the game was first released. I found Doom 3 ran better on a custom built Gentoo Linux system compared to Windows XP.
Are you looking at Gentoo to maximize performance, compiling everything with custom build parameters and a custom kernel configuration, versus pre-built binaries and a generic kernel loaded with modules?
Custom Gentoo just adds more time spent waiting for software upgrades to install. It is like having all your Arch packages provided only by the AUR. There is also a chance a build will fail and the parameters will need to be changed, but the majority of the time everything compiles without issue once the build parameters are figured out. It was rare when something did not.
Where you lose time is in trying to optimize your system and packages using the multiple switches that Gentoo provides. If you're the OCD twiddler type, Gentoo can be both extremely satisfying and a major time sink.
I think that for some years already, Gentoo has been providing binaries for "normal" packages, as long as your config/USE flags match theirs (and you turned on the option/flag to use binary packages).
And of course, places with more than just a few Gentoo boxes were usually already running their own BINHOST setups a long time ago.
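For reference, opting into the official binaries is roughly a matter of enabling the feature in make.conf (this assumes the default binrepos.conf that newer installs ship; check the Gentoo binary-package guide for the right repo for your arch/profile):

  # /etc/portage/make.conf (excerpt)
  FEATURES="getbinpkg binpkg-request-signature"

  # or as a one-off on the command line:
  #   emerge --ask --getbinpkg --update --deep --newuse @world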
Installation is done by booting a liveCD, manually partitioning your storage, unpacking a Gentoo stage3 archive, chrooting into it, doing basic configuration such as network, timezone, portage (package manager) base profile and servers, etc., compiling and installing a kernel, and then rebooting into the new system.
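Condensed, that flow looks roughly like this (device names, filesystem and bootloader details vary - treat it as a sketch of the handbook, not a replacement for it):

  # from the live environment
  cfdisk /dev/sda                                    # partition the disk
  mkfs.ext4 /dev/sda2 && mount /dev/sda2 /mnt/gentoo
  tar xpf stage3-*.tar.xz -C /mnt/gentoo --xattrs-include='*.*' --numeric-owner
  mount --rbind /dev /mnt/gentoo/dev
  mount --rbind /sys /mnt/gentoo/sys
  mount -t proc proc /mnt/gentoo/proc
  chroot /mnt/gentoo /bin/bash
  # ...then configure make.conf, timezone, locale, build/install a kernel and bootloader, reboot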
Then you get to play with /etc/portage/make.conf, which is the root configuration of the package manager. You get to set CPU instruction sets (CPU_FLAGS_X86 and friends), GCC CFLAGS, MAKEOPTS, video card targets, acceptable package licenses, global USE flags (those are simplified ./configure arguments that usually apply to several packages), which Apache modules get built, which QEMU targets get built, etc. These are all env vars that Portage (the package manager) uses to build packages for your system.
The more you use Gentoo, the more features of make.conf you discover. Never ending fun.
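To make that concrete, here is a sketch of the kind of entries being described; the values are placeholders, not recommendations:

  # /etc/portage/make.conf (excerpt) -- example values only
  MAKEOPTS="-j8"                               # parallel build jobs
  USE="wayland pipewire -gnome -kde"           # global USE flags
  VIDEO_CARDS="amdgpu radeonsi"                # video card targets
  ACCEPT_LICENSE="@FREE"                       # acceptable package licenses
  CPU_FLAGS_X86="aes avx avx2 sse4_1 sse4_2"   # CPU instruction-set flags (see cpuid2cpuflags)
  QEMU_SOFTMMU_TARGETS="x86_64 aarch64"        # which qemu targets get built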
Then, you start installing packages and updates (same procedure):
1) You start the update by reviewing USE flags for each added/updated package - several screens of dense text.
For example, PHP has these USE flags: https://packages.gentoo.org/packages/dev-lang/php - mouse hover to see what they do. You get to play with them in /etc/portage/package.use and there's no end to tweaking them (a concrete file sketch follows after this list).
If you have any form of OCD, stay away from Gentoo or this will be your poison forever!
2) Then the compilation begins and that takes hours or days depending on what you install and uses a lot of CPU and either storage I/O or memory (if you have lots of memory, you can compile in a tmpfs a lot faster).
I'm not sure it is OK to compile the updates on a live server, especially during busy hours, but Gentoo has alternatives, including binary packages (recently added, but must match your USE flags with theirs), building packages remotely on another system (distcc), even on a different arch (crossdev). You could run an ARM server and build packages for it on a x86 workstation. I didn't use "steve", so I can't tell you what wonderful things that tool can do, yet.
3) Depending on architecture, some less used packages may fail to compile. You get to manually debug that and submit bug reports. You can also add patches to /etc/portage/patches/<package> that will automatically be applied when the package is built, and that includes the kernel.
I recommend running emerge with --keep-going so that the package manager continues with the remaining packages after an error.
4) When each package is done compiling, it's installed automatically. There are no automatic reboots or anything. The files are replaced live, both executables and libraries. Running services continue to use old files from memory until you restart them or reboot manually - they will appear red/yellow in htop until you do.
There were a few times, very very few, when I had crashes in new packages that were successfully built. It only happened on armv7, which is a practically abandoned platform everywhere. In those cases you can revert to the old version and mask the buggy one to prevent it from being picked up in the next update.
5) Last step is to review the config changes. dispatch-conf will present a diff of all proposed changes to .ini and .cfg files for all updated packages. You get to review, accept, reject the changes or manually edit the files.
That's all. Simple. :)
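To ground steps 1) and 3) a bit, this is roughly what the relevant files look like; the PHP flag selection is purely illustrative (check the link above for the real list) and the patch filename is hypothetical:

  # /etc/portage/package.use/php -- per-package USE flags (example selection)
  dev-lang/php fpm mysqli pdo curl gd xml zip -apache2

  # user patches are picked up automatically from /etc/portage/patches/<category>/<package>/
  #   e.g. /etc/portage/patches/dev-lang/php/fix-build.patch  (hypothetical)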
With Red Hat, Anaconda is the installer. With Ubuntu, ubiquity.
etc ...
With Gentoo -- YOU are the installer. This means you have to be ready to perform -- more or less manually -- many of the tasks automated in other distributions. I sorta see this as the same as a tutorial level in a video game: you learn how to read and follow the wiki which is essentially the key to success in Gentoo.
Portage/emerge is very much automated and once you set it up it runs updates with just a confirmation, unless you feel the need to tweak something.
I didn't say Gentoo has no package manager (it does; and it's great!)
And it's literally yes, yes, next, next - the defaults are pretty good.
1) Calculate Linux is 100% Gentoo with more profiles (e.g. server, desktop-kde, desktop-gnome, etc.), and after switching from vanilla Gentoo to Calculate I didn't need to tweak any USE flags on any packages.
Profiles are so good that everything works nicely together
2) There are prebuilt binaries for your profile/USE-flag combo - can't recall the last time I had to wait for something to compile
3) Much less likely to happen since you get binaries for everything - but there's additional cl-xxx tooling that makes even that easier
4) I don't think that's a bad thing. Though sure, I could agree that having the option to automatically restart services would be nice.
5) Yes - and you can also archive changes and basically have a git log of conf changes.
If that's your thing, sure. I find even Gentoo too automated for my preferences. I'm using the most basic of the available profiles and tweak everything manually in package.use. I stopped using OpenRC and switched to just sysvinit/inittab.
But then, if you want binary packages and such, why use Gentoo or a fork?
Then one day at work I wanted to print something and I think I needed to add LDAP and CUPS use flags ... Rebuilding world with those new flags was not finished by the time I was back from lunch break, or maybe it even failed.
Then I discovered Calculate, and its desktop (e.g. KDE) profile turned out to have all those useful USE flags already set.
Anyway ...
IMHO the main reason to choose/stay with Gentoo/Calculate is flexibility and choice (like not having to use systemd, but also being able to). Habit is a part too - though due to work I've gotten familiar with CentOS and Ubuntu.
I don't necessarily want binary packages. Sure, they are handy/convenient for speed/ease/etc. And even though I can't recall the last time I needed to tweak some package/feature USE flag (maybe the V4L2 virtual camera in OBS?), I really don't want to give that flexibility up... Without it, it would be back to manually figuring out compile/run-time dependencies when all you want is a slightly differently configured/built package.
With some notebooks, some of which were getting on in years, it was simply too resource-intensive to keep updated. GHC alone, for example, often took 12+ hours to compile on the older ones.
https://blogs.gentoo.org/mgorny/2024/08/20/gentoo-profiles-and-keywords-rather-than-releases/

Very cool to see that it's still going strong - I remember managing many machines at scale was a bit of a challenge, especially keeping ahead of vulnerabilities.
I used to tinker a lot with my systems, but as I've gotten older and my time has become more limited, I've abandoned a lot of it and now favor "getting things done". Though I still tinker a lot with my systems and have my own workflow and setup, it is no longer at the level of re-compiling the kernel with specific optimizations, if that makes sense. I am now paid to "tinker" with my clients' systems, but I stay away from the unconventional there, if I can.
I did reach a point where describing systems is useful, at least as a way of documenting them. I keep circling around NixOS but haven't taken the plunge yet. It feels like Containerfiles are an easier approach, but they (at least Docker) sort of feel designed around describing application environments as opposed to full system environments. So your approach is intriguing.
They absolutely are! I actually originally just wanted a base container image for running services on my hosts that a) I could produce a full source code listing for and b) have full visibility over the BoM, and realized I could just ‘FROM scratch’ & pull in Gentoo's stage3 to basically achieve that. That also happens to be the first thing you do in a new Gentoo chroot, and I realized that pretty much every step in the Gentoo install media that you run after (installing software, building the kernel, setting up users, etc.) could also be run in the container. What are containers if not “portable executable chroots” after all?

My first version of this build system was literally to copy / from the container to a mounted disk I manually formatted. Writing to disk is actually the most unnatural part of this whole setup, since no one really has a good solution for doing it without using the kernel; I used to format and mount devices directly in a privileged container, but now I just boot a qemu VM in an unprivileged container and do it in an initramfs, since I was already building those manually too.

I found while iterating on this that all of the advantages you get from Containerfiles (portability, repeatability, caching, minimal host runtime, etc.) naturally translated over to the OS-builder project, and since I like deploying services as containers anyway, there's a high degree of reuse going on vs needing separate tools and paradigms everywhere.
I’ll definitely write it up and post it to HN at some point, trying to compact the whole project in just that blurb felt painful.
The example project uses Alpine base container images, but I'm using a Debian base container for something else I'm working on.
That said, I haven't tried Gentoo with binaries from official repositories yet. Maybe that makes it less time-consuming to keep your system up to date.
It's still 100% pure Gentoo (and actually these days even vanilla Gentoo itself offers precompiled binaries), so you can still compile things in the rare cases where a binary isn't already built with the USE/config that you want.
I have had Gentoo on at least one nearby system (physical and/or VM) for about 15 years now. It's always a blast interacting with it.
https://blog.nawaz.org/posts/2023/May/20-years-of-gentoo/
Prior HN discussion: https://news.ycombinator.com/item?id=35989311
Edit: Curious, why the downvote?
I can see no reason for it.
AFAIK Calculate provides more profiles (predefined set of use flags) - instead of just Gnome or KDE/Plasma - it also has Cinnamon, LXQt, MATE and Xfce, as well as one for server(s).
And Calculate also provides binaries for those profiles.
[0] https://www.pcworld.com/article/481872/how_linux_mastered_wa...
I wish I had more time I could dedicate to maintaining my system, I'm marooned on Arch due to lack of time, such a shame.
The game changer for me was using my NAS as a build host for all my machines. It has enough memory and cores to compile on 32 threads. But a full install from a stage3 on my ageing Thinkpad X13 or SBCs would fry the poor things and just isn't feasible to maintain.
I have systemd-nspawn containers for the different microarchitectures and mount their /var/cache/binpkgs and /etc/portage dirs over NFS on the target machines. The Thinkpad can now do an empty tree emerge in like an hour and leaving out the bdeps cuts down on about 150 packages.
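Roughly, the wiring on a target machine looks like this (the hostname and container paths are placeholders; the NFS export details obviously depend on your setup):

  # target machine's /etc/fstab -- 'nas' and the export paths are placeholders
  nas:/var/lib/machines/amd64-v3/var/cache/binpkgs  /var/cache/binpkgs  nfs  ro,defaults  0 0
  nas:/var/lib/machines/amd64-v3/etc/portage        /mnt/portage        nfs  ro,defaults  0 0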
Despite being focused on OpenRC, I have had the most pleasant experience with systemd on Gentoo over all the other distros I've tried.
I have this dream of moving all my Ubuntu servers to Gentoo, but I don't have a clear enough picture of how to centralize management of a fleet of Gentoo machines.
I use NFS to mount the container's /etc/portage to /mnt/portage and symlink the files to the Thinkpad's /etc/portage so I can cherry pick what I want to keep in sync with the build container. Don't have to mess with repos.conf either because portage will look to /var/cache/binpkgs by default.
make.conf is a directory on both machines and has files like 01-common-flags.conf and 02-binhost-flags.conf. The Thinkpad has 01-common-flags.conf and 03-target-flags.conf with EMERGE_DEFAULT_OPTS="--with-bdeps=n --usepkgonly" set, so running emerge -avuDN on the Thinkpad will only update with binaries from the mounted /var/cache/binpkgs. I keep the software in sync by using /etc/portage/sets instead of the world file. Then all the package.* dirs are symlinks as well.
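As a sketch, with the file names mentioned above (the exact contents are examples):

  # build container: /etc/portage/make.conf/02-binhost-flags.conf (example)
  FEATURES="buildpkg"

  # Thinkpad: /etc/portage/make.conf/03-target-flags.conf (example)
  EMERGE_DEFAULT_OPTS="--with-bdeps=n --usepkgonly"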
The Thinkpad served by the binhost is a znver3, so the build container has CFLAGS="-march=x86-64-v3 -mtune=alderlake" set. There are some SIMD extensions the two don't have in common, and it has to build code that runs on both machines; otherwise you could use the target architecture in -march. Using the -mtune option in my case apparently tunes the produced code for the L2 cache size of the Intel chip.
Systemd-nspawn containers are super easy to spin up: you basically install Gentoo from a stage3 and it works like a chroot but with a full init. I run updates irregularly; there's still some manual effort for maintenance, but it's mostly just kicking off emerge and letting it build in a tmux session.
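For anyone curious, spinning one up looks roughly like this (machine name and paths are placeholders):

  # unpack a stage3 into the machine directory, then boot it with a full init
  mkdir -p /var/lib/machines/amd64-v3
  tar xpf stage3-*.tar.xz -C /var/lib/machines/amd64-v3 --xattrs-include='*.*' --numeric-owner
  systemd-nspawn -D /var/lib/machines/amd64-v3 --boot
  # or manage it via machinectl once it's registered:
  #   machinectl start amd64-v3 && machinectl shell amd64-v3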
What has kept me on Gentoo since the first Opteron days (20+ years ago) is that once you do an install, you also learn in part how to fix the things you installed, which can be helpful later on. I also do world rebuilds often which I think is just the equivalent of testing an OS backup for a source based OS. :)
I want to highlight something: Gentoo's developer onboarding system is EXCELLENT. Starting as an active member of the general community, you talk an existing developer into being your mentor and fill out an open book test ( https://projects.gentoo.org/comrel/recruiters/quizzes/ebuild... ) which later is graded/corrected in a couple of meetings which I'd equate to the "job interview". I wish more open source projects (including my own) had such well-documented, straightforward processes to gain commit access. I appreciated the process of doing the quiz as it helped me close gaps in my knowledge.
Now I am happily gaming on Arch Linux, and while I generally develop on a Mac, I need to jump into Debian in Docker and such. But I do hope kids these days end up trying Gentoo; the hacker skillset it builds is priceless. No offense to adults still using it! I just hope there aren't as many "my system doesn't build anymore" situations as I remember.
(I use Fedora, btw)
zppln•3w ago
I will say though that my valgrind is broken due to -march=native. :)
techcode•3w ago
TL;DR: you can pre-configure and keep updating/building new versions of your own live-boot image of Gentoo/Calculate, which kind of gets you "previous known-good builds", just the other way around.
Oh, and the other thing I've also never needed to use is the update/rescue of a Gentoo/Calculate installation through its flip-flopping between two root partitions.
Calculate installer by default creates two root partitions, but I've only ever used one. And so far `cl-update` never broke the system - even when I was so far behind that my version of python and glibc got masked (or maybe even removed).
Back on vanilla Gentoo - being that far behind usually meant it was easier to reinstall Gentoo from stage3 :D