https://www.debian.org/releases/trixie/release-notes/issues....
>"You can return to /tmp being a regular directory by running systemctl mask tmp.mount as root and rebooting."
I kind of wish the distros had decided on a new /tmpfs (or /tmp/tmpfs, etc.) directory for applications to opt in to using a RAM disk, rather than replacing /tmp and making everyone opt out.
This too can be turned off.
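For the curious, checking and opting out is quick (the mask command is straight from the release notes; findmnt just shows the current state):

$ findmnt /tmp                   # FSTYPE column reads "tmpfs" if the new default is active
$ sudo systemctl mask tmp.mount  # then reboot; /tmp is a plain directory again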
The problem with /tmp was that many people and apps used it as an inter-user communication medium and expected persistence there, so it created both security problems and wasted disk space over time.
Since few packaged apps used /tmp like that, and most used the directory the way it was intended, the change was made.
I'm running Debian testing on one of my systems, and the change created no ill effects whatsoever. Not eating SSD write cycles can be considered a plus, even.
However, as I also noted in the relevant thread, the approach might have a couple of downsides in some scenarios.
If you have the time and the desire, discussion starts at https://lists.debian.org/debian-devel/2024/05/msg00014.html
Alas it's still not suitable as a daily driver for the average home user and probably never will be. It is unfortunate that Ubuntu has to reign supreme in that regard.
I think that's fine for Debian. Maybe even a good thing.
Debian supplies a rock solid base for many general purpose tasks. Ubuntu and other distros are free to package that up in a user friendly way, but as a technical user I want to be able to go upstream and get a basic Linux system without extra stuff.
> Alas it's still not suitable as a daily driver for the average home user and probably never will be. It is unfortunate that Ubuntu has to reign supreme in that regard.
It's true that Ubuntu used to be the OOB-ready version of Debian, which "just worked", while base Debian took a lot of fiddling to even get wifi working.
These days though I find the opposite to be true: Ubuntu does lots of weird things I don't want, and I have to "fiddle" to disable all that. A base Debian install (ISO with firmware bundled), however, just works.
For me, Ubuntu is officially off my list of distros I bother spending my time on.
In fact, Ubuntu has never been an especially user-friendly distro. At the beginning it was just a Debian installed with Debian's experimental installer, before Debian decided to use that installer in stable. Nothing more, nothing less.
If you wanted a distro that made an effort towards beginners looking for GUI config tools, you had to look at SUSE and Mandrake (later Mandriva).
The only specific thing Ubuntu did for beginners was sending out CDs for free, at a time when not everybody had fast internet connections and people would look for paper magazines that came with a CD/DVD. And they stopped doing that a loooooong time ago.
Assuming you are not malicious, I will kindly help with your bad memory: Ubuntu always had very good proprietary driver support, which made laptops actually work and helped beginners. I also remember it had a graphical installer, unlike Debian, and that was certainly beginner-friendly. Maybe some other distro offered an easy install and came with proprietary drivers set up, but I can't remember a deb-based distro doing that.
Anyway, you were wrong: the CDs were not the only thing that made Ubuntu appeal to beginners. There were Linux magazines with CDs each month, and they were not super expensive. My first Linux was a Kubuntu 6.10 from a magazine, and I am still running Kubuntu today, though I ran Debian, Sidux, Arch, Mandriva, and SUSE in the past, when I had time to try different distros, compile custom kernels, etc.
Proprietary driver installation was the sole reason for the existence of Linux Mint, which was a fork of Ubuntu, so your memory is incorrect.
Lucky 13 and all... And not a single issue so far. Very happy!
Thanks to the Debian team for putting out yet another high quality, reliable release :)
From my build box:
chroot $MOUNTPOINT/ /bin/bash -c "http_proxy=$aptproxy apt-get -y --purge --allow-remove-essential install sysvinit-core sysvinit-utils systemd-sysv- systemd-"
There is a weird dependency you cannot get around without simultaneously removing and installing in parallel. A Debian bug highlighted the above, with a trailing "-" on systemd-sysv- and systemd- (which tells apt-get to remove them) as the fix, along with --allow-remove-essential. After this fix, sysvinit builds with debootstrap were almost identical to bookworm's. This includes for desktops.
As with buster through bookworm, you'll still need something like this too:
$ cat /etc/apt/preferences.d/systemd
# this is the only systemd package that is required, so we up its priority first...
Package: libsystemd0
Pin: release trixie
Pin-Priority: 700

# exclude the rest
Package: systemd
Pin: release *
Pin-Priority: -1

Package: *systemd*
Pin: release *
Pin-Priority: -1

Package: systemd:i386
Pin: release *
Pin-Priority: -1

Package: systemd:amd64
Pin: release *
Pin-Priority: -1
I run a full desktop too, without it. Multiple variants.
I don't use gnome's Desktop Environment though (although I do run gtk/gnome software), so cannot comment on that.
Impressive that i386 support made it all the way to August 2025. I have Debian 10 Buster running on a Pentium 3 which only EOL'd last year in June 2024. It's still useful on that hardware and I'm grateful support continued as long as it did!
OpenBSD still supports i386 for those looking for a modern OS on old 32-bit hardware.
I am not happy about unnecessary e-waste, but an i386 almost certainly has an order of magnitude less horsepower than a Raspberry Pi or N100.
(Admittedly, the 32-bit support Ubuntu ships is less than a full OS and you can't install Ubuntu on a 32-bit machine these days)
> The i386 architecture is now only intended to be used on a 64-bit (amd64) CPU.
It would probably take a few days to start Steam on one of those considering its load times on current hardware.
If that's all there is to it, you can still use debootstrap, compile a kernel, and point the root parameter to your shiny new install.
If the official i386 arch was built with instructions that your hardware doesn't support, tough cookies.
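If you go that route, the bootstrap itself is a one-liner (the target path is just a placeholder):

$ sudo debootstrap --arch=i386 trixie /mnt/newroot http://deb.debian.org/debian

Then build a kernel tuned for the actual CPU and boot it with root= pointing at the new install.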
While theoretically possible, that would only happen on processors older than 30 years. Debian's i386 architecture still uses -march=i686 as its baseline compiler target, which is the venerable Pentium Pro: https://en.wikipedia.org/wiki/P6_(microarchitecture)
$ curl -s http://deb.debian.org/debian/dists/trixie/main/binary-amd64/Packages.gz | zgrep ^Package: | wc -l
68737
$ curl -s http://deb.debian.org/debian/dists/trixie/main/binary-i386/Packages.gz | zgrep ^Package: | wc -l
66958
True i386 support would mean compatibility with the original Intel 386 processor from 1985. The 486 added a few additional instructions in 1989, but things really changed with the Pentium in 1993 - that gave us i586, which is the bare minimum for most modern software today. Much software can still run on regular Pentiums today if compiled for it, but SSE2 optimizations require at least a Pentium 4 or Core CPU.
I play with retro PCs often and found OpenBSD's i386 target stopped supporting real 386 CPUs after the 4.1 release, and dropped support for i486 somewhat recently in 6.8. It now requires at least a Pentium class CPU, i586, though the arch is still referred to as i386 likely because it's a common proxy for "32-bit".
What does this mean? If all 69k+ packages are installed, it will take up this much space?
I like Debian's measured pragmatism with ideology, how it's a distro of free software by default but it also makes it easy to install non-free software or firmware blobs. I like Debian's package guidelines, I like dpkg, I like the Debian documentation even if Arch remains the best on that front. I like the stable/testing package streams, which make it easy to choose old but rock-stable vs just a bit old and almost as stable.
And one of the best parts is, I've never had a Debian system break without it being my fault in some way. Every case I've had of Debian being outright unbootable or having other serious problems, it's been due to me trying to add things from third-party repositories, or messing up the configuration or something else, but not a fault of the Debian system itself.
In what way did Ubuntu go downhill?
Using apt to install some packages installs snap plumbing and downloads the package as a snap automatically. You don't even have to install snap manually.
There's no malicious intent though, it's made to "impose a positive pressure on the snap team to produce better work and keep their quality high" (paraphrased, but this was the official answer).
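You can see this for yourself on an Ubuntu box; the deb is a transitional wrapper, and a pin is the usual opt-out (a sketch using firefox as the classic example; the pin pattern is the commonly documented one and may need tuning per release):

$ apt show firefox | grep -i snap    # description reveals the transitional snap wrapper

$ cat /etc/apt/preferences.d/no-snap-firefox
Package: firefox*
Pin: release o=Ubuntu*
Pin-Priority: -1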
Debian has been a safe haven since.
The topic is not whether snaps are avoidable or not, but whether Ubuntu is going downhill. And snaps are purported to be part of that downhill slide, which would be Ubuntu's NIH syndrome. As far as I know, Ubuntu's only successful development is Ubuntu itself - the other projects have all failed over the years, and snap, while ongoing, is not winning any popularity contests either.
But in practice, even for flatpak, the only realistic place you can publish your flatpak if you want any traction at all is flathub, so both formats effectively have only one store right now. But flatpak at least allows a custom store, while for some strange reason Canonical decided not to allow snap that freedom.
Also, rugpulling users and migrating things to snaps without asking their users in order to "create a positive pressure on snap team to keep their quality high" didn't sit well with the users.
> But in practice even for flatpak the only realistic place you can publish your flatpak if you want any traction at all would be flathub
But, for any size of fleet, from homelab to an enterprise client farm, I can host my local flathub and install my personal special-purpose flatpaks without paying anyone or wondering whether my packages will still be there the next morning.
Freedom matters, esp. if that's the norm in that ecosystem.
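As a sketch of that (the app ID and paths are made up; --no-gpg-verify only makes sense for a trusted internal repo):

$ flatpak-builder --repo=/srv/flatpak/repo build-dir com.example.MyTool.yaml   # build and export to a local OSTree repo
$ flatpak remote-add --no-gpg-verify internal /srv/flatpak/repo                # register it as a remote
$ flatpak install internal com.example.MyTool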
I was neutral-ish about Ubuntu, but I flat out avoid them now, and migrate any remaining Ubuntu servers to Debian in the shortest way possible.
I've been using Debian for the last 20 years or so, BTW.
Red Hat does the same. They reinvented the wheel on multiple occasions (systemd and its whole ecosystem like systemd-resolved and systemd-timesyncd and the whole kitchen sink; podman, buildah, dnf, etc., etc.)
They just have more success at getting their NIH babies accepted as the standard by everyone else. Canonical just fails at that (often for good reasons; Unity was downright crap for some time) and abandons stuff, which doesn't help their future causes.
Of course that goes against the spirit of FOSS, but there's a bit more nuance there than simply saying "snaps are proprietary".
I can't seem to find it. Any pointers would be helpful, so at least I can know the latest state of this thing.
Containers, popular as they may be on servers, can only add breakage and overhead to desktops, especially for an established and already much better organized system like Debian's apt. There just haven't been any new desktop apps for way over a decade that would warrant yet another level of indirection.
Switch to sudo-rs, uu-coreutils (rust based stuff), etc., etc.
It's not a Debian derivative anymore. It's something else.
Was not my cup of tea before, it's even more not my cup of tea now.
snap, lxd (not lxc!), mir, upstart, ufw.
It's neverending, and it's always failing.
Seamless LXC and virtual machine management with clustering, a clean API, YAML templates and a built-in load balancer, it's like Kubernetes for stateful workloads.
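A taste of the workflow (image alias and instance names are illustrative):

$ lxc launch images:debian/13 web        # system container
$ lxc launch images:debian/13 db --vm    # full VM, same CLI and API
$ lxc cluster list                       # clustering is built in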
This made it more fragile. It was really nice in the late 2000s, but gradually became worse.
In the early days it had a differing and usually better aligned release schedule for the critical graphics stack.
As a function of time, you are increasingly likely to get rug pulled once Shuttleworth decides to collect his next ransom.
I think it’s worth keeping that in mind with all the hate Ubuntu gets. Most users are just silently getting their work done on an LTS they update every two years.
Most of the Linux-based (enterprise and/or embedded) appliances are built upon Debian, for example.
P.S.: The total number of Debian installations and their derivatives is unknown, BTW. Debian installations and infra do not collect such information. You can install "popularity-contest", but the question defaults to "no" during installation, so most people do not send in package selection lists, unlike Canonical's tracking of snap installations.
I had a few issues caused by Ubuntu that weren't upstream. One was Tracker somehow eating up lots of CPU power and slowing the system down. Another was with input methods, I need to type in a pretty rare language and that was just broken on Ubuntu one day. Not upstream.
The bigger problem was Ubuntu adding stuff before it was ready. The Unity desktop, which is now fine, was initially missing lots of basic features and wasn't a good experience. Then there was the short-lived but pretty disastrous attempt to replace Xorg with Mir.
My non-tech parents are still on Ubuntu, have been for some twenty years, and it's mostly fine there. I wouldn't recommend it if you know your way around a Linux system but for non-tech, Ubuntu works well. Still, just a few months ago I was astonished by another Ubuntu change. My mom's most important program is Thunderbird, with her long-running email archive. The Thunderbird profile has effortlessly moved across several PCs as it's just a copy of the folder. Suddenly, Ubuntu migrated to the snap version of Thunderbird, so after a software update she found herself with a new version and an empty profile. Because of course the new profile is somewhere under ~/snap and the update didn't in any way try to link to the old profile.
Then there were stupid things like Amazon search results in the Unity dash search when looking for your files or programs. Nah. Ubuntu isn't terrible by any means but for a number of years now, I'd recommend Linux Mint as the friendly Debian derivative.
Debian is great, but I can't say this is a universal experience. In particular, I've been bitten by Debian's heavy patching of the kernel in Debian stable (specifically, backport regressions in the fast-moving DRM subsystem leading to hard-to-debug crashes), despite Debian releases technically having the "same" kernel for the duration of a release. In contrast, Ubuntu just uses newer kernels, and -hwe avoids a lot of patch friction. So I still use Debian in VMs but Ubuntu on bare metal. I haven't tried the kernel from the debian-backports repos though.
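For anyone who wants to try that route, it's a short detour (classic one-line .list shown for brevity; trixie prefers the deb822 .sources format discussed elsewhere in this thread):

$ echo 'deb http://deb.debian.org/debian trixie-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
$ sudo apt update
$ sudo apt install -t trixie-backports linux-image-amd64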
On the other hand, I had and still have many Debian installations, some with Intel integrated graphics. None of them has created any problems for a very, very long time. To be honest, I don't even remember any of my Intel iGPU systems crashing.
...and I've been using Debian for almost two decades, and I have seen tons of GPU problems. I used to write my Xorg.conf files without using man, heh. :)
Maybe you can give Debian another chance.
I’ve thought about (ab)using a Proxmox repository on an otherwise stock Debian system before just for the kernel…
Needs citation.
Debian stable uses upstream LTS kernels and I'm not aware of any heavy patching they do on top of that.
Upstream -stable trees are very relaxed about the patches they accept, and unfortunately they don't get serious testing before being released either (you can see there's a new release in every -stable tree roughly every week), so that's probably what you've been bitten by.
You're right that stability comes from testing; not enough testing happens around Linux, period, regardless of which branch is being discussed.
It's not easy testing kernels, but the bar is pretty low.
That's curious, because when I was learning to make Debian packages, I found the official documentation to be far better than I had seen from any other distro. The Policy Manual in particular is very detailed, continually improving, and even documents incremental changes from each version to the next. (That last bit makes it easy for package maintainers to keep up with current best practices.)
Does Arch have something better in this department?
Are you perhaps comparing the Arch wiki to Debian's wiki? On that front I would agree with you.
I’m not familiar with the metric definition they use, but I’d be worried if close to 100% of the packages they included in bookworm hadn’t been updated in the roughly 2 years between releases.
I use Debian for most of my servers, so I’m sure there is a valid explanation of that phrase.
Code doesn't "go bad" and not everything is affected by ecosystem churn and CVEs.
An established package not having updates for 2y is not in and of itself problematic.
And even if it was?
If you look at the number of packages in Debian, only a small portion have CVEs. There are nearly 30k package sources, and an output of 60k binary packages.
Yet we only get a few security updates weekly.
Another example? Both trixie and bookworm use the same firefox ESR (Extended Support Release) version. Both will get updated when Firefox forces everyone to the next ESR.
Beyond that, some packages are docs. Some are 'glue' packages, eg scripts to manage Debian. These may not change between releases.
Lastly, Debian actually maintains an enormous number of upstream orphaned packages. In those cases, the version number is the same (sometimes), but with security updates slapped on if required.
From my perspective, outside of timely and quick security updates, I have zero desire for a lot of churn. Why would I? Churn means work. Churn means changed stability.
We get plenty of fun and churn from kernel, and driver related changes (X, Wayland, audio/nic, etc), and desktop apps. And of course from anything running forward, with scissors, like network connected joy.
On a personal note, Trixie is very exciting for me because my side project, ntfy [1], was packaged [2] and is now included in Trixie. I only learned that it was included very late in the cycle, when the package maintainer asked for license clarifications. As a result, the Debian-ized version of ntfy doesn't contain the web app (which is a reaaal bummer), and has a few things "patched out" (which is fine). I approached the maintainer and just recently added build tags [3] to make it easier to remove Stripe, Firebase and WebPush, so that the next Debian-ized version will not have to contain (so many) awkward patches.
As an "upstream maintainer", I must say it isn't obvious at all why the web app wasn't included. It was clearly removed on purpose [4], but I don't really know what to do to get it into the next Debian release. Doing an "apt install ntfy" is going to be quite disappointing for most if the web app doesn't work. Any help or guidance is very welcome!
[1] https://github.com/binwiederhier/ntfy
[2] https://tracker.debian.org/pkg/ntfy
[3] https://github.com/binwiederhier/ntfy/pull/1420
[4] https://salsa.debian.org/ahmadkhalifa/ntfy/-/blob/debian/lat...
> The webapp is a nodejs app that requires packages that are not currently in debian.
Since vendoring dependencies inside packages is frowned upon in Debian, the maintainer would have needed to add those packages themselves and maintain them. My guess is that they didn't want to take on that effort.
Woah. Shouldn’t Node and Golang be in Debian’s official repos by now?
Debian sources need to be sufficient to build. So for npm projects, you usually have a Debian-specific package.json where each npm dependency (transitively, including devDependencies needed for the build) needs to either be replaced with its equivalent Debian package (which may also need to be ported), vendored (usually less ideal, especially for third-party code), or removed. Oh, and enjoy aligning versions for all of that. That's doable but non-trivial work with such a sizable lockfile. If I had to guess, the maintainer couldn't justify the extra effort of combing through all those packages.
I also think in either case the Debian way would probably be to split it out as a complementary ntfy-web package.
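To get a feel for the effort, it's easy to check which npm dependencies Debian already ships (semver here is just an example):

$ apt-cache search --names-only '^node-' | wc -l    # npm modules already packaged as node-*
$ apt policy node-semver                            # check a specific dependency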
Have had my RPi on Debian since Debian 9, with smooth upgrades every time.
The thing I like most about Debian is that you need to know at least a little about what is going on to use it. For me, it does a good job of following "as simple as possible and no simpler."
Then my private laptop has had a bunch of graphics issues after upgrading to 13 (it manifests differently in a lot of applications and changes when you pick a different desktop theme; not even sure how to describe it). The new pipewire (pulseaudio replacement, idk why that needed replacing) does not work properly when the CPU is busy (so I currently play games without game sounds or music in the background). The latter also sometimes (1 in 5 times maybe?) crashes when resuming from suspend, but instead of dying, it spams the systemd journal, which diligently stores it all in a shitty binary file (that you can't selectively prune), runs completely out of disk space, and breaks various things on the rest of the system until you restart the pipewire process and purge any and all logs (remember, no selective pruning)... Tried various things I found in web searches and threw an LLM at it as well, but no dice. I assume these issues come from it not being a fresh install, so no blame/complaint here really, just annoying; I haven't had these issues when doing previous upgrades. Not yet sure how to resolve it; perhaps I'll end up doing a completely new install and seeing what configs I can port until issues start showing up
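For what it's worth, the journal can at least be capped globally, even if selective pruning isn't a thing (a sketch of what I mean):

$ sudo journalctl --vacuum-size=200M    # one-off: trim archived journal files to ~200MB
# persistent cap: set SystemMaxUse=200M in /etc/systemd/journald.conf, then:
$ sudo systemctl restart systemd-journald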
Surely these things are not a Debian-specific issue, but I hadn't noticed anything like that with either 11 or 12
Edit: oh yeah, and the /tmp(fs) counter is at 1 so far. I wonder how many times I'll have run out of RAM by Debian 14, by forgetting I can't just dump temporary files into /tmp anymore without estimating the size correctly beforehand
https://www.debian.org/releases/trixie/release-notes/issues....
# example:
udevadm test-builtin net_setup_link /sys/class/net/eno4 2>/dev/null
ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link
ID_NET_LINK_FILE_DROPINS=
ID_NET_NAME=eno4 <-- note the NIC name that will take effect after reboot
Here's a one-liner, excluding a bond interface and lo, that gives a nice list of pre- and post-change names:

for x in $(grep auto /etc/network/interfaces | cut -d ' ' -f 2 | grep -Ev 'lo|bond0'); do
  echo -n "$x:"
  udevadm test-builtin net_setup_link /sys/class/net/$x 2>/dev/null | grep NET_NAME | cut -d = -f 2
done
The doc's logic is that after you've upgraded to trixie, and before reboot, you're running enough of systemd to see what it will name interfaces after reboot. So far I have not had an interface name change due to an upgrade, so I cannot say whether the above detects it.
My first system migrated in less than 10 minutes, incl. package downloads and reboot. It's not a beast either: an N100 mini PC connected to a ~50 Mbps network.
Minimal: https://cdimage.debian.org/debian-cd/current/amd64/bt-cd/deb...
Full: https://cdimage.debian.org/debian-cd/current/amd64/bt-dvd/de...
"trixie"
64-bit PC (amd64),
64-bit ARM (arm64),
ARM EABI (armel),
ARMv7 (EABI hard-float ABI, armhf),
64-bit little-endian PowerPC (ppc64el),
64-bit little-endian RISC-V (riscv64),
IBM System z (s390x)
It's good to see RISC-V becoming a first-class citizen, despite the general lack of hardware using it at the moment. I do wonder, where are PowerPC and IBM System z being used these days? Are there modern Linux systems being deployed on something other than amd64, arm64, and (soon?) riscv64?
> them being kept by major distros is therefore not as "natural" as other architectures
From a developer perspective, s390x is also the last active big-endian architecture (I guess there's SPARC as well, but that's on life support and Oracle doesn't care about anyone running anything but Solaris on it), so it's useful for picking up endianness bugs.
Another interesting thing is that the only two 32-bit architectures left supported are armel and armhf. Debian has already announced that this will be the last release that supports armel (https://www.debian.org/releases/trixie/release-notes/issues....), so I guess it'll be a matter of time before they drop 32-bit support altogether.
What I did is switch to NetBSD.
Fortunately, bookworm will continue to receive updates for almost 3 years, so I am not in a hurry to look for a new OS for these relics. OpenBSD looks like the natural successor, but I am not sure if the wifi chips are supported. (And who knows how long these netbooks will continue to work, they were built in 2008 and 2009, so they've had a long life already.)
EDIT: Hooray, thanks to everyone who made this possible, is what I meant to say.
I've found it pretty easy though to use some KDE components built from source on top of the standard Debian packages. Build with kdesrc-build, then have those binaries linked to from your ~/bin and you're set. It might get difficult if you want to rebuild some key components like plasmashell itself but I've been using locally built versions of Kate and Konsole without issue.
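A sketch of that setup (assuming kdesrc-build's default ~/kde install prefix; adjust if you changed it):

$ kdesrc-build kate konsole
$ ln -s ~/kde/usr/bin/kate ~/bin/kate
$ ln -s ~/kde/usr/bin/konsole ~/bin/konsole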
Not necessarily forever, though. Bookworm got minor Plasma updates, so I wouldn't be surprised if Trixie does as well.
Hardware support is good and the UI is great! It feels snappier than Ubuntu, maybe due to the lack of snap and fewer services and applications installed by default.
See below:
APT is moving to a different format for configuring where it downloads packages from. The files /etc/apt/sources.list and *.list files in /etc/apt/sources.list.d/ are replaced by files still in that directory but with names ending in .sources, using the new, more readable (deb822 style) format. For details see sources.list(5). Examples of APT configurations in these notes will be given in the new deb822 format.
If your system is using multiple sources files then you will need to ensure they stay consistent.
- https://wiki.debian.org/SourcesList#APT_sources_format- https://www.debian.org/releases/trixie/release-notes/upgradi...
"apt modernize-sources" command can be used to simulate and replace ".list" files with the new ".sources" format.
Modernizing will replace .list files with the new .sources format, add Signed-By values where they can be determined automatically, and save the old files into .list.bak files.
This command supports the 'signed-by' and 'trusted' options. If you have specified other options inside [] brackets, please transfer them manually to the output files; see sources.list(5) for a mapping.
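For the curious, the conversion looks roughly like this (a stock mirror line as the example; the Signed-By path may differ per setup):

# before, one line in /etc/apt/sources.list:
deb http://deb.debian.org/debian trixie main

# after, a stanza in /etc/apt/sources.list.d/debian.sources:
Types: deb
URIs: http://deb.debian.org/debian
Suites: trixie
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg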
Wow, I'm amazed a third of packages haven't seen an update in, ehm checks
> After 2 years, 1 month, and 30 days of development, the Debian project is proud to present its new stable version
I'm a fan of old software myself, in the sense that I find it cool to see F-Droid having a (usually tiny) package that is over 10 years old but it does exactly what I want with no bugs and it works perfectly on Android 10. I wonder if those 30% more commonly fall in the "it's fine as it is" category or in the "no maintainers available" category
My first contact with Linux was with Debian 2.1. Exactly with this distro CDs https://archive.org/details/linux-actual-06-2/LinuxActual_01...
To be honest, it was a miserable experience to install it on your main computer without anything else available to look for help in case of problems. It was also hard to really try it due to the lack of drivers for the (then-current) ADSL modems.
But here I am a crapload of years later, still loving it :-)
That being said, I like Flatpak, so I installed it (was super easy and Flathub provides instructions), and I added a few Gnome Shell extensions (a Dock so my wife can find apps when she occasionally uses my laptop).
Debian gives you a feeling of ownership of your computer in a way the corporate distros don't, but is still pretty user friendly (unlike Arch).
I'd definitely install Debian Stable on a grandparents' computer.
Debian has been the stable footing of my Free computing life for three decades. Everything about their approach — from showing me Condorcet, organising stable chaos, moving forward by measured consensus, and basing everything on hard wrought principles — has had an effect on me in some way, from technical to social and back again.
I love this project and the immeasurable impact it has had on the world through their releases and culture.
With all my love, g’o xx