https://www.debian.org/releases/trixie/release-notes/issues....
>"You can return to /tmp being a regular directory by running systemctl mask tmp.mount as root and rebooting."
I kind of wish the distros had decided on a new /tmpfs (or /tmp/tmpfs, etc) directory for applications to opt-in to using ram-disk rather than replacing /tmp and having to opt-out.
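That opt-in model is easy to build yourself today without touching /tmp; a hypothetical /etc/fstab entry (the /ramtmp mountpoint name is made up here) might look like:

```
# Opt-in tmpfs at a dedicated mountpoint, leaving /tmp as a regular on-disk directory
tmpfs  /ramtmp  tmpfs  nosuid,nodev,size=50%,mode=1777  0  0
```

Applications would then have to be pointed at it explicitly (e.g. via TMPDIR), which is exactly the opt-in behavior wished for above.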
This too can be turned off.
The problem with /tmp was many people and apps used it as an inter-user communication medium and expected persistency there, so it created both security problems and wasted disk space over time.
Since few packaged apps used /tmp like that, and most used the directory the way it was meant to be used, the change was made.
I'm running Debian testing on one of my systems, and the change created no ill effects whatsoever. Not eating SSD write cycles can be considered a plus, even.
However, as I also noted in the relevant thread, the approach might have a couple of downsides in some scenarios.
If you have the time and the desire, discussion starts at https://lists.debian.org/debian-devel/2024/05/msg00014.html
Alas it's still not suitable as a daily driver for the average home user and probably never will be. It is unfortunate that Ubuntu has to reign supreme in that regard.
I think that's fine for Debian. Maybe even a good thing.
Debian supplies a rock solid base for many general purpose tasks. Ubuntu and other distros are free to package that up in a user friendly way, but as a technical user I want to be able to go upstream and get a basic Linux system without extra stuff.
> Alas it's still not suitable as a daily driver for the average home user and probably never will be. It is unfortunate that Ubuntu has to reign supreme in that regard.
It's true that Ubuntu used to be the out-of-the-box-ready version of Debian, which "just worked", while base Debian took a lot of fiddling to even get wifi working.
These days though I find the opposite to be true: Ubuntu does lots of weird things I don't want, and I have to "fiddle" to disable all of that. A base Debian install (an ISO with firmware bundled), however, just works.
For me, Ubuntu is officially off my list of distros I bother spending my time on.
In fact, Ubuntu was never an especially user-friendly distro. At the beginning it was just Debian installed with Debian's experimental installer, before Debian decided to use that installer in stable. Nothing more, nothing less.
If you wanted to find a distro that was making efforts towards beginners looking for Gui config tools, you had to look at Suse and Mandrake (now Mandriva).
The only thing Ubuntu did specifically for beginners was shipping CDs for free, at a time when not everybody had a fast internet connection and people would look for paper magazines that came with a CD or DVD. And they stopped doing that a long time ago.
Assuming you are not being malicious, I will kindly help with your bad memory: Ubuntu always had very good proprietary driver support, which made laptops actually work and helped beginners. I also remember it had a graphical installer, unlike Debian, and that was certainly beginner-friendly. Maybe some other distro offered an easy install and came with proprietary driver setup, but I can't remember a deb-based distro doing that.
Anyway, you were wrong: the free CDs were not the only thing that made Ubuntu appealing to beginners. There were Linux magazines with CDs every month, and they were not very expensive; my first Linux was a Kubuntu 6.10 from a magazine, and I am still running Kubuntu today, though I ran Debian, Sidux, Arch, Mandriva, and SUSE in the past, back when I had time to try different distros, compile custom kernels, etc.
Proprietary driver installation was the sole reason for the existence of Linux Mint, which was a fork of Ubuntu, so your memory is incorrect.
I think your memory is incorrect; you might be thinking of video codecs and maybe Flash, not proprietary drivers, since Ubuntu already supported easy driver installation before Mint existed.
Why not?
My family members need little more than a web browser, media player, and office suite. Debian Stable is very suitable here; arguably more so than other distros, which tend to require maintenance more often.
Lucky 13 and all... And not a single issue so far. Very happy!
Thanks to the Debian team for putting out yet another high quality, reliable release :)
From my build box:
chroot $MOUNTPOINT/ /bin/bash -c "http_proxy=$aptproxy apt-get -y --purge --allow-remove-essential install sysvinit-core sysvinit-utils systemd-sysv- systemd-"
There is a weird dependency you cannot get around without simultaneously removing and installing in parallel. A Debian bug highlighted the above, with a trailing "-" on systemd-sysv- and systemd- as the fix, along with --allow-remove-essential. After this fix, sysvinit builds with debootstrap were almost identical to bookworm's. This includes desktops.
As with buster through bookworm, you'll still need something like this too:
$ cat /etc/apt/preferences.d/systemd
# this is the only systemd package that is required, so we up its priority first...
Package: libsystemd0
Pin: release trixie
Pin-Priority: 700
# exclude the rest
Package: systemd
Pin: release *
Pin-Priority: -1
Package: *systemd*
Pin: release *
Pin-Priority: -1
Package: systemd:i386
Pin: release *
Pin-Priority: -1
Package: systemd:amd64
Pin: release *
Pin-Priority: -1
I run a full desktop too, without it. Multiple variants.
I don't use gnome's Desktop Environment though (although I do run gtk/gnome software), so cannot comment on that.
Impressive that i386 support made it all the way to August 2025. I have Debian 10 Buster running on a Pentium 3 which only EOL'd last year in June 2024. It's still useful on that hardware and I'm grateful support continued as long as it did!
OpenBSD still supports i386 for those looking for a modern OS on old 32-bit hardware.
I am not happy about unnecessary e-waste, but an i386 machine almost certainly has an order of magnitude less horsepower than a Raspberry Pi or an N100.
(Admittedly, the 32-bit support Ubuntu ships is less than a full OS and you can't install Ubuntu on a 32-bit machine these days)
> The i386 architecture is now only intended to be used on a 64-bit (amd64) CPU.
It would probably take a few days to start Steam on one of those considering its load times on current hardware.
e.g. notice that i386 is still listed at the bottom of https://packages.debian.org/trixie/bash
You can recycle e-waste (and yes, I know SOME e-waste ends up in China/India/etc. Not all does.)
The e-waste is of substantially less concern than the massive difference in carbon footprint from power consumption.
The goal of universal compatibility is what separates the Debian project from commercial software and even other open-source projects.
The legacy x86 architecture is still far more popular than some of the platforms Debian advertises official support for, and x86-based processors were manufactured for niche applications until recently, e.g. the AMD Geode and others.
I find it really unfortunate that the Debian Project is removing official support for new x86 installations. The silver lining is that there seems to be an unofficial port, and it's likely that niche distributions like MX Linux and antiX will maintain their own builds.
It would be ideal if open source could develop stronger mechanisms to keep supporting the large number of these relatively niche architectures (e.g., through increased use of emulation over real hardware).
If that's all there is to it, you can still use debootstrap, compile a kernel, and point the root parameter at your shiny new install.
If the official i386 arch was built with instructions that your hardware doesn't support, tough cookies.
While theoretically possible, that would only happen on processors older than 30 years. Debian's i386 architecture still uses -march=i686 as its baseline compiler target, which is the venerable Pentium Pro: https://en.wikipedia.org/wiki/P6_(microarchitecture)
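Given that i686 baseline, a quick sanity check is possible on Linux: the distinguishing i686-era instruction is CMOV, which the kernel reports as a CPU flag. A sketch (assumes /proc/cpuinfo is available):

```shell
# Any i686-class (Pentium Pro or later) x86 CPU advertises the cmov flag;
# if this prints "cmov", the CPU can run Debian's i386 userland.
grep -m1 -o 'cmov' /proc/cpuinfo
```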
It was used in the OLPC XO-1. The Cisco ASA line of firewalls also used Geode processors at least at some point in its lifetime.
$ curl -s http://deb.debian.org/debian/dists/trixie/main/binary-amd64/Packages.gz | zgrep ^Package: | wc -l
68737
$ curl -s http://deb.debian.org/debian/dists/trixie/main/binary-i386/Packages.gz | zgrep ^Package: | wc -l
66958
(Answering the "to what end?" question, a lot of 32bit-only hardware is still available and dirt cheap in the second-hand market (e.g. early "netbooks"), much of it quite well-built and enjoyable to use. While such hardware can no longer realistically browse the "modern" web, it can still find a lot of use for more lightweight tasks, including acting as a "thin client" for more powerful machines.)
https://wiki.debian.org/LTS/Bullseye https://wiki.debian.org/LTS/Using#Check_for_unsupported_pack...
https://www.debian.org/releases/
Buster has not been supported by Debian for many years.
Buster LTS was EOL last summer. Note that LTS is supported by volunteers via a non-profit, not Debian (though they do a good job).
ELTS is paid support, again not by Debian.
Do look at Debian's wiki for more info on support timeframes, and what LTS and ELTS means.
https://wiki.debian.org/LTS https://wiki.debian.org/LTS/Team https://wiki.debian.org/LTS/Extended https://wiki.debian.org/LTS/Funding
Back to LTS:
Debian LTS is not handled by the Debian Security and Release teams, but by a separate group of volunteers and companies interested in making it a success.
To the point, Freexian is 100% not Debian, not "part" of Debian, it merely uses Debian's infra gratis for LTS. This does not detract from the good work they do, but we must also not confuse a private company, and its goals, with Debian and its goals.
LTS tries its best, but only supports what it can; that's not its fault. Thus they give preference to packages which are more widely used, and for which they have received donations.
So wildly popular things such as apache2, mariadb, and so on are very much going to be handled. Some rare package which has 400 users worldwide? Not so much.
LTS will very much take patches and any help, but that still ties in to the number of users. If a package has 400 users worldwide, and most have moved on to the next release? Well, I hope you see my point.
(I've moved customers off of LTS for using rare packages, whilst reassuring them that LAMP servers are very much supported due to this. Popularity counts here, due to efforts of volunteers and externals.)
--
ELTS only supports a further subset of packages. It's not "full" support. I think one would be exceptionally unwise to use it, for say a desktop. That is, unless they were paying for support and had obtained a list of all packages supported.
--
https://www.freexian.com/lts/extended/docs/debian-10-support...
"Note that when you request a quote, we send you back a list of packages that are not supported or that have limitations in their support so that you can take an informed decision."
Yes, I know that page has a git repo and so on for some support information.
But my points are; not the full distro is supported, you have to track this yourself, you need to be diligent, and even so you need to be sure you're not running rarer packages.
Once again, I do want to reiterate, these are both excellent programs. They do a good job, they're dedicated, but we must be aware of the limitations here.
An example being the differences between security support for main, non-free, contrib in stable Debian:
https://www.debian.org/security/faq#contrib
As you can see, there is no actual guaranteed security support for contrib and non-free. The reasons are logical, however, users need to be aware of the nuance here.
Just as they need to be aware of the nuance of LTS and ELTS.
For example, all of my server installs have non-free, non-free-firmware and contrib blocked via pinning in preferences.d, with only specific absolutely required packages then allowed back in.
(For example I may allow command line apps, but not anything network connected, and only with a once over of functionality and SUID bits and other such things)
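A minimal sketch of that kind of component pinning (the file name is hypothetical; apt supports pinning by archive component via c=):

```
# /etc/apt/preferences.d/no-nonfree (hypothetical name)
Package: *
Pin: release c=non-free
Pin-Priority: -1

Package: *
Pin: release c=non-free-firmware
Pin-Priority: -1

Package: *
Pin: release c=contrib
Pin-Priority: -1
```

Individual required packages can then be allowed back in with a higher-priority stanza naming them explicitly.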
--
Really, I see LTS as a crutch that normal users should never use. I suggest we collectively not encourage Desktop users (for example) to use LTS.
At least for bullseye, the LTS team supposedly supports all packages except for games and a few others. It's trivial to find out which packages aren't supported, too: just run check-support-status (shipped in the debian-security-support package), no need to email anyone.
https://wiki.debian.org/LTS/Bullseye https://wiki.debian.org/LTS/Using#Check_for_unsupported_pack... https://salsa.debian.org/debian/debian-security-support/-/bl...
Agreed on the rest, although do note that LTS contributors are paid, while the security team members mostly aren't (although some are).
I think in practice, when contrib/non-free stuff has security updates from upstreams, Debian does get updates in stable/LTS. For example the Intel microcode, or WiFi firmware.
I too feel like Debian having LTS is a waste of time, people should be able to upgrade to the next stable within the one year of regular security support for oldstable.
BTW, Ubuntu security support has a similar issue; main is supported, universe is not.
True i386 support would mean compatibility with the original Intel 386 processor from 1985. The 486 added a few instructions in 1989, but things really changed with the Pentium in 1993; that gave us i586, which is the bare minimum for most modern software today. Much software can still run on regular Pentiums today if compiled for it, but SSE2 optimizations require at least a Pentium 4 or Core CPU.
I play with retro PCs often and found OpenBSD's i386 target stopped supporting real 386 CPUs after the 4.1 release, and dropped support for i486 somewhat recently in 6.8. It now requires at least a Pentium class CPU, i586, though the arch is still referred to as i386 likely because it's a common proxy for "32-bit".
It was a bit of a strange decision since there were undoubtedly more 386, 486, and Pentium users than some of the platforms Linux continued to support, but that's the choice they made. But they weren't alone. Even NetBSD requires a 486DX or better.
However, that doesn't stop one from installing a Pentium Overdrive in an old Socket 3 board and running the latest release. ;)
What does this mean? If all 69k+ packages are installed, it will take up this much space?
I like Debian's measured pragmatism with ideology, how it's a distro of free software by default but it also makes it easy to install non-free software or firmware blobs. I like Debian's package guidelines, I like dpkg, I like the Debian documentation even if Arch remains the best on that front. I like the stable/testing package streams, which make it easy to choose old but rock-stable vs just a bit old and almost as stable.
And one of the best parts is, I've never had a Debian system break without it being my fault in some way. Every case I've had of Debian being outright unbootable or having other serious problems, it's been due to me trying to add things from third-party repositories, or messing up the configuration or something else, but not a fault of the Debian system itself.
In what way did Ubuntu go downhill?
Using apt to install some packages installs snap plumbing and downloads the package as a snap automatically. You don't have to install it manually.
There's no malicious intent though, it's made to "impose a positive pressure on the snap team to produce better work and keep their quality high" (paraphrased, but this was the official answer).
Debian has been a safe haven since.
The topic is not whether snaps are avoidable, but whether Ubuntu is going downhill. And snaps are purported to be part of that decline, which would be Ubuntu's NIH syndrome. As far as I know, Ubuntu's only successful development is Ubuntu itself; the other projects have all failed over the years, and snap, while ongoing, is not winning any popularity contests either.
But in practice, even for flatpak, the only realistic place you can publish your flatpak if you want any traction at all is Flathub, so both formats effectively have only one store right now. But flatpak allows a custom store, while for some strange reason Canonical decided not to allow snap that freedom.
Also, rugpulling users and migrating things to snaps without asking their users in order to "create a positive pressure on snap team to keep their quality high" didn't sit well with the users.
> But in practice even for flatpak the only realistic place you can publish your flatpak if you want any traction at all would be flathub
But, for any size of fleet from homelab to an enterprise client farm, I can host my local flathub and install my personal special-purpose flatpaks without paying anyone and thinking whether my packages will be there next morning.
Freedom matters, especially if that's the norm in that ecosystem.
I was neutral-ish about Ubuntu, but I flat out avoid it now, and migrate any remaining Ubuntu server to Debian in the shortest way possible.
I've been using Debian for the last 20 years or so, BTW.
Red Hat do the same. They reinvented the wheel on multiple occasions (systemd and its whole ecosystem, like systemd-resolved and timed and the whole kitchen sink; podman, buildah, dnf, etc.).
They just have more success at getting their NIH babies accepted as the standard by everyone else. Canonical just fail at that (often for good reasons; Unity was downright crap for some time) and abandon stuff, which doesn't help their future causes.
https://bbs.archlinux.org/viewtopic.php?pid=1149530#p1149530
> like systemd-resolved and timed
They're not forced on anybody, they're not required by systemd, and many distributions use more feature-rich alternatives (including, afaik, RHEL — last time I looked at it, they used dnsmasq and chrony). They're also often shipped as separate optional packages:
$ apt search 'systemd-timesyncd|systemd-resolved'
systemd-resolved/testing,now 257.7-1 amd64
systemd-timesyncd/testing 257.7-1 amd64
> podman, buildah
Still not anywhere near as popular as Docker. Although technically they're far better than Docker, and if anyone is using them, it's for that reason.
> dnf
Only used by RHEL and its upstream Fedora?
---
All of this makes very little sense.
> Still not anywhere near as popular as Docker. Although technically they're far better than Docker, and if anyone is using them, it's for that reason.
NIH packages are generally expected to be less popular, yes. They have some technical merit, though in my opinion that's mostly trade-offs rather than one being strictly better than the other. I would be surprised if everybody using them is using them because of technical merit as opposed to it being pushed by the distro.
Special mention goes to NetworkManager, which has become the de facto standard way to configure networking because it's good. And with nmcli I can even remember how to connect to wifi from single user mode.
This depends on the phrasing. We could also say that Red Hat produces actually useful software, in contrast with Canonical, whose developments don't seem to provide value over existing solutions.
We could also say that Canonical tries really hard to do exactly what Red Hat does, but in a slightly different space, and not very successfully.
Packaged as: https://github.com/justinclift/snapd-empty/releases/download...
It's just an empty package that tells the system snap is installed, to stop the broken dependency chains you otherwise get from force uninstalling snap.
It's been working fine on a handful of Ubuntu 24.04 systems I've been handed and can't change the OS of, for about half a year now.
Then you add e.g. the mozilla PPA such that its firefox package gets installed instead.
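A sketch of what that opt-out looks like in practice (based on Mozilla's published APT repository instructions; assumes their repo is already added as a source, and a priority above 1000 so apt will switch away from the installed transitional package):

```
# /etc/apt/preferences.d/mozilla (sketch)
Package: firefox*
Pin: origin packages.mozilla.org
Pin-Priority: 1001
```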
Of course that goes against the spirit of FOSS, but there's a bit more nuance there than simply saying "snaps are proprietary".
I can't seem to find it. Any pointers would be helpful, so at least I can know the latest state of this thing.
Containers, popular as they may be on servers, can only add breakage and overhead to desktops, especially for an established and already much better organized system like Debian's apt. There just haven't been any new desktop apps for way over a decade that would warrant yet another level of indirection.
For example, with flatpak you select a base runtime for your package that contains mostly system-agnostic libraries. With snap, you specify an Ubuntu version as a base runtime and additional dependencies that are Ubuntu packages.
The end result should be similar to Flatpak, where you have practically no dependencies, as it packages almost everything.
I didn't say "the snap format".
The server isn't, and the client is hostile to using an alternative server. Snaps are a solution, and picking out one piece is deceptive.
Switch to sudo-rs, uu-coreutils (rust based stuff), etc., etc.
It's not a Debian derivative anymore. It's something else.
Was not my cup of tea before, it's even more not my cup of tea now.
snap, lxd (not lxc!), mir, upstart, ufw.
It's neverending, and it's always failing.
Seamless LXC and virtual machine management with clustering, a clean API, YAML templates and a built-in load balancer, it's like Kubernetes for stateful workloads.
This made it more fragile. It was really nice in the late 2000s, but gradually became worse.
In the early days it had a differing and usually better aligned release schedule for the critical graphics stack.
As a function of time, you are increasingly likely to get rug pulled once Shuttleworth decides to collect his next ransom.
Their lawyers' willingness to risk shipping pre-built ZFS kernel modules (that are always in sync with the kernel). Pretty important if you're into that sort of thing; it's easier to remove cruft once post-install than to keep an eye on DKMS for years (making sure it hasn't disassembled itself and keeps working).
I think it’s worth keeping that in mind with all the hate Ubuntu gets. Most users are just silently getting their work done on an LTS they update every two years.
Most of the Linux-based (enterprise and/or embedded) appliances are built upon Debian, for example.
P.S.: The total number of Debian installations and their derivatives is unknown, BTW. Debian installations and infra do not collect such information. You can install "popularity-contest", but the question defaults to "no" during installation, so most people do not send in package-selection lists, unlike Canonical's tracking of snap installations.
I had a few issues caused by Ubuntu that weren't upstream. One was Tracker somehow eating up lots of CPU power and slowing the system down. Another was with input methods, I need to type in a pretty rare language and that was just broken on Ubuntu one day. Not upstream.
The bigger problem was Ubuntu adding stuff before it was ready. The Unity desktop, which is now fine, was initially missing lots of basic features and wasn't a good experience. Then there was the short-lived but pretty disastrous attempt to replace Xorg with Mir.
My non-tech parents are still on Ubuntu, have been for some twenty years, and it's mostly fine there. I wouldn't recommend it if you know your way around a Linux system but for non-tech, Ubuntu works well. Still, just a few months ago I was astonished by another Ubuntu change. My mom's most important program is Thunderbird, with her long-running email archive. The Thunderbird profile has effortlessly moved across several PCs as it's just a copy of the folder. Suddenly, Ubuntu migrated to the snap version of Thunderbird, so after a software update she found herself with a new version and an empty profile. Because of course the new profile is somewhere under ~/snap and the update didn't in any way try to link to the old profile.
Then there were stupid things like Amazon search results in the Unity dash search when looking for your files or programs. Nah. Ubuntu isn't terrible by any means but for a number of years now, I'd recommend Linux Mint as the friendly Debian derivative.
The two things I can remember were problems with NFS out of the box (outside having to install nfs-common, which I'm fine with) and apt-cache not displaying descriptions of packages. There were lots of other, minor annoyances that affected people like me but wouldn't affect someone who got into Linux desktops after, say, 2010. My memory sucks though so those are the two I remember. Yes, there were bug reports filed and yes, they sat in the tracker for years with no attention from Ubuntu.
I wound up back on Debian once I got old enough that I didn't care about being behind the times a couple years.
Debian is great but I can't say this is a shared experience. In particular, I've been bitten by Debian's heavy patching of kernel in Debian stable (specifically, backport regressions in the fast-moving DRM subsystem leading to hard-to-debug crashes), despite Debian releases technically having the "same" kernel for a duration of a release. In contrast, Ubuntu just uses newer kernels and -hwe avoids a lot of patch friction. So I still use Debian VMs but Ubuntu on bare metal. I haven't tried kernel from debian-backports repos though.
On the other hand, I had and still have many Debian installations, some with Intel integrated graphics. None of them created any problems for a very, very long time. To be honest, I don't remember even any of my Intel iGPU systems crashed.
...and I have used Debian for almost two decades, and I have seen tons of GPU problems. I used to write my Xorg.conf files without using man, heh. :)
Maybe you can give Debian another chance.
I’ve thought about (ab)using a Proxmox repository on an otherwise stock Debian system before just for the kernel…
Needs citation.
Debian stable uses upstream LTS kernels and I'm not aware of any heavy patching they do on top of that.
Upstream -stable trees are very relaxed about the patches they accept, and unfortunately they don't get serious testing before being released either (you can see there's a new release in every -stable tree roughly every week), so that's probably what bit you.
You're right that stability comes from testing; not enough testing happens around Linux, period, regardless of which branch is being discussed.
It's not easy testing kernels, but the bar is pretty low.
The wiki has more info on this.
https://wiki.debian.org/LTS https://wiki.debian.org/LTS/Team https://wiki.debian.org/LTS/Funding https://wiki.debian.org/LTS/Extended
Despite the reputations, I've had far fewer issues on Arch-based desktop distros than back when I was rolling Ubuntu and Debian.
That said, Debian on a server every time.
When people switch to arch they typically set things up from scratch, end up choosing simple tools and avoid most of the unstable stuff distros push onto you.
You can do that well enough with Debian's "testing" and "unstable" release channels. Aside from the few months leading up to a new "stable" release, which usually isn't a big deal (and fixing regressions in "stable" should then be a higher priority anyway). Just don't install it on systems that you actually depend on to keep working. But running it on your desktop at home that you only use to play and experiment with is just fine.
Whether that qualifies as "heavy" or not is of course a matter of opinion, but it's not nothing.
That's curious, because when I was learning to make Debian packages, I found the official documentation to be far better than I had seen from any other distro. The Policy Manual in particular is very detailed, continually improving, and even documents incremental changes from each version to the next. (That last bit makes it easy for package maintainers to keep up with current best practices.)
Does Arch have something better in this department?
Are you perhaps comparing the Arch wiki to Debian's wiki? On that front I would agree with you.
But you can't (easily) configure package X itself before you install it; and after you install it, it runs immediately so you only get to configure it after the first run.
printf '#!/bin/sh\nexit 101\n' > /usr/sbin/policy-rc.d
chmod +x /usr/sbin/policy-rc.d
I think this is the recommended way to avoid autostarting services on Debian. This should still be the default behavior, though.
I learned nftables with Bookworm and labwc with Trixie.
labwc supports Wayland with Openbox configuration.
You're not trying hard enough ;-)
I have Debian on an old MacBook Pro and had it on an even older iMac, and I've had a few problems over the years. Always with proprietary drivers - WiFi, graphics, webcams, etc. - Apple really don't want people using free software on their hardware. There's always been a fix, but there have been a few stressful moments and hoops to jump through.
But it's definitely my favorite distro, and I run it everywhere I can. Pretty much always "just works" anywhere but Apple.
So here is what I _don't_ like about Debian :-)
- I don't like the Debian package tooling (dpkg, debootstrap, debuild...). Actually I hate everything about the experience of Debian packaging. Every time I package for Debian, I end up with a messed-up setup of chroots and have to make triple sure nothing leaked in from my environment.
- Debian has a habit of repackaging everything to its own taste, disregarding upstream philosophy. Debian packages will have their own microcosm of configuration directories, defaults, paths, etc., orthogonal to what a pristine installation looks like.
- Debian has the annoying habit of starting installed services by default. So you always have to dance around your configuration management: disable services, install them, configure them, then restart them.
I still use Debian but it's hard to forget stuff like that even after all these years.
There is plenty that could be said of Debian but as far as I’m concerned that’s not part of it.
Debian patches software for purely ideological reasons, because they think it is not free enough. That's not pragmatism; it's the reverse of pragmatism. And it certainly is a real drag on the teams developing the software Debian tries to ship.
https://blog.kronis.dev/blog/debian-updates-are-broken
https://blog.kronis.dev/blog/debian-and-grub-are-broken
Then again, I’ve had most software occasionally break, I’m thankful that Debian exists.
I’m not familiar with the metric definition they use, but I’d be worried if close to 100% of the packages they included in bookworm hadn’t been updated in the roughly 2 years between releases.
I use Debian for most of my servers, so I’m sure there is a valid explanation of that phrase.
Code doesn't "go bad" and not everything is affected by ecosystem churn and CVEs.
An established package not having updates for 2y is not in and of itself problematic.
And even if it was?
If you look at the number of packages in Debian, only a small portion have CVEs. There are nearly 30k source packages, producing an output of 60k binary packages.
Yet we only get a few security updates weekly.
Another example: both trixie and bookworm ship the same Firefox ESR (Extended Support Release) version. Both will get updated when Firefox forces everyone to the next ESR.
Beyond that, some packages are docs. Some are 'glue' packages, e.g. scripts to manage Debian. These may not change between releases.
Lastly, Debian actually maintains an enormous number of upstream orphaned packages. In those cases, the version number is the same (sometimes), but with security updates slapped on if required.
From my perspective, outside of timely and quick security updates, I have zero desire for a lot of churn. Why would I? Churn means work. Churn means changed stability.
We get plenty of fun and churn from kernel, and driver related changes (X, Wayland, audio/nic, etc), and desktop apps. And of course from anything running forward, with scissors, like network connected joy.
And yeah, it must be an incredible amount of work to stay on top of all this.
https://security-tracker.debian.org/ https://security-team.debian.org/
On a personal note, Trixie is very exciting for me because my side project, ntfy [1], was packaged [2] and is now included in Trixie. I only learned about the fact that it was included very late in cycle when the package maintainer asked for license clarifications. As a result the Debian-ized version of ntfy doesn't contain a web app (which is a reaaal bummer), and has a few things "patched out" (which is fine). I approached the maintainer and just recently added build tags [3] to make it easier to remove Stripe, Firebase and WebPush, so that the next Debian-ized version will not have to contain (so many) awkward patches.
As an "upstream maintainer", I must say it isn't obvious at all why the web app wasn't included. It was clearly removed on purpose [4], but I don't really know what to do to get it into the next Debian release. Doing an "apt install ntfy" is going to be quite disappointing for most if the web app doesn't work. Any help or guidance is very welcome!
[1] https://github.com/binwiederhier/ntfy
[2] https://tracker.debian.org/pkg/ntfy
[3] https://github.com/binwiederhier/ntfy/pull/1420
[4] https://salsa.debian.org/ahmadkhalifa/ntfy/-/blob/debian/lat...
> The webapp is a nodejs app that requires packages that are not currently in debian.
Since vendoring dependencies inside packages is frowned upon in Debian, the maintainer would have needed to add those packages themselves and maintain them. My guess is that they didn't want to take on that effort.
Woah. Shouldn’t Node and Golang be in Debian’s official repos by now?
Debian follows the same philosophy as for other more traditional languages and expects that all these dependencies are packaged as individual Debian packages.
Debian sources need to be sufficient to build. So for npm projects, you usually have a Debian-specific package.json where each npm dependency (transitively, including devDependencies needed for the build) needs to either be replaced with its equivalent Debian package (which may also need to be ported), vendored (usually less ideal, especially for third-party code), or removed. Oh, and enjoy aligning versions for all of that. That's doable but non-trivial work with such a sizable lockfile. If I had to guess, the maintainer couldn't justify the extra effort of combing through all those packages.
I also think in either case the Debian way would probably be to split it out as a complementary ntfy-web package.
> The ntfy image is available for amd64, armv6, armv7 and arm64. It should be pretty straight forward to use.
My advice to you is to deny all support to people using the Debian version of your software and automatically close all bug tickets from Debian, saying you don't support externally patched software.
You would be far from the first to do so and it’s a completely rational and sane decision. You don’t have to engage with the insanity that Debian own policies force on its maintainers and users.
Have had my RPi on Debian since Debian 9, with smooth upgrades every time.
The thing I like most about Debian is that you need to know at least a little about what is going on to use it. For me, it does a good job of following "as simple as possible and no simpler."
My first Debian install was in 1996. I had no real idea what I was doing, but it was amazing to me that I could remote-display windows from machines across campus, and it was alien compared to the Windows 3.x/95 I was used to at that point. There was no apt at that point, or none that I was aware of, and adding new stuff was painful.
I started using debian preferentially as my workstation/desktop OS in about 2005, and was installing it on embedded systems (linksys nslu2) to make micro servers by … etch I think it was.
By 2008 I was at IBM and they allowed a choice of windows or redhat on your laptop, and if you were adventurous there was experimental support for Ubuntu which might work on Debian. I made it work and discovered that among 330k people there were 22 of us running it!
Always loved it, it always just made more sense than other distros somehow. My daily driver is a Mac now, but I still have a few Debian machines around.
Then my private laptop has had a bunch of graphic issues after upgrading to 13 (it manifests differently in a lot of applications and it changes when you pick a different desktop theme, not even sure how to describe it).

The new pipewire (pulseaudio replacement, idk why that needed replacing) does not work properly when the CPU is busy (so I currently play games without game sounds or music in the background). The latter then also sometimes (1 in 5 times maybe?) crashes when resuming from suspend, but instead of dying, spams systemd which diligently stores it all in a shitty binary file (that you can't selectively prune), runs completely out of disk space, and breaks various things on the rest of the system until you restart the pipewire process and purge any and all logs (remember, no selective pruning)...

Tried various things I found in web searches and threw an LLM at it as well, but no dice. I assume these issues are from it not being a fresh install, so no blame/complaint here really, just annoying and I haven't had these issues when doing previous upgrades. Not yet sure how to resolve; perhaps I'll end up doing a completely new install and seeing what configs I can port until issues start showing up.
Surely these things are not a Debian-specific issue, but I haven't noticed something like that with either 11 or 12
Edit: oh yeah, and the /tmp(fs) counter is at 1 so far. I wonder how many times I'll have run out of RAM by Debian 14, by forgetting I can't just dump temporary files into /tmp anymore without estimating the size correctly beforehand
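A quick way to check whether your /tmp is currently RAM-backed and thus affected (the stat invocation here is generic, not from the release notes):

```shell
# Print the filesystem type backing /tmp; "tmpfs" means RAM-backed
stat -f -c %T /tmp
# To opt out (per the release notes), as root: systemctl mask tmp.mount && reboot
```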
In Gnome, Blender becomes unresponsive but everything else is still usable. In Cinnamon, the entire system becomes unresponsive.
https://www.debian.org/releases/trixie/release-notes/issues....
# example:
udevadm test-builtin net_setup_link /sys/class/net/eno4 2>/dev/null
ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link
ID_NET_LINK_FILE_DROPINS=
ID_NET_NAME=eno4 <-- note the NIC name that will apply after reboot
Here's a loop, excluding a bond interface and lo, that gives a nice list of pre- and post-change names:

for x in $(grep auto /etc/network/interfaces | cut -d ' ' -f 2 | grep -Ev 'lo|bond0'); do
  echo -n "$x:"
  udevadm test-builtin net_setup_link /sys/class/net/$x 2>/dev/null | grep NET_NAME | cut -d = -f 2
done
The doc's logic is that after you've upgraded to trixie, and before reboot, you're running enough of systemd to see what it will name interfaces after reboot. So far I have not had an interface name change due to an upgrade, so I cannot say that the above actually detects it.
*haha
enoX should always stay stable, as it's the BIOS (in some ACPI table) telling us that this device/port has this ID.
ensX means the NIC in PCIe slot X, but your PCIe tree can contain bridges, so technically you could have multiple NICs in the same slot (what the BIOS declares as a slot). That's why there were a lot of breaking NIC naming changes over the years while systemd figured out the right heuristics that are safe, enabling/disabling slot naming when a PCIe bridge is present, but only in some cases.
Also, for historical reasons, the PCIe slot number was read indirectly, leading to conflicts in some cases (this was fixed in systemd 257).
Every year's cope with systemd.
My first system migrated in less than 10 minutes, incl. package downloads and reboot. It's not a beast either: an N100 mini PC connected to a ~50 Mbps network.
If your sources file references the release name (e.g. bookworm), you change it to trixie, then “apt update && apt dist-upgrade”.
or,
If your sources file directly references distro-suites (e.g. stable), you just “apt update && apt dist-upgrade”, since stable now points to trixie.
After the first reboot, you run “apt autopurge” to remove packages which are not needed anymore.
…and you’re done.
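The release-name case boils down to a search-and-replace in your sources file; a sketch on an illustrative line (your file will differ):

```shell
# What changing the release name does to a typical bookworm sources line
echo 'deb http://deb.debian.org/debian bookworm main contrib' \
  | sed 's/bookworm/trixie/g'
# then, as root: apt update && apt dist-upgrade
```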
Minimal: https://cdimage.debian.org/debian-cd/current/amd64/bt-cd/deb...
Full: https://cdimage.debian.org/debian-cd/current/amd64/bt-dvd/de...
"trixie"
64-bit PC (amd64),
64-bit ARM (arm64),
ARM EABI (armel),
ARMv7 (EABI hard-float ABI, armhf),
64-bit little-endian PowerPC (ppc64el),
64-bit little-endian RISC-V (riscv64),
IBM System z (s390x)
It's good to see RISC-V becoming a first-class citizen, despite the general lack of hardware using it at the moment. I do wonder, where are PowerPC and IBM System z being used these days? Are there modern Linux systems being deployed on something other than amd64, arm64, and (soon?) riscv64?
them being kept by major distros is therefore not as "natural" as other architectures
From a developer perspective, s390x is also the last active big-endian architecture (I guess there's SPARC as well, but that's on life support and Oracle doesn't care about anyone running anything but Solaris on it), so it's useful for picking up endianness bugs.
Another interesting thing is that the only two 32-bit architectures left supported are armel and armhf. Debian has already announced that this will be the last release that supports armel (https://www.debian.org/releases/trixie/release-notes/issues....), so I guess it'll be a matter of time before they drop 32-bit support altogether.
The end of an era.
IBM.
And they own redhat, so I imagine they put a lot of time and money into making the kernel work.
Why Debian in particular, not sure.
What I did is switch to NetBSD.
I suppose as some kind of headless home server it could still have been useful. OTOH for something that runs 24/7 a RPi would use a fraction of the electricity and still be a lot more powerful.
So yes, beyond nostalgia and some embedded/industrial usecases, it's hard to see a use for a 32-bit only PC these days.
They all still have DVD reader drives and are nice for ripping CDs. Despite the fact that the drives are nearing 20 years of age (machines are from ~2005) they still perform better than most “new” external drives. Of course one could also move the drives to a newer machine but many of them use the IDE connector which is not commonly found on modern systems. Also, modern cases typically don't account for (multiple) 5.25" drives.
The other use case is flashing microcontrollers. When fiddling around with electronics there is always a risk of a short circuit or other error that could, in the worst case, kill the attached PC's mainboard. I feel much safer attaching my self-built electronics to an old machine than to my amd64 workstation.
Due to their age, I think the old machines may not live much longer -- I fear not even 10 more years, some of my old 32-bit laptops have already failed. Hence even for me it does not make sense to try keeping up the software support. Maybe I switch them to a BSD or other Linux distribution if they live long enough but for now the machines run OK with Debian Bookworm (newly oldstable), too.
Fortunately, bookworm will continue to receive updates for almost 3 years, so I am not in a hurry to look for a new OS for these relics. OpenBSD looks like the natural successor, but I am not sure if the wifi chips are supported. (And who knows how long these netbooks will continue to work, they were built in 2008 and 2009, so they've had a long life already.)
EDIT: Hooray, thanks to everyone who made this possible, is what I meant to say.
So nothing critical. But something they are still good at, and being very small makes them a natural fit for these use cases.
My ~/.config/mpv/config:
# start
ytdl-format=bestvideo[height<=?480][fps<=?30]+bestaudio/best
vo=gl
audio-pitch-correction=no
quiet=yes
pause=no
vd-lavc-skiploopfilter=all
demuxer-cache-wait=yes
demuxer-max-bytes=4MiB
# end
My ~/yt-dlp.conf:
# start of file
--format=bestvideo[height<=?480][fps<=?30]+bestaudio/best
# end of file
For the rest, I use streamlink from a virtualenv (I do the same with yt-dlp) with wrappers
in $HOME/bin. The yt-dlp wrapper:
#!/bin/sh
. $HOME/src/yt-dlp/bin/activate
$HOME/src/yt-dlp/bin/yt-dlp "$@"
The streamlink wrapper:
#!/bin/sh
. $HOME/src/streamlink/bin/activate
$HOME/src/streamlink/bin/streamlink "$@"
To install streamlink:
mkdir -p ~/src/streamlink
cd ~/src/streamlink
virtualenv .
. bin/activate
pip3 install -U streamlink
The same with yt-dlp:
mkdir -p ~/src/yt-dlp
cd ~/src/yt-dlp
virtualenv .
. bin/activate
pip3 install -U yt-dlp
On the rest, I use mutt+msmtp+mbsync, slrn, sfeed, lynx/links, mocp, mupdf for PDF/CBZ/EPUB,
nsxiv for images, tut for Mastodon and Emacs just for Telegram (I installed tdlib from OpenBSD
packages and then I installed Telega from MELPA). Overall it's a really fast machine. CWM+XTerm+Tmux is my main environment. I have some SSH connections open to somewhere else on the 3rd tag (virtual desktop), and the 2nd one is for Dillo.
I've found it pretty easy though to use some KDE components built from source on top of the standard Debian packages. Build with kdesrc-build, then have those binaries linked to from your ~/bin and you're set. It might get difficult if you want to rebuild some key components like plasmashell itself but I've been using locally built versions of Kate and Konsole without issue.
Not necessarily forever, though. Bookworm got minor Plasma updates, so I wouldn't be surprised if Trixie does as well.
Even at this point, Plasma 6.4 has been out for almost two months and 6.3 will not get any more updates ever. While everyone else is upgrading, Debian is going to be stuck on an already unsupported version for another two years or however much.
Debian is great for what it is, but you better hope you don't run into issues with your desktop environment because they will not be addressed.
You can always not install QT or KDE packages and compile your desktop from source. It's a major pain in the ass but I did it for years. A side benefit is you can participate in testing and interact with KDE developers directly.
Another option is to go all FrankenUNIX and add KDE neon sources to your apt configuration. I've done similar but I don't recommend it.
Or you can just run unstable. Lots of people do. I did for a long time, and as long as you're willing to fix the package system occasionally it's not a bad experience. Certainly better than the two previous options.
Hardware support is good and UI is great! It feels snappier than Ubuntu, maybe due to the lack of snap and fewer services and applications installed by default.
See below:
APT is moving to a different format for configuring where it downloads packages from. The files /etc/apt/sources.list and *.list files in /etc/apt/sources.list.d/ are replaced by files still in that directory but with names ending in .sources, using the new, more readable (deb822 style) format. For details see sources.list(5). Examples of APT configurations in these notes will be given in the new deb822 format.
If your system is using multiple sources files then you will need to ensure they stay consistent.
- https://wiki.debian.org/SourcesList#APT_sources_format
- https://www.debian.org/releases/trixie/release-notes/upgradi...
"apt modernize-sources" command can be used to simulate and replace ".list" files with the new ".sources" format.
Modernizing will replace .list files with the new .sources format, add Signed-By values where they can be determined automatically, and save the old files into .list.bak files.
This command supports the 'signed-by' and 'trusted' options. If you have specified other options inside [] brackets, please transfer them manually to the output files; see sources.list(5) for a mapping.
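For reference, a converted entry in the new deb822 .sources format looks roughly like this (the suites and keyring path are illustrative; see sources.list(5) for the full field list):

```
Types: deb
URIs: http://deb.debian.org/debian
Suites: trixie trixie-updates
Components: main contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
```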
Oh nifty, I hand converted all mine a couple years back. It would have been nice to have that then (or know about it?). I do really like the new deb822 format, having the gpg key inline is nice. I do hope that once this is out there the folks with custom public apt repos will start giving out .sources files directly. Should be more straightforward than all the apt-key junk one used to have to do (especially when a key rotated).
[0]: https://docs.ansible.com/ansible/latest/collections/ansible/...
[0]: https://manpages.debian.org/buster/apt/sources.list.5.en.htm...
Wow, I'm amazed a third of packages haven't seen an update in, ehm checks
> After 2 years, 1 month, and 30 days of development, the Debian project is proud to present its new stable version
I'm a fan of old software myself, in the sense that I find it cool to see F-Droid having a (usually tiny) package that is over 10 years old but it does exactly what I want with no bugs and it works perfectly on Android 10. I wonder if those 30% more commonly fall in the "it's fine as it is" category or in the "no maintainers available" category
My first contact with Linux was with Debian 2.1. Exactly with this distro CDs https://archive.org/details/linux-actual-06-2/LinuxActual_01...
To be honest, it was a miserable experience to install it on your main computer without anything else available to look for help in case of problems. It was also hard to really try it due to lack of drivers for current (at that moment) ADSL modems.
But here I am a crapload of years later, still loving it :-)
That being said, I like Flatpak, so I installed it (was super easy and Flathub provides instructions), and I added a few Gnome Shell extensions (a Dock so my wife can find apps when she occasionally uses my laptop).
Debian gives you a feeling of ownership of your computer in a way the corporate distros don't, but is still pretty user friendly (unlike Arch).
I'd definitely install Debian Stable on a grandparents' computer.
https://www.debian.org/donations
Not affiliated, just a happy user for a long, long time.
That's too big. I'm going to need a smaller distro.
1.) sudo apt-get update && sudo apt-get --yes upgrade && sudo apt-get --yes autoremove --purge
2.) Update all entries of bookworm to trixie in /etc/apt/sources.list.
3.) sudo apt full-upgrade
4.) sudo reboot
5.) sudo apt modernize-sources
> 5.2.2. systemd message: System is tainted: unmerged-bin
> systemd upstream, since version 256, considers systems having separate /usr/bin and /usr/sbin directories noteworthy. At startup systemd emits a message to record this fact: "System is tainted: unmerged-bin". It is recommended to ignore this message. Merging these directories manually is unsupported and will break future upgrades. Further details can be found in bug #1085370.
No option to disable this either, per discussion in https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1085370
http://0pointer.net/blog/projects/stateless.html
https://www.freedesktop.org/wiki/Software/systemd/TheCaseFor...
That said, my knee-jerk is also that this is about strong-arming distros. Which leaves a bad taste in my mouth. I’d be interested to hear other viewpoints though.
The discussion in that bug is that the Debian maintainer (and upstream dev) is open to an upstream patch to add such an option.
Wild interpretation right here.
There are only 2 realistic choices: Leave it as is or patch out the warning message in the Debian package
The Debian maintainer is clearly deflecting responsibility here, because everyone knows very well that upstream wouldn't accept such a patch.
As already explained in the bug report, since Debian has no plan to do that migration in the near future, the aforementioned warning isn't only useless and annoying, it's also potentially harmful; thus the correct action would be to remove it downstream, like they did in xscreensaver (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=819703#84)
But then they'd face the wrath of Lennart, so the only choice left is ignoring the report.
Once I get the hypervisor systemd-free (no systemd on FreeBSD), I can then install a minimal distro in a VM meant to do containerization (like, say, the Talos Linux distro for K8s, that only has a few executables and they're all immutable) and then I can run containers that, by design, have something that is precisely not systemd as PID1.
So life is good: there's a systemd-free world at the end of the tunnel.
Did you consider Devuan? Or is this just taking one annoyance as motivation to fix others at the same time?
Because they said so:
> As part of that we sometimes adopt schemes that were previously used by only one of the distributions and push it to a level where it's the default of systemd, trying to gently push everybody towards the same set of basic configuration [1]
Night-and-day decision-making process compared to Fedora and Arch, which both replaced SDL2 with sdl2-compat, broke a bunch of SDL2 apps because sdl2-compat isn't actually SDL2-compatible yet, and sent everyone to yell at the SDL team about it.
Even a bare Slackware with KDE and KDEi (and even XFCE) can do tons of work by itself after just adding a user and accepting the default group membership by pressing 'up' at the prompt.
Heck, even OpenBSD, minus the volume automount, which can be handled in a breeze with toadd or tray-app in seconds; and if you are smart you can figure out DBUS/FDo mount points and integrate them with XFCE/Plasma/Gnome without too many issues (hotplugd can handle device unmounting if you set up doas.conf accordingly).
The rest? MESA and X.org will handle most of the graphics stuff. Video and audio drivers are autodetected on almost every GNU and *BSD. Printers are often wireless-bound, so any assistant will look them up fast and attach them to CUPS.
Still, I can't handle DPKG/APT's slowness, even if there are libre distros such as Trisquel that use it. If they rebased their distro as a simpler Parabola LTS release with either Mate or LXDE setups, the user experience would be almost the same, but installing packages would happen at a much faster pace.
Does anyone have any suggestions for a 32-bit distro that's still being updated?
Congratulations to the team--phenomenal work!
Alternative to parsing the reproduce web site :)
pip3 install <whatever> --prefix=/usr/local
will install into /usr/local/local, so one has to use the prefix /usr. The same command on, say, OpenSuSE will install into /usr and break your system. Barking mad. https://sources.debian.org/src/python3.7/3.7.3-2+deb10u3/deb...
Certainly a terrible UX, but the motivation is clear: they're trying to get PEP 668 protections for older versions.
Virtual environments work a lot better anyway, honestly. (With a properly crafted `pyvenv.cfg`, it should be possible to convince Python that your /usr/local is a virtual environment, but I can't be sure offhand if there are any serious negative consequences of that.)
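For comparison, the venv route recommended above can be sketched like this (the directory name is just an example):

```shell
# Create an isolated environment instead of pip-installing system-wide
python3 -m venv "$HOME/.venvs/example"
# The venv's interpreter reports its own prefix, not /usr or /usr/local
"$HOME/.venvs/example/bin/python" -c 'import sys; print(sys.prefix)'
# then e.g.: "$HOME/.venvs/example/bin/pip" install some-package
```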
This makes Debian Trixie about 32 times larger than Windows XP (approximately 45 million lines of code), arguably the best Windows OS ever.
Debian Trixie is released about 24 years after Windows XP.
Debian has been the stable footing of my Free computing life for three decades. Everything about their approach — from showing me Condorcet, organising stable chaos, moving forward by measured consensus, and basing everything on hard-wrought principles — has had an effect on me in some way, from technical to social and back again.
I love this project and the immeasurable impact it has had on the world through their releases and culture.
With all my love, g’o xx