Then there's stuff like: "this project only compiles with an obsolete version of gcc" so the alternatives are dropping it or fixing it. Closely related are bugs that only manifest on certain architectures, because the developers only use amd64 and never compile or run it on anything else, so they make incorrect assumptions.
Then there's Python, which drops standard-library modules every release, breaking stuff. So those modules get packaged separately, outside of Python.
There's also cherry picking of bugfixes from projects that haven't released yet.
Is there any reason you think Debian developers are genetically more prone to making mistakes than anyone else? Especially considering that Debian has an intensive test setup that most projects don't even have.
What gives you the idea I think Debian are any more prone to mistakes than anyone else? It’s one of the two distros I use at home. I admire the devs a great deal.
Debian will remove code that “calls home” or tries to update software in a way that bypasses the Debian packaging system.
Thank god. I'm so happy that such a distro exists. It is really hard to run netstat. Or tcpdump. /s
So I can add spotify or signal-desktop to NixOS via nixpkgs, and they won’t succeed at updating themselves. But they might try, which would be a violation of Debian’s guidelines.
It’s a tough line — I like modern, commercial software that depends on some service architecture. And I can be sure it will be sort of broken in 10-15 years because the company went bust or changed the terms of using their service. So I appreciate the principles upheld by less easily excited people who care about the long term health of the package system.
And yes, good point, this was indeed discussed when devbox enabled AI training by default. It somehow seems like there is more than one category of phoning home at play here, since it is obviously tolerated in other cases.
Windows is actively hostile to anything privacy-related.
Arch comes with the default of do it yourself. Lots of footguns, but not hostile OS behavior. Great difference to me.
When I read comments like yours ("Arch is a minefield", "With Arch it is so easy to shoot yourself in the foot"), I never know what this could mean specifically. What could it look like? Can you give me something more concrete? I'm really eager to know what everyone is talking about.
Imagine me: I'd consider myself a Linux noob, although I'm probably not one anymore. I've used Arch Linux for about 3 years now as my daily driver. I'm not young anymore, I didn't grow up with computers, and I don't have it in my blood. I have no formal education in anything computer-related and have never worked in the field. During Covid I learnt Linux from the Arch wiki. Now I'm using it. I've configured some things and can control my computer through the command line.
Every time I read comments like yours, I get the shivers. Did I miss something integral? What do I not know about? Network stuff especially is a blind spot for me. I haven't touched networking beyond the default wiki pages.
All because Ableton cannot be bothered to support Linux :/ I understand that though, just sucks...
More color: I was happy running Arch on a 2012 vintage Dell Latitude (Intel, integrated graphics) for several years. I'm currently quite happy running Arch on a Lenovo Thinkpad T14s (gen2, AMD, integrated graphics).
Arch wiki does have many pages about arch-on-a-particular-model to help once you get a short list of models you're interested in, like this: https://wiki.archlinux.org/title/Lenovo_ThinkPad_T14s_(AMD)_...
Let's take a look at the xz incident, then at how fast rolling release distros get their packages in. That's part of the equation. Bottom line is: you're the first line of defense against potential malicious supply chain attacks. This is why Fedora is Red Hat's testing distro, why Debian has an unstable branch or why openSUSE Tumbleweed exists. Now, Arch isn't just a "testing distro", but it is, by design, more susceptible to these attacks. Thinking bleeding edge is more secure is a fallacy. It is but a consequence of assuming the source maintainers are on your side, which is usually the case, but not always. Or, assuming software is properly tested for bugs every release. If you are still doubting this, look at npm.
Furthermore, have you ever asked why you need to constantly update package signing keys? There is no central build server for Arch. Maintainers build packages on whatever machine they happen to be on, sign with whatever keys they have there, and upload the binary blobs. That isn't trustworthy. There is now a clean-chroot process and all, but maintainers are still able to build packages on their own machines and upload them.
The other problem is not having any mandatory access control policy by default (SELinux, AppArmor, etc.). You can, of course, install your own and go through the trouble of creating the security profiles yourself for the various packages on the system. This is in stark contrast with other distros, where not only do they provide a security policy by default, but their packages also ship with security profiles when needed to make sure it actually works (Fedora and openSUSE come to mind).
Finally, the AUR is cool and all, but my god are you at the mercy of whatever is put on there. Sure the PKGBUILD is super legible, but are you really checking where things are being pulled from? There is a layer of filtering being taken away here, you are the one doing your due diligence.
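To be fair, that due diligence is cheap if you actually do it: a PKGBUILD is plain bash, so before ever running makepkg you can read exactly where sources come from and whether checksums are pinned. A minimal sketch, using an entirely made-up PKGBUILD:

```shell
# Write a toy PKGBUILD to inspect (invented purely for illustration).
cat > PKGBUILD <<'EOF'
pkgname=example
pkgver=1.0
source=("https://example.org/example-1.0.tar.gz")
sha256sums=('SKIP')
EOF

# Check the download locations and whether checksums are pinned.
# 'SKIP' means no integrity check at all -- a red flag worth noticing.
grep -E '^(source|sha256sums)=' PKGBUILD
```

The same two lines are what's worth eyeballing in any real AUR package before building it.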
Now I'm sure different people have different takes on this, some might say that security policies are dumb and useless, others might prefer to be in the bleeding edge assuming the latest and greatest is safer. But take all the layers I have mentioned here, and how their non-existence on Arch could affect security. I hope to have drawn a clearer picture.
(Edit: I must say, I like Arch, I've used it a lot and when in a pinch is my go to. But I've come to appreciate how other distros approach security, and how they layer the process so they have more time to assess vulnerabilities. It is a balancing game, and I hope Arch improves on their processes, I really do.)
I wonder what debian’s process is for dealing with such maintainers.
I hope they make “no phone home” actual policy soon.
"This daily count of users is what keeps us working on the project, because otherwise we have feel like we are coding into a void."
So, they wrote code to phone home (by default) and are now digging in and defending it... just for their feelings? You've got to be kidding me! Is that better or worse than phoning home to serve ads?
Also, it feels misleading to me to call fetching a motd "phoning home". You know Ubuntu does this too, right? That feels more worthy of outrage than this.
If someone tells me this software phones home when it's not transmitting anything other than a ping, it kinda feels like they're lying to me about what it's actually doing.
I'm not upset by the author wanting a bit of human connection to the people who enjoy his software. I empathize with the desire to see people enjoy the stuff I've made. Is it a privacy risk? Perhaps, but it's not even on the top 1k that I see daily. There's more important windmills to tilt at.
But... if you really just wanna be outraged: I recently wrote a DNS server that I use as the default for my home system. Currently it prints every request made; you might wanna try something like that. If you're that upset about this, you're gonna be blown away by what else is going on that you didn't even know about... and that's just DNS queries, it's not even the telemetry getting sent!
I switched to devuan. It’s great, but it sucks that the community split over something so needlessly destructive.
Especially since I was a novice at best before the systemd thing, and my Ubuntu dive involved trying to navigate all 3 of these pretty drastic changes at once (oh yeah, and throw containers on top of that).
I went into it with the expectation that it was going to piss me off, and boy did it easily exceed that threshold.
Just one possible example, among many other projects that have telemetry code in them.
Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data, and both apply to Go's telemetry, so there's no need for a fork.
Telemetry contains personal data by definition. It just varies how sensitive & how it's used. Also it's been shown repeatedly that 'anonymized' is shaky ground.
In that popcon example, I'd expect some Debian-run server to collect a minimum of data, aggregate, and Debian maintainers using it to decide where to focus effort w/ respect to integrating packages, keeping up with security updates, etc. Usually ok.
For commercial software, I'd expect telemetry to slurp whatever is legally allowed / stays under users' radar (take your pick ;), vendor keeping datapoints tied to unique IDs, and sell data on "groups of interest" to the highest bidder. Not ok.
Personal preference: e.g. a crash report: "report" or "skip" (default = skip), with a checkbox for "don't ask again". That way it's no effort to provide the vendor with helpful info, and just as easy to have it get out of users' way.
It's annoying the degree to which vendors keep ignoring the above (even for paying customers), given how simple it is.
No. Please look up the definition of "telemetry" and "personal data". The latter always refers to an identifiable person.
“Person” isn’t either, unless the software knows for sure it’s not being used by a person.
If you have a nginx log and store IP addresses, then yes: that contains PII. So the solution is: don't store the IP addresses, and the problem is solved. Same goes for telemetry data: write a privacy policy saying you won't store any metadata regarding the transmission, and say what data you will transmit (even better: show exactly what you will transmit). Telemetry can be done in a secure, anonymous way. I wonder how people who dispute this even get any work done at all. By your definitions regarding PII, I don't see how you could transmit any data at all.
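As a concrete sketch of "don't store the IP addresses": one common approach is to anonymize at write time, e.g. dropping the last octet so a full address never reaches disk. The log line below is invented:

```shell
# Truncate the client IP's last octet before the line is ever stored.
echo '203.0.113.57 - - [10/Oct/2025] "GET / HTTP/1.1" 200' \
  | sed -E 's/^([0-9]+\.[0-9]+\.[0-9]+)\.[0-9]+/\1.0/'
# -> 203.0.113.0 - - [10/Oct/2025] "GET / HTTP/1.1" 200
```

The aggregate counts stay usable, but no individual address is retained.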
On the server side you would not. Your application would just do the work it was intended to do and would not dial out for anything. All resources would be hosted within the data-center.
On the workstation it is up to the corporate policy and if there is a known data-leak it would be blocked by the VPN/Firewalls and also on the corporate managed workstations by IT by setting application policies. Provided that telemetry is not coded in a way to be a blocking dependency this should not be a problem.
Oh and this is not my definition. This is the definition within literally thousands of B2B contracts in the financial sector. Things are still loosely enforced on workstations meaning that it is up to IT departments to lock things down. Some companies take this very seriously and some do not care.
Why it has to include PII by definition? I'd say DNF Counting (https://github.com/fedora-infra/mirrors-countme) should be considered "telemetry", yet it doesn't seem to collect any personal data, at least by what I understand telemetry and personal data to mean.
I'm guessing that you'd either have to be able to argue that DNF Counting isn't telemetry, or that it contains PII, but I don't see how you could do either.
I would argue that users can't inherently trust the average developer anymore. Ideas about telemetry, phoning home, conducting A/B tests and other experiments on users, and fundamentally, making the software do what the developer wants instead of what the user wants, have been thoroughly baked in to many, many developers over the last 20 or so years. This is why actually taking privacy seriously has become a selling point: It stands out because most developers don't.
If you take the next step: “do not use software from vendors you don’t trust,” you are severely limiting the amount of software you can use. Each user gets to decide for himself whether this is a feasible trade off.
popcon is least likely to be turned on by:
- organizations with any kind of sensible privacy policy (which includes almost everyone running more than a handful of machines)
- individuals concerned about privacy
popcon is most likely to be turned on by Debian developers, and people new to Debian who have just installed it for the first time.
And if they, or someone else, use this for RCE? Asking for a friend. /s
This changed somewhat recently. Telemetry is enabled by default (I think as of Golang 1.23?)
I am only aware since I relatively recently ran into something similar to this on a fresh VM without internet egress: https://github.com/golang/go/issues/68976
https://github.com/golang/go/issues/68946
If golang doesn't fully address this, I guess Debian really should at least change the default (if they haven't already).
Debian indeed does this; the Firefox shipped in the release has telemetry disabled: https://wiki.debian.org/Firefox
For example, when closing Firefox on openSUSE Leap 15.6, "pingsender" is launched to transmit telemetry.
It has been there for years. It is also on other distros.
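For anyone who wants it gone: to the best of my knowledge, the shutdown ping that pingsender transmits is governed by ordinary Firefox prefs, which you can pin in your profile's user.js. The profile path below is a placeholder; look under ~/.mozilla/firefox/ for yours.

```shell
# PROFILE is a placeholder; substitute your real profile directory.
PROFILE="${PROFILE:-$HOME/.mozilla/firefox/xxxxxxxx.default}"
mkdir -p "$PROFILE"

# Disable the telemetry upload so there's nothing for pingsender to send.
cat >> "$PROFILE/user.js" <<'EOF'
user_pref("datareporting.healthreport.uploadEnabled", false);
user_pref("toolkit.telemetry.unified", false);
EOF
```

Distros like Debian effectively bake equivalent settings into their packaged builds.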
Yes, I've disabled the update check. No, it doesn't solve the problem.
Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
And there's the inclusion of non-Free software in the base install, which is completely against the Debian Social Contract.
The Debian Project drastically changed when they decided to allow Ubuntu to dictate their release schedule.
What used to be a distro by sysadmins for sysadmins, and which prized stability over timeliness has been overtaken by Ubuntu and the Freedesktop.Org people. I've been running Debian since version 3, and I used to go _years_ between crashes. These days, the only way to avoid that is to 1) rip out all the Freedesktop.Org code (pulseaudio, udisks2, etc.), and 2) stick with Debian 9 or lower.
No, it's not. Stable ships ESR, which has its update mechanism disabled. Same for Testing/Unstable: it follows standard releases, but autoupdate is disabled.
Even the official Firefox package for Debian from Mozilla has auto-updates disabled, and you get updates from the repository.
The only auto-updating version is the .tar.gz one, which you extract to your home folder.
This is plain FUD.
Moreover:
Debian doesn't ship pulseaudio anymore. It's pipewire since forever. Many people didn't notice this, it was that smooth. Ubuntu's changes are not allowed to permeate without proper rigor (I follow debian-devel), and it's still released when it's ready. Ubuntu follows Debian Unstable, and Unstable suite is a rolling release, and they can snapshot it and start working on it whenever they want.
I've been using Debian since version 3 too, and I still reboot or tend my system only at kernel changes. It's way snappier than Ubuntu with the same configuration for the same tasks, and it's the Debian we all know and like (maybe sans systemd; I'll not open that can of worms).
It seems likely that you personally chose to install a flatpak or tar.gz version, probably because you are running an older, no-longer-supported version of Debian.
> These days, the only way to avoid that (crashes) is...
Running older unsupported versions with known, never-to-be-fixed security holes isn't good advice, nor is ripping out the plumbing. It's almost never a great idea to start ripping out the floorboards to get at the pipes.
Pipewire seems pretty stable and if you really desire something more minimal it's better to start with something minimal than stripping something down.
Void is nice on this front for instance.
[1] https://en.wikipedia.org/wiki/OpenSSL#Predictable_private_ke...
Let's not forget that the patch had been posted on the OpenSSL mailing list and had received a go ahead comment before that.
Having said that, if you're asking whether there's a penetration-test team that reviews all the patches: no, there isn't. Nor is there any such thing for 99.999999999% of all software that exists.
Patches are maintained separately because Debian doesn't normally repack the .tar.gz (or whatever) that projects publish, so as not to invalidate signatures and to let people check that the file is in fact the same. An exception is made when the project publishes an archive containing files that cannot legally be redistributed.
Then you run the tests, and if they pass, you package and upload it.
This allows a patch(set) to be sent upstream as a package, saying "we did this; if you want to include it, these apply cleanly to version x.y.z, and any feedback is welcome".
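The layout being described can be mocked up in a few lines: quilt-style patches live under debian/patches/, listed in a series file, and get applied on top of the pristine source at build time. Everything below is an invented miniature tree, not a real package:

```shell
set -e
# "Upstream" source, left pristine on disk.
mkdir -p demo/src demo/debian/patches
printf 'greeting=hullo\n' > demo/src/config.ini

# A maintainer patch, kept separately under debian/patches/.
cat > demo/debian/patches/fix-greeting.patch <<'EOF'
--- a/src/config.ini
+++ b/src/config.ini
@@ -1 +1 @@
-greeting=hullo
+greeting=hello
EOF

# The series file records which patches apply, and in what order.
echo fix-greeting.patch > demo/debian/patches/series

# At build time the series is applied on top of the unpacked tarball.
patch -d demo -p1 < demo/debian/patches/fix-greeting.patch
grep greeting demo/src/config.ini
# -> greeting=hello
```

Because the upstream tarball itself is untouched, its signature still verifies, and the patch files double as ready-to-forward upstream contributions.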
Last I knew Debian didn't do dedicated security review of patches to security-critical software, which is normal practice for other distributions.
No it wasn't. It was reading (and XORing into the randomness that would become the key being generated) uninitialised char values from an array whose address had been taken, which results in unspecified values, not undefined behaviour.
No, they XORed data from a bunch of entropy sources into an intermediate buffer (that was never initialised, because the whole point of it was to be random) and then XORed that into a buffer from which the key was made. Debian's patch removed that final XOR. It wasn't a bug in the original code (other than being hard to understand).
Can you please read about the issue before commenting more?
No it wasn't. The patch removed the read of the randomness buffer that folded it into another buffer (the MD_Update calls) because Valgrind was warning that the buffer it was reading from had never been initialised.
> Can you please read about the issue before commenting more?
Right back at you.
(meanwhile, long before this incident fedora just compiled openssl with -DPURIFY which disabled the bad behavior in a safe and correct way).
OpenSSL already had an option to safely disable the bad behavior, -DPURIFY.
* https://udd.debian.org/patches.cgi?src=gnupg2&version=2.4.7-...
There is a lot of political stuff in there related to standards. For a specific example see:
* https://sources.debian.org/src/gnupg2/2.4.7-19/debian/patche...
The upstream GnuPG project (and the standards faction they belong to) specifically opposes the use of keys without user IDs as it is a potential security issue. It is also specifically disallowed by the RFC4880 OpenPGP standard. By working through the Debian process, the proponents of such keys are bypassing the position of the upstream project and the standard.
To be fair, in Debian's case politics come with the territory. Debian is a vision of what an OS should be like. With policies, standards & guidelines aimed at that, treating the OS as a whole.
That goes well beyond "gather packages, glue together & upload".
Same goes for other distributions I suppose (some more, some less).
Why? Heartbleed.
as always: imho (!)
i remember this incident - if my memory doesn't trick me:
it was openssl which accessed memory it hadn't allocated to collect randomness / entropy for key-generation.
and valgrind complained about a possible memory-leak - it's a profiling tool with a focus on detecting memory-mgmt problems.
instead of taking a closer look / trying to understand what exactly went on there / causes the problem, the maintainer simply commented out / disabled those accesses...
mistakes happen, but the debian-community handled this problem very well - as in my impression they always do and did.
idk ... i prefer the open and community-driven approach of debian anytime over distributions which are associated with companies.
last but not least, they have a social contract.
long story short: at least for me this was an argument for the debian gnu/linux distribution, not against :))
just my 0.02€
It’s doubly important to upstream issues for security libraries: There are numerous examples of bad actors intentionally sabotaging crypto implementations. They always make it look like an honest mistake.
For all we know, prior or future debian maintainers of that package are working for some three letter agency. Such changes should be against debian policy.
This is not to say that Debian is the sole example of this. The FreeBSD/NetBSD packages/ports systems have their share of globally useful stuff that is squirrelled away as a local patch. The point is not that Debian is a problem, but that it too systematizes the idea that (specifically) manual pages for external stuff go primarily into an operating system's own source control, instead of that being the last resort.
A randomly picked case in point:
Debian has had a local manual page for the original software's undocumented (in the old Sourceforge version) iptunnel(8) command for 7 years:
https://salsa.debian.org/debian/net-tools/-/blob/debian/sid/...
Independently, the original came up with its own, quite different, manual page 3 years later:
https://github.com/ecki/net-tools/blob/master/man/en_US/iptu...
Then Debian imported that!
https://salsa.debian.org/debian/net-tools/-/blob/debian/sid/...
This sort of thing isn't a rare occurrence.
And often it's not an unhelpful upstream, just an upstream that sees little use for man pages in their releases, and doesn't want to spend time maintaining documentation in parallel to what their README.md or --help provides (with which the man page must be kept in sync).
I also think that the idea that original authors must not accept manual pages is a way of explaining how the belief does not match reality, without accepting that it is the belief itself that is wrong. Certainly, the number of times that things work out like the net-tools example elsethread, where clearly the original authors do want manual pages, because they eventually wrote some, and end up duplicating Debian's (and FreeBSD's/NetBSD's) efforts, is evidence that contradicts the belief that there's some widespread no-manual-pages culture amongst developers.
I have sent about 50 or so patches upstream for the 300 packages I maintain, and while it reduces the amount of work long-term, it's also a surprising amount of work.
Typically the Debian patches are licensed under the same license as the original project. So there is nothing stopping anyone who feels that more patches should be sent upstream to send them.
If you're going to do that, then you should actually let people know. Otherwise don't do it. It's not about "but the license allows it", it's about what the right thing to do is.
Debian has given me the most grief of any Linux distro by far. Actually, Debian is the only system I can recall giving me grief. Debian pushes a lot of work to the broader ecosystem to people who never asked for it.
I didn't choose to be associated with Debian, but I have no choice in the matter. You did choose to be associated with the packages you maintain.
So don't give me any of that "but my unpaid time!". Either do the job properly or don't do it at all. Both are fine; no maintainer asked you to package anything. They're just asking you to not indirectly push work on them by shipping random (potentially broken and/or highly opinionated) patches they're never even told about.
Okay, I am hereby letting you know: Every single distro patches software. All of them. Debian, Arch, Fedora, Gentoo, NixOS, Alpine, Void, big, small, commercial, hobbyist. All of them.
It was exhausting though, and an uphill battle. Most patches were ignored for months or years, with common “is this still necessary?” or “please update the patch; it doesn’t apply anymore” responses. And it was generally a lot of effort. So patches staying in their distros is… “normal”.
Overall I feel it's one of those Debian policies stuck in 1995. There are other reasonable ways to get documentation these days, and while manpages can be useful for some types of programs, they're less useful for others.
I actually prefer the RHEL policy of leaving packages the way upstream packaged them, it means upstream docs are more accurate, I don't have to learn how my OS moves things around.
One example that sticks out in memory is postgres, RHEL made no attempt to link its binaries into PATH, I can do that myself with ansible.
Another annoying example that sticks out in Debian was how they create admin accounts in mysql, or how apt replaces one server software with another just because they both use the same port.
I want more control over what happens, Debian takes away control and attempts to think for me which is not appreciated in server context.
It swings both ways too, right now Fedora is annoying me with its nano-default-editor package. Meaning I have to first uninstall this meta package and then install vim, or it'll be a package conflict. Don't try and think for me what editor I want to use.
Are you kidding now? Red Hat has always been notorious for patching their packages heavily; just download an SRPM and have a look.
I don't think RHEL is the right choice if this is your criteria. Arch is probably what you are looking for
If you want packages that works just like the upstream documentation, run Slackware.
Debian does add some really nice features to many of their packages, like an easy way to configure multiple uWSGI applications using a file per application in a .d directory. It's a feature of uWSGI, but Debian has just packaged it up really nicely.
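For anyone who hasn't seen it, the layout (as I understand Debian's uwsgi packaging; real installs use /etc/uwsgi/, but a local directory is used below so this is safe to try anywhere) mirrors the familiar sites-available/sites-enabled pattern:

```shell
# One .ini per application under apps-available, enabled via symlink.
mkdir -p etc/uwsgi/apps-available etc/uwsgi/apps-enabled
cat > etc/uwsgi/apps-available/myapp.ini <<'EOF'
[uwsgi]
plugin = python3
socket = /run/uwsgi/app/myapp/socket
wsgi-file = /srv/myapp/app.py
EOF

# Enabling the app is just creating the link; disabling is removing it.
ln -sf ../apps-available/myapp.ini etc/uwsgi/apps-enabled/myapp.ini
```

Each enabled .ini then runs as its own uWSGI instance, so apps can be added or removed independently.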
And RedHat does a lot of fiddling in their distributions, you probably want something like Arch, which is more hands-off in that regard. Personally, I prefer Debian, it's the granite rock of Linux distributions.
Unless something has changed in the 10 years since I last used anything RHEL-based, there is definitely no such policy.
Further, do they publish any change information publicly?
This is utter FUD, of course they do, it is an open source distribution. Everything can be found from packages.debian.org
You seem to be assuming malice in an innocent question though, that’s on you.
Yes, at a minimum the patches are in the Debian source packages. Moreover, maintainers are highly encouraged to send patches upstream, both for the social good and to ease the distribution's maintenance burden. An automated tool to look for such patches is the "patches not yet forwarded upstream" field on https://udd.debian.org/patches.cgi
https://udd.debian.org/patches.cgi?src=xscreensaver&version=...
Edit: can't find any that are for aesthetic reasons.
Debian keeps ancient versions containing many bugs that have since been fixed. The upstream maintainer has to deal with the fallout of bug reports against the obsolete version. To mitigate his workload, he added an obsolete-version warning. Debian removed it.
It’s somewhat reasonable. I agree Debian should patch out phone-home and autoupdate (aka developer RCE). They should have left the xscreensaver local-only warning in, though. It is not a privacy or system integrity issue.
jwz however is also off the rails with entitlement.
They’re both wrong.
Always remember to not link to his site from HN because you'll get a testicle NSFW image when you click on a link to his site from HN. dang used to have rel=noreferrer on outgoing links, but that led to even more drama with other people...
Some people in the FOSS scene just love to stir drama, and jwz is far from the only one. Another person with such issues IMHO is the systemd crowd, although in this case ... IMHO it's excusable to a degree, as they're trying to solve real problems that make life difficult for everyone.
What's his reason for targeting HN users this way?
[1] NSFW https://imgur.com/32R3qLv
[2] (Redirects to NSFW, so open in incognito or you'll get the testicles) https://www.jwz.org/blog/2011/11/watch-a-vc-use-my-name-to-s...
It really doesn't. It says he hates users from HN but it says nothing about why. Is it really just that he doesn't like the traffic?
If it's because he has a grudge against VCs, which is more understandable, why is he taking it out on HN users?
It's a small form of protest. Make people uncomfortable.
https://news.ycombinator.com/item?id=44061563
I don't think that approach is reasonable. When you are effectively making a fork, don't freeload on the existing project's name and burden its author with problems you cause.
It's why I'm really glad flatpaks/snaps/appimages and containerization are where they are now, because they've greatly disintermediated software distribution.
> it's effectively nothing but the "app store" model, having an activist distributor insert themselves between the user and software.
is just factually wrong. Distributions like Debian try to make a coherent operating system from tens of thousands of pieces of independently developed software. It's fine not to like that. It's fine to instead want to use and manage those tens of thousands of pieces of independent software yourself. But a distribution is neither an "app store", nor does it "insert itself" between the user and the software. The latter is impossible in the FOSS world. Many users choose to place distros between them and software. You can choose otherwise.
Said the developer.
Meanwhile the user is stuck with a broken software.
I'm just trying to correct the notion that somehow a distro is an "app store" that "inserts itself" between the software and its users. A distribution is an attempt to make lots of disparate pieces of software "work together", at varying degrees. Varying degrees of modification may or may not factor into that. On one extreme is perhaps just a collection of entirely disjoint software, without anything attaching those pieces of software together. On the other extreme is perhaps something like the BSDs. Arch and Debian sit somewhere in between, at either side.
Thoughtful people can certainly disagree about what the correct degree of "work together" or modification is.
Just scroll up to the second comment in the thread right now, by the user rmccue. Given that Debian doesn't give the user any indication that it has modified an upstream piece of software, it's obviously perfectly possible for them to insert themselves without you even knowing it. And in that case, according to the developer, they even introduced subtle bugs.
So you can run buggy software as a consequence of some maintainer thinking they know more than a developer, and not even know it because you have no practical info about that process. This is of course not a "choice" in any meaningful sense of the term.
Nobody ever actually wants to use a buggy php library maintained by debian over a functioning one maintained a by developer, they very likely just never even were aware that that is what they were served.
Debian also doesn't give any indication that we haven't modified an upstream piece of software. Modifying software is a central thing to do in the FOSS world. If the user wants to know if anything was modified, and if so what, then the source is of course freely available.
> it's obviously perfectly possible for them to insert themselves without you even knowing it.
They didn't "insert themselves" anywhere! You, the user, inserted Debian!
> And in that case, according to the developer, even introduced subtle bugs.
Of course, any software modification can introduce, alter or fix bugs. Or all three.
> So you can run buggy software as a consequence of some maintainer thinking they know more than a developer
That can happen. And sometimes you run less buggy software as a consequence of "some maintainer" knowing more about the system as a whole than the software's original developer. Or caring more about safeguarding the user's freedoms or privacy.
> and not even know it because you have no practical info about that process.
"Practical info"? What exactly would you like, beyond a changelog and the source?
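For the record, Debian's modifications are not hidden; they ship as discrete patches alongside the upstream source, and any user can pull them down and read them. A sketch of how you might do that (the package name `php-foo` here is a made-up placeholder, and `apt-get source` needs a `deb-src` line in your apt sources):

```shell
# Fetch the packaged source, including every Debian modification.
apt-get source php-foo

# Debian-specific changes live as individual, documented patches:
ls php-foo-*/debian/patches/

# The Debian changelog for any installed package is also on disk:
zless /usr/share/doc/php-foo/changelog.Debian.gz
```

The same patches are also browsable on the web at sources.debian.org without downloading anything.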
> This is of course not a "choice" in any meaningful sense of the term.
Of course! You chose to run Debian. Debian is entirely upfront about this!
> Nobody ever actually wants to use a buggy php library maintained by debian over a functioning one maintained by a developer
I can't speak for PHP myself, I don't use it at all, but I'm almost always extremely grateful to the maintainers of the packages on my Debian systems for the (often very tiresome!) work they do in adapting upstream software to fit Debian Policy and the Debian Social Contract. (Disclaimer: I am a DD myself.)
> they very likely just never even were aware that that is what they were served.
Would it help if the Debian install guide had a note saying "be aware that a Debian package may not be exactly the same as what was shipped by the software's original developers – Debian aims to build a coherent universal operating system centered around our Social Contract and Policy"?
Okay, then going back to the beginning of the argument and the comparison you denied is accurate. If merely choosing Debian is reason enough for the user to assume that Debian as the distributor acts as a middleman in all sorts of ways, how exactly is that different from an app store?
After all, you choose to buy an iPhone or to use the Google Play store. So if the argument is that consent, given exactly once, invalidates any concern, then that applies to any platform; nobody's ever been held at gunpoint to install an operating system.
I think what would help is if, when maintainers make significant changes, the user is informed about them very visibly during the installation process.
Because you decide the power of the middleman on your system. And you have full freedom to change whatever the middleman delivers. Neither is typically true of app stores.
> After all you choose to buy an iPhone or use the Google Play store, so if the argument is that consent, given exactly once invalidates any concern, that applies to any platform, nobody's ever been held at gunpoint to install an operating system.
Pray tell, what other choices do I have in the phone market? And how are my choices in the PC market, again?
> I think what would help is if, when maintainers make significant changes, the user is informed about them very visibly during the installation process.
OK. Then I don't think Debian is for you.
As a maintainer, I can certainly understand why it feels like that; I probably wouldn't feel great about it either. As a user, I'm curious what kind of modifications they felt were needed. What exactly did they change in your library?
This was a while ago (10+ years), but my recollection is that someone presumably had reported that parts of the library didn't conform to the spec, and Debian patched those. This broke parsing actual feeds, and caused weeks of debugging issues that couldn't be replicated. Had they reported upstream in the first instance, I could have investigated, but there was no opportunity to do so.
Good intentions, but unfortunately bad outcome.
There was a somewhat recent discussion on here about how open-source projects on GitHub are pestered by reports as well. Some authors commented that it even took away their motivation to publish code.
It’s always the same mechanism, isn’t it? The “why we can’t have nice things” problem: making everything at least slightly worse because there are people who exploit a system or a trust-based relationship.
However, I do believe that in certain areas, they give too much freedom to package maintainers. The bar for being a package maintainer in Debian is relatively low, but once a package _has_ a maintainer--and barring any actual Debian policy violations--that person seems to have the final say in all decisions related to the package. Sometimes those decisions end up being controversial.
Your case is one example. Package maintainers ideally _should_ work with upstream authors, but are not required to because a LOT of upstream authors either cannot be reached, or actively refuse to be bothered by any downstream user. (The source tarball is linked on their home page and that's where their support ends!) I don't know what the solution is here, but there are probably improvements that could and should be made that don't require all upstream authors to subscribe to Debian development mailing lists.
Debian has earned that trust, and its software update rules are battle-tested and well-understood.
It's typically an unglamorous, demanding, unpaid, volunteer position a few rungs above volunteering at a soup kitchen or food bank. It's unsurprising that the bar is low.
It's also trivial for upstream maintainers to set up their own competing Debian package repos that entirely ignore Debian rules - Microsoft has one for VS Code.
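For anyone curious what that looks like in practice, registering such a vendor-run repo is just a key and a sources entry. A sketch for the VS Code repo (the exact URLs and paths are as commonly documented, but check Microsoft's own instructions for the current details; run as root):

```shell
# Import the vendor's signing key into a dedicated keyring
wget -qO- https://packages.microsoft.com/keys/microsoft.asc \
  | gpg --dearmor > /usr/share/keyrings/microsoft.gpg

# Register the repo, pinned to that keyring only
echo "deb [signed-by=/usr/share/keyrings/microsoft.gpg] \
https://packages.microsoft.com/repos/code stable main" \
  > /etc/apt/sources.list.d/vscode.list

apt update && apt install code
```

Packages from such repos are installed through apt but are built and updated entirely on the vendor's terms, ignoring Debian Policy.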
I've made patches to a bunch of stuff to improve KDE on mobile/tablets. After a short or a very long time they do get merged, but in the meantime people (like me) who own a tablet can actually use the software.
Why wait several months or even years?
Related: netdata to be removed from Debian https://github.com/coreinfrastructure/best-practices-badge/i...
Personally, I believe s/change/modify would make more sense, but that's just my opinion.
That aside, I'm a big fan of Debian, it has always "felt" quieter as a distro to me compared to others, which is something I care greatly about; and it's great to see that removing of calling home is a core principle.
All the more reason to have a more catchy/understandable title, because I believe the information in those short and sweet bullet points is quite impactful.
https://wiki.debian.org/PrivacyIssues
> it is best to run opensnitch to mitigate some of those problems
Opensnitch is a nice recommendation for someone concerned about protecting their workstation(s). For me, the bigger concern is the tens of VMs and containers running hundreds of pieces of software that are always on in my homelab; a privacy-conscious OS is a good foundation, and there are many more layers that I won't go into unsolicited.