If you use nix (especially nix flakes), this consideration falls out naturally from the nixpkgs repository reference (SHA, branch, etc) you choose to track. Nixpkgs has various branches for various appetites for the "cutting edge".
In other words, with nix you decide the spec of the software you want installed on your machine, not the maintainer of your chosen package manager. Depending on your use case and knowledge/experience level, either choice may be preferable.
Also, nixpkgs is by far the widest-spanning "package manager" on the "market"; see link.
The recent(ish) concept of "nix flakes" means there are two related but distinct mechanisms for achieving this, but the end result is the same.
* In the land of NixOS, everything is a nix expression, including the system packages, configuration files, and install image. It's all locked to hashes of upstream sources and is, in theory, fully byte-for-byte reproducible.
In the age of AI, I've reduced my reliance on small utility libraries and kept just the bigger ones. For those I follow semver, update to major versions when it makes sense, and always take small patches, though I still read the release notes to see what changed.
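The policy described above can be sketched as a small rule: auto-apply patch releases, but hold major and minor bumps for a human to read the release notes. This is a hypothetical illustration, not any package manager's actual logic; the function names are made up.

```python
# Hypothetical sketch of a "patches only" auto-update policy.
# Versions are assumed to be plain semver strings like "1.2.3".

def classify_bump(current: str, candidate: str) -> str:
    """Classify a semver bump as 'major', 'minor', 'patch', or 'none'."""
    cur = [int(x) for x in current.split(".")]
    new = [int(x) for x in candidate.split(".")]
    if new[0] != cur[0]:
        return "major"
    if new[1] != cur[1]:
        return "minor"
    if new[2] != cur[2]:
        return "patch"
    return "none"

def should_auto_update(current: str, candidate: str) -> bool:
    # Patches go through automatically; majors and minors wait for review.
    return classify_bump(current, candidate) == "patch"
```

For example, `should_auto_update("1.2.3", "1.2.4")` is true, while a bump to `"1.3.0"` or `"2.0.0"` would be held for manual review.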
Not a bad idea, but we'd need to have evidence of the former case to make it mandatory and widespread.
IIRC, it's often compromised maintainer credentials, and those attacks are caught very quickly, so that's evidence of the former case.
I completely understand the author here, because I'm actually also leaning more towards avoiding supply chain attacks than jumping on the latest CVEs.
It's just a gut feeling, rooted in 25 years of experience as a sysadmin, but I feel like a supply chain attack can do a lot more damage in general than most unpatched known vulnerabilities.
Just based on my own personal experiences, no real data.
I'll try to put words to it: a supply chain attack is more focused, with a higher chance of infiltration, while a CVE is very rarely exploited en masse, and exploitation often comes with many caveats.
Add to that the current state of the world, where supply chain attacks seem to be a very high-profile target for state actors.
The closest I've seen to this are opt-in early release channels.
The last time I pulled in more than Dapper I was using .NET Framework 4.8. Batteries are very included now. Perhaps a cooldown on dapper and maybe two other things would protect me to some degree, but when you have 3rd party dependencies you can literally count on one hand, it's hard to lose track of this stuff over time. I'd notice any dependency upgrade like a flashing neon sign because it happens so rarely. It's a high signal event. I've got a lot of time to audit them when they occur.
.NET definitely includes more these days, including lots of the things I've mentioned above, but they're often not as good and you likely have legacy dependencies.
The basic premise is a secure package registry as an alternative to NPM/PyPI/etc where we use a bunch of different methods to try to minimize risk. So e.g. reproducible builds, tracing execution and finding behavioral differences between release and source, historical behavioral anomalies, behavioral differences with a baseline safe package, etc. And then rather than having to install any client-side software, just do a `npm config set registry https://reg.example.com/api/packages/secure/npm/`
eBPF traces of high level behavior like network requests & file accesses should catch the most basic mass supply chain attacks like Shai Hulud. The more difficult one is xz-utils style attacks where it's a subtle backdoor. That requires tests that we can run reproducibly across versions & tracing exact behavior.
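The "catch the basic mass attacks" idea boils down to a set difference over high-level trace events between versions. Here is a hedged sketch under an assumed trace format (event strings like `net:<host>:<port>` or `file:<op>:<path>`); real eBPF tooling would produce richer data, and the event names below are illustrative:

```python
# Hedged sketch: diff high-level behavior traces (e.g. summarized from eBPF)
# between a baseline version and a candidate release, and flag new behavior.

def behavioral_diff(baseline: set[str], candidate: set[str]) -> set[str]:
    """Events present in the candidate version but never seen in the baseline."""
    return candidate - baseline

# Illustrative traces: the candidate suddenly phones home and touches SSH keys,
# which is roughly the Shai Hulud-style pattern described above.
baseline = {"net:registry.npmjs.org:443", "file:read:package.json"}
candidate = baseline | {
    "net:attacker.example:443",
    "file:write:~/.ssh/authorized_keys",
}

suspicious = behavioral_diff(baseline, candidate)
# Anything new touching the network or credentials deserves a human look.
flagged = {e for e in suspicious if e.startswith("net:") or ".ssh" in e}
```

This kind of coarse diff catches loud mass attacks; the xz-utils class of subtle backdoor would, as noted, need reproducible test runs and much finer-grained behavioral comparison.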
Hopefully by automating as much as possible, we can make this generally accessible rather than expensive enterprise-only like most security products (really annoys me). Still definitely need a layer of human reviews for anything it flags though since a false positive might as well be defamation.
Won't know if this is the right direction until things are done & we can benchmark against actual case studies, but at least one startup accelerator is interested in funding.
Because releases were relatively slow (weekly) compared to other places I worked (continuous), we had a reasonable lead time to have third party packages scanned for vulns before they made it to production.
The setup was very minimal, really just a script to link one stage’s artifacts to the next stage’s repo. But the end effect was production never pulled from the internet and never pulled packages that hadn’t been deployed to the previous stage.
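The staged-promotion gate described above is simple to express: each environment's repo only accepts packages already present in the previous stage's repo. This is a minimal sketch under assumed stage names and in-memory repos, not the actual script mentioned:

```python
# Hedged sketch of staged package promotion: production never pulls from the
# internet, only from packages that already passed through the previous stage.

STAGES = ["dev", "staging", "prod"]

def promote(repos: dict[str, set[str]], package: str, to_stage: str) -> None:
    """Copy a package into a stage's repo, gated on the previous stage."""
    idx = STAGES.index(to_stage)
    if idx == 0:
        # Only the first stage ingests from the internet (after vuln scanning).
        repos[to_stage].add(package)
        return
    prev = STAGES[idx - 1]
    if package not in repos[prev]:
        raise ValueError(f"{package} was never deployed to {prev}")
    repos[to_stage].add(package)

repos = {s: set() for s in STAGES}
promote(repos, "left-pad@1.3.0", "dev")
promote(repos, "left-pad@1.3.0", "staging")
promote(repos, "left-pad@1.3.0", "prod")  # ok: went through every stage
```

Trying to promote a package straight to `prod` without it ever landing in `staging` raises an error, which is exactly the property the original setup enforced with artifact links between stage repos.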
Having production ever pull from the interwebs just seems bonkers to me. Even if you (for some reason?) want to stay up-to-date on every single dependency release you should be pulling to your own repo with some sort of gated workflow. If you're doing continuous deployment you definitely want to put extra control around your external dependencies, and releasing your product quickly after they change is probably the rare exception.
The solution is independent audit. "Package managers" need to be (as they are in the Linux world) human beings responsible for integrating, validating and testing the upstream software for the benefit of their users.
NPM, PyPI, Cargo et al. continue to think they can short-circuit that process and still ship safe software, and they verifiably cannot.
- Could you explain what you mean by "security through obscurity"? The mechanism is well explained in the blog.yossarian.net posts linked within. It is simply adding a time filter on a client.
- Also, I'm not sure if package registries (e.g. server) and package managers (e.g. client) are being conflated here regarding "attacks on package managers", this seems to be more of a mitigation a client could do when the upstream content in a registry is compromised.
- Lastly, I agree with the sentiment that this is not a full solution. But I think it can be useful nevertheless, a la Swiss Cheese Safety Model. [1]
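The "time filter on a client" mentioned in the first point can be sketched in a few lines: simply refuse to consider versions published less than N days ago. This is a hypothetical illustration of the mechanism, with made-up timestamps; real clients would read publish dates from registry metadata:

```python
# Hedged sketch of a client-side cooldown filter: only versions at least
# `cooldown_days` old are eligible for installation.
from datetime import datetime, timedelta, timezone

def eligible_versions(versions: dict[str, datetime],
                      cooldown_days: int,
                      now: datetime) -> list[str]:
    """Versions whose publish time is at least `cooldown_days` before `now`."""
    cutoff = now - timedelta(days=cooldown_days)
    return [v for v, published in versions.items() if published <= cutoff]

now = datetime(2024, 6, 15, tzinfo=timezone.utc)
versions = {
    "1.0.0": datetime(2024, 5, 1, tzinfo=timezone.utc),   # weeks old: eligible
    "1.0.1": datetime(2024, 6, 14, tzinfo=timezone.utc),  # one day old: held back
}
safe = eligible_versions(versions, cooldown_days=7, now=now)
```

As one slice of the Swiss cheese, this buys the ecosystem time to detect and yank a compromised release before most clients ever see it.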
Using pkgin(1), you get to know what will happen before anything is done. Creating packages is hard, but as an end user, pkgsrc shows me what it will do before it does anything.
This example of installing a binary package is from an already active NetBSD workstation; everything needed for the package you want to install will be shown.
# pkgin install gnumeric-1.12.59nb2
calculating dependencies...done.
4 packages to install:
gnumeric-1.12.59nb2 goffice0.10-0.10.59nb2 lasem-0.6.0nb1 libgsf-1.14.54
0 to remove, 0 to refresh, 0 to upgrade, 4 to install
16M to download, 76M of additional disk space will be used
proceed ? [Y/n] n
jauntywundrkind•1h ago
I do think there is some sense in having some cool down. Automated review systems having some time to sound alarms would be good.
I'm not sure what the reporting mechanisms look like for various ecosystems. Being able to declare that there should be a hold is Serious Business, and going through with a hold or removal is a very human-costly decision for repo maintainers to make, with significant lag. So we are up to at least two days, if centralized.
Ideally I'd like to see something on atproto, where individuals can create records on their PDSs declaring dangers. This could form a reputation system that disincentivizes bad actors (false reports) and lets anyone on the net see incoming dangers quickly, in a distributed fashion, in real time.
(Hire me, I'll build it.)
ameliaquining•1h ago
I don't understand what you're saying about reporting mechanisms; is there something wrong with how this is currently done?