I mean, sure. So what does the solution look like? From my perspective it looks like a tool that can update your dependencies so that you can easily pick up bug fixes, which sounds an awful lot like a package manager.
> JavaScript is a great example of this, as there are multiple different package managers for the language (npm being one of the most popular), but because each package manager defines the concept of a package differently, it results in the need for a package manager manager.
This doesn't seem like a strong point to me. Yes, there are things like yarn, pnpm, etc. But IIUC practically all npm alternatives still define packages in the same way (a package.json at the root, hosted on npmjs or your private registry), and the differences are ergonomic/performance-related.
> [that each package manager defines the concept of a package differently] is why I am saying it is evil, as it will send you to hell quicker.
Then I think it's more of a language problem, not a problem with the concept of a package manager.
Exactly! Who has the time or the discipline to do that manually?
Nowadays that is mostly improved, and the alternatives differentiate themselves by enhancements to workspaces (better monorepo support) or more aggressive caching by playing games with where the installed packages physically live on the system.
The core functionality (what a package is) has always been the same across the package managers though, because the runtime behavior is defined by node, not the package manager.
Obviously taking on fewer such liabilities?
> I am not advocating to write things from scratch.
and is clear in its target:
> That’s my general criticism: the unnecessary automation.
Yes, fewer dependencies is a solution, but it does not seem to be the author's position.
The "I am not advocating to write things from scratch" is more of a caveat to the people I know will comment NIH nonsense rather than anything productive.
But yes, my position is to minimize dependencies, to slowly and carefully vet the ones you do take on, and not to automate this process.
If only it were that easy.
Often the update isn't source-compatible with the package that uses it, so you can't update. There are some projects I use that I can't update because I use 6 different plugins, and each updates against the main project on a different schedule, on its own terms, meaning the only version I can use is 10 years out of date and there appears to be no chance they will all update. (If this were critical I'd update it myself, but there are always more important things to work on, so in practice I never will.)
Sometimes a package will change license and you need to check the legalese before you update.
Sometimes a package is hijacked (see the xz backdoor), so you really should be doing an audit of every update you apply.
The solution IMO (which is non-existent afaik) would be to integrate some kind of third party auditing service into package managers. For example, for your npm project you could add something like this to your package.json:
` "requireAuditors": [ { "name": "microsoft-scanning-service", "url": "https://npmscanner.microsoft.com/scanner/", "api_key": "yourkeyhere, default to getting it from .env" } ] `
And when you npm install, the version / hash is posted to all of your required auditor's urls. npm should refuse to install any version that hasn't been audited. You can have multiple auditing services defined, maybe some of them paid/able to scan your own internal packages, etc.
I've thought about building a PoC of this myself a couple of times because it's very much on my mind, but haven't spent any time on it and am not really positioned to advocate for such a service.
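To make the idea concrete, here is a minimal sketch of what the client side of that check could look like. All of this is hypothetical: npm has no `requireAuditors` field or pre-install audit hook today, and the auditor endpoint and response format are invented for illustration.

```typescript
// Hypothetical pre-install audit gate. It reads the (made-up) "requireAuditors"
// field from package.json and asks each auditor whether a given package
// version/hash has been approved. None of this is a real npm feature.
import { readFileSync } from "node:fs";

interface Auditor {
  name: string;
  url: string;
  api_key?: string;
}

// Assumed auditor API: POST the package coordinates, get back { approved: boolean }.
async function isAudited(
  auditor: Auditor,
  pkg: string,
  version: string,
  integrity: string,
): Promise<boolean> {
  const res = await fetch(auditor.url, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${auditor.api_key ?? process.env.AUDITOR_KEY ?? ""}`,
    },
    body: JSON.stringify({ package: pkg, version, integrity }),
  });
  if (!res.ok) return false; // treat auditor errors as "not audited"
  const body = (await res.json()) as { approved?: boolean };
  return body.approved === true;
}

// Refuse to proceed unless every configured auditor has approved this exact version.
export async function gateInstall(pkg: string, version: string, integrity: string): Promise<void> {
  const manifest = JSON.parse(readFileSync("package.json", "utf8"));
  const auditors: Auditor[] = manifest.requireAuditors ?? [];
  for (const auditor of auditors) {
    if (!(await isAudited(auditor, pkg, version, integrity))) {
      throw new Error(`${pkg}@${version} has not been audited by ${auditor.name}; refusing to install`);
    }
  }
}
```

The hard part, of course, is the auditing service itself and who runs and pays for it, not the client-side check.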
There are no solutions, only trade-offs. And the point is that not everything needs to be, nor ought to be, automated. And package managers are a case in point.
And yes, a language with an ill-defined concept of a package in the language itself is a problem of the language, but the package managers are not making it any better.
If a language does not provide a definition of a package but a package manager _does_, then I would say that that package manager did make that aspect of the problem better.
Fine, but now what if you need to connect to a database, or parse a PDF, or talk to a gRPC backend? What a hilariously short-sighted example.
To me, this whole article just screams inexperience.
Actually his perspective is quite reasonable. Go is in the other part of the spectrum than languages encouraging "left-pad"-type of libraries, and this is a good thing.
As my psychology professor used to say: "Smart is how efficiently you use your intelligence. Or don't."
So someone with a pretty low IQ can be smart (Forrest Gump), and someone with a high IQ can be dumb occasionally (a professor so attuned to his research topic that it comes at the expense of everything else).
In other words: when someone's knowledge is disproportionately localized or siloed to their respective subfield or domain of expertise, it does not necessarily generalize to others.
I'm certainly not saying this is the case with this particular individual, as I'm personally not familiar with their background. I'm simply stating that it's a plausible explanation for when experts in one domain make naive assertions about another domain they might not have the same experience in.
A guy designing and then implementing a programming language is much more likely to put a lot of rational thought into tooling like a dependency manager than a typical language consumer, who can and often does fall into emotionally charged language wars.
How is ginger bill excluded from this group? No one is more invested in a language than its creator(s).
Sure, he might have given it a lot of thought, but he came up with some completely bonkers conclusions. If you don't want dependencies, DON'T IMPORT DEPENDENCIES. Don't make your dependencies extremely hard to add.
Odin is "successful enough" so far. Also, you know about it, so that says something.
Kind of bonkers this even needs to be said, and even then it's missed/ignored.
I'd prefer instead a more balanced title like "Remember to Consider the Costs When Using Package Managers", or whatever.
"be careful all the time" doesn't scale. Half of all developers have below-average diligence, and that's a low bar. No-one is always vigilant, don't think that you're immune to human error.
No, you need tooling, automation to assist. It needs to be supported at the package manager side. Managing a site where many files are uploaded, and then downloaded many times is not a trivial undertaking. It comes with oversight responsibilities. If it's video you have to check for CSAM. If it's executable code, then you have to check for malware.
Package managers are not evil, but they are a tempting target and need to be secured. This can't just be an individual consumer responsibility.
I can't speak for other ecosystems, but some NuGet measures are here:
https://devblogs.microsoft.com/dotnet/building-a-safer-futur...
https://learn.microsoft.com/en-us/nuget/concepts/security-be...
I believe that there have been (a few) successful compromises of packages in NuGet, and that these have been mitigated. I don't know how intense the arms race is now.
Yes, this is the C attitude, where you provide no safety rails or poka-yokes or, indeed, package managers, and therefore you get a lot of fragile reimplementations of package managers (autoconf, anyone?). But you get to keep the satisfaction of blaming the users.
nuget is pretty good. It helps that packages tend to be substantial things, not left-pad.
Agree, this is IMHO also a better pattern. 1-liners or even 20-liners are not worth the overhead of extracting a package. Or of depending on a package.
"developers, be more conscious" isn't going to fix all the issues. In general, there are not individual fixes to systemic issues.
Btw the JS ecosystem also has quite a few good packages (and a ton of terrible ones, including some which everyone seems to consider the gold standard).
If you only use a package manager for libraries that you have high trust in then you don't need to worry - but there are so few projects you can have high trust in that manual management isn't a big deal. Meanwhile there are many many potentially useful packages that can save you a lot of effort if you use them - but you need to manually audit each because if you don't nobody will and that will bite you.
[0] https://en.wikipedia.org/wiki/Dependency_hell
I find it strange that they use a term with a common meaning, link to that meaning, and then talk about something else?
The author asserts that most open-source projects don't hit the quality standard where their libraries can just be included and will do what they say.
I assert that this is because there's no serious product effort behind most libraries (as in, no dedicated QA/test/release cycle) and no large commercial products use them (or if they do, they either use them in a very limited fashion or just fork them).
Hobbyists do QA as long as it interests them or fits their use case, but only the big vendors do bulletproof releases (which in the desktop realm seems to be only MS/Apple).
This might have to do with the domain the author chose: desktop development has unfortunately had the life sucked out of it, with every dev becoming a fullstack/cloud/ML/mobile dev instead; its mindshare and the resources going toward it have plummeted.
(I also have a sneaking suspicion the author might've encountered those bugs on desktop Linux, which, despite all the cheerleading (and policing of negative opinions), is as much of a buggy mess as ever.)
In my experience, it's quite likely to run into a bug that nobody has written about on the internet ever.
I have an article on my unstructured thoughts on the problems of OSS/FOSS which goes into more depth about this: https://www.gingerbill.org/article/2025/04/22/unstructured-t...
I sympathise with the arguments but IMO laziness will always win out. If Rust didn't have Cargo to automate dependency hell, someone would create a third party script to fill the gap.
When I worked at Google every single dependency was strictly vendored (and not in the mostly useless way that Cargo vendors things). There was generally only one version of a dep in the mono repo, and if you wanted something.. you generally got to own maintaining it, and you had to make sure it worked for every "customer" -- the giant CI system made sure that you knew if an upgrade would break things. And you reached out to stakeholders to manage the process. Giant trains of dependencies were not a thing. You can do that when you have seemingly infinite budget.
But technology can indeed make it worse. I love Rust, but I'm not a fan of the loose approach in Cargo and esp Crates.io, which seems to have pulled inspiration from NPM -- which I think is more of a negative than positive example. It's way too easy to make a mess. Crates.io is largely unmoderated, and its namespace is full of abandoned or lightly maintained projects.
It's quite easy to get away with a maze of giant transitive deps w/ Cargo because Rust by default links statically, so you don't usually end up in DLL hell. But just doing cargo tree on the average large Rust project is a little depressing -- to see how many separate versions of random number generators, SHA256, MD5, etc libs you end up with in a single linkage. It may not be the case that every single one is contributing to your binary size... but it's also kind of hard to know.
Understanding the blast radius of potential issues that come from unmoderated 3rd-party deps is I think something that many engineers have to learn the hard way. When they deal with a security vulnerability, or a fundamental incompatibility issue, or have to deal with build time and binary size explosions.
I wish there was a far more mature approach to this in our industry. The trend seems to be going in the opposite direction.
But it’s annoying to have to deal with 3 different time libraries and 3 different error creation libraries and 2 regex libraries somehow in my dependency tree. Plus many packages named stuff like “anyhow” or “nom” or other nonsense words where you need to google for a while to figure out what a package is supposed to do. Makes auditing more difficult than if your library is named structured-errors or parser-combinator.
I don’t like go programming language but I do like go tooling & go ecosystem. I wish there was a Rust with Go Principles. Swift is kinda in the right ballpark, packages are typically named stuff that makes sense and Swift is closer to Rust perf and Rust safety than Go perf and Go safety. But Swift is a tiny ecosystem outside of stuff that depends on the Apple proprietary universe, and the actual APIs in packages can be very magical/clever. ¯\_(ツ)_/¯
Python is much older than Go, and has had more packages move from 3rd party into the stdlib to become a "battery", and then atrophy over the years while people move back to 3rd party alternatives with more features that are actually receiving maintenance. Eventually some of those modules were removed from core.
Perhaps the Go model only works when you have a very dedicated core group (for Go, mostly Google employees) around to continuously build and maintain the Cathedral of the standard library + toolchain together. Golang feels very much like UNIX (eg FreeBSD) for this reason, and Rust/Python more like Linux.
Possibly, but not guaranteed. Some other languages without a built-in package manager haven't had an external one manage to take over the ecosystem, most (in)famously C and C++, while others have.
Yes, shared code has costs
- more general than you likely need, affecting complexity, compile times, etc
- comes with risks for today (code) and the future (governance)
But the benefits are big. My theory for one of the causes of Rust having so many good CLIs is Cargo: it keeps the friction low for pulling in high-quality building blocks so you can better focus on your actual problem.
Instead of resisting dependencies, I think it would be better to spend time finding ways to mitigate the costs, e.g.
- I'd love for crates.io to integrate diff.rs, provenance reporting (https://lawngno.me/blog/2024/06/10/divine-provenance.html), etc
- More direct support for security checking in cargo
- Integrating cargo-vet and/or cargo-crev into cargo
Dependencies do suck, but that is because managing a lot of complicated code sucks. You need some way to find issues over time and keep things up to date. Dependencies and package managers at least offer us a path to deal with problems. If you are managing your own dependencies, which I imagine would mean vendoring, then you aren't going to keep those dependencies up to date. You aren't going to find out about exploits in them and apply the fixes.
Slackware Linux does precisely that.
I'm a Slackware user. Slackware does have a package manager that can install or remove packages, and even a frontend that can use repositories (slackpkg), but dependency resolution is manual. Sure, there are third-party managers that can add dependency resolution, but they do not come with the distro by default.
This is a very personal opinion, but manual dependency management is a feature. Back in the day, I remember installing Mandrake Linux 9.2 and activating the (then new-ish) framebuffer console. The distro folks had no better idea than to force a background "9.2" image on framebuffer consoles, which I hated. I finally found the package responsible for that. Removing it with urpmi, however, meant removing all the graphical desktop components (including X11) because that stupid package was listed as a dependency of everything graphical.
That prompted me to seek alternatives to Mandrake and ended up using Slackware. Its simplicity had the added bonus of offering manual dependency resolution.
NPM is even worse: you import one thing and get thousands of trash libraries, so nowadays the only JS I write is vanilla and I import ES modules manually.
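Concretely, "manually" can be as simple as vendoring the file into the repo and using a plain ES module import; the module name and path below are purely illustrative:

```typescript
// A vendored copy checked into the repo under ./vendor/; no package manager,
// no node_modules, no lockfile. (Hypothetical module, for illustration only.)
import { encodeBase64 } from "./vendor/base64.js";

console.log(encodeBase64("hello"));
```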
Also, Odin doesn't make adding dependencies that difficult, you can literally just throw an Odin library into your project as a folder and it's available. The Odin compiler does everything else for you.
Regardless of how they define these terms, producing a list of hashes which function as a commitment to specific versions of dependencies is a technique essential to modern software development. Whatever the tools are called, and whatever they do, they need to spit out a list of hashes that can be checked into version control.
You could just use git submodules, but in practice there are better user experiences provided by language package managers (`go mod` works great).
A good amount of this ranting can probably be attributed to projects and communities that aren't even playing the list of hashes game. They are resolving or upgrading dependencies in CI or at runtime or something crazy like that.
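As a toy illustration of the "list of hashes" idea (this is not how go.sum or Cargo.lock actually compute their hashes; the vendored layout and output format here are made up):

```typescript
// Walk vendored dependency tarballs, hash them, and emit lock-style lines that
// get checked into version control. Any change then shows up as a diff in review.
import { createHash } from "node:crypto";
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

function hashFile(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

// Assumes deps are vendored as ./vendor/<name>-<version>.tgz (made-up layout).
export function lockLines(vendorDir = "vendor"): string[] {
  return readdirSync(vendorDir)
    .filter((f) => f.endsWith(".tgz"))
    .sort()
    .map((f) => `${f} sha256:${hashFile(join(vendorDir, f))}`);
}
```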
Also, use git subtrees, not git submodules. What people think submodules are is actually what subtrees do, and most people don't know about them.
As for "good" package managers, they are still bad because of what I said in the article.
And honestly speaking: It is plain stupid.
We can all agree that abusing package management with tens of thousands of micro-packages everywhere, like npm/Python/Ruby do, is completely unproductive and brings its own considerable maintenance burden and complexity.
But ignoring the dependency resolution problem entirely by saying "You do not need dependencies" is even dumber.
Not every person is working in an environment where shipping a giant blob executable built out of vendored static dependencies is even possible. This is a privilege the gamedev industry has, and the author forgets a bit too easily that it is domain-specific.
Some of us work in environments where the final product is an agglomerate of >100 components developed by >20 teams around the world, versioned across ~50 git repositories, and often mixed with proprietary libraries provided by third-party vendors. Gluing, assembling, and testing all of that is far beyond the "LOL, just stick to the SDL" mindset proposed here.
Some of us are developing libraries/frameworks that are embedded in >50 products alongside other libraries, across a hellish number of combinations of compilers/ABIs/platforms. This is not something you want to test or support without automation.
Some of us have to maintain cathedrals that are constructed over decades of domain-specific know-how (scientific simulators, solvers, petroleum prospecting tools, financial frameworks, ...) in multiple languages (Fortran, C, C++, Python, Lua, ...) that cannot just be rewritten in a few weeks because "I tell you, dependencies suck, bro."
Managing all of that manually is just insane. And it generally ends with a home-made, half-baked bunch of scripts that try to badly mimic the behavior of a proper package manager.
So no, there is no replacement for a proper package manager: Instead of hating the tool, just learn to use it.
Package managers are tools, and like every tool, they should be used wisely and not as a Maslow's hammer.
> Some of us work in environments where the final product is an agglomerate of >100 components developed by >20 teams around the world, versioned across ~50 git repositories, and often mixed with proprietary libraries provided by third-party vendors. Gluing, assembling, and testing all of that is far beyond the "LOL, just stick to the SDL" mindset proposed here.
Does this somehow prevent you from vendoring everything?
Would you also try to build all of them on every CI run?
What about the non-source dependencies, check the binaries into git?
Yes. Because in these environments, sooner or later you will be shipping libraries and not executables.
Shipping libraries means that your software will need to be integrated in other stacks where you do not control the full dependency tree nor the versions there.
Vendoring dependencies in this situation is the guarantee that you will make the life of your customer miserable by throwing the diamond dependency problem right in their face.
In the game development sphere, there's plenty of giant middleware packages for audio playback, physics engines, renderers, and other problems that are 1000x more complex and more useful than any given npm package, and yet I somehow don't have to "manage a dependency tree" and "resolve peer dependency conflicts" when using them.
And you just don't know what you are talking about.
If I am providing (let's say) a library that offers some high-level features for a car ADAS system on top of a CAN network, with a proprietary library as the driver and interface.
It is not up to me to fix or choose the library and driver version that the customer will use. They will choose the certified version they will ship, test my software against it, and integrate it.
Vendoring dependency for anything which is not a final product (product as executable) is plain stupid.
It is a guarantee of pain and ABI madness for anybody having to deal with the integration of your blob later on.
If you want to vendor, do vendor, but stick to executables with well-defined IPC systems.
If you're writing an ADAS system, and you have a "dependency tree" that needs to be "resolved" by a package manager, you should be fired immediately.
Any software that has lives riding on it, if it has dependencies, must be certified against a specific version of them, which should, 100% of the time, without exception, be vendored with the software.
> It is a guarantee of pain and ABI madness for anybody having to deal with the integration of your blob later on.
The exact opposite. Vendoring is the ONLY way to prevent the ABI madness of "v1.3.1 of libfoo exports libfoo_a but not libfoo_b, and v1.3.2 exports libfoo_b but not libfoo_c, and in 1.3.2 libfoo_b takes in a pointer to a struct that has a different layout."
If you MUST have libfoo (which you don't), you link your version of libfoo into your blob and you never expose any libfoo symbols in your library's blob.
The vendoring step happens at something like Yocto or equivalent and that's what ends up being certified, not random library repos.
And in addition: Yocto (or equivalent) will also be the one providing the traceability required to guarantee that what you ship is actually what you certified, and not some random garbage compiled in a user directory on someone's laptop.
You're providing a library. That library has dependencies (although it shouldn't). You've written that library to work against a specific version of those dependencies. Vendoring these dependencies means shipping them with your library, and not relying on your user, or even worse, their package manager to provide said dependencies.
I don't know what industry you work in, who the regulatory body that certifies your code is, or what their procedures are, but if they're not certifying the "random library repos" that are part of your code, I pray I never have to interact with your code.
> So let's handle the hell manually to feel the pain better
This is far from my position. Literally the entire point is to make it clearer that you are heading toward dependency hell, rather than to feel the pain better whilst you are there.
I am not against dependencies but you should know the costs of them and the alternatives. Package managers hide the complexity, costs, trade-offs, and alternative approaches, thus making it easier to slip into dependency hell.
You are against the usage of a tool and you propose no alternative.
Handling the dependency by vendoring them manually, like you propose in your blog, is not an alternative.
This is an oversimplification of the problem (and the problem is complex) that can be applied only to your specific usage and domain.
Again, what is wrong with saying you should know the costs of the dependencies you include AND the alternative approaches to not using them? E.g. using the standard library, writing it yourself, or using another dependency you already have that might fit.
What a great quote.
1) Have a problem that feels too complicated to hand-code.
2) Go on Internet/forums, find a library. The library is usually a small, flat collection of atomic functions.
3) A senior engineer vets the library and approves it for use.
4) Download the stable version: header file, and the lib file for our platform (on rare occasions, build it from source)
5) Place the .h file in the header path, and the lib file in the lib path; update the Makefile.
6) #include the header and call functions.
7) Update deployment scripts (bash script) to scp the lib file to target environment, or in some cases, use static linking.
8) Subscribe to a mailing list and very occasionally receive news of a breaking change that requires a rebuild.
This may sound like a lot of work, but somehow, it was a lot less stressful than dealing with NPM and node_modules today.
> Through manual dependency management. Regardless of the language, it is a very good idea that you know what you are depending on in your project. Copying and vendoring each package manually, and fixing the specific versions down is the most practical approach to keeping a code-base stable, reliable, and maintainable. Automated systems such as generic package managers hide the complexity and complications in a project which are much better not hidden away.
So that makes all of us human package managers. It's also true that you can get a package manager from internet folk that works better than the processes and utilities your team cobbles together to ease the burden.
spacebanana7•2h ago
> This is the wrong thing to automate. You can do this manually; however, it doesn't stop you getting into hell, it just slows you down, as you can still put yourself into hell (in fact, everyone puts themselves into hell voluntarily). The point is that it makes you think about how you got there, so if you have to download manually, you will start thinking “maybe I don’t want this” or “maybe I can do this instead”. And when you need to update packages, being manual forces you to be very careful.
I sympathise with this, but I have to respond that we have to live within existing ecosystems. Getting rid of npm and doing things manually won't make SPAs have fewer dependencies; the builds would just become incredibly slow and painful.
pmarreck•2h ago
It will at least massively help prevent things from breaking unexpectedly.
It won't prevent you from having to cascade a necessary upgrade (such as a security fix) across the entire project until resolution/new equilibrium is achieved.
My solution to the latter is simply to try to depend on as few things as possible. But eventually, the cancer will overtake the project if it keeps growing.
Source: Have worked on a million-LOC Ruby app
gingerBill•56m ago
The solution is just to depend on less and manage them manually.
bluGill•2h ago
The other thing is that your package manager cannot go out to the internet randomly. You need it to download from a place you are comfortable with (which might or might not be the default), one that will exist for as long as you need packages and that will keep the versions of packages you want around. If you are a company project, that means an internal server/mirror, because otherwise something you depend on will disappear in the future. (Mostly they decide nobody is using it and delete it, but sometimes it is discovered that the thing is an illegal copyright violation; you have to ask your lawyers what that means for you; perhaps a license is easy to get.)
Sesse__•2h ago
You don't think making adding dependencies incredibly slow and painful would make people have fewer of them?
Ygg2•2h ago
Same number of lines but in fewer dependencies.
spacebanana7•2h ago
But in the context of newer ecosystems, or ones that are more tightly controlled, things might be different. For example, if Apple massively expanded the Swift standard library and made dependency management painful, iOS apps might end up having fewer dependencies.
pmontra•2h ago
I remember installing software in the early 90s: download the source code, read the README, find and download the dependencies, read their READMEs, repeat a few times. Sometimes one dependency could not compile because of some incompatibility or bug. Some could be fixed, some couldn't. Often everything ended up with a successful compilation and install, and after a day of work I could have what I get in a few minutes now.
Actually, those were small programs by today's standards. My take is that we would achieve less if we had to use fewer dependencies.
By the way, the last time I compiled something from source was yesterday. It was openvpn3 on Debian 13, which is still unsupported. TLDR, it works, but the apt-get commands are a little different from the ones in BUILD.md.
microtherion•2h ago
Part of reproducing the build was to conduct all the library downloading in a miniconda environment, so at the end I had a reproducible recipe.
Is the original author seriously claiming that anybody was better off with the original, "pure" ad-hoc approach?
gingerBill•59m ago
Honestly, I don't think this is true in the slightest. Rather, I hypothesize that people want to use such tooling and think the alternatives are slower, which I don't think is true.
If people actually did use fewer dependencies, we would actually have websites that didn't take ages to load and were responsive.
So the existing ecosystems are just bad.