Or is there something that cargo does to manage it differently (due diligence?).
Not having a dependency management system isn't a solution to supply chain attacks; auditing your dependencies is.
How do you do that practically? Do you read the source of every single package before doing a `brew update` or `npm update`?
What if these sources include binary packages?
The popular Javascript React framework has 15K direct and 2K indirect dependencies - https://deps.dev/npm/react/19.2.3
Can anyone even review it in a month? And they publish a new update weekly.
We need better tooling to enable crowdsourcing and make it accessible for everyone.
Someone committed malicious code in Amazon Developer Q.
AWS published a malicious version of their own extension.
https://aws.amazon.com/security/security-bulletins/AWS-2025-...
You’re looking at the number of dependents. The React package has no dependencies.
Asides:
> Do you read the source of every single package before doing a `brew update` or `npm update`?
Yes, some combination of doing that or delegating it to trusted parties is required. (The difficulty should inform dependency choices.)
> What if these sources include binary packages?
Reproducible builds, or don’t use those packages.
Indeed.
My apologies for misinterpreting the link that I posted.
Consider "devDependencies" here
https://github.com/facebook/react/blob/main/package.json
As far as I know, these 100+ dev dependencies are installed by default. Yes, you can probably avoid it, but it will likely break something during the build process, and most people just stick to the default anyway.
> Reproducible builds, or don’t use those packages.
A lot of things are not reproducible/hermetic builds. Even GitHub Actions is not reproducible https://nesbitt.io/2025/12/06/github-actions-package-manager...
Most frontend frameworks are not reproducible either.
> don’t use those packages.
And do what?
devDependencies should only be installed if you're developing the React library itself. They won't be installed if you just depend on React.
Please correct me if I am wrong, here's my understanding.
"npm install installs both dependencies and dev-dependencies unless NODE_ENV is set to production."
So, these ~100 [direct] dev dependencies are installed by anyone who does `npm install react`, right?
They are only installed for the topmost package (the one you are working on), npm does not recurse through all your dependencies and install their devDependencies.
When you do `npm install react` the direct dependency is `react`. All of react's dependencies are indirect.
Keep on keepin on
(so yes, I'm stating that the 99% of JS devs who _do_ precisely that are not being serious, but at the same time I understand they just follow the "best practices" that the ecosystem pushes downstream, so it's understandable that most don't want to swim against the current when the whole ecosystem itself is not being serious either)
(The next most useful step, in the case where someone in your dependency tree is pwned, is to not have automated systems that update to the latest version frequently. Hang back a few days or so at least so that any damage can be contained. Cargo does not update to the latest version of a dependency on a build because of its lockfiles: you need to run an update manually.)
That doesn't necessarily help you in the case of supply chains attacks. A large proportion of them are spread through compromised credentials. So even if the author of a package is reputable, you may still get malware through that package.
Not all dependencies are created equal. A dependency with millions of users under active development with a corporate sponsor that has a posted policy with an SLA to respond to security issues is an example of a low-risk dependency. Someone's side project with only a few active users and no way to contact the author is an example of a high-risk dependency. A dependency that forces you to take lots of indirect dependencies would be a high-risk dependency.
Here's an example dependency policy for something security critical: https://github.com/tock/tock/blob/master/doc/ExternalDepende...
Practically, unless your code is super super security sensitive (something like a root of trust), you won't be able to review everything. You end up going for "good" dependencies that are lower risk. You throw automated fuzzing and linting tools at it, and these days ask AI to audit it as well.
You always have to ask: what are the odds I do something dumb and introduce a security bug vs what are the odds I pull a dependency with a security bug. If there's already "battle hardened" code out there, it's usually lower risk to take the dep than do it yourself.
This whole thing is not a science, you have to look at it case-by-case.
> The popular Javascript React framework has 15K direct and 2K indirect dependencies - https://deps.dev/npm/react/19.2.3
You’re looking at dependents. The core React package has no dependencies.
There are several ways to do this. What you mentioned is the brute-force method of security audits. That may be impractical as you allude to. Perhaps there are tools designed to catch security bugs in the source code. While they will never be perfect, these tools should significantly reduce the manual effort required.
Another obvious approach is to crowd source the verification. This can be achieved through security advisory databases like Rust's rustsec [1] service. Rust has tools that can use the data from rustsec to do the audit (cargo-audit). There's even a way to embed the dependency tree information in the target binary. Similar tools must exist for other languages too.
> What if these sources include binary packages?
Binaries can be audited if reproducible builds are enforced. Otherwise, it's an obvious supply chain risk. That's why distros and corporations prefer to build their software from source.
Cargo does have lock files by default. But we really need better tooling for auditing (and enforcing that auditing has happened) to properly solve this.
So it's ultimately a trade off rather than a strictly superior solution.
Also, nothing in Rust prevents you from doing the same thing. In fact, I would argue that Cargo makes this process easier.
Regardless, the maintenance burden remains.
If the code you vendored was well hidden so the distro maintainer didn't notice, perhaps the bad guys would also fail to realize you were using (for instance) libxml2, and not consider your software a target for attack.
However, when you install Servo, you just install a single artefact. You don't need to juggle different versions of these different packages to make sure they're all compatible with each other, because the Servo team have already done that and compiled the result as a single static binary.
This creates a lot of flexibility. If the Servo maintainers think they need to make a breaking change somewhere, they can just do that without breaking things for other people. They depend internally on the newer version, but other projects can still continue using the older version, and end-users and distros don't need to worry about how best to package the two incompatible versions and how to make sure that the right ones are installed, because it's all statically built.
And it's like this all the way down. The regex crate is a fairly standard package in the ecosystem for working with regexes, and most people will just depend on it directly if they need that functionality. But again, it's not just a regex library, but a toolkit made up of the parts needed to build a regex library, and if you only need some of those parts (maybe fast substring matching, or a regex parser without the implementation), then those are available. They're all maintained by the same person, but split up in a way that makes the package very flexible for others to take exactly what they need.
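For example, a minimal sketch of using just the substring-search piece (assuming the `memchr` crate, one of those split-out parts, is added as a dependency) without pulling in the full `regex` crate:

```rust
// Uses only the fast substring-search building block, not the whole regex
// engine. Assumes `memchr = "2"` in Cargo.toml.
use memchr::memmem;

fn main() {
    let haystack = b"pattern matching without a full regex engine";
    // Returns the byte offset of the first occurrence, if any.
    if let Some(pos) = memmem::find(haystack, b"regex") {
        println!("found at byte offset {pos}");
    }
}
```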
In theory, all this is possible with traditional distro packages, but in practice, you almost never actually see this level of modularity because of all the complexity it brings. With Rust, an application can easily lock its dependencies, and only upgrade on its own time when needed (or when security updates are needed). But with the traditional model, the developers of an application can't really rely on the exact versions of dependencies being installed - instead, they need to trust that the distro maintainers have put together compatible versions of everything, and that the result works. And when something goes wrong, the developers also need to figure out which versions exactly were involved, and whether the problem exists only with a certain combination of dependencies, or is a general application problem.
All this means that it's unlikely that Servo would exist in its current form if it were packaged and distributed under the traditional package manager system, because that would create so much more work for everyone involved.
The distros try, but it takes only one complex problem with a project that holds strong opinions and you may not have a fix.
The gnome keyring secrets being available to any process running under your UID, unless that process opts into a proxy, is one example.
Looking at how every browser and busybox is exempted from apparmor is another.
It is not uncommon to punt the responsibility to users.
The Rust approach is to split-off a minimal subset of functionality from your project onto an independent sub-crate, which can then be depended on and audited independently from the larger project. You don't need to get all of ripgrep[1] in order to get access to its engine[2] (which is further disentangled for more granular use).
Beyond the specifics of how you acquire and keep that code you depend on up to date (including checking for CVEs), the work to check the code from your dependencies is roughly the same and scales with the size of the code. More, smaller dependencies vs one large dependency makes no difference if the aggregate of the former is roughly the size of the monolith. And if you're splitting off code from a monolith, you're running the risk of using it in a way that it was never designed to work (for example, maybe it relies on invariants maintained by other parts of the library).
In my opinion, more, smaller dependencies managed by a system capable of keeping track of the specific version of code you depend on, with structured data that allows you to perform checks on all your dependencies at once in an automated way, is a much better engineering practice than "copy some code from some project". Vendoring is anathema to proper security practices (unless you have other mechanisms to deal with the vendoring, at which point you have a package manager by another name).
Oh lord.
It does unlock some interesting things to be sure, like sqlx’ macros that check the query at compile time by connecting to the database and checking the query against it. If this sounds like the compiler connecting to a database, well, it’s because it is.
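Roughly like this (a hedged sketch with a hypothetical `users` table, not a complete project; the `query!` macro needs `DATABASE_URL` pointing at a live database at compile time so it can verify the SQL and its result types):

```rust
// sqlx's query! macro connects to the database named by DATABASE_URL *at
// compile time* and checks the query. Assumes a Postgres `users` table with
// a NOT NULL `name` column.
use sqlx::PgPool;

async fn load_user_name(pool: &PgPool, user_id: i64) -> Result<String, sqlx::Error> {
    let row = sqlx::query!("SELECT name FROM users WHERE id = $1", user_id)
        .fetch_one(pool)
        .await?;
    Ok(row.name)
}
```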
Some people use 'pnpm', which only runs installScripts for a whitelisted subset of packages, so an appreciable fraction of the npm ecosystem (those that don't use npm or yarn, but pnpm) do not run scripts by default.
Cargo compiles and runs `build.rs` for all dependencies, and there's no real alternative which doesn't.
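For context, a build script is just an ordinary Rust program named `build.rs` at the crate root; a minimal sketch:

```rust
// build.rs: Cargo compiles and runs a file like this for any crate in the
// dependency graph that ships one, so it executes arbitrary code on the
// builder's machine during `cargo build`.
fn main() {
    // Typical legitimate use: tell Cargo when to rerun, emit link flags, etc.
    println!("cargo:rerun-if-changed=build.rs");
    // But nothing stops a compromised crate from doing anything else here.
}
```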
PS. Actually, I'll risk sharing my thoughts about it (I'm new to Rust): https://shatsky.github.io/notes/2025-12-22_runtime-code-shar...
Most of the community I’ve interacted with are big on either embedding a scripting engine or WASM. Lots of momentum on WASM based plugins for stuff.
It’s a weakness for both Rust and Go if I recall correctly
They’re only really useful if you’re distributing multiple binary executables that share most of the underlying code, and you want to save some disk space in the final install. The standard Rust toolchain builds use them for this purpose last time I checked.
You get lucky when all assets have been compiled with the same toolchain (with the same options) but will lose your mind when you have issues caused by this thing neither you nor the package authors knew existed.
Or at least it used to be when they designed the thing…
It’s been some time since I looked into this so I wanted to be clear on what I meant. I’d be elated to be wrong though
You do still need to write the interfacing code, but that's true for all languages.
You can also link to C libs from both. I guess you could technically make a rust lib with C interface and load it from rust but that's obviously suboptimal
Rust supports two kinds of dynamic linking:
- `dylib` crate types create dynamic libraries that use the Rust ABI. They are only useful within a single project, though, since they are only guaranteed to work with the crate that depended on them at compilation time.
- `cdylib` crate types with exported `extern "C"` functions; this creates a typical shared library in the C way, but you also need to implement the whole interface in a C-like unsafe subset of Rust.
Neither is ideal, but if you really want to write a shared library you can do it, it's just not a great experience. This is part of the reason why it's often preferred to use scripting languages or WASM (the other reason being that scripting languages and WASM are sandboxed and hence more secure by default).
I also want to note that a common misconception seems to be that Rust should allow any crate to be compiled to a shared library. This is not possible for a series of technical reasons, and whatever solution will be found will have to somehow distinguish "source only" crates from those that will be compilable as shared libraries, similarly to how C++ has header-only libraries.
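For the `cdylib` route, a minimal sketch (assumes `crate-type = ["cdylib"]` under `[lib]` in Cargo.toml):

```rust
// Produces a regular shared library exposing a C-ABI symbol that any
// language with a C FFI can load; the interface itself must stay within the
// C-compatible subset of Rust.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```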
This way we'd have no portability issue, the same benefit as with static linking except that it works with glibc out of the box instead of requiring musl, and we could benefit from filesystem-level deduplication (with btrfs) to save disk space and memory.
One challenge will be that the likelihood of two random binaries having generated the same code pages for a given source library (even if pinned to the exact source) can be limited by linker and compiler options (eg dead code stripping, optimization setting differences, LTO, PGO etc).
The benefit of sharing libraries is generally limited unless you’re using a library that nearly every binary may end up linking which has decreased in probability as the software ecosystem has gotten more varied and complex.
Outside of embedded, this kind of reuse is a very marginal memory savings for the overall system to begin with. The key benefit of dynamic libraries for a system with gigabytes of RAM is that you can update a common dependency (e.g. OpenSSL) without redownloading every binary on your system.
Yes, it did. We have literally millions of times as much memory as in 1970 but far less than millions of times as many good library developers, so this is probably the right tradeoff.
It still boggles my mind that Adobe Acrobat Reader is now larger than Encarta 95… Hell, it’s probably bigger than all of Windows 95!
And increasingly, many C++ libraries are header only, meaning they are always statically linked.
Haskell (or GHC at least) is also in a similar situation to Rust as I understand it: no stable ABI. (But I'm not an expert in Haskell, so I could be wrong.)
C is really the outlier here.
The main problem with dynamic libraries is when they're shared at the system level. That we can do away with. But they're still very useful at the app level.
A stable ABI would allow making more robust Rust-Rust plugin systems, but I wouldn't consider that "safe"; dynamic linking is just fundamentally unsafe.
> Large binaries could also be broken down into dynamic libraries and make rebuilds much faster at the cost of leaving some optimizations on the table.
This can already be done within a single project by using the dylib crate type.
A significantly more thorny issue is to make sure any types with generics match, e.g. if I declare a struct with some generic and some concrete functions, and this struct also has private fields/methods, those private details (that are currently irrelevant for semver) would affect the ABI stability. And the tables mentioned in the previous paragraph might not be enough to ensure compatibility: a behaviour change could break how the data is interpreted.
So at minimum this would redefine what is a semver compatible change to be much more restricted, and it would be harder to have automated checks (like cargo-semverchecks performs). As a rust developer I would not want this.
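A toy illustration of that point (hypothetical type, not from any real crate):

```rust
// Semver today: `cache` is private, so changing it (say, u32 -> u64) is a
// compatible change for source builds. Under a stable Rust ABI it would not
// be: the struct's size, alignment, and field offsets all change, breaking
// any already-compiled dynamic library that embeds a `Counter`.
pub struct Counter {
    count: u32,
    cache: u32, // private implementation detail, yet part of the layout
}

impl Counter {
    pub fn new() -> Self {
        Counter { count: 0, cache: 0 }
    }

    pub fn bump(&mut self) -> u32 {
        self.count += 1;
        self.count
    }
}
```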
People writing Rust generally prefer to stay within Rust though, because FFI gives up a lot of safety (normally) and is an optimization boundary (for most purposes). And those are two major reasons people choose Rust in the first place. So yeah, most code is just statically compiled in. It's easier to build (like in all languages) and is generally preferred unless there's a reason to make it dynamic.
I'd give a gig of my memory to never have to deal with that again.
You can't COW two different libraries, even if the libraries in question share the source code text.
Easylist will contact you, strongarm you into disabling your countermeasures and threaten to block all JS on your page if you don't comply.
So no ad servers can load, no prebid, nothing will function/load if the user has an adblocker that uses easylist (all of them) installed.
I'm still not entirely sure it's for the best or not.
On one hand, it is a concentration of power into the hands of a few people.
On the other, it is for a good cause, to maintain a list of ad network and site banners that drain resources, cause privacy issues, etc.
The Easylist people aren't saints. They get paid off by Google to allow "Acceptable Ads". So nowadays you just show a different campaign if your user is running an adblocker.
Just that Easylist is indirectly funded by ad revenue (google)
When it comes to ads, it's about the bulk of people, most don't run anything other than the default lists.
Shared memory obviously exists, though, and they do mention in the post (I missed it the first time around) that they try to share adblock resources.
The more rust gets written, the better AI will be able to write it for people... I like to be optimistic.
AI may have forced the hand on this. Users will no longer be able to subsidize software performance with hardware upgrades due to the great DRAM debacle of 2026.
Me too.
> The more rust gets written,
Rust seems neither necessary nor sufficient for getting developers to care about memory efficiency again though.
If you also care about memory safety it further limits options.
I do agree it's kind of a misleading headline, the real update is their use of Flatbuffers.
I might have to try switching from FF...
But nowadays, vertical tabs are native since Firefox v136 [3][4], so at least for the basics you won't need an add-on.
[1]: https://addons.mozilla.org/firefox/addon/sidebery/
[2]: https://addons.mozilla.org/firefox/addon/tree-style-tab/
> in this release:
> Other enhancements, stability improvements, and security updates
No mention of efficiency, or adblocking whatsoever!
> The upgrade represents roughly 45 MB of memory savings for the Brave browser on every platform (Android, iOS and desktop) by default
Brave never did that.
Brave blocks third-party ads & trackers by default.
(disclaimer: I lead privacy and adblocking at Brave)
(disclaimer: people remember how sketchy Brave is)
What they should have done instead is just take hundreds of millions of dollars from Google, like a non-sketchy browser.
The issue which I found out about late, and fixed right away, was infringing on right to publicity, nothing to do with donations from users' own tokens.
Disclaimer in case it's not obvious: I am a Brave employee
Also attacking the person instead of the ideas is really not in the spirit of HN.
These are non-tracking, carefully designed (including vetting by Brave), brand advertising images. They are not ads (we never did this) inserted into publisher pages, or (opt-in only) push notifications.
Brave has been working to find ways to sustain ourselves, and these sponsored images are still a good revenue line, although lesser now vs other lines. If you want, turn them off.
Free riding is always a user right; we don't try to stop it on principle, as if we ever could with open source. But there's no free lunch: if you use Firefox, you are Google's product. If you use a Firefox fork, you're free riding on Gecko, which costs a lot to maintain. HTH
What exactly do you think an advertisement is?
What you may be thinking of was at one point, when you went to a URL (for some URLs), the browser would rewrite the URL to contain their affiliate link. There was blowback for doing that. They quickly removed that/haven't done it since as far as I know
In fairness, that is incredibly shady and they deserve this mistrust even years later because of it.
Brendan talks about this a bit more here: https://x.com/BrendanEich/status/2006412918783619455
"""
Brave Origin is:
1/ new, optional, separate build (stripped down, no telemetry/rewards/wallet/vpn/ai);
2/ free on Linux, one time buy elsewhere.
"""
So the stripped down version (at least the non-Linux one) will not be open source?
Rules that require the distribution of source code don't require the distribution of binaries.
As other people have mentioned you can resell open source software. I have a big box Linux distro on my shelf here.
Even Firefox, which is the best we have currently, surprises us a few times a year with questionable decisions. Still, it's what I recommend to people.
In general, of 3rd party blockers, uBlock Origin isn't even the best, AdGuard is.
Why? I thought uBlock Origin on Firefox was the most effective combination available (assuming that you use the same filter lists).
AdGuard works better simply because there's a bunch of people being paid to work on it. There's more optimization and fewer bugs. The UI is a whole lot more polished. Blocklists have improved syntax, and the lists themselves are updated more frequently to catch site breakage. EasyList often has breakage on their lists for months even after being reported on their GitHub, but reporting the same breakage to AdGuard results in the breakage being fixed in days if not hours. And they do adjacent projects like AdGuard Home (sort of a commercial Pi-Hole) too.
FWIW, big names in adblocking work for these companies too. AFAIK, FanBoy (EasyList + EasyPrivacy + his own lists) gets paid by Brave to maintain the lists. So in a way, Brave is funding adblocking for everyone :)
And you should really be using https://flathub.org/en/apps/com.brave.Browser
The best part was this whole scam sitting as an unresolved issue on GitHub for months after they finally acknowledged it (after first denying it lol).
Closest browser I’ve seen to an actual virus in maybe ever.
And it’s a good lesson for developers that once you lose trust there are many of us who will never make the same mistake again purely out principle.
https://news.ycombinator.com/item?id=18734999
https://www.sophos.com/en-us/blog/brave-ceo-apologises-for-a...
https://www.bleepingcomputer.com/news/security/facebook-twit...
and issues with tor (presumably fixed by now)
https://www.coindesk.com/tech/2021/02/22/brave-browser-was-e...
My fucking god I’m not sure enshittification has ever been so widely dispersed. It’s impossible to have any type of unified set up across different OS/devices currently.
On 64-bit systems, pointers themselves can really start to take up a lot of memory (especially if you multiply them across 100k+ adblock filters). Switching to array indices instead of pointers saves a lot of memory that's otherwise wasted when you don't need to address the entire possible memory space.
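A toy sketch of the difference (not Brave's actual data structures; sizes assume a typical 64-bit target):

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

// Pointer-based link: an 8-byte pointer per link, plus a separate heap
// allocation per node.
struct PtrNode {
    value: u32,
    next: Option<Box<PtrNode>>, // 8 bytes, forces 8-byte alignment
}

// Index-based link: all nodes live in one Vec and links are 32-bit indices
// (stored as index + 1 so the Option keeps its niche and stays 4 bytes).
struct IdxNode {
    value: u32,
    next: Option<NonZeroU32>, // 4-byte index into a shared Vec<IdxNode>
}

fn main() {
    assert_eq!(size_of::<PtrNode>(), 16); // padding + pointer
    assert_eq!(size_of::<IdxNode>(), 8);  // half the size, no per-node alloc
}
```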
FlatBuffers is known to bloat client code. Was any trick used to mitigate that?
You might be thinking of Vivaldi. Brave is certainly buggy, but it's not written in Javascript.
>This repository holds the build tools needed to build the Brave desktop browser for macOS, Windows, and Linux.
The web browser is here: https://github.com/brave/brave-core (8% TypeScript)
I’ll never trust them again after that.
The VPN service they installed was disabled, and it could not be activated without user interaction. And the only reason they did this is so that when you click "activate VPN" in the browser, it works immediately.
On top of that, other businesses employ(ed) similar tricks. For years and years and years, Dropbox on macOS did a very specific hack to give itself more permissions to ease syncing. Hell, Firefox injected ads for Mr. Robot via a surreptitiously installed invisible extension.
Still a boneheaded move by Brave, just like adding their own affiliate link to crypto links (if none were added) to generate extra revenue for the company at no extra cost to the user. But that is even further in the past.
At any rate, they also fund or develop a bunch of anti-ad tech and research and make it open source / publish it. The defaults of Brave protect your privacy much better than Firefox's defaults. And so far, their BAT concept is the only one that is a legitimate alternative to an ad-funded internet.
Brave is everything Mozilla wishes it had become.
And yeah, my bone to pick is warning others not to fall for Brave’s slick PR. Companies that act that way can pay the price.
This is what Firefox blocks by default: https://support.mozilla.org/en-US/kb/enhanced-tracking-prote...
2. Look at Brave - see 1.
https://winaero.com/how-to-enable-split-view-in-firefox-146/
This only claims that the memory usage of the adblock engine was reduced, not the total memory consumption of the browser.
> In 2022, Brave faced further criticism for bundling its paid virtual private network (VPN) product, Brave Firewall + VPN, into installations of its Windows browser, even for users who had not subscribed to the service
I'll happily take performance improvements cause most products lack any efficiency care nowadays.
Are you referring to current RAM prices or bloat of numerous Electron apps?
This kind of lazy thinking is why today’s software is so bloated and slow.
I agree with the sentiment but it's a bit too late now.
The problem is with how software is designed around being cheap to write.
Not sure if this 45MB is per browser instance or per tab, but if it's the latter, 10 windows would save 450MB: >10% on a lower-end device.
The lesson here is pointer-chasing data structures and trees are a lot more expensive than everyone and most programming languages like to pretend they are.