NPM debug and chalk packages compromised
Tons of people think these kinds of micro-dependencies are harmful, and many of them have been saying so for years.
Micro-dependencies are not the only thing that went wrong here, but hopefully this is a wakeup call to do some cleaning.
Absolutely. A lot of developers work on a large Enterprise app for years and then scoot off to a different project or company.
What's not fun is being the poor Ops staff that have to deal with supporting the library dependencies, JVM upgrades, etc for decades after.
Falling off the upgrade train is a serious drawback of moving fast.
I don't think it'll make things perfect, not by a long shot. But it can make the exploits a lot harder to pull off.
If a package wants to access the filesystem, shell, OS APIs, sockets, etc., those should be permissions you have to explicitly grant in your code.
function main(io) {
const result = somethingThatRequiresHttp(io.fetch);
// ...
}
and as long as you don't put I/O in global scope (e.g. window.fetch) but inject it into the main entrypoint, that entrypoint gets to control what everyone else can do. I could for example do

function main(io) {
  const result = something(readonlyFetch(onlyOurAPI(io.fetch)));
}
function onlyOurAPI(fetch) {
  return (...args) => {
    // Note the escaped dots: an unescaped "." would match any character.
    const test = /^https:\/\/api\.mydomain\.example\//.exec(args[0]);
    if (test == null) {
      throw new Error("must only communicate with our API");
    }
    return fetch(...args);
  };
}
function readonlyFetch(fetch) { /* similar but allowlist only GET/HEAD methods */ }
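A minimal sketch of how that readonlyFetch wrapper might look, following the same pattern as onlyOurAPI (this version only inspects the options object and ignores Request objects passed as the first argument, so it's an illustration rather than a complete implementation):

```javascript
function readonlyFetch(fetch) {
  return (...args) => {
    // The method may arrive via the options object (second argument);
    // fetch() defaults to GET when none is given.
    const method = ((args[1] && args[1].method) || "GET").toUpperCase();
    if (method !== "GET" && method !== "HEAD") {
      throw new Error("readonlyFetch: only GET/HEAD requests are allowed");
    }
    return fetch(...args);
  };
}
```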
I vaguely remember him being really passionate about "JavaScript lets you do this, we should all program in JavaScript" at the time... these days he's much more likely to say "JavaScript doesn't have any way to force you to do this and close off all the exploits from the now-leaked global scope, we should never program in JavaScript."

Shoutout to Ryan Dahl and Deno, where you write `#!/usr/bin/env deno --allow-net=api.mydomain.example` at the start of your shell script to accomplish something similar.
In my amateur programming-conlang hobby that will probably never produce anything joyful to anyone other than me, one of those programming languages has a notion of sending messages to "message-spaces" and I shamelessly steal Doug's idea -- message-spaces have handles that you can use to communicate with them, your I/O is a message sent to your main m-space containing a bunch of handles, you can then pattern-match on that message and make a new handle for a new m-space, provisioned with a pattern-matcher that only listens for, say, HTTP GET/HEAD events directed at the API, and forwards only those to the I/O handle. So then when I give this new handle to someone, they have no way of knowing that it's not fully I/O capable, requests they make to the not-API just sit there blackholed until you get an alert "there are too many unread messages in this m-space" and peek in to see why.
https://blog.plan99.net/why-not-capability-languages-a8e6cbd...
we could literally just take Go and categorize on "imports risky package" and we'd have a better situation than we have now, and it would encourage library design that isolates those risky accesses so people don't worry about them being used. even that much should have been table stakes over a decade ago.
and like:
> No language has such an object or such interfaces in its standard library, and in fact “god objects” are viewed as violating good object oriented design.
sure they do. that's dependency injection, and you'd probably delegate it to a dependency injector (your god object) that resolves permissions. plus go already has an object for it that's passed almost everywhere: context.
perfect isn't necessary. what we have now very nearly everywhere is the most extreme example of "yolo", almost anything would be an improvement.
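Sticking with the JavaScript style from the main(io) example above, a capability injector that resolves per-module permissions might be sketched like this (all names here are illustrative, not from any real library):

```javascript
// A minimal capability injector: the entrypoint owns the raw I/O object,
// and modules receive only the capabilities they were explicitly granted.
function makeInjector(io, grants) {
  return (moduleName, capability) => {
    const allowed = grants[moduleName] || [];
    if (!allowed.includes(capability)) {
      throw new Error(`${moduleName} was not granted "${capability}"`);
    }
    return io[capability];
  };
}
```

Here `grants` plays the role of the resolved permission set (like Go's context being threaded everywhere); a real system would also have to prevent modules from reaching the capabilities through global scope.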
I think I need to read up more on how to deal with (avoiding) changes to your public APIs when doing dependency injection, because that seems like basically what you're doing in a capability-based module system. I feel like there has to be some way to make such a system more ergonomic and make the common case of e.g. "I just want to give this thing the ability to make any HTTP request" easy, while still allowing for flexibility if you want to lock that down more.
On the other hand, it seems about as hard as I was imagining. I take for granted that it has to be a new language -- you obviously can't add it on top of Python, for example. And obviously it isn't compatible with things like global monkeypatching.
But if a language's built-in functions are built around the idea from the ground up, it seems entirely feasible. Particularly if you make the limits entirely around permissions around data communication -- with disk, sockets, APIs, hardware like webcams and microphones, and "god" permissions like shell or exec commands -- and not about trying to merely constrain resource usage around things like CPU, memory, etc.
If a package is blowing up your memory or CPU, you'll catch it quickly and usually the worst it can do is make your service unavailable. The risk to focus on should be exclusively data access+exfiltration and external data modification, as far as I can tell. A package shouldn't be able to wipe your user folder or post program data to a URL at all unless you give it permission. Which means no filesystem or network calls, no shell access, no linked programs in other languages, etc.
It didn't go well. The JVM did its part well, but they couldn't harden the library APIs. They ended up playing whack-a-mole with a steady stream of library bugs in privileged parts of the system libraries that allowed for sandbox escapes.
There’s no reason a color parser, or a date library should require network or file system access.
Versus, when I've worked at places that eschew automatic dependency management, yes, there is some extra work associated with manually managing them. But it's honestly not that much. And in some ways it becomes a boon for maintainability because it encourages keeping your dependency graph pruned. That, in turn, reduces exposure to third-party software vulnerabilities and toil associated with responding to them.
And at least with a standardized package manager, the packages are in a standard format that makes them easier to analyze, audit, etc.
should it be higher friction than npm? probably yes. a permissions system would inherently add a bit (leftpad includes 27 libraries which require permissions "internet" and "sudo", add? [y/N]) which would help a bit I think.
but I'm personally more optimistic about structured code and review signing, e.g. like cargo-crev: https://web.crev.dev/rust-reviews/ . there could be a market around "X group reviewed it and said it's fine", instead of the absolute chaos we have now outside of conservative linux distro packagers. there's practically no sharing of "lgtm" / "omfg no" knowledge at the moment, everyone has to do it themselves all the time and not miss anything or suffer the pain, and/or hope they can get the package manager hosts' attention fast enough.
The more interesting comparison to me is, for example, my experience on C# projects that do and do not use NuGet. Or even the overall C# ecosystem before and after NuGet got popular. Because then you're getting closer to just comparing life with and without a package manager, without all the extra confounding variables from differing language capabilities, business domains, development cultures, etc.
I do agree that C is an especially-bad case for additional reasons though, yeah.
Leftpad as a library? Let it all burn down; but then, it's Javascript, it's always been on fire.
In the C world, anything that is not direct is often a very stable library and can be brought in as a peer dep. Breaking changes happen less, and you can resolve the tree manually.
In NPM, there are so many little packages that even renowned packages choose to rely on them for no obvious reason. It's a severe lack of discipline.
Hey that was also on NPM iirc!
Nixing javascript in the frontend is a harder sell, sadly
Ruby, Python, and Clojure, though? They weren’t any better than my npm projects, being roughly the same order of magnitude. Same seems to be true for Rust.
Same with Java, if you avoid springboot and similar everything frameworks, which admittedly is a bit of an uphill battle given the state of java developers.
You can of course also keep dependencies small in JavaScript, but it's a very uphill fight where you'll have just a few options, and most people you hire are used to including a library (that includes 10 libraries) to not have to do something like `if (x % 2 == 1)`
Just started with golang... the language is a bit annoying but the dependency culture seems OK
Yeah, stop those cute domain names. I never got the memo on Youtu.be, I just had to “learn” it was okay. Of course people started to let their guard down, because dumbasses started to get cute.
We really did all dodge a bullet, because we've been installing stuff from NPM with reckless abandon for a while.
Can anyone give me a reason why this wouldn't happen in other ecosystems like Python? Because I really don't feel comfortable if I'm scared to download the most basic of packages. Everything is trust.
Passkeys are disruptive enough that I don't think they need to be mandated for everyone just yet, but I think it might be time for that for people who own critical dependencies.
Same issue with python, rust etc. It’s all very trust driven
The only solution would be to prevent all releases from being applied immediately.
No hardware keys, no new releases.
They have it implemented.
I created an NPM account today and added a passkey from my laptop, with a hardware key as secondary. As I have it configured, it asked me for it while publishing my test package.
So the guy either had TOTP or just the pw.
Seems like it should be easy to implement enforcement.
In the Java world, I know there’s been griping from mostly juniors re “why isn’t Maven easy like npm?” (I work with some of these people). I point them to this article: https://www.sonatype.com/blog/why-namespacing-matters-in-pub...
Maven got a lot of things right back in the day. Yes POM files are in xml and we all know xml sucks etc, but aside from that the stodgy focus on robustness and carefully considered change gets more impressive all the time.
That looks like a phishing attempt from someone using a random EC2 instance or something, but apparently it's legit. I think. Even the "heads-up" email they sent beforehand looked like phishing, so I was waiting for the actual invoice to see if they really started using that address, but even now I'm not opening these attached PDFs.
These companies tell customers to be suspicious of phishing attempts, and then they pull these stunts.
Yep. At every BigCo I've worked at, nearly all of the emails from Corporate have been indistinguishable from phishing. Sometimes, they're actual spam!
Do the executives and directors responsible for sending these messages care? No. They never do, and get super defensive and self-righteous when you show them exactly how their precious emails tick every "This message is phishing!" box in the mandatory annual phishing-detection-and-resistance training.
A week later some executive pushing the training emailed the entire company saying that it was unacceptable that nobody from engineering had logged into the training site and spun some story about regulatory requirements. After lots of back and forth they still wouldn't accept that it obviously looked like a phishing email.
Eventually when we actually did the training, it literally told us to check the From address of emails. I sometimes wonder if it was some weird kind of performance art.
“We got pwned but the entire company went through a certified phishing awareness program and we have a DPI firewall. Nothing more we could have done, we’re not liable.”
Our infra guy then had to argue with them for quite a while to just email from their own domain, and that no, we weren't going to add their cert to our DNS and let a third party spoof us (or however that works, idk). Absolutely shocking lack of self awareness.
Title: "Expense report overdue - Please fill now"
Subject:
<empty body>
<Link to a document doing its best to look like Google's attachment icon, but which was actually a hyperlink to a site that asked me to log in with my corporate credentials>
---
So like, obviously this is a stupid phishing email, right? Especially as at this time, I had not used my corporate card.
A few weeks later I got the finance team reaching out threatening to cancel my corporate card because I had charges on it with no corresponding expense report filed.
So on checking the charge history for the corporate card, it was the annual tax payment that all cards are charged in my country every year, and finance should have been well aware of. Of course, then the expense system initially rejected my report because I couldn't provide a receipt, as the card provider automatically deducts this charge with no manual action on the card owner's side...
Edit: nvm it seems it's not the case
You can't protect against people clicking links in emails in this way. You might say `npmjs-help.ph` is a phishy domain, but npmjs.help is a phishy domain and people clicked it anyway.
I agree that especially larger players should be proactive and register all similar-sounding TLDs to mitigate such phishing attacks, but they can't be outright prevented this way.
Domain: NPMJS.HELP (85 similar domains)
Registrar: Porkbun, LLC (4.84 million domains)
Query Time: 8 Sep 2025 - 4:14 PM UTC [1 DAY BACK] [REFRESH]
Registered: 5th September 2025 [4 days back]
Expiry: 5th September 2026 [11 months, 25 days left]
I'd be suspicious of anything registered with Porkbun, a discount registrar. Registered 4 days ago means it's fake.

> It sets a deadline a few days in the future. This creates a sense of urgency, and when you combine urgency with being rushed by life, you are much more likely to fall for the phishing link.
Any time I feel like I'm being rushed, I check deeper. It would help if everyone's official communications only came from the most well known domain (or subdomain).
The sense of urgency is always the red flag.
but, ok, you click the link, you get a new tab, and you're asked to fill in your auth credentials. but why? you should already be logged in to that service in your default browser, no? red flag 2
ok, maybe there is some browser cache issue, whatever, so you trigger your password manager to provide your auth to the website -- but here, every single password manager would immediately notice that the domain in the browser does not match the domain associated with the auth creds, and either refuse to paste the creds thru, or at an absolute minimum throw up a big honkin' alert that something is amiss, which you'd need to explicitly click an "ignore" button to get past. red flag 3
nobody should be able to publish new versions of widely-used software without some kind of manual review/oversight in the first place, but even ignoring that, if someone does have that power, and they get pwned by an attack like this, with at least 3 clear red flags that they would need to have explicitly ignored/bypassed, then CLEARLY this person cannot keep their current position of authority
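The domain check behind red flag 3 can be sketched as follows. This is a simplified version of what a password manager does before offering saved credentials; real managers consult the public-suffix list, so the naive suffix match here is only an illustration:

```javascript
// Compare the current page's hostname against the domain the credentials
// were saved for: exact match, or a subdomain of it.
function originMatches(savedDomain, pageUrl) {
  const host = new URL(pageUrl).hostname;
  return host === savedDomain || host.endsWith("." + savedDomain);
}

originMatches("npmjs.com", "https://www.npmjs.com/login"); // true
originMatches("npmjs.com", "https://npmjs.help/login");    // false
```

This is why a manager-driven autofill refusing to fire is such a strong signal: the lookalike domain simply isn't the saved one, no matter how convincing the page looks.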
You should still never click a link in an email like this, but the urgency factor is well done here
It was obviously good enough.
Snark aside, you only need to trick one person once and you've won.
No bank, and almost no large corporations go directly to artifact/package repos. They all host them internally.
Seriously, this is one of my key survival mechanisms. By the time I became system administrator for a small services company, I had learned to let other people beta test things. We ran Microsoft Office 2000 for 12 years, and saved soooo many upgrade headaches. We had a decade without the need to retrain.
That, and like others have said... never clicking links in emails.
"Hey, is it still broken? No? Great!"
Better defense would be to delete or quarantine the compromised versions, fail to build and escalate to a human for zero-day defense.
Reading the code content of emergency patches should be part of the job. Of course, with better code trust tools (there seem to have been some attempts at that lately, not sure where they’re at), we can delegate that and still do much better than the current state of things.
They stop working before you can use them.
This happened even if you had pinned dependencies and were on top of security updates.
We need some deeper changes in the ecosystem.
https://github.com/nrwl/nx/security/advisories/GHSA-cxm3-wv7...
I avoid anything to do with NPM, except for the typescript compiler, and I'm looking forward to the rewrite in Go where I can remove even that. For this reason.
As a comparison, in Go, you have minimum version spec, and it takes great pains to never execute anything you download, even during compilation stage.
NPM will often have different source than the GitHub repo source. How does anyone even trust the system?
I have seen so many takes lamenting how this kind of supply chain attack is such a difficult problem to fix.
No it really isn't. It's an ecosystem and cultural problem that npm encourages huge dependency trees that make it impractical to review dependency updates so developers just don't.
The difficulty comes in trying to change the entire culture.
None of it will help you when you're executing the binaries you built, regardless of which language they were written in.
---
I figure you aren't about to get fooled by phishing anytime soon, but based on some of your remarks and remarks of others, a PSA:
TRUSTING YOUR OWN SENSES to "check" that a domain is right, or an email is right, or the wording has some urgency or whatever is BOUND TO FAIL often enough.
I don't understand how most of the anti-phishing advice focuses on that, it's useless to borderline counter-productive.
What really helps against phishing:
1. NEVER EVER login from an email link. EVER. There are enough legit and phishing emails asking you to do this that it's basically impossible to tell one from the other. The only way to win is to not try.
2. U2F/Webauthn key as second factor is phishing-proof. TOTP is not.
That is all there is. Any other method, any other "indicator" helps but is error-prone, which means someone somewhere will get phished eventually. Particularly if stressed, tired, or in a hurry. It just happened to be you this time.
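The reason point 2 holds is that WebAuthn responses are bound to the origin the browser actually talked to. A rough server-side sketch (signature verification omitted; the field names follow the WebAuthn clientDataJSON structure):

```javascript
// The browser embeds the real origin in clientDataJSON, and the security
// key signs over it. A response produced on npmjs.help can therefore
// never verify as coming from npmjs.com, no matter what the user typed.
function checkClientData(clientDataJSON, expectedOrigin, expectedChallenge) {
  const data = JSON.parse(clientDataJSON);
  return data.type === "webauthn.get" &&
         data.origin === expectedOrigin &&
         data.challenge === expectedChallenge;
}
```

TOTP has no equivalent binding: the six digits verify the same everywhere, so a phishing page can relay them to the real site in real time.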
TOTP doesn't need to be phishing-proof if you use a password manager integrated with the browser, though.
If certain websites fail to be detected, that's a security issue on those specific websites, as I'll learn which ones tend to fail.
If they rarely fail to detect in general, it's infrequent enough to be diligent in those specific cases. In my experience with password managers, they rarely fail to detect fields. If anything, they over-detect fields.
An internal engineer there who did a bunch of security work phished about half of her own company (testing, obviously). Her conclusion, in a really well-done talk, was that preventing it is impossible: no human measures will reduce it, given her success at a very disciplined, highly security-conscious place.
The only thing that works is yubikeys which prevent this type of credential + 2fa theft phishing attack.
edit:
karla burnette / talk https://www.youtube.com/watch?v=Z20XNp-luNA
I receive Google Doc links periodically via email; fortunately they're almost never important enough for me to actually log in and see what's behind them.
My point, though, is that there's no real alternative when someone sends you a doc link. Either you follow the link or you have to reach out to them and ask for some alternative distribution channel.
(Or, I suppose, leave yourself logged into the platform all the time, but I try to avoid being logged into Google.)
I don't know what to do about that situation in general.
Or only log in when you need to open a google link. Or better yet, use a multi-account container for google.
Pardon; a what? Got any reference links?
https://addons.mozilla.org/en-US/firefox/addon/multi-account...
Sites choosing to replace password login with initiating the login process and then clicking a "magic link" in your email client is awful for developing good habits here, or for giving good general advice. :c
In both cases it's good advice not to click the link unless you initiated the request. But with the auth token in the link, you don't need to login again, so the advice is still the same: don't login from a link in your email; clicking links is ok.
1. If a target website (say important.com) sends poorly-configured CORS headers and has poorly configured cookies (I think), a 3rd-party website is able to send requests to important.com with the cookies of the user, if they're logged in there. This depends on important.com having done something wrong, but the result is as powerful as getting a password from the user. (This is called cross-site request forgery, CSRF.)
2. They might have a browser zero-day and get code execution access to your machine.
If you initiated the process that sent that email and the timing matches, and there's no other way than opening the link, that's that. But clicking links in emails is overall risky.
2 is also true. But also, a zero day like that is a massive deal. That's the kind of exploit you can probably sell to some 3 letter agency for a bag. Worry about this if you're an extremely high-value target, the rest of us can sleep easy.
Just ... don't.
A guy I knew needed a car, found one, I told him to take it to a mechanic first. Later he said he couldn't, the guy had another offer, so he had to buy it right now!!!, or lose the car.
He bought it; it had a bad cylinder.
False urgency = scam
As in right then, without being given a deadline…
Last I checked, we're still in a world where the large majority of people with important online accounts (like, say, at their bank, where they might not have the option to disable online banking entirely) wouldn't be able to tell you what any of those things are, and don't have the option to use anything but SMS-based TOTP for most online services and maybe "app"-based (maybe even a desktop program in rare cases!) TOTP for most of the rest. If they even have 2FA at all.
The rest is handled by preferring plain text over HTML, and if some moron only sends HTML mail, by carefully dissecting it first. Allowing HTML mail was one of the biggest mistakes we've ever made: zero benefits with a huge attack surface.
Still, I don’t understand how npmjs.help doesn’t immediately trigger red flags… it’s the perfect stereotype of an obvious scam domain. Maybe falling just short of npmjshelp.nigerianprince.net.
1- As a professional, installing free dependencies to save on working time.
There's no such thing as a free lunch, and you can't have your cake and eat it too: that is, download dependencies that solve your problems without paying, without ads, without propaganda (for example to lure you into maintaining such projects for THE CAUSE), without vendor lock-in, or without malware.
It's really silly to want to pile up mountains of super secure technology like webauthn, when the solution is just to stop downloading random code from the internet.
This is very much a 'can we please not' situation, isn't it? (Obviously it's not something that the email recipients can (usually) control, so it's not a criticism of them.) It also has to meaningfully increase the chance that someone will eventually forget to renew a domain, too.
An authentication environment which has gotten so complex we expect to be harassed by messages saying "your Plex password might be compromised", "your 2FA is all fucked up", etc.
And the crypto thing. Xe's sanguine about the impact; I mean, it's just the web3 degens [1] that are victimized, good innocent decent people like us aren't hurt. From the viewpoint of the attacker it is all about the Benjamins, and the question is: "does an attack like this make enough money to justify the effort?" If the answer is yes, then we'll see more attacks like this.
There are just all of these things that contribute to the bad environment: the urgent emails from services you barely use, the web3 degens, etc.
[1] if it's an insult it is one the web3 community slings https://www.webopedia.com/crypto/learn/degen-meaning/
I find it insane that someone would get access to a package like this, then just push a shitty crypto stealer.
You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fully fledged exploit?
You can exfiltrate API keys, add your SSH public key to the server then exfiltrate the server's IP address so you can snoop in there manually; if you're on a dev's machine, maybe the browser profiles, the session tokens for common shopping websites? My personal desktop has all my cards saved on Amazon. My work laptop, depending on the period of my life, could have given you access to stuff you wouldn't believe.
You don't even need to do anything with those, there's forums to sell that stuff.
Surely there's an explanation, or is it that all the good cybercriminals have stable high paying jobs in tech, and this is what's left for us?
Because the way this was pulled off, it was going to be found out right away. It wasn't a subtle insertion, it was a complete account take over. The attacker had only hours before discovery - so the logical thing to do is a hit and run. They asked what is the most money that can be extracted in just a few hours in an automated fashion (no time to investigate targets manually one at a time) and crypto is the obvious answer.
Unless the back doors were so good they weren't going to be discovered even though half the world would be dissecting the attack code, there was no point in even trying.
That could have netted the attacker something much more valuable, but it is pure hit or miss and it requires more skill and patience for a payoff.
VS blast out some crypto stealing code and grab as many funds as possible before being found out.
> Lots of people/organisations are going to be complacent and leave you with valid credentials
You'd get non-root credentials on lots of dev machines, and likely some non-root credentials on prod machines, and possibly root access to some poorly configured machines.
Two-factor is still in place, you only have whatever creds that NPM install was run with. Plenty of the really high value prod targets may very well be on machines that don't even have publicly routable IPs.
With a large enough blast radius, this may have worked, but it wouldn't be guaranteed.
And very, very happy that we're proxying all access to npm through Artifactory, which allowed us to block the affected versions and verify that they were in fact never pulled by any of our builds.
It might get missed, but I sure notice any time account emails come through even if it's not saying "your password was reset."
is that so? from the email it looks like they MITM'd the 2FA setup process, so they will have qix's 2FA secret. they don't have to immediately start taking over qix's account and lock him out. they should have had all the time they need to come up with a more sophisticated payload.
What gets me is everyone acknowledges this, yet HN is full of comments ripping on IT teams for the restrictions & EDR put in place on dev laptops.
We on the ops side have known these risks for years, and that knowledge of those risks is what drives organizational security policies and endpoint configuration.
Security is hard, and it is very inconvenient, but it's increasingly necessary.
To wit: I have an open ticket right now from an automated code review tool that flagged a potential vulnerability. I and two other seniors have confirmed that it is a false alarm so I asked for permission to ignore it by clicking the ignore button in a separate security ticket. They asked for more details to be added to the ticket, except I don’t have permissions to view the ticket. I need to submit another ticket to get permission to view the original ticket to confirm that no less than three senior developers have validated this as a false alarm, which is information that is already on another ticket. This non-issue has been going on for months at this point. The ops person who has asked me to provide more info won’t accept a written explanation via Teams, it has to be added to the ticket.
Stakeholders will quickly treat your entire security system like a waste of time and resources when they can plainly see that many parts of it are a waste of time and resources.
The objection isn’t against security. It is against security theater.
It might not be sensible for the organization as a whole, but there’s no way to determine that conclusively, without going over thousands of different possibilities, edge cases, etc.
I have already documented, in writing, in multiple places, that the automated software has raised a false alarm, as well as providing a piece of code demonstrating that the alert was wrong. They are asking me to document it in an additional place that I don't have access to, presumably for perceived security reasons? We already accept that my reasoning around the false alarm is valid, they just have buried a simple resolution beneath completely stupid process. You are going to get false alarms, if it takes months to deal with a single one, the alarm system is going to get ignored, or bypassed. I have a variety of conflicting demands on my attention.
At the same time, when we came under a coordinated DDOS attack from what was likely a political actor, security didn't notice the millions of requests coming from a country that we have never had a single customer in. Our dev team brought it to their attention where they, again, slowed everything down by insisting on taking part in the mitigation, even though they couldn't figure out how to give themselves permission to access basic things like our logging system. We had to devote one of our on calls to walking them through submitting access tickets, a process presumably put in place by a security team.
I know what good security looks like, and I respect it. Many people have to deal with bad security on a regular basis, and they should not be shamed for correctly pointing out that it is terrible.
The ops person obviously can’t do that on your behalf, at least not in any kind of organizational setup I’ve heard of.
So the solution to an illogical, kafkaesque security process is to bypass the process entirely via authority?
You are making my argument for me.
This is exactly why people don’t take security processes seriously, and fight efforts to add more security processes.
Edit: I didn’t comment on all those other points, so it seems irrelevant to the one question I asked.
Ops are the ones who imposed those constraints. You can't impose absurd constraints and then say you are acting reasonable by abiding by your own absurd constraints.
All their EDR crud runs on Windows; as a dev I'm allowed to run WSL, but the tools do not reach inside WSL, so if that gets compromised they would be none the wiser.
There is some instrumentation for linux servers and cloud machines, but that too is full of blind spots.
And as a sibling comment says, a lot of the policies are executed without anyone being able to explain their purpose, being able to grant "functionally equivalent security" exceptions or them even making sense in certain contexts. It feels like dealing with mindless automatons, even though humans are involved. For example a thing that happened a while ago: We were using scrypt as KDF, but their scanning flagged it as unknown password encryption and insisted that we should use SHA2 as a modern, secure hashing function. Weeks of long email threads, escalation and several managers suggesting "just change it to satisfy them" followed. That's a clear example of mindless rule-following making a system less secure.
Blocking remote desktop forwarding of security keys also is a fun one.
Putting it another way: if I'm a random small-time burglar who happens to find himself in Walter White's vault, I'm stuffing as much cash as I can fit into my bag and ignoring the barrel of methylamine.
The damage from this hack could have been far worse if it was stealing real money people rely on to feed their kids.
The plot of Office Space might offer clues.
Also isn't it crime 101 that greedy criminals are the ones who are more likely to get caught?
Even if you steal other stuff, you're going to need to turn it all into cryptocurrency anyway, and how much is an AWS key really going to bring in?
There are criminals that focus on extracting passwords and password manager databases as well, though they often also end up going after cryptocurrency websites.
There are probably criminals out there biding their time, waiting for the perfect moment to strike, silently infiltrating companies through carefully picked dependencies, but those don't get caught as easily as the ones draining cryptocurrency wallets.
> one-in-a-million opportunity
step 1: live in a place where the cops do not police this type of activity
step 2: $$$$
For anything else you need a fiat market, which is hard to deal with remotely.
But (1) how do you do that with hundreds or thousands of SSH/API keys and (2) how do you actually make money from it?
So you get a list of SSH or specific API keys and then write a crawler that can hopefully gather more secrets from them, like credit card details (how would that work btw?) and then what, you google "how to sell credentials" and register on some forum to broker a deal like they do in movies?
Sure sounds a hell of a lot more complicated and precarious than swapping out crypto addresses in flight.
OTOH, this modus operandi is completely inconsistent with the way they published the injected code: by taking over a developer's account. This was going to be noticed quickly.
If the payload had been injected in a more subtle way, it might have taken a long time to figure out. Especially with all the levenshtein logic that might convince a victim they'd somehow screwed up.
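For readers who missed that detail: the injected payload reportedly swapped wallet addresses in flight for an attacker address chosen to look similar to the original, so a casual visual check still seems to pass. A minimal sketch of that edit-distance trick (the addresses below are made up for illustration):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Hypothetical attacker-controlled address pool
ATTACKER_ADDRS = [
    "1BoatSLRHtKNngkdXEeobR76b53LETtpyT",
    "1Lbcfr7sAHTD9CgdQo3HTMTkV8LK4ZnX71",
]

def swap_address(victim_addr: str) -> str:
    """Return the attacker address closest to the victim's, so a glance at
    the first and last few characters is more likely to look right."""
    return min(ATTACKER_ADDRS, key=lambda a: levenshtein(victim_addr, a))
```

That "closest match" step is what could convince a victim they'd fat-fingered the address themselves.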
nobody cares about your trade secrets, or some nation's nuclear program, just take the crypto
Consumer financial fraud is quite big and relatively harmless. Industrial espionage, otoh, can potentially put you in the crosshairs of powerful and/or rogue elements, and so only the big actors get involved, but in a targeted way, preferring not to leave much if any trace of compromise.
Your ideas are potentially lucrative over time, but first they create more work and risk for the attacker.
The attacker had access to the user's npm repository only.
Also, you underestimate how trivial this 'one-in-a-million opportunity' is; it's definitely not one-in-a-million! Almost anybody with basic coding ability and a few thousand dollars could pull off this hack. There are thousands of libraries which are essentially worthless, with millions of downloads, where the author who maintains them is basically broke and barely uses their npm account anymore. Anybody could buy those npm accounts under false pretenses for a couple of thousand and then do whatever they want with tens of thousands (or even hundreds of thousands) of compromised servers. The library author is legally within their rights to sell their digital assets, and it's not their business what the acquirer does with them.
I keep expecting some new company to bring out this revolutionary idea of "On prem: your machine, your libraries, your business."
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
No?
How do you change your 2FA? Buy a new phone? A new Yubikey?
I agree that rotating 2FA should ring alarm bells as an unusual request. But that requires thinking.
The post's author's resume section reinforces this feeling:
I am a skilled force multiplier, acclaimed speaker, artist, and prolific blogger. My writing is widely viewed across 15 time zones and is one of the most viewed software blogs in the world.
I specialize in helping people realize their latent abilities and help to unblock them when they get stuck. This creates unique value streams and lets me bring others up to my level to help create more senior engineers. I am looking for roles that allow me to build upon existing company cultures and transmute them into new and innovative ways of talking about a product I believe in. I am prioritizing remote work at companies that align with my values of transparency, honesty, equity, and equality.
If you want someone that is dedicated to their craft, a fearless innovator and a genuine force multiplier, please look no further. I'm more than willing to hear you out.
Wouldn't help in this case where someone bought a domain that looked a tiny bit like the authentic one for a very casual observer.
Let's say there's a company named Awesome... and I register the domain name AwesomeSupport.com. I could be a total black hat/evil hacker/ne'er-do-well... and this domain may not be infringing on any trademark, etc. And then I can start using all the encryption you noted... which merely means that *my domain name* (the bad one) is "technically sound"... but of course, all that use of encryption fails to convey that I am not the legitimate Awesome company. So, how is the victim supposed to know which of the domains is legit or not? Especially considering that some departments of the real, legit Awesome company might register their own domain name for actual, real reasons - like the marketing department might register MyAwesome.com... for managing customer accounts, etc.
Is encryption necessary in digital life? Hellz yeah! Does it solve *all issues*? Hellz no! :-)
There aren't that many websites. The e-mail provider could have a list of "popular" domains, and the user could have their own list of trusted domains.
There is all sorts of ways to warn the user about it, e.g. "you have never interacted with this domain before." Even simply showing other e-mails from the same domain would be enough to prevent phishing in some cases.
There are practical ways to solve this problem. They aren't perfect but they are very feasible.
To your more recent points, I agree that there are several other protections in place... and depending on a number of factors, some folks have more at their disposal, and others might have less... but still, there are mechanisms in place to help - without a doubt. Yet even with all these mechanisms in place, people still fall prey to phishing attacks... and sometimes those victims are not lay people, but actual technologists. So, I think the solution(s) to this are not so simple, and likely are not only tech-based. ;-)
URLs are also getting too damn long
Do you think there would be the time to properly review applications to get on the whitelist?
If it's new, you should be more cautious. Except even those companies that should know better need you to link through 7 levels of redirect tracking, and they're always using a new one.
NPM debug and chalk packages compromised - https://news.ycombinator.com/item?id=45169657 - Sept 2025 (697 comments, including exemplary comments from the project maintainer)
Authentication-Results: aspmx1.migadu.com;
dkim=pass header.d=smtp.mailtrap.live header.s=rwmt1 header.b=Wrv0sR0r;
dkim=pass header.d=npmjs.help header.s=rwmt1 header.b=opuoQW+P;
spf=pass (aspmx1.migadu.com: domain of ndr-cbbfcb00-8c4d-11f0-0040-f184d6629049@mt86.npmjs.help designates 45.158.83.7 as permitted sender) smtp.mailfrom=ndr-cbbfcb00-8c4d-11f0-0040-f184d6629049@mt86.npmjs.help;
dmarc=pass (policy=none) header.from=npmjs.help
If you think npmjs.help is something it isn't, that's not something DKIM et al can help with.
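That's worth making concrete: the quoted Authentication-Results above genuinely pass, because the attacker controls npmjs.help and can sign with it. A minimal sketch of a DMARC-style relaxed alignment check against a condensed version of that header (parsing is simplified):

```python
import re

# Condensed from the Authentication-Results quoted above
AUTH_RESULTS = """\
dkim=pass header.d=smtp.mailtrap.live header.s=rwmt1;
dkim=pass header.d=npmjs.help header.s=rwmt1;
dmarc=pass header.from=npmjs.help"""

def dkim_aligned(auth_results: str, from_domain: str) -> bool:
    """DMARC-style relaxed alignment: some passing DKIM signature's d= domain
    must match (or be a parent domain of) the From: domain."""
    signers = re.findall(r"dkim=pass\s+header\.d=([\w.-]+)", auth_results)
    return any(from_domain == d or from_domain.endswith("." + d) for d in signers)

# Everything checks out -- because the attacker legitimately controls npmjs.help.
# No amount of cryptographic authentication flags it as a lookalike of npmjs.com.
assert dkim_aligned(AUTH_RESULTS, "npmjs.help")
```

The math all works; the failure is that nothing in the protocol relates npmjs.help to npmjs.com.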
I can think of 3 paths to improve the situation (assuming that "everyone deploys cryptographic email infrastructure instantly" is not gonna happen).
1. The email client doesn't indicate DKIM at all. This is strictly worse than today, because then the attack could have claimed to be from npmjs.com.
2. You only get a checkmark if you have DKIM et al plus you're a "verified domain". This means only big corporations get the checkmark -- I hate this option. It's EV SSL but even worse. And again, unless npmjs.com was a "big corporation" the attacker could have just faked the sender and the user would not notice anything different, since in that world the authentic npmjs.com emails wouldn't have a checkmark either.
3. The checkmark icon is changed into something else, nothing else happens. But what? "DKIM" isn't the full picture (and would be horribly confusing too). Putting a sunflower there seems a little weird. Do you really apply this much significance to the specific icon?
The path that HTTPS took just hasn't been repeatable in the email space; the upgrade cycles are much slower, the basic architecture is client->server->server not client->server, and so on.
A few years ago? I have a lock icon in my address bar right now.
The worst thing I can recall from the enterprisey ecosystems is the log4j exploit, which was easily one of the most attended to security problems I am aware of. Every single beacon was lit for that one. It seems like when an NPM package goes bad, it can take a really long time before someone starts to smell it.
I do think it's worth reducing the number of points of failure in an ecosystem, but relying entirely on a single library that's at risk of stagnating due to eternal backcompat obligations is not the way; see the standard complaints about Python's "dead batteries". The Debian or Stackage model seems like it could be a good one to follow, assuming the existence of funding to do it.
I don’t think we did. I think it is entirely plausible that more sophisticated attacks ARE getting into the npm ecosystem.
I see the JavaScript ecosystem hasn’t changed since leftpad then.
Email is such an utter shitfest. Even tech-savvy people fall for phishing emails, what hope do normal people have.
I recommend people save URLs in their password managers, and get in the habit of auto-filling. That way, you’ll at least notice if you’re trying to log into a malicious site. Unfortunately, it’s not foolproof, because plenty of sites ask you to randomly sign into different URLs. Sigh…
The first article I ever read discussing the possibility of npm supply chain attacks actually used coloured terminal text as the example package to poison. Ever since then, I have associated coloured terminal text with supply chain attacks.
- Update your 2FA credentials
What does that even mean? That's not something that can be updated - that's kind of the point of 2FA.
- It's been over 12 months since you last 2FA update
Again - meaningless nonsense. There's no such thing as a 2FA update. Maybe the recipient was thinking "password update" - but updating passwords regularly is also bad practice.
- "Kindly ask ..."
It would be very unusual to write like that in a formal security notification.
- "your credentials will be temporarily locked ..."
What does "temporarily locked" mean? That's not a thing. Also creating a sense of urgency is a classic phishing technique and a red flag.
- A link to change your credentials
A legit security email should never contain a link to change your credentials.
- It comes from a weird domain - .help
Any nonstandard domain is a red flag.
I don't use NPM, and if this actually looks like an email NPM would send, NPM has serious problems. However security ignorant companies do send emails like this. That's why the second layer of defense if you receive an email like this and think it might be real is to just log directly into (in this case) NPM and update your account settings without clicking links in the email.
NEVER EVER EVER click links in any kind of security alert email.
I don't blame the people who fell for this, but it is also concerning that there's such limited security awareness/training among people with publish access to such widely used packages.
> However security ignorant companies do send emails like this
exactly
> What does that even mean? That's not something that can be updated - that's kind of the point of 2FA.
I didn't sit and read and parse the whole thing. That was mistake one. I have stated elsewhere, I was stressed and in a rush, and was trying to knock things off my list.
Also, 2FA can of course be updated. npm has had some shifts in how it approaches security over the years, and having worked within that ecosystem for the better part of 10-15 years, this didn't strike me as particularly unheard of on their part. This, especially after the various acquisitions they've had.
It's no excuse, just a contributing factor.
> It would be very unusual to write like that in a formal security notification.
On the contrary, I'd say this is pretty par for the course in corpo-speak. When "kindly" is used incorrectly, that's when it's a red flag for me.
> What does "temporarily locked" mean? That's not a thing. Also creating a sense of urgency is a classic phishing technique and a red flag.
Yes, of course it is. I'm well aware of that. Again, this email reached me at the absolute worst time it could have and I made a very human error.
"Temporarily locked" surprises me that it surprises you. My account was, in fact, temporarily locked while I was trying to regain access to it. Even npm had to manually force a password reset from their end.
> Any nonstandard domain is a red flag.
When I contacted npm, support responded from githubsupport.com. When I pay my TV tax here in Germany (a governmental thing), it goes to a completely bizarre, random third party site that took me ages to vet.
There's no such thing as a "standard" domain anymore with gTLDs, and while I should have vetted this particular one, it didn't stand out as something impossible. In my head, it was their new help support site - just like github.community exists.
Again - and I guess I have to repeat this until I'm blue in the face - this is not an excuse. Just reasons that contributed to my mistake.
> NEVER EVER EVER click links in any kind of security alert email.
I'm aware. I've taught this as the typical security person at my respective companies. I've embodied it, followed it closely for years, etc. I slipped up, and I think I've been more than transparent about that fact.
I didn't ask for my packages to be downloaded 2.6 billion times per week when I wrote most of these 10 years ago or inherited them more than five ago. You can argue - rightfully - about my technical failure here of using an outdated form of 2FA. That's on me, and would have protected against this, but to say this doesn't happen to security-savvy individuals is the wrong message here (see: Troy Hunt getting phished).
Shit happens. It just happened to happen to me, and I happen to have undue control over some stuff that's found its way into most of the javascript world.
The security lessons and advice are all very sound - I'm glad people are talking about them - but the point I'm trying to make is, that I am a security aware/trained person, I am hyper-vigilant, and I am still a human that made a series of small or lazy mistakes that turned into one huge mistake.
Thank you for your input, however. I do appreciate that people continue to talk about the security of it all.
Have the client-embedded AI view the email to determine whether it contains a link to a purported service, then remotely verify that the link's domain is valid by comparing it to the domains known for that service.
If unknown, show the user a suspected phishing message.
This will occasionally give a false positive when a service changes their sending domain, but the remote domain<->service database can then be updated via an API call as a new `(domain, service)` pair for investigation and possible inclusion.
I feel like this would mitigate much of the risk of phishing emails slipping past defenses, and it mainly just needs 2 or 3 API calls to a service once the LLM has extracted the service name from the email.
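The flow above can be sketched in a few lines. Everything here is an assumption for illustration: the LLM extraction step is stubbed out, and the domain<->service map stands in for the remotely synced database:

```python
# Known service -> sending-domains map; in practice a remotely synced database.
KNOWN_DOMAINS = {
    "npm": {"npmjs.com", "npmjs.org", "github.com"},
}
REVIEW_QUEUE: list[tuple[str, str]] = []

def extract_service(email_body: str) -> str:
    """Stub for the LLM step: which service does this mail claim to be from?"""
    return "npm" if "npm" in email_body.lower() else "unknown"

def check_link(email_body: str, link_domain: str) -> str:
    service = extract_service(email_body)
    if link_domain in KNOWN_DOMAINS.get(service, set()):
        return "ok"
    # Unknown pair: warn the user and queue the (domain, service) pair
    # for investigation and possible inclusion in the database.
    REVIEW_QUEUE.append((link_domain, service))
    return f"suspected phishing: {link_domain} is not a known domain for {service}"

print(check_link("Please update your npm 2FA credentials", "npmjs.help"))
```

The false-positive path (a service legitimately changing its sending domain) is handled by the review queue rather than by silently trusting the new pair.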
Since then I've done all my dev in an isolated environment like a docker container. I know it's possible to escape the container, but at least that raises the bar to a level I'm comfortable with.
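That kind of isolation can be as simple as a throwaway container. A minimal sketch assuming Docker and a Node project (the image tag, user name, and mount paths are illustrative):

```dockerfile
# Run installs and builds as an unprivileged user inside a disposable
# container, so a malicious postinstall script only sees the mounted
# project directory -- not the host home dir, SSH keys, or credentials.
FROM node:20-slim
RUN useradd --create-home dev
USER dev
WORKDIR /app
# Build once, then run installs inside it:
#   docker build -t devbox .
#   docker run --rm -it -v "$PWD":/app devbox npm install
```

As noted, container escapes exist, but a compromised install script now has to clear a much higher bar than "read ~/.ssh and ~/.npmrc".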
Added: story dedicated to this topic more or less https://news.ycombinator.com/item?id=45179889
> "Warning! This is the first time you have received a message from sender support@npmjs.help. Please be careful with links and attachments, and verify the sender's identity before taking any action."
There I fixed it. Now I don't even need the package array-ish!
I like to think I wouldn't. I don't put credentials into links from emails that I didn't trigger right then (e.g. password reset emails). That's a security skill everyone should be practicing in 2025.
It would be just as easy to argue that anyone who uses software without confirming that its security certifications include whatever processes you imagine avoid the "human makes one mistake and continues with their normal workflow" error, or that hold updates until they're evaluated, is negligent.
With the way things are going, I can't tell at a glance whether they mean crypto, VR, or AI when they say "web 3."
Then there's days like this.
Not only is it "proof of concept" but it's a low-risk, high-reward play. It's brilliant, really. Dangerously so.
DuckDB NPM packages 1.3.3 and 1.29.2 compromised with malware - https://news.ycombinator.com/item?id=45179939 - Sept 2025 (209 comments)
NPM debug and chalk packages compromised - https://news.ycombinator.com/item?id=45169657 - Sept 2025 (719 comments)
Developer stuff is arguably the least scrutinized thing that routinely runs as mega root.
I wish I could say that I audit every elisp, neovim, vscode plugin and every nifty modern replacement for some creaky GNU userland tool. But bat, zoxide, fzf, atuin, starship, viddy, and about 100 more? Nah, I get them from nixpkgs in the best case, and I've piped things to sh.
Write a better VSCode plugin for some terminal panel LLM gizmo, wait a year or two?
gg
> This is frankly a really good phishing email ... This is a 10/10 phishing email ...
Phishing email:
> As part of our on going commitment to account security, we are requesting that all users update their Two-Factor-Authentication (2FA) credentials ...
What does that even mean? What type of 2FA needs updating? One 2FA method supported is OTP. Can't see a reason that would legitimately ever need to be updated, so doesn't really pass the sniff test that every single user would need to "update 2FA".
I saw this kind of thing coming years ago. I never understood why people were obsessed with using tiny dependencies to save them 4 lines of code. These useless dependencies getting millions of weekly downloads always seemed very suspicious to me.
I just try to avoid clicking links in emails generally...
Definitely good practice.
The only real solution is to have domain-bound identities like passkeys.
Always manually open the website.
This week Oracle Cloud started enforcing 2FA. And surely I didn't click their e-mail link to do that.
I don't just generally try, I _never_ click links in emails from companies, period. It's too dangerous and not actually necessary. If a friend sends me a link, I'll confirm it with them directly before using it.