Another good read is at https://www.aikido.dev/blog/npm-debug-and-chalk-packages-com...
More info:
- https://github.com/chalk/chalk/issues/656
- https://github.com/debug-js/debug/issues/1005#issuecomment-3...
Affected packages (at least the ones I know of):
- ansi-styles@6.2.2
- debug@4.4.2 (appears to have been yanked as of 8 Sep 18:09 CEST)
- chalk@5.6.1
- supports-color@10.2.1
- strip-ansi@7.1.1
- ansi-regex@6.2.1
- wrap-ansi@9.0.1
- color-convert@3.1.1
- color-name@2.0.1
- is-arrayish@0.3.3
- slice-ansi@7.1.1
- color@5.0.1
- color-string@2.1.1
- simple-swizzle@0.2.3
- supports-hyperlinks@4.1.1
- has-ansi@6.0.1
- chalk-template@1.1.1
- backslash@0.2.1
It looks and feels a bit like a targeted attack.
Will try to keep this comment updated as long as I can before the edit expires.
---
Chalk has been published over. The others remain compromised (8 Sep 17:50 CEST).
NPM has yet to get back to me. My NPM account is entirely unreachable; forgot password system does not work. I have no recourse right now but to wait.
Email came from support at npmjs dot help.
Looked legitimate at first glance. Not making excuses; I'd had a long week and a panicky morning and was just trying to knock something off my to-do list. Made the mistake of clicking the link instead of going directly to the site like I normally would (since I was on mobile).
Just NPM is affected. Updates to be posted to the `/debug-js` link above.
Again, I'm so sorry.
Also: don't do security things when you're not fully awake. Lesson learned.
The email was a "2FA update" email telling me it's been 12 months since I updated 2FA. That should have been a red flag but I've seen similarly dumb things coming from well-intentioned sites before. Since npm has historically been in contact about new security enhancements, this didn't smell particularly unbelievable to my nose.
The email went to the npm-specific inbox, which is another signal I use to verify senders. That address can be looked up publicly, but I don't generally expect spammers to find it; they tend to scrape git addresses and the like instead.
The domain name was `npmjs dot help` which obviously should have caught my eye, and would have if I was a bit more awake.
The actual in-email link matched what I'd expect on npm's actual site, too.
I'm still trying to work out exactly how they got access. I don't believe they technically got a real 2FA code from the actual site. EDIT: Yeah they did, never mind. It was a TOTP proxy attack, or whatever you'd call it.
Will post a post-mortem when everything is said and done.
Authentication steps are separated: if a signature must be placed or money sent, you have to use a different access code, and the app shows exactly what you are authorizing. If money is being sent, you see where it's going and how much before you approve the request in the app.
But the app is all tied to the digital identity from the ID card in the first place - you use your ID card to set up these strong authentication guarantees. Some time ago we had to use a computer with a smartcard reader to set it up; nowadays I don't know whether it's NFC or something else, but the mobile phone can read the ID card.
That's exactly what I mean! Who would use it if the UI/UX is terrible? Many Gemini (protocol) browsers like Lagrange have quite pleasant UIs for it, though somewhat minimal. With sufficient push, you could even have used mutual TLS with hardware tokens.
It's nothing short of amazing that nobody has worked on this. It's not as if there isn't a need. Everyone with high security requirements (defense, banks, etc.) already does this, but with clumsy plugins and (semi-)proprietary software. Instead we get the nth iteration of settings redesigns.
Once heard of a user putting in a helpdesk ticket asking why they had to pay for the TOTP app. Then I realized their TOTP seed is probably out in the open now.
I’m sure we can imagine how else this could go badly…
It's a good thing the WebPKI cartel mostly did away with EV certs... these days, with any old cert where only the SAN matches the domain, your browser gives you a warm fuzzy "you're secure!"
By contrast, OV certs, which were originally supposed to provide a very similar level of assurance, were done away with by the CAs themselves, by cost-optimizing the verification requirements into virtual nonexistence.
That said, it remains a perpetual struggle to get people to understand the difference between being connected to the legitimate operator of satan.example (something an Internet-wide system mostly can guarantee) and it being wise to transact there (something extensive experience shows it can’t and shouldn’t try to). And if you’re a domain owner, your domain is your identity; pick one and stick to it. Stackoverflow.blog is stupid, don’t be like stackoverflow.blog.
[1] https://www.troyhunt.com/extended-validation-certificates-ar...
[2] https://arstechnica.com/information-technology/2017/12/nope-...
That's because the browser implementers gave up on trying to solve the identity problem. It's too difficult they said, we'd rather push other things.
Google implemented certificate pinning in Chrome for themselves and a few friends, said fuck everyone else, and declared the problem solved. Who cares about everyone else when your own properties are protected and you control the browser?
Meanwhile the average user has no idea what a certificate does, whether it does or doesn't prove identity.
No wonder they removed the lock icon from the browser.
They even gave me a new TOTP code to install (lol) and it worked. Showed up in authy fine. Whoever made this put a ton of effort into it.
I've always wondered whether, if I ever get phished, I'll notice because of that, or whether I'll just go "ugh, 1Password isn't working, guess I'll paste my password in manually" and end up pwned.
The `.help` should have been the biggest red flag, followed by the 48-hours request timeline. I wasn't thinking about things like I normally would this morning and just wanted to get things done today. Been a particularly stressful week, not that it's any excuse.
If you maintain popular open source packages for the love of God get yourself a couple of security keys.
Can't really tell you what not to do, but if you're not already using a password manager so you can easily avoid phishing scams, I really recommend that you look into starting.
In the case of this attack, if you had a password manager and ended up on a domain that looks like the real one, but isn't, you'd notice something is amiss when your password manager cannot find any existing passwords for the current website, and then you'd take a really close look at the domain to confirm before moving forward.
That being said, if you’re making login pages: please, for the love of god, test them with multiple password managers. Oh, and make sure they also work correctly with the browser’s autotranslation. Don’t rely on the label to make form submission decisions ... please.
I'd probably go looking for a new password manager if it fails to do one of the basic features they exist for, copy-pasting passwords defeats a lot of the purpose :)
> That being said, if you’re making login pages
I think we're doomed on this front already. My previous bank still (in 2025!) only allows 6 numbers as the online portal login password, no letters or special characters allowed, and you cannot paste in the field so no password manager works with their login fields, the future is great :)
This isn't the fault of the password managers themselves, but of devs not putting the right metadata on their login forms, or having the password field show only after the email address is entered, causing the password input to fail to be filled, etc.
What is your mythical "good password manager"?
Screenshot here: https://imgur.com/a/q8s235k
Received: from npmjs.help by smtp.mailtrap.live
I'm just curious - and as a word of warning to others so we can learn. I may be missing some details, I've read most of the comments on the page.
For example, GitHub asks for 2FA when I change certain repo settings (or when deleting a repo etc.) even when I'm logged in. Maybe NPM needs to do the same?
FWIW npmjs does support FIDO2 including hard tokens like Yubikey.
They do not force re-auth when issuing an access token with publish rights, which is probably how the attackers compromised the packages. iirc GitHub does force re-auth when you request an access token.
I'm surprised by this. Yeah, GitHub definitely forces you to re-auth when accessing certain settings.
My local credit union sent me a "please change your password" email from a completely unassociated email address with a link to the change password portal. I emailed them saying "Hey it looks like someone is phishing" and they said, "nope, we really, intentionally, did this"
Companies intentionally delay warning emails as long as possible so that more people incur late fees. So everyone is used to "shit, gotta do this now or get screwed".
You can't hope to have good security when everyone's money is controlled by organizations that actively train people to have bad OPSEC or risk missing rent.
I used the word "often" rather than "always" for this reason.
Completely agree. The only reliable way is to never use an email/SMS link to login, ever.
0x10ed43c718714eb63d5aa57b78b54704e256024e
0x13f4ea83d0bd40e75c8222255bc855a974568dd4
0x1111111254eeb25477b68fb85ed929f73a960582
0xd9e1ce17f2641f24ae83637ab66a2cca9c378b9f
Source: https://github.com/chalk/chalk/issues/656#issuecomment-32670...
> Those are swap contract addresses, not attacker addresses. E.g. 0x66a9893cC07D91D95644AEDD05D03f95e1dBA8Af the Uniswap v4 universal router addr.
> Every indication so far is that the attacker stole $0 from all of this. Which is a best-case outcome.
that message feels like it could work as a first-time request as well
Regardless of whether the real NPM had done this in the past, decades of dumb password expiration policies have trained us that requests like this are to be expected rather than suspected.
dkim=pass header.d=smtp.mailtrap.live header.s=rwmt1 header.b=Wrv0sR0r
Urgency is poison.
Please, please push back whenever you see anyone trying to push this kind of sh*t on your users. Make one month's advance notice the golden standard.
I see this pattern in scam mail (including physical) all the time: stamp an unreasonably short notice and expect the mark to panic. This scam works - and this is why legit companies that try this "in good faith" should be shamed for doing it.
Actual alerts: just notify. Take immediate, preventive, but non-destructive action, and help the user figure out how to right it - on their own terms.
and use what? instant message? few things lack legitimacy more than an instant message asking you to do something.
Links in email are much more of a problem than email itself. So tempting to click. It's right there, you don't have to dig through bookmarks, you don't have to remember anything, just click. A link is seductive.
the actual solution is to avoid dependencies whenever possible, so that you can review them when they change. You depend on them. You ARE reviewing them, right? Fewer things to depend on is better than more, and NPM is very much an ecosystem where one is encouraged to depend on others as much as possible.
If you're publishing your software: you can't "not" depend on some essential service like source hosting or library index.
> You ARE reviewing them, right?
Werkzeug is 20kloc and is considered "bare bones" of Python's server-side HTTP. If you're going to write a complex Python web app using raw WSGI, you're just going to repeat their every mistake.
While at it: review Python itself, GCC, glibc, maybe Linux, your CPU? Society depends on trust.
No. The problem is unsigned package repositories.
The solution is to tie a package to an identity using a certificate. The quickest way I can think of would be requiring packages to be linked to a domain, so that the repository can always check incoming changes to a package by verifying the incoming signature against the domain's certificate.
1. It's an extra step: before you pwn the package, you need to pwn a domain.
2. When a domain is pwned, the packages it signs can be revoked with a single command.
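A rough sketch of what that registry-side check could look like, using Node's built-in crypto module (illustrative only; `verifyUpload`, the signature format, and how the registry fetches the maintainer's domain certificate are all assumptions, not an existing npm feature):

    const crypto = require('node:crypto');

    // Assume the registry has already fetched and cached the maintainer's X.509
    // certificate for their claimed domain (e.g. over TLS or a well-known URL).
    function verifyUpload(tarballBytes, signatureBase64, domainCertPem) {
      const publicKey = new crypto.X509Certificate(domainCertPem).publicKey;
      return crypto.verify(
        'sha256',                               // digest over the uploaded tarball
        tarballBytes,
        publicKey,
        Buffer.from(signatureBase64, 'base64')
      );
    }

    // The registry rejects the publish when this returns false, and can
    // mass-revoke everything signed by the cert once the domain is compromised.

That also gives you the revocation story for free: flag the cert, and every package version it ever signed becomes uninstallable.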
You'd need some kind of offline verification method as well for these widely used infrastructure libraries.
Nothing "really works" against a sophisticated hacker :-/ Doesn't mean that "defense in depth" does not apply.
> You'd need some kind of offline verification method as well for these widely used infrastructure libraries.
I don't understand why this is an issue, or even what it means: uploading a new package to the repository requires the contributor to be online anyway. The new/updated/replacement package will have to be signed. The signature must be verified by the upload script/handler. The verification can be done using the X509 certificate issued for the domain of the contributor.
1. If the contributor cannot afford the few dollars a year for a domain, they are extremely vulnerable to a supply chain attack anyway (e.g. by selling maintenance of the package to a bad actor), and you shouldn't trust them.
2. If the contributor's domain gets compromised you only have to revoke that specific certificate, and all packages signed with that certificate, in the past or in the future, would not be installable.
As I have repeatedly said in the past, NPM (and the JS tools development community in general) had no adults in the room during the design phase. Everything about JS stacks feels like it was designed by children who had never programmed in anything else before.
It's a total clown show.
They didn't need me; plenty of repositories doing signed packages existed well before npm was created.
Which is why I likened them to a bunch of kids - they didn't look around at how the existing repos were designed, they just did the first thing that popped into their head.
Identity on the Internet is a lie. Nobody knows you're a dog.
The solution is to make security easy and accessible, so that the user can't be confused into doing the insecure thing.
What do you think HTTPS is?
(Microsoft owns GitHub, which owns NPM.)
What does worry me, though, is exactly what you pointed out about NPM’s response time. Given how central NPM packages are to the entire JavaScript ecosystem, you’d expect their security processes to be lightning fast. Every hour of delay can mean thousands (or millions) of downloads happening with potentially compromised code. And as you said, that just increases the incentive for attackers to target maintainers in the first place.
Most people who get phished aren’t using password managers, or they would notice that the autofill doesn’t work because the domain is wrong.
Additionally, TOTP 2FA (numeric codes) are phishable; stop using them when U2F/WebAuthn/passkeys are available.
I have never been phished because I follow best practices. Most people don’t.
In 15 years of maintaining OSS, I've never been pwned, phished, or anything of the sort.
Thank you for your input :)
Well, until now.
They screwed up, but we have thousands of years of evidence that people make mistakes even when they really know better and the best way to prevent that is to remove places where a single person making a mistake causes a disaster.
On that note, how many of the organizations at risk do you think have contributed a single dollar or developer-hour supporting the projects they trust? Maybe that’s where we should start looking for changes.
But instead, we're left with this mess where ordinary developers are forced to deal with the consequences of getting phished.
Also, Yubikeys work on phones just fine, via both NFC and USB.
Password managers can’t help you if you don’t use them properly.
Spotify steals (and presumably uploads) your clipboard, as do other apps. Autofill is your primary defense against phishing, as you (and hopefully some others) learned this week.
The autofill feature is not 100% reliable for various reasons:
(1) some companies use different domains that are legitimate but don't exactly match the url in the password manager. Troy Hunt, the security expert who runs https://haveibeenpwned.com/ got tricked because he knew autofill is often blank because of legit different domains[1]. His sophisticated knowledge and heuristics of how autofill is implemented -- actually worked against him.
(2) autofill doesn't work because of technical bugs in the plugin, HTML elements detection, interaction/incompatibility with new browser versions, etc. It's a common complaint with all password plugins:
https://www.google.com/search?q=1password+autofill+doesn%27t...
https://www.1password.community/discussions/1password/1passw...
https://github.com/bitwarden/clients/issues?q=is%3Aissue%20a...
... so in the meantime while the autofill is broken, people have to manually copy-paste the password!
The real-world experience of flaky and glitchy autofill distorts the mental decision tree.
Instead of, "hey, the password manager didn't autofill my username/password?!? What's going on--OH SHIT--I'm being phished!" ... it becomes "it didn't autofill in the password (again) so I assume the Rube-Goldberg contraption of pw manager browser plugin + browser version is broken again."
Consider the irony of how password managers not being perfectly reliable causes sophisticated technical minds to become susceptible to social engineering.
In other words, password managers inadvertently create a "Normalization of Deviance" : https://en.wikipedia.org/wiki/Normalization_of_deviance
[1] >Thirdly, the thing that should have saved my bacon was the credentials not auto-filling from 1Password, so why didn't I stop there? Because that's not unusual. There are so many services where you've registered on one domain (and that address is stored in 1Password), then you legitimately log on to a different domain. -- from: https://www.troyhunt.com/a-sneaky-phish-just-grabbed-my-mail...
One side note: most systems make it hard to completely rely on WebAuthn. As long as other options are available, you are likely vulnerable to an attack. It’s often easier than it should be to get a vendor to reset MFA, even for security companies.
It was a generic phishing email of the kind shown in every single Corp 101 security course.
My main point was simply that the better response isn’t to mock them but to build systems which can’t fail this badly. WebAuthn is great, but you have to go all in if you want to prevent phishing. NPM would also benefit immensely from putting speed bumps and things like code signing requirements in place, but that’s a big usability hit if it’s not carefully implemented.
A password manager can’t manage passwords if you don’t configure it and use it.
You can run the following to check if you have the malware in your dependency tree:
`rg -u --max-columns=80 _0x112fa8`
Requires ripgrep:
`brew install ripgrep`
https://github.com/chalk/chalk/issues/656#issuecomment-32668...
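If you'd rather not install ripgrep, a rough Node one-off against your lockfile does a similar job (a sketch only: it checks the lockfile v2/v3 `packages` map against the version list at the top of this thread, so adjust the list as things evolve):

    const fs = require('fs');

    // Compromised versions as listed earlier in the thread.
    const bad = {
      'ansi-styles': '6.2.2', 'debug': '4.4.2', 'chalk': '5.6.1',
      'supports-color': '10.2.1', 'strip-ansi': '7.1.1', 'ansi-regex': '6.2.1',
      'wrap-ansi': '9.0.1', 'color-convert': '3.1.1', 'color-name': '2.0.1',
      'is-arrayish': '0.3.3', 'slice-ansi': '7.1.1', 'color': '5.0.1',
      'color-string': '2.1.1', 'simple-swizzle': '0.2.3',
      'supports-hyperlinks': '4.1.1', 'has-ansi': '6.0.1',
      'chalk-template': '1.1.1', 'backslash': '0.2.1',
    };

    const lock = JSON.parse(fs.readFileSync('package-lock.json', 'utf8'));
    for (const [path, meta] of Object.entries(lock.packages || {})) {
      const name = path.split('node_modules/').pop();
      if (bad[name] && meta.version === bad[name]) {
        console.log(`COMPROMISED: ${name}@${meta.version} at ${path}`);
      }
    }

Note this only covers what's pinned in the lockfile; the rg/grep approach also catches anything already extracted into node_modules.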
npm cache clean --force
pnpm cache delete
For security checks, the first 2 out of 3 is just fine.
-uu searches through ignored and hidden files (e.g. dotfiles)
-uuu searches through ignored, hidden, and binary files (i.e. everything)
If you have crypto wallets on the potentially compromised machine, or intend to transfer crypto via some web client, proceed with caution.
https://gist.github.com/edgarpavlovsky/695b896445c19b6f66f14...
Like ... npm?
Everybody knows npm is a gaping security issue waiting to happen. Repeatedly.
It’s convenient, so it’s popular.
Many people also don’t vendor their own dependencies, which would slow down the spread at the price of not being instantly up to date.
npm sold it really hard that you could rely on them and not have to vendor dependencies yourself. If I suggested that a decade ago in Seattle, I would have gotten booed out of the room.
Yet here we are. And this is going to get massively worse, not better.
I mean, I believe you, but the person you are replying to obviously believes that they are similar. Could you explain the significant differences?
https://www.redhat.com/en/blog/understanding-red-hats-respon...
https://lists.debian.org/debian-security-announce/2024/msg00...
It takes like 2 years to get up to date packages. This isn't NPM.
(I get that the same can be said for npm and the packages in question, but I don't really see how the context of the thread matters in this case.)
`Get-ChildItem -Recurse -File | Select-String -Pattern '_0x112fa8' | ForEach-Object { $_.Line.Substring(0, [Math]::Min(80, $_.Line.Length)) }`
Breakdown of the Command:
- Get-ChildItem -Recurse -File: This retrieves all files (not directories) in the current directory and its subdirectories.
- Select-String -Pattern '_0x112fa8': This searches for the specified pattern in the files.
- ForEach-Object { ... }: This processes each match found.
- Substring(0, [Math]::Min(80, $_.Line.Length)): This limits the output to a maximum of 80 characters per line.
---
Hopefully this should work for Windows devs out there. If not, reply and I'll try to modify it.
If there's any ideas on what I should be doing, I'm all ears.
EDIT: I've heard back, they said they're aware and are on it, but no further details.
It took them quite a long time to do so.
Github is SOC2 compliant, but that of course means nothing really.
And because it could happen to anyone, we should be doing a better job of using AI models for defense. If ordinary people reading a link target URL can see it as suspicious, a model probably can too. We should be plumbing all our emails through privacy-preserving models to detect things like this. The old family of vulnerability scanners isn't working.
Great of you to own up to it.
I actually got hit by something that sounds very similar back in July. I was saved by my DNS settings where "npNjs dot com" wound up on a blocklist. I might be paranoid, but it felt targeted and was of a higher level of believability than I'd seen before.
I also more recently received another email asking for an academic interview about "understanding why popular packages wouldn't have been published in a while" that felt like elicitation or an attempt to get publishing access.
Sadly both of the original emails are now deleted so I don't have the exact details anymore, but stay safe out there everyone.
Please take care and see this as things that happen and not your own personal failure.
But google comes with its own privacy nightmares.
https://socket.dev/blog/npm-author-qix-compromised-in-major-...
While it sucks that this happened, the good thing is that the ecosystem mobilized quickly. I think these sorts of incidents really show why package scanning is essential for securing open source package repositories.
Does the AI detect the obfuscation?
Thanks for the links in your other comment, I'll take a look!
In this incident, we detected the packages quickly, reported them, and they were taken down shortly after. Given how high profile the attack was we also published an analysis soon after, as did others in the ecosystem.
We try to be transparent about how Socket works. We've published the details of our systems in several papers, and I've also given a few talks on how our malware scanner works at various conferences:
Very insightful.
“Chat, I have reading comprehension problems. How do I fix it?”
> The author appears to have deleted most of the compromised package before losing access to his account. At the time of writing, the package simple-swizzle is still compromised.
Is this quote from TFA incorrect, since npm hasn’t yanked anything yet?
npm does appear to have yanked a few, slowly, but I still don't have any insight as to what they're doing exactly.
phishing is too easy. so easy that I don't think the completely unchecked growth of ecosystems like NPM can continue. metastasis is not healthy. there are too many maintainers writing too many packages that too many others rely on.
want to stress that it can happen to anyone. no one has perfect opsec or tradecraft as a 1-man show. it's simply not possible. only luck gets one through, and that often enough runs out.
Does anyone know how this attack works? Is it a CSRF against npmjs.com?
It wasn't a single-click attack, sorry for the confusion. I logged into their fake site with a TOTP code.
Sorry for what you're going through.
You log in with your credentials; the attacker logs in to the real site.
You get an SMS with a one-time code from the real site and enter it into the fake site.
The attacker takes the code and finishes the login to the real site.
So if the hacker did an npm publish from local it would show up.
Also, junon.support++ – big thanks for being clear about all this.
If you change your key you can't use it for like 12 hours or something?
They can't pwn what they can't find online.
With nodejs packages, I can open up node_modules and read the code. But packages get a chance to run arbitrary code on your computer after installation. By the time you can read the source code, it may be too late.
I figure you aren't about to get fooled by phishing anytime soon, but based on some of your remarks and remarks of others, a PSA:
TRUSTING YOUR OWN SENSES to "check" that a domain is right, or an email is right, or the wording has some urgency or whatever is BOUND TO FAIL often enough.
I don't understand how most of the anti-phishing advice focuses on that, it's useless to borderline counter-productive.
What really helps against phishing :
1. NEVER EVER login from an email link. EVER. There are enough legit and phishing emails asking you to do this that it's basically impossible to tell one from the other. The only way to win is to not try.
2. U2F/Webauthn key as second factor is phishing-proof. TOTP is not.
That is all there is. Any other method, any other "indicator" helps but is error-prone, which means someone somewhere will get phished eventually. Particularly if stressed, tired, or in a hurry. It just happened to be you this time.
Good luck and well done again on the response!
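To make point 2 concrete: the reason U2F/WebAuthn resists phishing is that the browser binds the credential to the relying party's origin, so a key registered for npmjs.com simply can't be exercised from npmjs.help, no matter how convinced the user is. A minimal client-side sketch (the challenge and the verification of the response happen server-side in reality):

    // Hedged sketch: a login page asks the authenticator for an assertion.
    async function signIn() {
      const assertion = await navigator.credentials.get({
        publicKey: {
          challenge: crypto.getRandomValues(new Uint8Array(32)), // server-supplied in real life
          rpId: 'npmjs.com',              // origin the credential is scoped to
          userVerification: 'preferred',
        },
      });
      // Send assertion.response to the server for signature verification.
      return assertion;
    }

On a lookalike domain the browser refuses to produce the assertion at all, which is exactly the property TOTP lacks.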
Have the TOTP in the same/another password manager (after considering the tradeoffs) and that can also not be entered unless the domain is right :)
You only need to read the whole thread, however, to see reasons why this would sometimes not be enough: sometimes the password manager does not auto-fill, so the user can think it's one of those cases, or they're on mobile and they don't have the extension there, or...
As a matter of fact, he does use one, that didn't save him, see: https://news.ycombinator.com/item?id=45175125
So pick one that does? That's like its top 2 feature
> he does use one
He doesn't, since he has no autofill installed, so he loses the key security + convenience benefit of automatic domain matching
> I was mobile, the autofill stuff isn't installed
Still doesn't work 100% of the time, because half of the companies on earth devote their developer time to breaking 1995-level forms. That's why every popular password manager has a way to fill passwords for other domains, why people learn to use that feature, and why phishers have learned to convince people to use that feature.
WebAuthn prevents phishing. Password managers reduce it. This is the difference between being bulletproof like Superman or a guy in a vest.
As a developer I also love their ssh and gpg integrations, very handy.
I do get it for free from work, but if I had to choose one myself that I'd have to pay for, I'd probably still pick 1Password.
That's why WebAuthn doesn't allow that as a core protocol feature, preventing both this attack and shifting the cost of unnecessary origin changes back to the company hosting the site. Attacking this guy for making a mistake in a moment of distraction is like prosecuting a soldier who was looking the other way when someone snuck past: wise leaders know that human error happens and structure the system to be robust against a single mistake.
Passkeys seem like the best solution here where you physically can not fall for a phishing attack.
This is how Troy Hunt got phished. He was already very tired after a long flight, but his internal alarm bells didn't ring loud enough, when the password manager didn't fill in the credentials. He was already used to autofill not always working.
I dunno, it mostly seems to not work when companies change their field names/IDs, or just 3rd party authentication, then you need to manually add domains. Otherwise my password manager (1Password) works everywhere where I have an account, except my previous bank which was stuck in the 90s and disallowed pasting the passwords. If you find that your password manager doesn't work with most websites (since it's "extremely common") you might want to look into a different one, even Firefox+Linux combo works extremely well with 1Password. Not affiliated, just a happy years+ user.
> Passkeys seem like the best solution here where you physically can not fall for a phishing attack.
Yeah, I've looked into Passkeys but without any migration strategy or import/export support (WIP last time I looked into it), it's not really an alternative just yet, at least for me personally. I have to be 100% sure I can move things when the time ultimately comes for that.
I'm extremely security conscious and that phishing email could have easily gotten me. All it takes is one slip-up. Tired, stressed, distracted. Boom, compromised.
Do we just run:
npm list -g #for global installs
npm list #for local installs
And check if any packages appear that are on the above list?
Thanks!
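(One caveat, if I remember npm's defaults correctly: on recent npm versions, plain `npm list` only prints direct dependencies. To see whether one of the packages above appears anywhere in the transitive tree, name it explicitly, e.g. `npm ls debug`, or use `npm list --all`, and check the resolved versions against the list.)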
Folks from multi-billion dollar companies with multimillion dollar packages should learn a few things from this response.
thanks for your efforts!
My worst nightmare is to wake up, see an email like that and hastily try to recover it while still 90% asleep, compromising my account in the process.
However, I think I can still sleep safe considering I'm using a password manager that only shows up when I'm on the right domain. A 2FA phishing email sending me to some unknown domain wouldn't show my password manager on the site, and would hence give me a moment to consider what's happening. I'm wondering if the author here wasn't using any sort of password manager, or something slipped through anyways?
Regardless, fucking sucks to end up there, at least it ends up being a learned lesson for more than just one person, hopefully. I sure get more careful every time it happens in the ecosystem.
I generally recommend Google's to any Android users, since it suggests your saved password not only based on domain in Chrome browser, but also based on registered appID for native apps, to extend your point. I'm not sure if third party password managers do this, although perhaps it's possible for anti-monopoly reasons?
It does happen, yes, it's not terrifying.
Wouldn't have happened if they used passkeys or a password manager. Things that get dunked on here regularly. Hm.
There was no way to quickly visualize that the site was fake, because it was in fact, "actually" amazon.com.
Phishing sucks. Sorry to read about this.
Edit: To other readers, yes, the exploit failed to use an additional TLS attack, which was how I noticed something was wrong. Otherwise, the site was identical. This was many years ago before browsers were as vocal as they are now about unsecured connections.
- make sure you're connected to the expected official domain (though many companies are desensitizing us to this threat by using distinct domains instead of subdomains for official business)
- make sure you're connected over HTTPS (this was most likely their issue)
- use a password manager which remembers official domains for you and won't offer to auto-fill on phishing sites
- use a 2FA method that's immune to phishing, like passkeys or security keys (if you do this, you get a lot of leniency to mistakes everywhere else)
If someone hijacked your DNS, they could direct your browser to connect to their web server instead which served a phishing site on port 80 and never redirected you, thus never ran into the certificate issue. That's part of the reason why browsers started warning users when they're connecting to a website without HTTPS.
There are a handful of important packages that are controlled by people who have consulting / commercial interests in OSS activity. These people have an incentive to inflate download numbers.
There could be a collective push to move off these deps, but it takes effort and nobody has a strong incentive to be the first
Debug, chalk, ansi-styles?
---
You can pretend like this is unique to JS ecosystem, but xz was compromised for 3 years.
Okay, but you're not suggesting that a compression algorithm is the same scale as "is-arrayish". I don't think everyone should need to reimplement LZMA but installing a library to determine if a value is an array is bordering on satire.
But it's all one author.
That being said, let's take color printing in terminal as an example. In any sane environment how complicated would that package have to be, and how much work would you expect it to take to maintain? To me the answer is "not much" and "basically never." There are pretty-print libraries for OS terminals written in compiled languages from 25 years ago that still work just fine.
So, what else is wrong with javascript dev where something as simple as coloring console text has 32 releases and 58 github contributors?
I see a new CLI graphics library on HN every other week.
https://github.com/fatih/color (Go) has 23 releases and 39 contributors.
https://github.com/BurntSushi/termcolor (Rust) has 173 contributors.
https://github.com/chalk/chalk/releases
5.0: moving to ESM
4.0: dropping support for Node <10
3.0: indeed some substantive API and functionality changes
I got to 2.0 which added truecolor support. I was amused to note also that 3.0 and 2.0 come with splashy banner images in their GitHub releases
This is a pattern I've seen often with "connector" packages, e.g. "glue library X into framework Y". They get like 10 major versions just because they have to keep updating major versions of X and Y they are compatible with, or do some other ecosystem maintenance.
A comprehensive library might offer a more neat DX, but you'd have to ship library code you don't use. (Yes, tree-shaking exists, but still is tricky and not widespread.)
It helps, but not as much as judicious imports. I've been using Deno more for my personal projects which does have a pretty good @std library, though I do think they should keep methods that simply pass through to the Deno runtime, and should probably support working in Node and Bun as well.
On the server side, of course, you can do whatever you like, see Node / Deno / Bun. But the code bundle size plays a minor role there.
If it's the browser's job to implement the standard library, how do you ensure that all browsers do this in a compliant and timely fashion? And if not, how do you optimise code-on-demand delivery over the internet?
I don't deny there are/could be solutions to this. But historically JS devs have wrestled with these issues as best they can and that has shaped what we see today.
What is this unknown runtime environment? Even during the browser wars, there was just a handful of browsers, and IE was the only major outlier. Checking for the existence of features and polyfilling is not that complicated.
And most of the time, the browser is already downloading lots of images and other resources. Arguing about bundle size is very hypocritical of developers who won't blink at adding 17 analytics modules.
Judging by what we see in the world, most developers don't agree with you. And neither do I. A handful of browsers, multiplied by many versions per browser in the wild (before evergreen browsers like Chrome became widespread, but even today with e.g. Safari, or enterprise users), multiplied by a sprawling API surface (dare I say it, a standard library) is not trivial. And that's not even considering browser bugs and regressions.
> very hypocritical of developers that won't blink
Not a great argument, as developers don't necessarily get to choose how to add analytics, and plenty of them try to push back against doing so.
Also, the cost of parsing and JIT'ing JS code is byte-for-byte different to the cost of decoding an image.
From my POV, most developers just test on the most popular browser (and the latest version of it) without checking whether an API is standard or reading its changelog. Or they develop on the most powerful laptop while the rest of the world is still on 8 GB of RAM, an FHD screen, and an integrated GPU.
Alternatively, because there are now (often ridiculous) build systems and compilation steps, we might expect similar behavior to other compiled binaries. Instead we get the worst of both worlds.
Yes, JS as it is is some kind of standard, but at a certain point we might ask, "Why not throw out the bad designs and start from scratch?" If it takes ten years to sunset the garbage and offer a compatibility shim, that's fine. All the more reason to start now.
A purely compiled WASM approach with first class DOM access or a clean scripting language with a versioned standard lib, either option would be better than the status quo.
I would love to see if a browser could like... "disaggregate" itself into WASM modules. E.g. why couldn't new JS standards be implemented in WASM and hot loaded into the browser itself from a trusted distributor when necessary?
Missing CSS Level 5 selectors? Browser goes and grabs the reference implementation from the W3C.
Low-level implementations could replace these for the browsers with the most demanding performance goals, but "everyone else" could benefit from at least remaining spec compatible?
(I guess this begs the question of "what's the API that these WASM modules all have to conform to" but I dunno, I find it an interesting thought.)
Say there is neoleftpad and megaleftpad - both could see widespread adoption, so you are transitively dependent on both.
I have no hope of this ever happening and am abandoning the web as a platform for interactive applications in my own projects. I’d rather build native applications using SDL3 or anything else.
It's perfectly possible to build web apps without relying on npm at all, or by being very selective and conservative about the packages you choose as your direct and transitive dependencies. If not by reviewing every line of code, then certainly by vendoring them.
Yes, this is more inconvenient and labor intensive, but the alternative is far riskier and worse for users.
The problem is with web developers themselves, who are often lazy, and prioritize their own development experience over their users'.
Apache Commons helper libraries don't import sub libraries for every little thing, they collect a large toolbox into a single library/jar.
Why instead do people in the JavaScript ecosystem insist on separating every function into its own library that STILL has to import helper libraries? Why do they insist on making imports fractally complex for zero gain?
At this point, it's just status quo and laziness
If your mega package decides to drop something you need you pretty much have to follow.
Or you can code it in yourself. Mega packages can be very stable. Think SDL, ffmpeg, ImageMagick, FreeType... There's usually a good justification for dropping something, alongside a wide deprecation window. You don't just wake up and see the project gone. It's not like the escape codes for the Unix terminal are going to change overnight.
Not hating on the author, but I doubt a similar compromise would happen to a Facebook- or Google-owned package.
People have, but the ecosystem has already coalesced around the current status quo and it's very hard to get rid of habits.
At one time small JS libraries were desirable, and a good library marketing approach, but nowadays simple sites ship megabytes of JavaScript without a care.
In particular this developer is symptomatic of the problem of the NPM ecosystem and I've used him multiple times as an example of what not to do.
A fully-formed standard library doesn't spring into existence in a day.
UUID v7 for example is unstable and one would be pretty confident in that not changing at this stage.
Many unstable functions have less churn than a lot of other “stable” packages. It’s a standard library so it’s the right place to measure twice before cementing it forever.
It started as CommonJs ([1]) with Server-side JavaScript (SSJS) runtimes like Helma, v8cgi, etc. before node.js even existed but then was soon totally dominated by node.js. The history of Server-side JavaScript btw is even longer than Java on the server side, starting with Netscape's LifeScript in 1996 I believe. Apart from the module-loading spec, the CommonJs initiative also specified concrete modules such as the interfaces for node.js/express.js HTTP "middlewares" you can plug as routes and for things like auth handlers (JSGI itself was inspired by Ruby's easy REST DSL).
The reason for is-array, left-pad, etc. is that people wanted to write idiomatic code rather than use idiosyncratic JS typechecking code everywhere and use other's people packages as good citizens in a quid pro quo way.
[1]: https://wiki.commonjs.org/wiki/CommonJS
Edit: the people crying for an "authority" to just impose a stdlib fail to understand that the JS ecosystem is a heterogeneous environment around a standardized language with multiple implementations; this concept seems lost on TypeScripters who need big daddy MS or other monopolist to sort it all out for them
It's not unique in this sense, yet others manage to provide a lot more in their stdlib.
It's not that you need a "big daddy". It's that the ecosystem needs a community that actually cares about shit like this vulnerability.
What is this crap statement?
So you want type-checking because it helps you catch a class of errors in an automated way, and suddenly you have a daddy complex and like monopolies?
Claiming this says a lot more about you than people who use TypeScript.
It started with browsers giving you basically nothing. Someone had to invent jQuery 20 years ago for sensible DOM manipulation.
Somehow this ethos permeated into Node which also basically gives you nothing. Not even fundamental things like a router or db drivers which is why everyone is using Express, Fastify, etc. Bun and Deno are fixing this.
There are certainly a lot of libraries on crates.io, but I’ve noticed more projects in that ecosystem are willing to push back and resist importing unproven crates for smaller tasks. Most imported crates seem to me to be for bigger functionality that would be otherwise tedious to maintain, not something like “is this variable an array”.
(Note that I’m not saying Rust and Cargo are completely immune to the issue here)
I just created a Next.js app, saw that `is-arrayish` was in my node_modules, and tried to figure out how it got there and why. Here's the chain of dependencies:
next > sharp > color > color-string > simple-swizzle > is-arrayish
`next` uses `sharp` for image optimization. Seems reasonable.
`sharp` uses `color` (https://www.npmjs.com/package/color) to convert and manipulate color strings. Again, that seems reasonable. This package is maintained by Qix-.
Everything else in the chain (color-string > simple-swizzle > is-arrayish) is also maintained by Qix-. It's obnoxious to me that he feels it is necessary to have 80 different packages, but it would also be a substantial amount of effort for the other parties to stop relying on Qix-'s stuff entirely.
I'm not saying it is safer, just to the tired grug brain it can feel safer.
Why do "Java people" depend on lowrie's itext? Remember the leftpad-esque incident he initiated in 2015?
Most people writing JavaScript code for employment cannot really program. It is not a result of intellectual impairment, but appears to be more a training and cultural deficit in the work force. The result is extreme anxiety at the mere idea of writing original code, even when trivial in size and scope. The responses vary but often take the form of reused cliches of which some don't even directly apply.
What's weird about this is that it is mostly limited to the employed workforce. Developers who are self-taught or spend as much time writing personal code on side projects don't have this anxiety. This is weird because the resulting hobby projects tend to be substantially more durable than products funded by employment that are otherwise better tested by paid QA staff.
As a proof ask any JavaScript team at your employment to build their next project without a large framework and just observe how they respond both verbally and non-verbally.
"It has been tested by a 1000 people before me"
"What if there is an upstream optimisation?"
"I'm just here to focus on Business Problems™"
"It reduces cognitive load"
---
Whilst I think you are exaggerating, I do recognise this phenomenon. For me, it was during the pandemic when I had to train / support a lot of bootcamp grads and new entrants to the career. They were anxious to perform in their new career and interpreted that as shipping tickets as fast as possible.
These developers were not dumb but they had... like, no drive at all to engage with problems. Most programmers should enjoy problems, not develop a kind of bad feeling behind the eyes, or a tightness in their chest. But for these folks, a problem was a threat, of a bad status update at their daily Scrum.
Dependencies are a socially condoned shortcut to that. You can use a library and look like a sensible and pragmatic engineer. When everyone around you appears to accept this as the norm, it's too easy to just go with the flow.
I think it is a change in the psychological demographic too. This will sound fanciful. But tech used to select for very independent, stubborn, disagreeable people. Now, agreeableness is king. And what is more agreeable than using dependencies?
reinventing the wheel
some comparison to assembly
To be honest, I think these programmers understood their jobs perfectly here. Their bosses view programmers as commodities, are not concerned with robustness, maintainability, or technical merit - they want a crank they can turn that spits out features.
> As a proof ask any JavaScript team at your employment to build their next project without a large framework and just observe how they respond both verbally and non-verbally.
With an assumption like that, I bet the answer is mostly the same if you ask any Java/Python dev for example — build your next microservice/API without Spring or DRF/Flask.
Even though I only clock in at about 5 YOE, I'm really tired of hearing these terrible takes, since I've met a plentiful share of non-JS backend folks, for example, who have no idea about basic API design, design patterns, or even how to properly use the same framework they use for every single project.
A key phrase that comes up is "this is a solved problem." So what? You should want to solve it yourself, too. It's the PM's job to tell us not to.
The main reasons you don't see this in other languages is they don't have so many developers, and their packaging ecosystems are generally waaay higher friction. Rust is just as easy, but way higher skill level. Python is... not awful but it's definitely still a pain to publish packages for. C++, yeah why even bother.
If Python ever official adopts uv and we get a nice `uv publish` command then you will absolutely see the same thing there.
If you NPM import that's now part of your SCA/SBOM/CI to monitor and keep secure.
If you write code, it's now your problem to secure and manage.
The observation is real however. But every culture develops its own quirks and ideas, and for some reason this has just become a fundamental part of Javascript's. It's hard to know why after the fact, but perhaps it could spark the interest of sociologists who can enlighten us.
After an npm incident in 2020 I wrote up my thoughts. I argue that this anxiety is actually somewhat unique to JS which is why we don't see a similar culture in other languages ecosystems
https://crabmusket.net/java-scripts-ecosystem-is-uniquely-pa...
Basically, the sources of paranoia in the ecosystem are
1. Weak dynamic typing
2. Runtime (browser engine) diversity and compatibility issues
3. Bundle size (the "physics" of code on a website)
In combination these three things have made JS's ecosystem really psychologically reliant on other people's code.
The rust docs, a static site generator, pull in over 700 packages.
Because it’s trivial and easy
Another one for “web3 is going great”…
Was caught quickly (hours? hard to be sure, the versions have been removed/overwritten).
Attacker owns npmjs.help domain.
Kinda "proud" on it haha :D
Not sure if that would be a better result in the end. It seems like it depends on who has direct dependencies and how much testing they do. Do they pass it on or not?
the actual code only runs in a browser context - it replaces all crypto addresses in many places with the attacker's.
a list of the attacker's wallet addresses: https://gist.github.com/sindresorhus/2b7466b1ec36376b8742dc7...
So let me raise a different concern. This looks like an exploit for web browsers, where an average user (and most above average users) have no clue as to what's running underneath. And cryptocurrency and web3 aren't the only sensitive information that browsers handle. Meaning that similar exploits could arise targeting any of those. With millions of developers, someone is bound to repeat the same mistake sooner or later. And with some packages downloaded thousands of times per day, some CI/CD system will pull it in and publish it in production. This is a bigger problem than just a developer's oversight.
- How do end users protect themselves at this point? Especially the average user?
- How do you prevent supply chain compromises like this?
- What about other language registries?
- What about other platforms? (binaries, JVM, etc?)
This isn't a rhetorical question. Please discuss the solutions that you use or are aware of.
Unless this is a situation that could've been easily avoided with a password manager since the link was from a website not in your manager's database, so can't happen to anyone following security basics, and the point of discussing the oversight instead of just giving up is to increase the share of people who follow the basics?
Don't use unregulated financial products. The likelihood of a bank being hit by this isn't zero - but in most parts of the world they would be liable and the end user would be refunded.
> How do you prevent supply chain compromises like this?
Strictly audit your code.
There's no magic answer here. Oh, I'm sure you can throw an LLM at the problem and hope that the number of false positives and false negatives don't drown you. But it comes down to having an engineering culture which moves slowly and doesn't break things.
Why a package with 10+ million weekly downloads can just be "updated" like this is beyond me. Have a waiting period. Make sure you have to be explicit. Use dates. Some of the packages hadn't been updated in 7 years and then we firehosed thousands of CI/CD jobs with them within minutes?
npm and most of these package managers should be getting some basic security measures like waiting periods. It would be nice if I could turn semver ranges off, to be honest, and force folks to actually publish new packages. I'm always bummed when a 4-layer-deep dependency just updates at 10 PM EST because that's when the open source guy had time.
Packages used to break all the time, but I guess things kind of quieted down and people stopped using semvers as much. Like I think major packages like React don't generally have "somedepend" : "^1.0.0" but go with "1.0.0"
I think npm and the community knew this day was coming and just hopes it'll be fixed by tooling, but we need fundamental change in how packages are updated and verified. The idea that we need to "quickly" rollout a security fix with a minor patch is a good idea in theory, but in practice that doesn't really happen all that often. My audit returns all kinds of minor issues, but its rare that I need it...and if that's the case I'll probably do a direct update of my packages.
Package-lock.json was a nice bandaid, but it shouldn't have been the final solution IMHO. We need to reduce semver usage, have some concept of package age/importance, and npm needs a scanner that can detect obviously obfuscated code like this and at least put the package in quarantine. We could also use some hooks in npm so that developers could write easy to control scripts to not install newer packages etc.
Yep. Also interesting how many automated security scanners picked this up right away ... but NPM itself can't be bothered, their attitude is "YOLO we'll publish anything"
You could imagine that a compromised pad-left package could read the contents of all password inputs on the page and send it to an attacker server, but if you don't let that package access the document, or send web requests, you can avoid this compromise.
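For the Node side of that idea, a toy sketch with the built-in vm module (with the usual caveat from the Node docs that vm is not a hard security boundary; it just illustrates the "no ambient capabilities" point):

    const vm = require('node:vm');

    // The dependency's code sees only what we put in the context:
    // no document, no fetch, no require, no process.
    const context = vm.createContext({ input: 'abc' });
    const padded = vm.runInContext('input.padStart(10, " ")', context);
    console.log(padded); // "       abc"

A real capability-based module system would have to enforce this at load time for every dependency, which is the hard part.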
- Install as little software as possible, use websites if possible.
- Keep important stuff (especially cryptocurrency) on a separate device.
- If you are working on a project that pulls 100s of dependencies from a package registry, put that project on a VM or container.
If I understood this correctly, this is an exploit for the browser.
[0]: https://gist.github.com/martypitt/0d50c350aa7f0fc73354754343...
grep -r "_0x112fa8"
"overrides": {
"chalk": "5.3.0",
"strip-ansi": "7.1.0",
"color-convert": "2.0.1",
"color-name": "1.1.4",
"is-core-module": "2.13.1",
"error-ex": "1.3.2",
"has-ansi": "5.0.1"
}
EDIT: This comment[1] suggests `npm audit` issue has now been resolved.[0] https://jdstaerk.substack.com/i/173095305/how-to-protect-you...
[1] https://github.com/chalk/chalk/issues/656#issuecomment-32676...
Edit: As of this morning, `npm audit` will catch this.
It actually calculates the Levenshtein distance between the legitimate address and every address in its own list. It then selects the attacker's address that is visually most similar to the original one.
This is a brilliant piece of social engineering baked right into the code. It's designed to specifically defeat the common security habit of only checking the first and last few characters of an address before confirming a transaction.
We did a full deobfuscation of the payload and analyzed this specific function. Wrote up the details here for anyone interested: https://jdstaerk.substack.com/p/we-just-found-malicious-code...
Stay safe!
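For anyone who doesn't want to wade through the obfuscated payload, the core of the trick is roughly this (a simplified sketch, not the deobfuscated code; `pickLookalike` is an illustrative name and the attacker-address list is the one linked elsewhere in this thread):

    // Classic dynamic-programming Levenshtein distance.
    function levenshtein(a, b) {
      const d = Array.from({ length: a.length + 1 }, (_, i) =>
        [i, ...new Array(b.length).fill(0)]);
      for (let j = 0; j <= b.length; j++) d[0][j] = j;
      for (let i = 1; i <= a.length; i++) {
        for (let j = 1; j <= b.length; j++) {
          d[i][j] = Math.min(
            d[i - 1][j] + 1,                                  // deletion
            d[i][j - 1] + 1,                                  // insertion
            d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
          );
        }
      }
      return d[a.length][b.length];
    }

    // Swap in the attacker address that looks most like the legitimate one.
    function pickLookalike(legitAddress, attackerAddresses) {
      return attackerAddresses.reduce((best, addr) =>
        levenshtein(addr, legitAddress) < levenshtein(best, legitAddress) ? addr : best);
    }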
I don't agree that the exuberance over the brilliance of this attack is warranted if you give this a moment's thought. The web has been fighting lookalike attacks for decades. This is just a more dynamic version of the same.
To be honest, this whole post has the ring of AI writing, not careful analysis.
No it doesn't?
It has been what, hours? since the discovery? Are you expecting them to spend time analysing it instead of announcing it?
Also, nearly everyone has AI editing content these days. It doesn’t mean it wasn’t written by a human.
I want no part of AI in any form of my communication, and I know many which espouse the same.
I will certainly agree on "many", but not "nearly everyone".
"we kindly ask that you complete this update your earliest convenience".
The email was included here: https://cdn.prod.website-files.com/642adcaf364024654c71df23/...
From this article: https://www.aikido.dev/blog/npm-debug-and-chalk-packages-com...
> Our package-lock.json specified the stable version 1.3.2 or newer, so it installed the latest version 1.3.3
As far as I've always understood, the lockfile always specifies one single, locked version for each dependency, and even provides the URL to the tarball of that version. You can define "x version or newer" in the package.json file, but if it updates to a new patch version it's updating the lockfile with it. The npm docs suggest this is the case as well: https://arc.net/l/quote/cdigautx
And with that, packages usually shouldn't be getting updated in your CI pipeline.
Am I mistaken on how npm(/yarn/pnpm) lockfiles work?
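(For reference, a typical entry in a v2/v3 package-lock.json pins all three things at once; the version here is just illustrative and the hash is elided:)

    "node_modules/debug": {
      "version": "4.3.4",
      "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz",
      "integrity": "sha512-…"
    }

So a CI run that installs from an intact lockfile with `npm ci` shouldn't silently pick up a newer patch release.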
In my experience, it's common for CI pipelines to be misconfigured in this way, and for Node developers to misunderstand what the lock file is for.
Thank you!
That's because they are being "replaced", in a sense!
When an industry doubles every 5 years, as web dev did for a long time, it follows almost by definition that the median developer has 5 years or less of experience: at any given moment, half of the workforce joined within the last doubling period. Sure, the old guard eventually gets to 10 or 15 years of experience, but they're simply outnumbered by an exponentially growing influx of total neophytes.
Hence the childish attitude and behaviour with everything to do with JavaScript.
And so, it seems, is everything else. Perhaps, this commentary adds no value — just old man yells at cloud stuff.
Changing the main command `npm install` after 7 years isn't really "stable". Anyway, didn't this replace versions, so locking wouldn't have helped either?
The lockfile includes a hash of the tarball, doesn't it?
The package.json specified the range ^1.3.2. If a newer version exists online that still satisfies that range (like 1.3.3 for ^1.3.2), npm install will often fetch that newer version and update your package-lock.json file automatically.
That’s how I understand it / that’s my current knowledge. Maybe there is someone here who can confirm/deny that. That would be great!
The npm team eventually seemed to settle on requiring someone to bring an RFC for this improvement, and the RFC someone did create has, I think, sat neglected in a corner ever since.
That way it's much harder to make one hash look like another.
Simple. Instead of forcing colour, one could retain a no colour option maybe?
Done. Solved.
Everything should have this option. I personally have no colour vision issues; I just find colour annoying in any output. There are a lot of people who prefer this too.
If you're the sort of person who would think about adjusting it to suit your sensitivity to this kind of attack, you're likely not the sort of person that the feature is trying to protect anyhow.
That's a lot of flexibility for doing clever color math that accounts for the types of colorblindness according to their prevalence.
I am sorry, but this is not due to not having a good standard library; this is just bad programming. Pure laziness. At this point, just blacklist every package starting with is-.
I believe if you pay money to certain repo maintainers like red hat you can still have a supported version of Python 2.7.
On one extreme, we have standards committees that move glacially, and on the other, we have a chaotic package ecosystem moving faster than is prudent. The two are related.
1) N tiny dubious modules like that are created by maintainers (like Qix)
2) The maintainer then creates 1 super useful non-tiny module that imports those N dubious modules.
3) Normal devs add that super useful module as a dependency… and ofc, they end up with countless dubious transitive dependencies
Why do maintainers do that? I don’t think it’s ignorance or laziness or lack of knowledge about good software engineering. It’s either ego (“I’m the maintainer of N packages with millions of downloads” sounds better than “I’m the maintainer of 1 package”), or because they get more donations, or because they are actually planning to drop malware some time soon.
> (function() { return Array.isArray(arguments); })()
false
Kudos to you for owning up to it.
As others have said, it's the kind of thing that could happen to anyone, unfortunately.
Got it from the "simple-swizzle" package that hasn't been taken down by NPM.
That page says that the affected versions are ">=0". Does that seem right? That page also says:
> Any computer that has this package installed or running should be considered fully compromised. All secrets and keys stored on that computer should be rotated immediately from a different computer. The package should be removed, but as full control of the computer may have been given to an outside entity, there is no guarantee that removing the package will remove all malicious software resulting from installing it.
Is this information accurate?
Edit: However, I think the reason the security advisory marks the entire package at the moment is that there is no mechanism in npm to notify users that a version with an exploit is currently installed. `npm audit` looks at the versions configured, not installed.
The security advisory triggering this warning forces everyone to reinstall packages today, in case 4.4.2 was installed.
- https://github.com/advisories/GHSA-hfm8-9jrf-7g9w
- https://github.com/advisories/GHSA-5g7q-qh7p-jjvm
- https://github.com/advisories/GHSA-8mgj-vmr8-frr6
- https://github.com/advisories/GHSA-m99c-cfww-cxqx
I wonder if they're all from the same thing, they all popped up at the same time.
edit: they do appear to all be the same thing, and the advisory version wildcard is wrong: https://github.com/github/advisory-database/issues/6099
Now? Why everyone isn't setting up their own GitHub mirrors is beyond me, almost. They were 100% right.
Once upon a time, I used a software called passwordmaker. Essentially, it computed a password like hash(domain+username+master password). Genius idea, but it was a nightmare to use. Why? Because amazon.se and amazon.com share the same username/password database. Similarly, the "domain" for Amazon's app was "com.amazon.something".
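A minimal sketch of that scheme (illustrative only, not passwordmaker's actual algorithm; a real tool would use a slow KDF rather than one SHA-256 pass):

const { createHash } = require('node:crypto');

// Deterministic: the same domain + username + master password always yields
// the same site password, which is exactly why amazon.se, amazon.com and
// "com.amazon.something" all produce different results despite sharing one
// account database.
function derivePassword(domain, username, masterPassword) {
  return createHash('sha256')
    .update(`${domain}:${username}:${masterPassword}`)
    .digest('base64')
    .slice(0, 20);
}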
Perhaps it's time for browser vendors to strongly bind credentials to the domain, the whole domain and nothing but the domain, so help me Codd.
Although I'll still be told that using single-header libraries and avoiding the C standard library are regressive and obsolete, so gotta wait 10 more years I guess.
XZ got hacked, it reached development versions of major distributions undetected, right inside an _ssh_, and it only got detected because someone luckily noticed and investigated slow ssh connections.
Still some C devs will think it's a great time to come out and boast about their practices and tooling. :shrug:
For xz, an advanced persistent threat inserted hyper-targeted, self-modifying code into a tarball.
A single npm dev was "hacked" (phished) by a moderate-effort (presumably drive-by) crypto thief.
I have no idea what you meant by "right inside _ssh_" but I don't think that's a good description of what actually happened in any possible case.
I'm unlikely to defend C development practices, but this doesn't feel like an indictment of C; if anything, the NPM ecosystem looks worse by this comparison. Especially considering the comment you replied to was advocating for minimizing dependencies, which, if the distros affected by the xz compromise had followed (instead of patching sshd), would have meant they never shipped a compromised version.
That sounds great in theory. In practice, NPM is very, very buggy, and some of those bugs impact pulling deps from git repos. See my issue here: https://github.com/npm/cli/issues/8440
Here's the history behind that:
Projects with build steps were silently broken as late as 2020: https://github.com/npm/cli/issues/1865
Somehow no one thought to test this until 2020, and the entire NPM user base either didn't use the feature, or couldn't be arsed to raise the issue until 2020.
The problem gets kinda sorta fixed in late 2020: https://github.com/npm/pacote/issues/53
I say kinda sorta fixed, because somehow they only fixed (part of) the problem when installing a package from git non-globally -- `npm install -g whatever` is still completely broken. Again, somehow no one thought to test this, I guess. The issue I opened, which I mentioned at the very beginning of this comment, addresses this bug.
Now, I say "part of the problem" was fixed because the npm docs blatantly lie to you about how prepack scripts work, which requires a workaround (which, again, only helps when not installing globally -- that's still completely broken); from https://docs.npmjs.com/cli/v8/using-npm/scripts:
prepack
- Runs BEFORE a tarball is packed (on "npm pack", "npm publish", and when installing a git dependencies).
Yeah, no. That's a lie. The prepack script (which would normally be used for triggering a build, e.g. TypeScript compilation) does not run for dependencies pulled directly from git. Speaking of TypeScript, the TypeScript compiler developers ran into this very problem and have adopted this workaround, which is to invoke a script from the npm prepare script, which in turn does some janky checks to guess if the execution is occurring from a source tree fetched from git, and if so, it explicitly invokes the prepack script, which then kicks off the compiler and such. This is the workaround they use today:
https://github.com/cspotcode/workaround-broken-npm-prepack-b...
... and while I'm mentioning bugs, even that has a nasty bug: https://github.com/cspotcode/workaround-broken-npm-prepack-b...
Yes, if the workaround calls `npm run prepack` and the prepack script fails for some reason (e.g. a compiler error), the exit code is not propagated, so `npm install` will silently install the respective git dependency in a broken state.
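For illustration, the shape of that workaround (file and directory names here are hypothetical; the actual workaround linked above has more checks):

// prepare.js -- wired up as the "prepare" script; guesses whether we are
// being installed from a git checkout (sources present, build output absent)
// and, if so, runs prepack manually because npm won't.
const { existsSync } = require('node:fs');
const { execSync } = require('node:child_process');

if (existsSync('src') && !existsSync('dist')) {
  // execSync throws on a non-zero exit, so a failed build fails "prepare"
  // here; the bug in the real workaround is that it loses that exit code.
  execSync('npm run prepack', { stdio: 'inherit' });
}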
How no one looks at this and comes to the conclusion that NPM is in need of better stewardship, or ought to be entirely supplanted by a competing package manager, I dunno.
But maybe I'm misunderstanding the feature
So I guess a lot more accounts/packages might be affected than the ones stated in the article
If you're doing financial transactions using a big pile of NPM dependencies, you should IMHO be financially liable for this kind of thing when your users get scammed.
Luckily some of them actually import the packages to a local distribution point and check them first.
https://www.businessinsider.com/npm-ceo-bryan-bogensberger-r...
https://www.businessinsider.com/npm-cofounder-laurie-voss-re...
Why in the world would they NEED to stop? It apparently doesn't harm their "business"
What kind of crazy AI could possibly have noticed that on the NPM side?
This is frustrating as someone that has built/published apps and extensions to other software providers for years and must wait days or weeks for a release to be approved while it's scanned and analyzed.
For all the security wares that MS and GitHub sell, NPM has seen practically no investment over the years (e.g. just go review the NPM security page... oh, wait, where?).
> Things were fine before they became mainstream
As in, things were fine before we had commonplace tooling to fetch third party software?
> package files that are set to grab the latest version
The three primary Node.js package managers all create a lockfile by default.
> As in, things were fine before we had commonplace tooling to fetch third party software?
Yes. The languages without a dominant package manager (basically C and C++) are the only ones that have self-contained libraries that you can just drag into your source tree.
This is how you write good libraries - as can be seen by the fact that for many problems, there's a powerful C (or C++, but usually C) library with minimal (and usually optional) dependencies, that is the de-facto standard, and has bindings for most other languages. Think SDL, ffmpeg, libcurl, zlib, libpng/jpeg, FreeType, OpenSSL, etc, etc.
That's not the case for libraries written in JS, Python, or even other compiled languages like Go and Rust - libraries written in those languages come with a dependency tree, and are never ported to other languages.
Node.js proper has floated the idea of including chalk into the standard libraries, FWIW.
Oh my word please no! Every time I run into an issue where a dependency suddenly isn’t logging colors like it’s supposed to, it always boils down to chalk trying to do something fancy to handle an edge case that doesn’t actually exist. Just log the dang colors!
Is there a runnable command to determine if the package list has a compromised version of anything?
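Not an official one, as far as I know, but a rough check against the list above (assuming npm and a package-lock.json) could look like:

# Show which versions of the affected packages are actually installed:
npm ls chalk debug ansi-styles strip-ansi color-convert color-name
# Or grep the lockfile for a known-bad tarball, e.g. debug 4.4.2:
grep -n 'debug-4.4.2.tgz' package-lock.json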
[1] https://www.debian.org/doc/manuals/securing-debian-manual/de...
So if we're discussing anything here, why not what this reason is, instead of everyone praising their favourite package registry?
https://github.com/npm/npm/pull/4016#issuecomment-76316744
https://news.ycombinator.com/item?id=38645969
https://github.com/npm/cli/commit/5a3b345d6d5d175ea9ec967364...
Good.
export default function ansiRegex({onlyFirst = false} = {}) {
// Valid string terminator sequences are BEL, ESC\, and 0x9c
const ST = '(?:\\u0007|\\u001B\\u005C|\\u009C)';
// OSC sequences only: ESC ] ... ST (non-greedy until the first ST)
const osc = `(?:\\u001B\\][\\s\\S]*?${ST})`;
// CSI and related: ESC/C1, optional intermediates, optional params (supports ; and :) then final byte
const csi = '[\\u001B\\u009B][[\\]()#;?]*(?:\\d{1,4}(?:[;:]\\d{0,4})*)?[\\dA-PR-TZcf-nq-uy=><~]';
const pattern = `${osc}|${csi}`;
return new RegExp(pattern, onlyFirst ? undefined : 'g');
}
I guess having some cool-down period after strange profile activity (e.g. you've suddenly logged in from China instead of Germany) before you're allowed to add another signing key would help, but other than that?
It removes _most_ of the release friction while still adding the "human has acknowledged the release" bit.
In the attack described above, the attacker did not have access to the victim's email address.
Hell no. CI needs to be a clean environment, without any human hands in the loop.
Publishing to public registries should require a chain of signatures. CI should refuse to build artifacts from unsigned commits, and CI should attach an additional signature attesting that it built the final artifact based on the original signed commit. Public registries should confirm both the signature on the commit and the signature on the artifact before publishing. Developers without mature CI can optionally use the same signature for both the source commit and the artifact (i.e. to attest to artifacts they built on their laptop). Changes to signatures should require at least 24 hours to apply and longer (72 hours) for highly popular foundation packages.
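As a rough sketch of that chain in a CI job (the tool choices here, git commit-signature verification plus a detached GPG signature over the tarball, are illustrative and not any registry's actual policy):

# Refuse to build from an unsigned or badly signed commit:
git verify-commit HEAD || { echo "unsigned commit; refusing to build" >&2; exit 1; }
# Build the artifact, then attest that this CI produced it from that commit:
npm pack
for f in *.tgz; do gpg --armor --detach-sign "$f"; done
# A registry implementing this policy would verify both the commit signature
# and the artifact signature before accepting the publish.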
https://github.blog/changelog/2025-07-01-dependabot-supports...
Is there a way to not accept any package version less than X months old? It's not ideal because malicious changes may still have gone undetected in that time span.
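npm does have a knob in this direction: the `before` config resolves only to versions published on or before a given date, so you can compute the "X months ago" cutoff yourself (it's a fixed date, not a rolling window):

# One-off:
npm install --before=2025-08-08
# Or persistently in .npmrc:
before=2025-08-08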
Time to deploy AI to automatically inspect packages for suspect changes.
The GitHub page (https://github.com/advisories/GHSA-hfm8-9jrf-7g9w) says to treat the computer as compromised. What does this mean? Do I have to do a full reset to be sure? Should I avoid running the app until the version is updated?
>Any computer that has this package installed or running should be considered fully compromised. All secrets and keys stored on that computer should be rotated immediately from a different computer. The package should be removed, but as full control of the computer may have been given to an outside entity, there is no guarantee that removing the package will remove all malicious software resulting from installing it.
It sounds like the package then somehow executes and invites other software onto the machine. If something else has executed then anything the executing user has access to is now compromised.
This incident would be much more severe if the code actually stole envs etc., because a lot of packages depend on debug with a wildcard version.
1. The version matching was wrong (now fixed).
2. The warning message is (still) exaggerated, imo, though I understand why they’d pass the liability downstream by doing so.
- Don't update dependencies unless necessary
- Don't use `npm` to install NPM packages, use Deno with appropriate sandboxing flags (see the sketch after this list)
- Sign up for https://socket.dev and/or https://www.aikido.dev
- Work inside a VM
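On the Deno point, a minimal sketch of what "appropriate sandboxing flags" can look like (flags illustrative; a given tool may need more, e.g. --allow-env):

# Run an npm-hosted CLI under Deno's permission sandbox: no network, no env,
# file access limited to the current project directory.
deno run --allow-read=. --allow-write=. npm:prettier --write .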
And get yourself drowning in insurmountable technical debt in about two months.
The JS ecosystem moves at an extremely fast pace, and if you don't upgrade packages (semi-)daily you might inflict a lot of pain on yourself once a certain number of packages start to contain incompatible version dependencies. It sucks a lot, I know.
So I'd recommend staying on top of the dependencies, and for different stacks this means a different update schedule. For some, daily is indeed a good choice.
Somehow we've survived without updating dependencies for probably at least a year.
Other than that, you now probably have insurmountable technical debt, and upgrading the dependencies is a project in itself.
All the above applies to JavaScript world, of course. It's much different for the rest.
content-security-policy: default-src 'self';
(and not sending crypto transactions): No need to worry about CVEs in js.
Again, this is not the failure of a single person. This is a failure of the software industry. Supply chain attacks have gigantic impacts. Yet these are all solved problems. Somebody just has to implement the standard security measures that prevent these compromises. We're software developers... we're the ones to implement them.
Every software packaging platform on the planet should already require code signing, artifact signing, user account attacker access detection heuristics, 2FA, etc. If they don't, it's not because they can't, it's because nobody has forced them to.
These attacks will not stop. With AI (and continuous proof that they work) they will now get worse. Mandate software building codes now.
You’re right, this will only get a lot worse.
It's not that simple. You can implement the most stringent security measures, and ultimately a human error will compromise the system. A secure system doesn't exist because humans are the weakest link.
So while we can probably improve some of the processes within npm, phishing attacks like the ones used in this case will always be a vulnerability.
You're right that AI tools will make these attacks more common. That phishing email was indistinguishable from the real thing. But AI tools can also be used to scan and detect such sophisticated attacks. We can't expect to fight bad actors with superhuman tools at their disposal without using superhuman tools ourselves. Fighting fire with fire is the only reasonable strategy.
These can exclude a lot of common systems and software, including automations. If your heuristic is quite naive like "is using Linux" or "is using Firefox" or "has an IP not in the US" you run into huge issues. These sound stupid, because they are, but they're actually pretty common across a lot of software.
Similar thing with 2FA. SMS isn't very secure, email primes you for phishing, TOTP is good... but it needs to be an open standard, otherwise we're just doing the "exclude users" thing again. TOTP is still phishable, though. Only hardware attestation isn't, but that's a huge red flag and I don't think NPM could do that.
The attacks are still possible, but they're not going to be nearly as easy here.
This attack would have 100% been thwarted, when a load of emails appeared saying "publish package you just uploaded?".
(if you read the dev's account of this, you'll see this would have worked)
If I could have a publish token / OIDC auth in CI that required an additional manual approval in the web UI before the package was actually published, I could imagine this working well.
It would help reduce risk from CI system breaches as well.
There are already "package published" notification emails, it's just at that point it's too late.
Sadly, programming language package managers have normalized the idea that everyone who uses the package manager should be exposed to every random package and release from random strangers with no moderation. This would be unthinkable for a Linux distribution. (You can of course add 3rd-party Linux package repositories, unstable release branches, etc, which should enforce the same type of rules, but they don't have to)
Linux distros are still vulnerable to supply chain attacks though. It's very rare but it has happened. So regardless of the release process, you need all the other mitigations to secure the supply chain. And once they're set up it's all pretty automatic and easy (I use them all day at work).
This will probably be reined in soon. Many companies I know are backing away from npm/node, and even composer. It's just too risky an ecosystem.
I don't disagree, but this sentence is doing a lot of heavy lifting. See also "draw the rest of the owl".
Hey, that's a pretty good reproduction of npmjs
As for developers, trusting a plugin that reaches out to an external location to determine the reputation of every website they visit seems like a harder sell, though.
Return-Path: <ndr-6be2b1e0-8c4b-11f0-0040-f184d6629049@mt86.npmjs.help>
X-Original-To: martin@minimum.se
Delivered-To: martin@minimum.se
Received: from mail-storage-03.fbg1.glesys.net (unknown [10.1.8.3]) by mail-storage-04.fbg1.glesys.net (Postfix) with ESMTPS id 596B855C0082 for <martin@minimum.se>; Mon, 8 Sep 2025 06:47:25 +0200 (CEST)
Received: from mail-halon-02.fbg1.glesys.net (37-152-59-100.static.glesys.net [37.152.59.100]) by mail-storage-03.fbg1.glesys.net (Postfix) with ESMTPS id 493F2209A568 for <martin@minimum.se>; Mon, 8 Sep 2025 06:47:25 +0200 (CEST)
X-SA-Rules: DATE_IN_PAST_03_06,DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,FROM_FMBLA_NEWDOM,HTML_FONT_LOW_CONTRAST,HTML_MESSAGE,MIME_HTML_ONLY,SPF_HELO_NONE,SPF_PASS
X-RPD-Score: 0
X-SA-Score: 1.1
X-Halon-ID: e9093e1f-8c6e-11f0-b535-1932b48ae8a8
Received: from smtp-83-4.mailtrap.live (smtp-83-4.mailtrap.live [45.158.83.4]) by mail-halon-02.fbg1.glesys.net (Halon) with ESMTPS id e9093e1f-8c6e-11f0-b535-1932b48ae8a8; Mon, 08 Sep 2025 06:47:23 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; x=1757637200; d=smtp.mailtrap.live; s=rwmt1; h=content-transfer-encoding:content-type:from:to:subject:date:mime-version: message-id:feedback-id:cfbl-address:from; bh=46LbKElKI+JjrZc6EccpLxY7G+BazRijag+UbPv0J3Y=; b=Dc1BbAc9maHeyNKed/X7iAPabcuvlgAUP6xm5te6kkvGIJlame8Ti+ErH8yhFuRy/xhvQTSj8ETtV f3AElmzHDWcU3HoD/oiagTH9JbacmElSvwtCylHLriVeYbgwhZVzTm4rY7hw/TVqNE5xIZqWWCMrVG wi+k9uY+FUIQAh7Ta2WiPk/A4TPh04h3PzA50zathvYcIsPC0iSf7BBE+IIjdLXzDzNZwRmjgv2ZHW GAx/FRCPFgg0PbVvhJw98vSHnKmjPO/mmcotKFG+MUWkCtTu28Mm46t7MI7z5PrdCXZDA7L1nVnIwE ffIf0zED32Z6tFSJFNmYgFZlD6g+DnQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; x=1757637200; d=npmjs.help; s=rwmt1; h=content-transfer-encoding:content-type:from:to:subject:date:mime-version: message-id:feedback-id:cfbl-address:from; bh=46LbKElKI+JjrZc6EccpLxY7G+BazRijag+UbPv0J3Y=; b=DyWvxSOjMf7WfCVtmch+zw63kZ/OOBjcWnh1kIYs/hozgemb9mBIQCMqAdb4vSZChoW5uReVH5+k5 Jaz7UodbPJksVkYWqJOVg6nyx5EaYMYdgcw1+BCct/Sf2ceFwWurhupa6y3FBTFWBYLhcsAXERlx2l IuxWlpZoMDEBqDxjs8yvx/rkBrcd/2SNTcI+ooKJkrBIGBKuELOd3A5C6jlup6JNA4bE7vzP3FUfKw y0357UMnn45zWHm9HvudO4269FRlNjpiJaW7XF1/ANVrnDlNWfUGNQ5yxLZqmQDTtxFI7HcOrF3bTQ O/nrmVOvN9ywMvk/cJU4qGHqD9lT32A==
CFBL-Address: fbl@smtp.mailtrap.live; report=arf
X-Report-Abuse-To: abuse@mailtrap.io
Received: from npmjs.help by smtp.mailtrap.live with ESMTPSA 6aee9fff-8c4b-11f0-87bb-0e939677d2a1; Mon, Sep 08 2025 00:33:20 GMT
Feedback-ID: ss:770486:transactional:mailtrap.io
Message-ID: <6be2b1e0-8c4b-11f0-0040-f184d6629049@npmjs.help>
X-Mt-Data: bAX0GlwcNW6Dl_Qnkf3OnU.GLCSjw_4H01v67cuDIh2Jkf52mzsVFT_ZEVEe0W6Lf3qzW2LP_TCy93I46MCsoT0pB9HozQkvCw22ORSCt3JBma1G3v9aDEypT1DLmyqlb6hYLF3H7tJCgcxTU5pbijyNaOFtoUMdiTA6jxaONeZbBj.SKUa5CLT5TMpeNHG6oGIiY_jqlU.nQkxGPY3v9E34.Nz4ga8p9Pd_BplftaE~--2CLrluJMY65S5xFl--IISg0olYJu6DVyVDEcJ.AQ~~
MIME-Version: 1.0
Date: Mon, 08 Sep 2025 00:33:20 +0000
Subject: Two-Factor Authentication Update Required
To: "molsson" <martin@minimum.se>
From: "npm" <support@npmjs.help>
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
All these Chrome, VSCode, Discord, Electron-apps, browser extensions, etc – they all update ± every week, and I can't even tell what features are being added. For comparison, Sublime updates once a YEAR and I'm totally fine with that.
There are ways to detect a replaced/proxied global window function too, and that's another arms race.
Like the need to constantly explain himself because of one single blunder.
It shows how much so many open source projects rely on dependencies owned by a single person, who can be pwned (and maybe hacked too).
Everyone can get pwned, I suppose. From a more technical perspective though: given how often I hear AI, AI & AI hype, couldn't something like deno / node / bun just give a warning when they think code might be malware? Or maybe the idea could be a stable release channel, say on things like Debian, verified by external contributors, so that instead of the node world moving towards @latest we move towards something like @verified, which takes builds/source from something Debian-maintained, or something along those lines...
I hope people can understand that the author is a human too and we should all treat him as such; let's treat him with kindness, because I can't imagine what he might be going through, as I said. Would love a more technical breakdown once things settle and we can postmortem this whole situation.
Why does the mobile app use a completely different domain? Who designed this thing?
It's not easy to be 100% vigilant 100% of the time against attacks deliberatly crafted to fall for them. All it takes is a single well crafted attack that strikes when you're tired and you're done.
Password managers themselves have had vulnerabilities, browser autofill can fail, and phishing can bypass even well-trained users if the attack is convincing enough.
Good hygiene (password managers, MFA, domain awareness) certainly reduces risk, but it doesn’t eliminate it. Framing security only as a matter of 'individual responsibility' ignores that attackers adapt, and that humans are not perfect computers. A healthier approach would be: encourage best practices, but also design systems that are resilient when users inevitably make mistakes.
Most of those attacks are detected and fixed quickly, because a lot of people check newly published packages. Also the owners and contributors notice it quickly. But a lot of consumers of the package just install the newest release. With some grace period those attacks would be less critical.
> Used by 9.9m
I use bun, but similar could be done with npm
Add to .bashrc (a shell function rather than an alias, so "$@" forwards arguments correctly):
bun() {
  docker run --rm -it -u "$(id -u):$(id -g)" -p 8080:8080 \
    -v "$PWD":/app -w /app my-bun bun "$@"
}
then you can use the `bun` command as usual.
Dockerfile:
FROM oven/bun:1 AS base
VOLUME [ "/app" ]
EXPOSE 8080/tcp
WORKDIR /app
# Add your custom libs
# RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install \
# ... \
Build the image once: $ docker build -t "my-bun" -f "Dockerfile" .
How long before npm mandates phishing-resistant MFA? At least for accounts that can publish packages with this many downloads.
https://github.com/naugtur/running-qix-malware?tab=readme-ov...
Also, the package 1.3.3 has been downloaded 0 times according to npmjs.com; how has the writer of this article been able to detect this without incrementing the download counter?
As for the “0 downloads” count: npm’s stats are not real-time. There’s usually a delay before download numbers update, and in some cases the beta UI shows incomplete data. Our pipeline picked up the malicious version because npm install resolved to it based on semver rules, even before the download stats reflected it. Running the build locally reproduced the same issue, which is how we detected it without necessarily incrementing the public counter immediately.
You may also be interested in npm package provenance [1], which lets you sign your published npm builds to prove they were built directly from the source being displayed.
This is something ALL projects should strive to set up, especially if they have a lot of dependent projects.
1: https://github.blog/security/supply-chain-security/introduci...
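For reference, the publishing side of provenance is a single flag when run from a supported CI (e.g. GitHub Actions with OIDC / id-token permissions); a minimal sketch:

# Attaches a signed attestation linking the published tarball to the source
# repository, commit, and workflow that built it.
npm publish --provenance --access public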