https://inspector.pypi.io/project/litellm/1.82.8/packages/fd...
The previous version triggers on `import litellm.proxy`
Again, all according to the issue OP.
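Since the payload reportedly fires on `import litellm.proxy`, one way to look at a suspect release without triggering anything is to treat the wheel as a plain zip instead of installing it. A minimal sketch (the helper name and wheel path are illustrative, not from the thread):

```python
# Sketch: list members of a wheel without installing or importing it,
# so nothing under litellm/proxy gets a chance to run.
import zipfile

def wheel_members(path: str, needle: str = "proxy") -> list[str]:
    """Return archive members whose path contains `needle`."""
    with zipfile.ZipFile(path) as whl:
        return [n for n in whl.namelist() if needle in n]

# Example usage (wheel obtained via `pip download litellm --no-deps`):
# print(wheel_members("litellm-1.82.8-py3-none-any.whl"))
```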
I would expect a better spam detection system from GitHub. This is hardly acceptable.
https://github.com/krrishdholakia/blockchain/commit/556f2db3...
- # blockchain
- Implements a skeleton framework of how to mine using blockchain, including the consensus algorithms.
+ teampcp owns BerriAI

I hope that everyone's course of action will be uninstalling this package permanently, and avoiding the installation of packages similar to this.
To reduce supply chain risk, you need to evaluate not only the vendor (even if the software is gratis and open source) but also the advantage the dependency actually provides.
Exposing yourself to supply chain risk for an HTTP server dependency is natural. But exposing yourself to it for is-odd, or whatever this is, is not worth it.
Remember that you are programmers and you can just program. You don't need a framework; you are already using the API of an LLM provider. Don't put a hat on a hat, don't get killed for nothing.
And even if you weren't using this specific dependency, check your deps. You might have shit like this in your requirements.txt and were merely saved by chance.
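If you want to do that check programmatically, here is a minimal sketch. The one known-bad version below is the one named in the PyPI inspector link at the top; the dict is otherwise yours to fill from whatever advisories apply to your stack:

```python
# Sketch: flag installed packages that match known-compromised releases.
# litellm 1.82.8 is taken from the inspector link above; extend
# KNOWN_BAD with entries relevant to your own dependency list.
from importlib import metadata

KNOWN_BAD = {"litellm": {"1.82.8"}}

def find_compromised():
    hits = []
    for name, bad_versions in KNOWN_BAD.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            continue  # not installed: nothing to flag
        if installed in bad_versions:
            hits.append((name, installed))
    return hits

print(find_compromised())
```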
An additional note: the dev will probably post a post-mortem about what was learned and how it was fixed, and maybe downplay the thing. Ignore that; the only reasonable step after this is closing the repo, but there's no incentive to do that.
Programming against different LLM APIs is a hassle; this library made it easy by exposing one single API you call, and behind the scenes it handled all the different API calls needed for the different LLM providers.
This is like a couple hours of work even without vibe coding tools.
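To give a sense of scale, here is a minimal sketch of the "one API, many providers" dispatch idea. The payload shapes below are simplified assumptions, not the providers' full wire formats:

```python
# Sketch: one entry point that builds a provider-specific request.
# Endpoints are real provider URLs; payloads are deliberately simplified
# (no auth headers, streaming, tool calls, etc.).
def build_request(provider: str, model: str, prompt: str) -> dict:
    if provider == "openai":
        return {
            "url": "https://api.openai.com/v1/chat/completions",
            "json": {"model": model,
                     "messages": [{"role": "user", "content": prompt}]},
        }
    if provider == "anthropic":
        return {
            "url": "https://api.anthropic.com/v1/messages",
            "json": {"model": model, "max_tokens": 1024,
                     "messages": [{"role": "user", "content": prompt}]},
        }
    raise ValueError(f"unknown provider: {provider}")

print(build_request("openai", "gpt-4o", "hello")["url"])
```

Handling retries, streaming, and auth adds some bulk, but the core dispatch really is this small.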
Now, I am not worried about the AI API keys doing much damage, but I am thinking one step further, and I am not sure how many of these corporations follow their privacy policies. Perhaps someone more experienced can tell me: wouldn't these applications keep logs for legal purposes, and couldn't those logs contain sensitive information about businesses, and perhaps private individuals too?
Irrevocable transfers... What could go wrong?
First Trivy (which got compromised twice), now LiteLLM.
The package was directly compromised, not “by supply chain attack”.
If you use the compromised package, your supply chain is compromised.
Basically it forkbombed `grep -r "rpcuser\|rpcpassword"` processes trying to find crypto wallets or something. I saw that they spawned from the harness, and killed it.
Got lucky, no backdoor installed here from what I could make out of the binary.
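A sketch of the kind of check described above: scan `/proc` (Linux only) for grep processes whose command line hunts for wallet credentials. The needle list comes straight from the comment; everything else is illustrative:

```python
# Sketch: find running grep processes searching for rpcuser/rpcpassword,
# as in the forkbomb behavior described above. Linux-only (/proc).
import os

def suspicious_grep_pids(needles=("rpcuser", "rpcpassword")):
    if not os.path.isdir("/proc"):  # not Linux: nothing to scan
        return []
    pids = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/cmdline", "rb") as f:
                cmd = f.read().replace(b"\x00", b" ").decode(errors="replace")
        except OSError:
            continue  # process exited mid-scan
        if "grep" in cmd and any(n in cmd for n in needles):
            pids.append(int(entry))
    return pids

print(suspicious_grep_pids())
```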
This would also disable site import so not viable generically for everyone without testing.
I guess I am lucky, as I have Watchtower automatically update all my containers to the latest image every morning if there are new versions.
I also just added it to my homelab this Sunday, I guess that's good timing haha.
The possibilities of a well-resourced threat could be catastrophic, if we assume nation-states are interested in sponsoring hacking attacks (which many nations already do) to attack enemy nations or gain leverage. We are looking at damage in the trillions at that point.
But I would assume that Linux might be safe for now; it might be the most looked-at code, and it's definitely something safe.
LLVM might be a bit more interesting, as something there might go a little unnoticed, but hopefully the people working on LLVM have enough funding to look at everything carefully and not have such a slip-up.
However, the broader problem of supply chain attacks remains challenging, and AI doesn't really change how you should treat it. For example, the xz-utils backdoor, planted in the build system to attack OpenSSH on the many popular distros that patched it to depend on systemd, predates AI, and that's just the attack we know about because it was caught. Maybe AI helps with the scale of such attacks, but I haven't heard anyone propose any kind of solution that would actually improve the reliability and robustness of everything.
[1] Fully Countering Trusting Trust through Diverse Double-Compiling https://arxiv.org/abs/1004.5534
I'm sensing a pattern here, hmm.
Do the labs tag code versions with an associated CVE to mark them as compromised (telling the model what NOT to do)? Do they build adversarial RL environments to teach what's good/bad? I'm very curious, since it's inevitable that some pwned code ends up as training data no matter what.
I assume most labs don't do anything to deal with this, and just hope that it gets trained out because better code should be better rewarded in theory?
That's why I'm building https://github.com/kstenerud/yoloai
Domains might get added to a list for things like 1.1.1.2, but as you can imagine that has much smaller coverage; not everyone uses something like this in their DNS infra.
Basically, have all releases require multi-factor auth from more than one person before they go live.
A single person being compromised, either technically or by being hit on the head with a wrench, should not be able to release something malicious that affects so many people.
Since they all seem positive, it doesn't seem like an attack, but I thought the general etiquette for GitHub issues was to use the emoji reactions to show support, so the comment thread only contains substantive comments.
> It also seems that attacker is trying to stifle the discussion by spamming this with hundreds of comments. I recommend talking on hackernews if that might be the case.
https://github.com/calebfaruki/tightbeam https://github.com/calebfaruki/airlock
This is literally the thing I'm trying to protect against.
1. Looks like this originated from the Trivy used in our CI/CD - https://github.com/search?q=repo%3ABerriAI%2Flitellm%20trivy... https://ramimac.me/trivy-teampcp/#phase-09
2. If you're on the proxy Docker image, you were not impacted. We pin our versions in the requirements.txt.
3. The package is in quarantine on PyPI - this blocks all downloads.
We are investigating the issue, and seeing how we can harden things. I'm sorry for this.
- Krrish
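On point 2 above: pinning by version alone still trusts whatever artifact the index serves for that version. pip's hash-checking mode (`--require-hashes`) additionally rejects a swapped artifact. A sketch, where both the version and the digest are placeholders, not real values:

```
# requirements.txt -- exact version plus expected artifact hash;
# `pip install --require-hashes -r requirements.txt` refuses any
# download whose sha256 doesn't match (placeholder digest shown)
litellm==1.82.7 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Real digests can be generated with `pip hash` or a lockfile tool, so a compromised release re-uploaded under a pinned version number would still fail the install.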
https://github.com/BerriAI/litellm/issues/24512#issuecomment...