AIsbom – open-source CLI to detect "Pickle Bombs" in PyTorch models

https://github.com/Lab700xOrg/aisbom
25•lab700xdev•1h ago

Comments

lab700xdev•1h ago
Hi HN,

I’ve been working with ML infrastructure for a while and realized there’s a gap in the security posture: we scan our requirements.txt for vulnerabilities, but blindly trust the 5GB binary model files (.pt) we download from Hugging Face.

Most developers don't realize that standard PyTorch files are just Zip archives containing Python Pickle bytecode. When you run torch.load(), the unpickler executes that bytecode. This allows for arbitrary code execution (RCE) inside the model file itself - what security researchers call a "Pickle Bomb."

I built AIsbom (AI Software Bill of Materials) to solve this without needing a full sandbox.

How it works:
1. It inspects the binary structure of artifacts (PyTorch, Pickle, Safetensors) without loading weights into RAM.
2. For PyTorch/Pickles, it uses static analysis (via pickletools) to disassemble the opcode stream.
3. It looks for GLOBAL or STACK_GLOBAL instructions referencing dangerous modules like os.system, subprocess, or socket.
4. It outputs a CycloneDX v1.6 JSON SBOM compatible with enterprise tools like Dependency-Track.
5. It also parses .safetensors headers to flag "Non-Commercial" (CC-BY-NC) licenses, which often slip into production undetected.

It’s open source (Apache 2.0) and written in Python/Typer.
Repo: https://github.com/Lab700xOrg/aisbom
Live Demo (Web Viewer): https://aisbom.io

Why I built a scanner? https://dev.to/labdev_c81554ba3d4ae28317/pytorch-models-are-...

I’d love feedback on the detection logic (specifically safety.py) or if anyone has edge cases of weird Pickle protocols that break the disassembler.
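One way to implement the opcode scan described in steps 2 and 3 above, plus the "allowlist mode" lab700xdev mentions in a reply, as an added example that is not aisbom's own code (the allowlist contents, the helper names and the sample byte strings are assumptions constructed for this demo):

```python
import io
import pickle
import pickletools
import collections
import zipfile

# Assumption: illustrative allowlist of module prefixes; aisbom's real list in safety.py may differ.
ALLOWED_MODULES = ("torch", "numpy", "collections", "builtins")
# Opcodes that push a str; STACK_GLOBAL (protocol 4+) consumes the two most recent ones.
STRING_OPS = {"SHORT_BINUNICODE", "BINUNICODE", "BINUNICODE8", "UNICODE",
              "STRING", "BINSTRING", "SHORT_BINSTRING"}

def find_globals(data: bytes) -> list:
    """Every (module, name) the stream would import, read from the opcode stream
    via pickletools.genops; nothing is unpickled or executed. Remembering the last
    two string constants approximates the stack simulation fickling does properly."""
    found, recent = [], []
    for op, arg, _pos in pickletools.genops(data):
        if op.name == "GLOBAL":             # protocol 0-3: arg is "module name"
            found.append(tuple(arg.split(" ", 1)))
        elif op.name == "STACK_GLOBAL":     # protocol 4+: module and name were pushed earlier
            found.append(tuple(recent[-2:]) if len(recent) >= 2 else ("?", "?"))
        elif op.name in STRING_OPS:
            recent.append(arg)
    return found

def classify(data: bytes, allowed=ALLOWED_MODULES) -> tuple:
    """Allowlist mode: any import outside the known-benign module prefixes is flagged."""
    flagged = [(m, n) for m, n in find_globals(data)
               if not any(m == a or m.startswith(a + ".") for a in allowed)]
    return ("CRITICAL" if flagged else "SAFE"), flagged

def classify_torch_archive(fileobj) -> tuple:
    """A .pt file is a zip: scan each embedded .pkl member (e.g. data.pkl) the same
    way, reading only the pickle members rather than loading tensors into RAM."""
    flagged = []
    with zipfile.ZipFile(fileobj) as zf:
        for member in zf.namelist():
            if member.endswith(".pkl"):
                flagged += classify(zf.read(member))[1]
    return ("CRITICAL" if flagged else "SAFE"), flagged

# Constructed samples: bare references with no REDUCE opcode, so nothing would run
# even if loaded; they are never passed to pickle.loads here.
BENIGN = pickle.dumps(collections.OrderedDict([("w", [0.5, 1.5])]), protocol=4)
SUSPECT_PROTO2 = b"\x80\x02csubprocess\nPopen\n."                           # GLOBAL
SUSPECT_PROTO4 = b"\x80\x04\x8c\nsubprocess\x94\x8c\x05Popen\x94\x93\x94."  # STACK_GLOBAL
SAMPLE_PT = io.BytesIO()                    # minimal .pt-style archive carrying a data.pkl
with zipfile.ZipFile(SAMPLE_PT, "w") as _zf:
    _zf.writestr("model/data.pkl", SUSPECT_PROTO4)
```

The benign sample resolves only collections.OrderedDict and passes, while both suspect encodings are flagged from the opcode stream alone, which is the point of checking at artifact ingestion rather than at load time.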

rafram•1h ago
> It looks for GLOBAL or STACK_GLOBAL instructions referencing dangerous modules like os.system, subprocess, or socket.

This seems like a doomed approach. You can’t make a list of every “dangerous” function in every library.

oofbey•33m ago
Agree an explicit block list is not very robust. I imagine the vast majority of legit ML models use only a very limited set of math functions and basically no system interaction. Would be good to fingerprint a big set of assumed-safe models and flag anything which diverges from that.
lab700xdev•32m ago
You are absolutely right - blocklisting is a game of whack-a-mole. However, in the context of serialized ML weights, the "allowlist" of valid imports is actually quite small (mostly torch.nn, collections, numpy). Right now, we are flagging the obvious low-hanging fruit (script kiddie RCE) because generic SCA tools miss even that. The roadmap includes moving to a strict "Allowlist" mode where we flag any global import that isn't a known mathematical library. That’s much safer than trying to list every dangerous function.
pama•1h ago
You asked for specific feedback, but here is generic feedback: a new GitHub account coupled to a new HN account does not inspire any sense of added infra safety. I would rather use modern pytorch/safetensors and tools that don't allow executing pickles from checkpoints. If you execute someone else's pickle you probably already lost no matter what checks you want to add over time.
lab700xdev•21m ago
That is entirely fair feedback regarding the new accounts. We all have to start somewhere! That is exactly why I open-sourced the engine (Apache 2.0) and kept the logic in Python rather than a compiled binary - so you don't have to trust "me", you can audit scanner.py and safety.py yourself to see exactly how we parse the zip headers. Regarding Safetensors: I agree 100%. If everyone used Safetensors, this tool wouldn't need to exist, but looking at the Hugging Face hub, there are still millions of legacy .pt files being downloaded daily. This tool is a guardrail for the messy reality we live in, not the perfect future we want.
oofbey•36m ago
Thanks for starting to address the gap. When would this tool be best used? As a post commit hook? In the CI/CD chain? At runtime?
lab700xdev•29m ago
Ideally, CI/CD Pipeline (Pre-Merge) - We recently released a GitHub Action for this exact workflow. The goal is to block a Pull Request if a developer tries to merge a .pt file that contains CRITICAL risk opcodes. If you wait until Runtime to check, you’ve likely already unpickled the file to inspect it, which means you’re already pwnd. This needs to happen at the artifact ingestion stage (before it touches your production cluster).
woodrowbarlow•27m ago
> what security researchers call a "Pickle Bomb."

is anyone calling it that? to me, "pickle bomb" would imply abusing compression or serialization for a resource-exhaustion attack, a la zipbombs.

"pickle bomb", the way you're using it, doesn't seem like a useful terminology -- pickles are just (potentially malicious) executables.

yjftsjthsd-h•21m ago
> but blindly trust the 5GB binary model files (.pt) we download from Hugging Face.

I thought the ecosystem had mostly moved to .safetensors (which was explicitly created to fix this problem) and .gguf (which I'm pretty sure also doesn't have this problem); do you really need to download giant chunks of untrusted code and execute it at all?

chuckadams•1h ago
When dealing with stuff like PHP serialization and pickle, the rule is simple: never unpickle anything you didn't pickle yourself. If anything else could possibly touch the serialized bytes, sign it with HMAC and keep that somewhere untouchable.

I somehow doubt this tool is going to be able to pull off what Java bytecode verification could not.

lab700xdev•10m ago
The Golden Rule holds: "Don't unpickle untrusted data." The problem I'm trying to solve is that "Untrusted" has become blurry in the AI age. Data Scientists treat Model Hubs (like Hugging Face) as trusted repositories, similar to PyPI or NPM. They shouldn't, but they do. This tool effectively serves as a "Loud Warning Label" to break that assumption. It tells the engineer: "Hey, you think this is just weights, but I see socket calls in here. Do not load this."
woodruffw•1h ago
The checks here seem pretty minimal[1]. I'd recommend taking a look at fickling (FD: former employer) for a more general approach to pickle decompilation/analysis[2].

[1]: https://github.com/Lab700xOrg/aisbom/blob/main/aisbom/safety...

[2]: https://github.com/trailofbits/fickling

lab700xdev•6m ago
Thanks for the link! fickling is excellent work (and definitely the gold standard for deep analysis). The goal with AIsbom was to build something lightweight enough to run in a fast CI/CD loop that creates a standard inventory (CycloneDX SBOM) alongside the security check. We are definitely looking at fickling's symbolic execution approach for inspiration on how to make our safety.py module more robust against obfuscation.
