
Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•3m ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
1•toomuchtodo•8m ago•1 comments

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•14m ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
1•alexjplant•15m ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
1•akagusu•15m ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•18m ago•1 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•23m ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•26m ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
2•DesoPK•30m ago•0 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•32m ago•1 comments

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
17•mfiguiere•38m ago•3 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
2•meszmate•40m ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long"(Sonnet73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•42m ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•57m ago•1 comments

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•1h ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•1h ago•1 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
3•gmays•1h ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•1h ago•1 comments

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•1h ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•1h ago•1 comments

Why Are There Still So Many Jobs? The History and Future of Workplace Automation [pdf]

https://economics.mit.edu/sites/default/files/inline-files/Why%20Are%20there%20Still%20So%20Many%...
2•oidar•1h ago•0 comments

The Search Engine Map

https://www.searchenginemap.com
1•cratermoon•1h ago•0 comments

Show HN: Souls.directory – SOUL.md templates for AI agent personalities

https://souls.directory
1•thedaviddias•1h ago•0 comments

Real-Time ETL for Enterprise-Grade Data Integration

https://tabsdata.com
1•teleforce•1h ago•0 comments

Economics Puzzle Leads to a New Understanding of a Fundamental Law of Physics

https://www.caltech.edu/about/news/economics-puzzle-leads-to-a-new-understanding-of-a-fundamental...
3•geox•1h ago•1 comments

Switzerland's Extraordinary Medieval Library

https://www.bbc.com/travel/article/20260202-inside-switzerlands-extraordinary-medieval-library
4•bookmtn•1h ago•0 comments

A new comet was just discovered. Will it be visible in broad daylight?

https://phys.org/news/2026-02-comet-visible-broad-daylight.html
5•bookmtn•1h ago•0 comments

ESR: Comes the news that Anthropic has vibecoded a C compiler

https://twitter.com/esrtweet/status/2019562859978539342
2•tjr•1h ago•0 comments

Frisco residents divided over H-1B visas, 'Indian takeover' at council meeting

https://www.dallasnews.com/news/politics/2026/02/04/frisco-residents-divided-over-h-1b-visas-indi...
5•alephnerd•1h ago•5 comments

If CNN Covered Star Wars

https://www.youtube.com/watch?v=vArJg_SU4Lc
1•keepamovin•1h ago•1 comments

AIsbom – open-source CLI to detect "Pickle Bombs" in PyTorch models

https://github.com/Lab700xOrg/aisbom
52•lab700xdev•1mo ago

Comments

lab700xdev•1mo ago
Hi HN,

I’ve been working with ML infrastructure for a while and realized there’s a gap in the security posture: we scan our requirements.txt for vulnerabilities, but blindly trust the 5GB binary model files (.pt) we download from Hugging Face.

Most developers don't realize that standard PyTorch files are just Zip archives containing Python Pickle bytecode. When you run torch.load(), the unpickler executes that bytecode. This allows for arbitrary code execution (RCE) inside the model file itself - what security researchers call a "Pickle Bomb."
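
To make that concrete, here is a minimal sketch (illustrative only; the class and command are made up for the example) of how a pickled object can smuggle code that runs the moment the file is loaded:

    # Illustration only: a pickle payload that executes a command on load.
    import os
    import pickle

    class Payload:
        def __reduce__(self):
            # Serialized as "call os.system('echo pwned')"; the unpickler runs it on load.
            return (os.system, ("echo pwned",))

    blob = pickle.dumps(Payload())
    pickle.loads(blob)  # prints "pwned" -- loading an untrusted .pt with a permissive unpickler behaves the same way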

I built AIsbom (AI Software Bill of Materials) to solve this without needing a full sandbox.

How it works:

1. It inspects the binary structure of artifacts (PyTorch, Pickle, Safetensors) without loading weights into RAM.
2. For PyTorch/Pickles, it uses static analysis (via pickletools) to disassemble the opcode stream.
3. It looks for GLOBAL or STACK_GLOBAL instructions referencing dangerous modules like os.system, subprocess, or socket.
4. It outputs a CycloneDX v1.6 JSON SBOM compatible with enterprise tools like Dependency-Track.
5. It also parses .safetensors headers to flag "Non-Commercial" (CC-BY-NC) licenses, which often slip into production undetected.
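
For a rough sense of steps 2 and 3, a minimal sketch (illustrative only, not the actual safety.py implementation; the module list is an assumption) of scanning an opcode stream with pickletools:

    # Sketch: disassemble the pickle stream and flag GLOBAL / STACK_GLOBAL opcodes
    # that reference suspicious top-level modules. Not the real aisbom code.
    import pickletools

    SUSPICIOUS_TOP_LEVEL = {"os", "nt", "posix", "subprocess", "socket", "builtins"}

    def scan_pickle_stream(data: bytes) -> list[str]:
        findings = []
        strings = []  # STACK_GLOBAL takes module/name from earlier string pushes
        for opcode, arg, _pos in pickletools.genops(data):
            if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
                strings.append(arg)
            elif opcode.name == "GLOBAL":
                module, name = arg.split(" ", 1)  # arg looks like "os system"
                if module.split(".")[0] in SUSPICIOUS_TOP_LEVEL:
                    findings.append(f"{module}.{name}")
            elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
                module, name = strings[-2], strings[-1]  # simplified stack tracking
                if module.split(".")[0] in SUSPICIOUS_TOP_LEVEL:
                    findings.append(f"{module}.{name}")
        return findings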

It’s open source (Apache 2.0) and written in Python/Typer.
Repo: https://github.com/Lab700xOrg/aisbom
Live Demo (Web Viewer): https://aisbom.io

Why I built a scanner: https://dev.to/labdev_c81554ba3d4ae28317/pytorch-models-are-...

I’d love feedback on the detection logic (specifically safety.py) or if anyone has edge cases of weird Pickle protocols that break the disassembler.

rafram•1mo ago
> It looks for GLOBAL or STACK_GLOBAL instructions referencing dangerous modules like os.system, subprocess, or socket.

This seems like a doomed approach. You can’t make a list of every “dangerous” function in every library.

oofbey•1mo ago
Agreed, an explicit block list is not very robust. I imagine the vast majority of legit ML models use only a very limited set of math functions and basically no system interaction. It would be good to fingerprint a big set of assumed-safe models and flag anything that diverges from that.
lab700xdev•1mo ago
You are absolutely right - blocklisting is a game of whack-a-mole. However, in the context of serialized ML weights, the "allowlist" of valid imports is actually quite small (mostly torch.nn, collections, numpy). Right now, we are flagging the obvious low-hanging fruit (script-kiddie RCE) because generic SCA tools miss even that. The roadmap includes moving to a strict "Allowlist" mode where we flag any global import that isn't a known mathematical library. That’s much safer than trying to list every dangerous function.
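
As an illustration of where that is heading, the runtime analogue of an import allowlist is the restricted-Unpickler pattern from the Python pickle docs (the module set below is an assumption, not the project's actual list):

    # Sketch: refuse to resolve any import whose top-level package isn't allowlisted.
    import io
    import pickle

    ALLOWED_TOP_LEVEL = {"torch", "collections", "numpy"}  # illustrative only

    class RestrictedUnpickler(pickle.Unpickler):
        def find_class(self, module, name):
            if module.split(".")[0] in ALLOWED_TOP_LEVEL:
                return super().find_class(module, name)
            raise pickle.UnpicklingError(f"blocked import: {module}.{name}")

    def restricted_loads(data: bytes):
        return RestrictedUnpickler(io.BytesIO(data)).load()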
pama•1mo ago
You asked for specific feedback, but here is generic feedback: a new GitHub account coupled to a new HN account does not inspire any sense of added infra safety. I would rather use modern PyTorch/Safetensors and tools that don't allow executing pickles from checkpoints. If you execute someone else's pickle, you have probably already lost, no matter what checks you want to add over time.
lab700xdev•1mo ago
That is entirely fair feedback regarding the new accounts. We all have to start somewhere! That is exactly why I open-sourced the engine (Apache 2.0) and kept the logic in Python rather than a compiled binary - so you don't have to trust "me", you can audit scanner.py and safety.py yourself to see exactly how we parse the zip headers. Regarding Safetensors: I agree 100%. If everyone used Safetensors, this tool wouldn't need to exist, but looking at the Hugging Face hub, there are still millions of legacy .pt files being downloaded daily. This tool is a guardrail for the messy reality we live in, not the perfect future we want.
oofbey•1mo ago
Thanks for starting to address the gap. When would this tool be best used? As a post-commit hook? In the CI/CD chain? At runtime?
lab700xdev•1mo ago
Ideally, in the CI/CD pipeline (pre-merge) - we recently released a GitHub Action for this exact workflow. The goal is to block a pull request if a developer tries to merge a .pt file that contains CRITICAL-risk opcodes. If you wait until runtime to check, you’ve likely already unpickled the file to inspect it, which means you’re already pwned. This needs to happen at the artifact ingestion stage (before it touches your production cluster).
woodrowbarlow•1mo ago
> what security researchers call a "Pickle Bomb."

is anyone calling it that? to me, "pickle bomb" would imply abusing compression or serialization for a resource-exhaustion attack, a la zipbombs.

"pickle bomb", the way you're using it, doesn't seem like a useful terminology -- pickles are just (potentially malicious) executables.

lab700xdev•1mo ago
Fair point on the terminology overlap with "Zip Bombs" (resource exhaustion). I used "Pickle Bomb" colloquially to describe a serialized payload waiting to detonate upon load, similar to how "Logic Bomb" is used in malware. "Malicious Pickle Stream" is definitely the more precise technical term, but it doesn't quite capture the visceral risk of "I loaded this file and my AWS keys are gone" as well as Bomb does!
yjftsjthsd-h•1mo ago
> but blindly trust the 5GB binary model files (.pt) we download from Hugging Face.

I thought the ecosystem had mostly moved to .safetensors (which was explicitly created to fix this problem) and .gguf (which I'm pretty sure also doesn't have this problem); do you really need to download giant chunks of untrusted code and execute it at all?

ivape•1mo ago
People will take the risk with uncensored models tuned for specific things. I'm glad we're talking about this now rather than 10 years later like with npm. The amount of ad-hoc AI tools on github is staggering, and people are just downloading these things like it's no big deal.
lab700xdev•1mo ago
The comparison to npm is spot on. We are seeing the exact same pattern: a massive explosion of dependency complexity, but now the "dependencies" aren't 50KB JavaScript files, they are 10GB binary blobs that we treat as black boxes. The "Shadow AI" problem (developers cloning a random repo + downloading a model from a Google Drive link to get a specific uncensored tune) is exactly what we built the CLI for. We want to make it trivial to run a "hygiene check" on that download folder before mounting it into a container.
ivape•1mo ago
Consider adding a little UI to this. If I can just right-click a model/zip/folder and click "scan", then there's really no reason not to have this around (speaking in terms of removing any practical barrier, including laziness).
lab700xdev•1mo ago
That barrier to entry ("laziness") is the #1 security vulnerability. If it takes 3 minutes to set up a scanner, nobody does it. That's actually why we built the Web Viewer - so you can just drag-and-drop the JSON output rather than reading terminal logs. But a native OS "Right Click --> Scan with AIsbom" Context Menu integration is a fantastic idea for a future desktop release. Thanks.
dylan604•1mo ago
Maybe because of the trained habit of doing the same with npm??? Why write your own code when there are 30 packages "doing the same thing" and I don't have to look at the code at all, just include it with no clue what's going on under the hood? What could possibly go wrong?
lab700xdev•1mo ago
You are right that the inference ecosystem (llama.cpp, vLLM) has moved aggressively to GGUF and Safetensors. If you are just consuming optimized models, you are safer. However, I see two reasons why the risk persists: 1) The Supply Chain Tail: The training ecosystem is still heavily PyTorch native. Researchers publishing code, LoRA adapters, and intermediate checkpoints are often still .pt. 2) Safetensors Metadata: Even if the binary is safe, the JSON header in a .safetensors file often carries the License field. AIsbom scans that too. Detecting a "Non-Commercial" (CC-BY-NC) license in a production artifact is a different kind of "bomb" - a legal one - but just as dangerous for a startup.
altomek•1mo ago
This is a great tool! Would it be possible to add GGUF support? It may be a slightly tricky format to parse, but GGUF has already seen a few attack vectors and I consider it untrustworthy. Being able to scan GGUF files would be great!
lab700xdev•1mo ago
@altomek - Thanks for the suggestion! Just shipped v0.3.0 which includes a native GGUF header parser. It now extracts metadata and checks for license risks in .gguf files.
solarengineer•1mo ago
Could those who have downvoted this comment please explain your reasoning? Is the rationale in the comment not valid?
nextaccountic•1mo ago
> Most developers don't realize that standard PyTorch files are just Zip archives containing Python Pickle bytecode.

This is outrageous. Why not deprecate this cursed format and use something from the data frame community? Like, Parquet or something

Actually almost any binary format is better than this

tennysont•1mo ago
Pickle files are probably still useful for saving exploratory work, for collaboration inside a company, and for use inside a pipeline.

Safetensors is supposed to be the successor for distribution. I believe that it's the "safe" subset of pickle's data format.

rhdunn•1mo ago
The safetensors file format is a header length, JSON header, and serialized tensor weights. [1]

[1] https://github.com/huggingface/safetensors
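
For illustration, a minimal sketch of reading just that header, following the layout described above:

    # Sketch: safetensors = 8-byte little-endian header length, then JSON, then raw tensor data.
    import json
    import struct

    def read_safetensors_header(path: str) -> dict:
        with open(path, "rb") as f:
            (header_len,) = struct.unpack("<Q", f.read(8))
            return json.loads(f.read(header_len))

    # Entries map tensor name -> {"dtype", "shape", "data_offsets"}; the optional
    # "__metadata__" entry holds free-form strings, which is where license info can land.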

anky8998•1mo ago
Thanks for sharing this — really solid write-up, and I agree with the core premise. Pickle is a huge blind spot in ML security, and most folks don’t realize that torch.load() is effectively executing attacker-controlled bytecode.

One thing we ran into while working on similar problems is that static opcode scanning alone tends to give a false sense of coverage. A lot of real-world bypasses don’t rely on obvious GLOBAL os.system patterns and can evade tools that depend on pickletools, modelscan, or fickling.

We recently open-sourced a structure-aware pickle fuzzer at Cisco that’s designed specifically to test the robustness of pickle scanners, not just scan models:

• It executes pickle bytecode inside a custom VM, tracking opcode execution, stack state, and memo behavior
• Mutates opcode sequences, stack interactions, and protocol-specific edge cases
• Has already uncovered multiple scanner bypasses that look benign statically but behave differently at runtime

Repo: https://github.com/cisco-ai-defense/pickle-fuzzer

We also wrote up some of the lessons learned while hardening pickle scanners here (including why certain opcode patterns are tricky to reason about statically): https://blogs.cisco.com/ai/hardening-pickle-file-scanners

I think tools like AIsbom are a great step forward, especially for SBOM and ecosystem visibility. From our experience, pairing static analysis + fuzzing-driven adversarial testing is where things get much more resilient over time.

lab700xdev•1mo ago
This is incredibly valuable feedback. I’ve been reading through the pickle-fuzzer repo this morning, specifically the part about stack manipulation bypassing static heuristics. You nailed the trade-off: AIsbom is designed for the "90% hygiene" case in a fast CI/CD pipeline (where spinning up a VM/fuzzer might be too heavy/slow for every commit). We aim to catch the low-hanging fruit (obvious RCE) and generate the inventory (SBOM) rapidly. That said, moving toward an "Allowlist Only" (Strict Mode) approach seems like the better way to make static analysis resilient against the obfuscation you mentioned. We are prioritizing that for an upcoming release. Would love to potentially reference your fuzzer in our docs as the "Deep Scan" alternative!
zvr•1mo ago
You could also generate SPDX SBOMs, based on their AI Profile: https://spdx.github.io/spdx-spec/v3.1-dev/model/AI/AI/
chuckadams•1mo ago
When dealing with stuff like php serialization and pickle, the rule is simple: never unpickle anything you didn't pickle yourself. If anything else could possibly touch the serialized bytes, sign it with HMAC and keep that somewhere untouchable.

I somehow doubt this tool is going to be able to pull off what Java bytecode verification could not.
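
For illustration, a minimal sketch of that HMAC rule (the key here is a placeholder; the real one belongs somewhere untouchable):

    # Sketch: sign pickles you produced yourself, verify before unpickling.
    import hashlib
    import hmac
    import pickle

    SECRET_KEY = b"replace-with-a-key-kept-somewhere-untouchable"  # placeholder

    def dump_signed(obj) -> bytes:
        payload = pickle.dumps(obj)
        tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
        return tag + payload

    def load_signed(blob: bytes):
        tag, payload = blob[:32], blob[32:]  # sha256 digest is 32 bytes
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("HMAC mismatch: refusing to unpickle")
        return pickle.loads(payload)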

lab700xdev•1mo ago
The Golden Rule holds: "Don't unpickle untrusted data." The problem I'm trying to solve is that "Untrusted" has become blurry in the AI age. Data Scientists treat Model Hubs (like Hugging Face) as trusted repositories, similar to PyPI or NPM. They shouldn't, but they do. This tool effectively serves as a "Loud Warning Label" to break that assumption. It tells the engineer: "Hey, you think this is just weights, but I see socket calls in here. Do not load this."
nextaccountic•1mo ago
> When dealing with stuff like php serialization and pickle, the rule is simple: never unpickle anything you didn't pickle yourself.

I thought the rule was: never use pickle. It makes no sense when other serialization formats exist and are just as easy to use.

woodruffw•1mo ago
The checks here seem pretty minimal[1]. I'd recommend taking a look at fickling (FD: former employer) for a more general approach to pickle decompilation/analysis[2].

[1]: https://github.com/Lab700xOrg/aisbom/blob/main/aisbom/safety...

[2]: https://github.com/trailofbits/fickling

lab700xdev•1mo ago
Thanks for the link! fickling is excellent work (and definitely the gold standard for deep analysis). The goal with AIsbom was to build something lightweight enough to run in a fast CI/CD loop that creates a standard inventory (CycloneDX SBOM) alongside the security check. We are definitely looking at fickling's symbolic execution approach for inspiration on how to make our safety.py module more robust against obfuscation.
liuliu•1mo ago
I know this sounds weird, but "symbolic execution" of the pickle VM cannot be slow, right? We are talking about just a few thousand instructions here, and you don't need "symbolic execution" per se - just write a custom interpreter and run it. That would take less than 10ms for any given PyTorch file (excluding disk loading).
liuliu•1mo ago
Agree. Writing a pickle interpreter is not particularly challenging. I did that in Swift to help load PyTorch checkpoints (https://github.com/liuliu/swift-fickling) without these pitfalls.
esafak•1mo ago
Good job. Pickle has no place in production. Yeah, I said it.
roywiggins•1mo ago
Don't love how ChatGPT the readme is; the bullet points under "Why AIsbom?" are very, very ChatGPT.
xpe•1mo ago
I will preemptively grant the narrow point that if a project demonstrates poor quality in its code or text (i.e. what I mean when I say "slop"), it can dissuade potential users. However, the "Why AIsbom?" section strikes me as clear and informative.

Many people prefer human writing. I get it, and I think I understand most of the underlying reasons and emotional drives. [1]

Nevertheless, my top preference (I think?) is clarity and accuracy. For technical writing, if these two qualities are present, I'm rarely bothered by what people may label "AI writing". OTOH, when I see sloppy, poorly reasoned, out-of-date writing, my left hand readies itself for ⌘W. [2]

A suggestion for the comment above, which makes a stylistic complaint: be more specific about what can be improved.

Finally, a claim: over time, valid identification of some text as being AI-generated will require more computation and be less accurate. [3]

[1]: Food for thought: https://theconversation.com/people-say-they-prefer-stories-w... and the backing report: https://docs.iza.org/dp17646.pdf

[2]: To be open, I might just have a much higher than average bar for precision -- I tend to prefer reading source materials than derivative press coverage, and I prefer reading a carefully worded, dry written documentation file over an informal chat description. To keep digging the hole for myself, I usually don't like the modern practice of putting unrelated full-width pictures in blog posts because they look purdy. Maybe it comes from a "just the facts, please" mentality when reading technical material.

[3]: I realize this isn't the clearest testable prediction, but I think the gist of it is falsifiable.

krick•1mo ago
Ok, as others noted, the tool in question is hardly a solution, but what is, then? I mean, presented like that, it's pretty crazy that everyone just downloads and runs 5GB executable blobs from Hugging Face. Does anyone review them somehow before they are accepted and gather 10K downloads on HF? Or is it really another totally mind-bogglingly crazy thing happening right now across the whole world, while everybody just shrugs and waits for a catastrophic security breach of planetary scale to happen?
eyeris•1mo ago
Was there any research into prior art? I recently did some research into this space, and it seems like there are already a bunch of off-the-shelf open source projects addressing this.