
Pwning the Nix ecosystem

https://ptrpa.ws/nixpkgs-actions-abuse
284•SuperShibe•1d ago

Comments

jmclnx•1d ago
Well, the "good" news is, OpenBSD and NetBSD still use CVS, even for packages, so this will not work on those systems. I do not know about FreeBSD. Security by obscurity :)

But I have been seeing docs indicating those projects are looking to move to git; we'll see if it really happens. In OpenBSD's case it seems it will be based on got(1).

seanhunter•1d ago
Just to make it clear: what you say is correct, but this is not a git vulnerability, it's a GitHub Actions vulnerability. That is, the BSDs are secured by CVS only because GitHub doesn't do CVS. If you use git, or even GitHub, but don't do CI/CD using GitHub Actions, you are not affected by this.
graemep•1d ago
This is not a git issue, it is a github issue, and as far as I can see specific to github actions.
Mic92•1d ago
Don't they use email to accept contributions? Seems like a security nightmare w.r.t. impersonation.
edoceo•1d ago
Aren't messages and/or patches signed?
Mic92•16h ago
I can't see any of that. They even tell you to not have any gnupg signatures: https://www.openbsd.org/mail.html
udev4096•1d ago
How? It's signed with their keys. The Linux kernel also uses mailing lists, and I have yet to see someone trying to impersonate someone.
Mic92•16h ago
I haven't seen anything about requirements for gpg. Also, the UX of it is not so great, so it's easy to just not have a signature without causing too much suspicion. It would be a much easier attack than what Jia Tan pulled off: just wait for some contributor to go on holiday and send a malicious v2 patch. There are so many patches processed in the Linux kernel that no one would notice.
woodruffw•1d ago
This is a great example of why `pull_request_target` is fundamentally insecure, and why GitHub should (IMO) probably just remove it outright: conventional wisdom dictates that `pull_request_target` is "safe" as long as branch-controlled code is never executed in the context of the job, but these kinds of argument injections/local file inclusion vectors demonstrate that the vulnerability surface is significantly larger.

At the moment, the only legitimate uses of `pull_request_target` are for things like labeling and auto-commenting on third-party PRs. But there's no reason for these actions to have default write access to the repository; GitHub can and should be able to grant fine-grained or (even better) single-use tokens that enable those exact operations.

(This is why zizmor blanket-flags all use of `pull_request_target` and other dangerous triggers[1]).

[1]: https://docs.zizmor.sh/audits/#dangerous-triggers
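
For readers who haven't seen the pattern, here is a minimal sketch of the kind of workflow being discussed (hypothetical job and script names, not taken from the post):

```yaml
# HYPOTHETICAL workflow illustrating the dangerous pattern zizmor flags.
on: pull_request_target        # runs with the base repo's secrets and token

permissions:
  contents: write              # write access plus untrusted input = pwn request

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Explicitly checking out the attacker's head ref defeats the
          # "safe" base-branch default and mixes untrusted files with secrets.
          ref: ${{ github.event.pull_request.head.sha }}
      - run: ./scripts/lint.sh # this file now comes from the attacker
```

The checkout step is the footgun: the trigger alone is "safe" only until a maintainer adds exactly this step to make the job do something useful with the PR's contents.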

zamalek•1d ago
This is what GitHub says about it:

> This event runs in the context of the base of the pull request, rather than in the context of the merge commit, as the pull_request event does. This prevents execution of unsafe code from the head of the pull request that could alter your repository or steal any secrets you use in your workflow.

Which is comical given how easily secrets were exfiltrated.

woodruffw•1d ago
Yeah, I think that documentation is irresponsibly misleading: it implies (1) that attacker code execution requires the attacker to be able to run code directly (it doesn't, per this post), and (2) that checking out at the base branch somehow stymies the attacker, when all it does is incentivize people to check out the attacker-controlled branch explicitly.

GitHub has written a series of blog posts[1] over the years about "pwn requests," which do a great job of explaining the problem. But the misleading documentation persists, and has led to a lot of user confusion where maintainers mistakenly believe that any use of `pull_request_target` is somehow more secure than `pull_request`, when the exact opposite is true.

[1]: https://securitylab.github.com/resources/github-actions-prev...

leeter•1d ago
I don't disagree... but there is a use case for orgs that don't allow forks. Some tools do their merging outside of GitHub and thus allow for PRs that can never be clean from a merge perspective. These won't trigger `pull_request` workflows, because `pull_request` requires a clean merge. In those cases `pull_request_target` is literally the only option.

The best move would be for GitHub to add a setting that allows automation to run on PRs that don't have clean merges, off by default and intended for use with linters only, really. Until that happens, though, `pull_request_target` is the only game in town to get around that limitation. Much to my and other SecDevOps engineers' sadness.

NOTE: with these external tools you absolutely cannot do the merge manually in github unless you want to break the entire thing. It's a whole heap of not fun.

woodruffw•1d ago
That's a fantastic use case that should be supported discretely!
leeter•1d ago
Why GitHub didn't is beyond me. Just because something isn't merge-clean doesn't mean linters shouldn't be run. I get not running deployments, etc., but not even having the option is a pain.
lijok•23h ago
Inside private repos we use `pull_request_target` because 1. it runs the workflow as it exists on main, and therefore provides a surface where untampered-with test suites can run, and 2. it provides a deterministic job_workflow_ref in the sub claim of the JWT, which can be used for highly fine-grained access control in OIDC-enabled systems from the workflow.
woodruffw•23h ago
Private repos aren't as much of a concern, for obvious reasons.

However, it's worth noting that you don't (necessarily) need `pull_request_target` for the OIDC credential in a private repo: all first-party PRs will get it with the `pull_request` event. You can configure the subject for that credential with whatever components you want to make it deterministic.

lijok•22h ago
You’re right! I edited my comment to clarify I was talking about good ole job_workflow_ref.
cookiengineer•19h ago
This attack surface has been essentially unfixed for almost a year now.

Remember the python packages that got pwned with a malicious branch name that contained shellshock like code? Yeah, that incident.

I blogged about all vulnerable variables at the time and how the attack works from a pentesting perspective [1].

[1] https://cookie.engineer/weblog/articles/malware-insights-git...

perlgeek•1d ago
CI/CD actions for pull/merge requests are a nightmare. When a developer writes test/verification steps, they are mostly in the mindset "this is my code running in the context of my github/gitlab account", which is true for commits made by themselves and their team members.

But then in a pull request, the CI/CD pipeline actually runs untrusted code.

Getting this distinction correct 100% of the time in your mental model is pretty hard.

For the base case, where you maybe run a test suite and a linter, it's not too bad. But then you run into edge cases where you have to integrate with your own infrastructure (either for end-to-end tests, or for checking whether contributors have submitted CLAs, or anything else that requires a bit more privileges), and then it's very easy to get bitten.

woodruffw•1d ago
I don't think the problem is CI/CD runs on pull requests, per se: it's that GitHub has two extremely similar triggers (`pull_request` and `pull_request_target`). One of these is almost entirely safe (you have to go out of your way to misuse it), while the other is almost entirely unsafe (it's almost impossible to use safely).

To make things worse, GitHub has made certain operations on PRs (like auto-labeling and leaving automatic comments) completely impossible unless the extremely dangerous version (`pull_request_target`) is used. So this is a case of incentive-driven insecurity: people want to perform reasonable operations on third-party PRs, but the only mechanism GitHub Actions offers is a foot-cannon.

baobun•13h ago
> while the other is almost entirely unsafe (it's almost impossible to use safely).

I don't believe this is fair. "Don't run untrusted code" is what it comes down to. Don't trust test suites or scripts in the incoming branch, etc.

That `pull_request_target` workflows are (still) privileged by default is nuts, and indeed a footgun, but there's no need for "almost impossible" hysteria.

woodruffw•12h ago
> I don't believe this is fair. "Don't run untrusted code" is what it comes down to. Don't trust test suites or scripts in the incoming branch, etc.

TFA is a great example of how this breaks down. The two examples in the post obtain code execution/credential exfiltration without running an attacker controlled test suite or script.

silverwind•1h ago
I never understood what it is about labeling/commenting that prevents it from working in the regular event. They could just add a permission that specifically allows those actions.
lostmsu•1d ago
There's a huge footgun in that article that has broader impact:

> but it gets worse. since the workflow was checking out our PR code, we could replace the OWNERS file with a symbolic link to ANY file on the runner. like, say, the github actions credentials file

Git allows committing symbolic links, so the issue above could affect almost any workflow.
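
The symlink trick is easy to demo locally (an illustrative sketch; the `/tmp` paths and file names are made up):

```shell
# Git happily commits symbolic links, so a PR can replace a plain file
# (like OWNERS) with a pointer to any path on the runner.
mkdir -p /tmp/symlink-demo && cd /tmp/symlink-demo
echo "token=hunter2" > runner-credentials       # stand-in for a secrets file
git init -q .
ln -sf /tmp/symlink-demo/runner-credentials OWNERS
git add OWNERS
git -c user.name=demo -c user.email=demo@example.com commit -qm "innocent change"
cat OWNERS    # a workflow step that "reads OWNERS" now reads the credentials file
```

Git stores the link itself (blob mode 120000), so the redirection survives clone and checkout on the runner.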

danudey•18h ago
Yes, but IIRC when you run `pull_request_target` the credentials are to the target repository - i.e. the one you're merging into. When you run `pull_request`, it's to the source repository, the one the attacker is in control of.
ishouldbework•1d ago
> It is not possible for xargs to be used securely

Eh... that is taken out of context quite a bit; that sentence does continue. Just do `cat "$HOME/changed_files" | xargs -r editorconfig-checker --` and this specific problem is fixed.

woodruffw•1d ago
Yeah, I don't think the specific reason for that sentence in the manpage applies here. But the general sentiment is correct: not all programs support `--` as a delimiter between arguments and inputs, so many xargs invocations are one argument injection away from arbitrary code execution.

(This is traditionally a non-issue, since the whole point is to execute code. So this isn't xargs' fault so much as it's the undying problem of tools being reused across privilege contexts.)
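
A minimal sketch of the failure mode and the `--` fix, using GNU `ls` as a stand-in for the invoked tool (the directory and file names are made up):

```shell
# An input line that merely *looks like* a file name is handed to the child
# program as a bare argument, where its option parser treats it as a flag.
mkdir -p /tmp/xargs-demo && cd /tmp/xargs-demo
touch -- '--version' normal.txt

printf '%s\n' '--version' | xargs ls      # ls parses it as a flag: prints version info
printf '%s\n' '--version' | xargs ls --   # after --, ls treats it as a file name
```

The same input produces two very different commands; only the second one confines the attacker-controlled string to operand position.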

ishouldbework•23h ago
Well, anything POSIX or GNU does support `--`. I think most golang libraries do as well? And if the program does not, you can always pass the files as relative paths (`./--help`) to work around that.

For sure, though, this can get tricky, but I am not really aware of an alternative. :/ Since the calling convention is just an array of strings, there is no generic way to handle this without knowing what program you are calling and how it parses its command line. This is not specific to xargs...

Well, I guess FFI would be a way, but it seems like a major PITA to have to figure out how to call a golang function from bash shell just to "call" a program.
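
The relative-path workaround can be sketched the same way (again with `ls` standing in for the real tool, and made-up paths):

```shell
# Prefixing "./" neutralizes flag-like names for tools that don't honor "--":
# "./--help" starts with '.', so no option parser will mistake it for a flag.
mkdir -p /tmp/relpath-demo && cd /tmp/relpath-demo
touch -- '--help'

ls ./--help                                         # lists the file instead of printing help
printf '%s\n' '--help' | sed 's|^|./|' | xargs ls   # same idea applied to an xargs pipeline
```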

woodruffw•23h ago
> This is not specific to xargs...

Right, it's just that xargs surfaces it easily. I suspect most people don't realize that they're fanning arbitrary arguments into programs when they use xargs to fan input files.

hombre_fatal•1d ago
Though that's like adding `<div>{escapeHtml(value)}</div>` everywhere you ever display a value in html to avoid xss.

If you have to opt in to safe usage at every turn, then it's an unsafe way of doing things.

stonogo•23h ago
I don't disagree but "it's not possible for xxx to be used securely" is a long way from "it's cumbersome and tedious to use xxx securely"
JasonSage•19h ago
But "it's not possible for xxx to be used securely" is a better premise if it deflects people who can't do it correctly.
stonogo•16h ago
Lying to people because you think you're smarter than them is bad policy.
rendaw•12h ago
If using it securely requires you to never ever forget, even once, I'd agree with GP.
amluto•1d ago
I find it rather embarrassing that, after all these years of trying to design computer systems, modern workflows are still designed so that bearer tokens, even short-lived, are issued to trusted programs. If the GitHub action framework gave a privileged Unix socket or ssh-agent access instead, then this type of vulnerability would be quite a lot harder to exploit.
Thom2000•22h ago
Exactly!

Bearer tokens should be replaced with schemes based on signing, and the private keys should never be directly exposed (if they are, there's no difference between them and a bearer token). Signing agents do just that. GitHub's API is based on HTTP, but mutual TLS authentication with a signing agent should be sufficient.
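
As a sketch of the difference (HMAC standing in for asymmetric signatures; none of this is GitHub's actual API): the agent holds the key and answers challenges, so a compromised job can at worst request signatures while it runs, and an exfiltrated signature is useless against any other challenge.

```python
import hashlib
import hmac
import os

class SigningAgent:
    """Holds the secret; callers can request signatures but never read the key."""

    def __init__(self) -> None:
        self._key = os.urandom(32)   # never leaves this object

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

    def verify(self, challenge: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(challenge), sig)

agent = SigningAgent()
challenge = os.urandom(16)          # server-chosen nonce, so responses can't be replayed
sig = agent.sign(challenge)

assert agent.verify(challenge, sig)           # fresh signature checks out
assert not agent.verify(os.urandom(16), sig)  # a stolen signature fails a new challenge
```

Contrast with a bearer token, where the exfiltrated string *is* the credential and works anywhere until it expires.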

otabdeveloper4•20h ago
The SPIFFE standard does something like this.

It's not used by anyone because nobody actually gives a shit about security, the entire industry is basically a grift.

ants_everywhere•18h ago
Lots of projects use SPIFFE, but lots of people don't like the new tech because they think the old ways are simpler
otabdeveloper4•7h ago
After trying to get SPIFFE mTLS to work with Python asyncio and giving up, I'm sure "lots of projects" is an overstatement.

Even basic parts of the tech stack aren't there yet.

ants_everywhere•4h ago
Several big CNCF security projects use it. Normally you'd just add sidecars to your asyncio service.
immibis•23h ago
> If you’ve read the man page for xargs, you’ll see this warning:

>> It is not possible for xargs to be used securely

However, the security issue this warning relates to is not the one that's applicable here. The one here is possible to avoid by using -- at the end of the command.

aftergibson•19h ago
As time goes on, I find myself increasingly worried about supply chain attacks—not from a “this could cost me my job” or “NixOS, CI/CD, Node, etc. are introducing new attack vectors” perspective, but from a more philosophical one.

The more I rely on, the more problems I’ll inevitably have to deal with.

I’m not thinking about anything particularly complex—just using things like VSCode, Emacs, Nix, Vim, Firefox, JavaScript, Node, and their endless plugins and dependencies already feels like a tangled mess.

Embarrassingly, this has been pushing me toward using paper and the simplest, dumbest tech possible—no extensions, no plugins—just to feel some sense of control or security. I know it’s not entirely rational, but I can’t shake this growing disillusionment with modern technology. There’s only so much complexity I can tolerate anymore.

YouAreWRONGtoo•18h ago
Emacs itself is probably secure and you can easily audit every extension, but if you update every extension blindly via a nicely composable emacs Nix configuration, you would indeed have a problem.

I guess one could automate finding obvious exploits via LLMs and if the LLM finds something abort the update.

The right solution is to use Coq and just formally verify everything in your organization, which incidentally means throwing away 99.999% of software ever written.

otabdeveloper4•7h ago
Formal verification solves nothing. You can have a formally verified 100% secure backdoor exploit. (Ultimately it all depends on the semantics of "sysadmin" vs "hacker", who are really just two different roles of the same person.)

This is also why signing code commits isn't a solution, only a way to trace ends when something fucks up.

lrvick•18h ago
Had the Nix team rolled out signed commits/reviews and independent signed reproducible builds as my (rejected) RFC proposed, it would not be possible to do last-mile supply chain attacks like this.

In the end, nixpkgs wants to be Wikipedia-easy for any rando to modify, and the maintainers fear any attempt at security will make volunteers run screaming, because they are primarily focused on being a hobby distro.

That's just fine, but people need to know this, and stop using and promoting Nix in security critical applications.

An OS that will protect anything of value must have strict two party hardware signing requirements on all changes and not place trust in any single computer or person with a decentralized trust model.

Shameless plug, that is why we built Stagex. https://stagex.tools https://codeberg.org/stagex/stagex/ (Don't worry, not selling anything, it is and will always be 100% free to the public)

gmfawcett•18h ago
That's pretty impressive -- thanks for sharing the link.
XorNot•18h ago
Wow...this is possibly exactly what I've wanted to do for a while, but you already did it!
cpuguy83•18h ago
Just a word of encouragement here, this is super interesting!
pyrox•17h ago
Hey! First, a disclaimer: I do not speak for anyone officially, but I am a very regular contributor to nixpkgs and have been involved in trying to increase nixpkgs' security through adopting the full-source bootstrap that Guix and Stagex use. I also assume the RFC you're talking about is RFC 0100, "Sign Commits" (ref: https://github.com/NixOS/rfcs/pull/100).

As mentioned in the RFC discussion, the major blocker with this is the lack of an ability for contributors to sign from mobile devices. Currently, building tooling for mobile devices is way out of scope for nixpkgs, and would be a large time sink for very little gain over what we have now.

Further, while I sign my commits because I believe it is a good way to slightly increase the provenance of my commits, there is nothing preventing me from pushing an unsigned commit, or a commit with an untrusted key, and that's, in my opinion, fine. For a project like Stagex (which, as a casual cybersecurity enthusiast and researcher, I thoroughly appreciate the security work you all do), this layer of security is important, as it's clearly part of the security posture of the project; nixpkgs takes a different view of trustworthiness.

While I disagree with your conclusion that having this sort of security measure would "make volunteers run screaming", I would be interested in seeing statistics on the usage of these mechanisms in nixpkgs already. Nixpkgs is also definitely not focused on being a hobby distro, considering it's in use at many major companies around the world (just look at NixCon 2025's sponsor list).

To be clear, this isn't to say that all security measures are worthless. Enabling more usage of security features is a good thing, and it's something I know folks are looking into (but I'm not going to speak for them), so this may change in the future. However, I do agree with the consensus that for nixpkgs, requiring commit signing would be very bad overall for the ecosystem, despite its advantages. Also, I didn't see anything in your PR about "independent signed reproducible builds", but for a project the size of nixpkgs, this would also be a massive infrastructure undertaking for a third party. NixOS is very close to being fully reproducible (https://reproducible.nixos.org/), but we're not there yet.

In conclusion, while I agree that signing commits would be a good improvement, the downsides for nixpkgs are significant enough that I don't believe it would be a good move. It's definitely something to continue thinking about as nixpkgs and nix refine and work on their security practices, though. I would also love some more information about how Stagex does two-party hardware signing, as that sounds interesting as well. Thank you so much!

Edit: Also, I want to be very clear: I am not saying you're entirely wrong, or trying to disparage the very interesting and productive work that Stagex is doing. However, there were some (what I felt were) misconceptions I wanted to clear up.

vlovich123•13h ago
The reason I dislike this is this is the first thing in the article:

> in nixpkgs that would have allowed us to pwn pretty much the entire nix ecosystem and inject malicious code into nixpkg

OP provided a mechanism to stymie the attack. The counter from your position needs to be how the nix project otherwise solves this problem, not "this isn't the right approach, for hand-wavy reasons." Given the reasoning stated, OP has convinced me that Nix isn't actually serious about security, as this should be treated as an absolutely critical vulnerability with several hardening layers wrapped around it to prevent such techniques.

typpilol•11h ago
Their leadership and community is also a disaster
tennysont•8h ago
> in nixpkgs that would have allowed us to pwn pretty much the entire nix ecosystem and inject malicious code into nixpkg

Isn't that what happens when a build server or source code is compromised? I'm not sure if the existence of this exploit was egregious, but the blast radius seems normal for a build server exploit.

> how the nix project otherwise solves this problem

You can go into `/etc/nix/nix.conf` and remove `trusted-public-keys` so that you don't trust the output of the build servers. Then you just need to audit a particular commit from nixpkgs (and the source code of the packages that you build) and pin your config to that specific commit.
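
Concretely, that opt-out is a couple of lines in `nix.conf` plus a pin (a sketch: the setting names are real Nix options; the commit placeholder is yours to fill in after auditing):

```ini
# /etc/nix/nix.conf — stop trusting pre-built outputs entirely
substituters =            # fetch nothing from binary caches; build everything locally
trusted-public-keys =     # trust no cache signing keys

# then pin to the commit you audited, e.g. in a flake:
#   inputs.nixpkgs.url = "github:NixOS/nixpkgs/<audited-commit>";
```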

Otherwise, it seems like the solution is to harden the build system and source code control so that you can freely trust the source code without auditing it yourself. I'm not sure what else can be done.

If your threat model is that the 10+ nixpkg contributors are trustworthy but the github repo is untrustworthy, then git signing would make you safe.

Personally, I worry that a carelessly approved merge in nixpkgs or an upstream supply-chain attack is a bigger threat than a GitHub repo exploit (as described here), but I imagine that reasonable minds could disagree.

Regardless, I'm very excited to see that nix builds are almost fully reproducible. That seems great! It seems like this could potentially be the foundation on which a very secure distro is built.

lrvick•6h ago
You absolutely should never trust a centralized build server. Any security critical software distribution process should have all packages independently built, verified to have identical hashes, and signed by systems controlled by as many different trusted maintainers or third parties as possible.

Then any user can prove the binary they got was built faithfully from source due to those redundant build system signatures. We designed ReprOS for this purpose.

stagex has also been 100% deterministic, full source bootstrapped, and independently reproduced/signed by multiple maintainers since our first release with a small team of 10ish regular contributors, so it can be done.

rendaw•12h ago
> the major blocker with this is the lack of an ability for contributors to sign from mobile devices

Do you mean a significant number of nixpkgs contributors make nixpkgs PRs from their phones... via the github web editor?

That seems weird to me at face value... editing code is hard enough on a phone, but this is also for a linux distro (definitely not a mobile os today), not a web app or something else you could even preview on your phone.

Edit: Per https://docs.github.com/en/authentication/managing-commit-si... the web editor can/does sign commits...

Xylakant•10h ago
Mobile devices are not restricted to phones, but include tablets, some of which are pretty powerful and usable for code editing.

Note that the signature for the web interface is made with a GitHub owned key on your behalf and not with your personal key.

lrvick•8h ago
Sorry to be that guy, but if someone cannot afford a $10 bit of hardware for the most basic attempt at protecting others from being harmed by someone impersonating them... then they have no business being a trusted maintainer in a Linux distribution relied on for billions of dollars in infrastructure.

That would be like someone saying they could not afford a mask during COVID or something. It is hard to believe these people really exist. I could go find $10 in change on the ground at a few nearby fast-food pick-up windows, because I have done it. Many times. Free money!

Anyway, such people will be easy to bribe, easy to target, easy to steal from. Letting that sort of person have trust in a major OS is endangering them, and frankly irresponsible.

For anyone who makes excuses about being unable to produce a hardware signing device: of course let them contribute, but then have two confirmed real humans with hardware keys adopt, review, and sign that PR, and always have at least two confirmed real humans with hardware keys sign every change, both as code and as reproducible artifacts after.

We have taken in tons of drive-by unsigned contributions in stagex. This is no problem. We just pretend an AI bot wrote it, and require one maintainer to "adopt" the commit to sign it (maintaining attribution), and then a second maintainer reviews, and does a signed merge as usual.

lrvick•11h ago
My RFC was much earlier in 2018 https://github.com/NixOS/rfcs/pull/34

Lack of supply chain integrity controls as a means to reduce contribution friction to maximize the number of packages contributed is a perfectly valid strategy for a workstation distribution targeted at hobby developers.

Volunteers can do what they want, so that RFC convinced me stagex needed to exist for high security use cases, as Nix was explicitly not interested in those.

This is all fine. The reason I speak in a tone of frustration whenever Nix comes up is because, as a security auditor, I regularly see Nix used to protect billions of dollars in value, or human lives. Sysadmins uneducated on supply chain integrity just assume Nix does security basics and has some sort of web-of-trust solution, as even OG distros like Debian do, but that is just not the case.

Nix maintainers did not ask to be responsible for human lives and billions in value, but they are, and people will target them over it. I am afraid this is going to get people hurt.

https://github.com/jlopp/physical-bitcoin-attacks

Nix choosing low supply chain security to maximize the total number of packages endangers themselves and others every time someone ignorantly deploys nix for high value applications.

If nix chooses to maintain their status quo of no commit signing, no review signing, no developer key pinning, and no independent reproducible build signing, they need to LOUDLY warn people seeking to build high risk systems about these choices.

Even those basic supply chain controls which we use in stagex, are nowhere near enough, but they are the bare minimum for any distro seeking to be used in production.

tennysont•8h ago
Out of curiosity, why don't/didn't you start a new version of nixpkgs with hardened source? You could forgo the build server, forcing users to build from scratch (at least to start). You could leverage the plentiful, albeit less secure, packaging code in nixpkgs to quickly build out your hardened versions.

Effectively, you're building out an audited copy of nixpkgs on the same build engine, but with hardened configs. Write wrappers to validate git signatures when users update, and you got yourself a chain of trust on the source code distribution for your hardened nixpkg.

I'm sure you had reasons, I'm just interested to know your thought process.

lrvick•6h ago
I ultimately thought about what would be easier: a decade-long political fight to make massive changes to nix, a fork of it written solo to improve auditability and security, or starting over from the top with a design that checks every dream box I wanted from a Linux distro.

I had many RFCs that would have followed this rejected one if there was any change tolerance... so my fastest path to prove out my ideas for a distro with decentralized trust was to start one with that explicit goal.

If I wanted to make things maximally auditable and portable to different build engines, a published dead simple spec with multiple competing implementations that most software engineers already know how to write would be ideal. People could review an engine they use, or ensure all existing implementations on any operating system get identical results and are thus trusted that way. If it natively supports a ton of features to make deterministic builds wildly simpler, major bonus.

OCI/Containerfile was a check on all fronts, and some early maintainers and I riffed on design patterns and realized the OCI ecosystem already had specified multi party signing and verification, artifact integrity, smart layer by layer caching etc etc. This fit our dev experience and threat model perfectly and we could just skip implementing the package build and distribution layer and just start writing packages, like that day. None of us needed to learn or invent a new language or ask auditors to do so or fork nix ecosystem to have proper signing support and write a sane spec... that could be years of wheel spinning.

The time saved by choosing an existing widely used and implemented spec meant we were able to put all energy into full source bootstrapping, universal multi party hardware signing on every build, change, review, and reproduction. Just full source bootstrapped linux from scratch in containerfiles with OCI native multi party signing if all parties independently get the same oci hashes from local builds. Oh and we are going LLVM native like Chimera next week. Big sweeping changes like that are easy with our ultralight setup.

I would note that the features we need for deterministic builds in docker, the most popular OCI implementation, only landed a couple of months before we started stagex, and the full source bootstrapping work by the bootstrappable builds team only got a complete bootstrap for the first time a few months before that and Guix shortly after. Tons of reference material.

If stagex had started before 2022 I imagine we might have used a heavily trimmed down nix clone or tried to convince guix to adopt our threat model, which is much further along in supply chain security than nix but scheme would have been a very isolating choice. I think stagex got lucky starting at exactly the right time when two huge pieces of the puzzle were done for us.

otabdeveloper4•7h ago
NixOS is miles better from a security standpoint than any Debian or Red Hat already, so take what you can.
friendly_wizard•7h ago
Stagex is honestly the most interesting discovery in this thread