
Weaponizing Dependabot: Pwn Request at its finest

https://boostsecurity.io/blog/weaponizing-dependabot-pwn-request-at-its-finest
111•chha•8mo ago

Comments

woodruffw•8mo ago
The folks at Synacktiv had a nice detailed blog post on this same vector last year[1].

The bottom line with these kinds of things is that virtually nobody should be using `pull_request_target`, even with “trusted” machine actors like Dependabot. It’s a pretty terrible footgun.

[1]: https://www.synacktiv.com/en/publications/github-actions-exp...
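To see why it's such a footgun, here is a minimal sketch of the dangerous shape (workflow name, steps, and the secret are illustrative, not from the article): `pull_request_target` runs in the context of the base repository, with access to secrets and a read/write token, so a workflow that also checks out the PR head ends up executing untrusted code with those privileges.

    # Sketch of the vulnerable pattern (names illustrative)
    name: ci-privileged
    on: pull_request_target        # privileged: base-repo secrets, writable token
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
            with:
              # Checks out the untrusted PR head into the privileged context
              ref: ${{ github.event.pull_request.head.sha }}
          - run: npm install && npm test   # attacker-controlled scripts run here
            env:
              SOME_SECRET: ${{ secrets.SOME_SECRET }}   # hypothetical secret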

gdubya•8mo ago
Fixed link: https://www.synacktiv.com/en/publications/github-actions-exp...
woodruffw•8mo ago
Thanks, I've fixed my comment as well.
radicalexponent•8mo ago
Step one in making a new repo is disabling Dependabot. Automate intentionally or get pwned; your choice.

Also, are automated version bumps really such a good thing? Many times I have wasted hours tracking down a bug that was introduced by bumping a library. Sometimes only the patch version of the library is different, so it shouldn't be breaking anything... but it does! It is so much better to update intentionally, test, and deploy. Though this does assume you have a modest number of dependencies, which pretty much excludes any kind of server-side JavaScript project.

woodruffw•8mo ago
I don’t disagree about automating intentionally, but it’s worth noting that Dependabot isn’t enabled by default: you have to explicitly configure it.

(The larger problem here isn’t even Dependabot per se, since all Dependabot does is fire PRs off. The problem is that people then try to automate the merging of those PRs, and end up shooting themselves in the foot with GHA’s more general footguns. It also doesn’t help that, until recently, GitHub’s documentation recommended using these kinds of dangerous triggers for automating Dependabot.)

lmm•8mo ago
> Dependabot isn’t enabled by default: you have to explicitly configure it.

Really? Dependabot runs on a number of my repositories without my having consciously enabled it.

woodruffw•8mo ago
> Really? Dependabot runs on a number of my repositories without my having consciously enabled it.

I've never experienced this. Do you have a `.github/dependabot.yml` file in your repository? That's how it's enabled.

(GitHub has muddied the water here a bit by having two related but distinct things with the same name: there's "Dependabot" the subject of this post, and then there's "Dependabot security updates" which are documented separately and appear to operate on a different cycle[1]. I don't know if this latter one is enabled by default or not, but the "normal" one is definitely disabled until you configure it.)

[1]: https://docs.github.com/en/code-security/dependabot/dependab...
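For reference, the "normal" version updates are enabled by committing a minimal `.github/dependabot.yml`, something like the sketch below (ecosystem and schedule values are illustrative):

    version: 2
    updates:
      - package-ecosystem: "npm"   # which package manager to watch
        directory: "/"             # where the manifest lives
        schedule:
          interval: "weekly"       # how often to open version-update PRs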

lmm•8mo ago
> I've never experienced this. Do you have a `.github/dependabot.yml` file in your repository? That's how it's enabled.

Nope. Example: https://github.com/m50d/tierney/pull/55

woodruffw•8mo ago
I'm at a loss to explain that! My only other guess is that you might have enabled Dependabot at some point further back in history, when it was a third-party integration and directly owned by or integrated into GitHub.

Do you have a Dependabot entry in your account/org-level applications?

lmm•8mo ago
> Do you have a Dependabot entry in your account/org-level applications?

I don't think so. I have no memory of such a thing, and there is no org.

woodruffw•8mo ago
Okay, I have no idea then. I guess perhaps at one point Dependabot was enabled by default for some people, although that strikes me as a bad idea and I can only assume they've disabled it since then, since I haven't seen this on any new repository I've made.
WorldMaker•8mo ago
My understanding, and it may be wrong, is that you may be grandfathered into an ancient Personal, Public Repo opt-out from a brief window of time just after GitHub excitedly announced the first/earliest version of Dependabot, hoping it would clean up some open source supply chain attacks, and just before GitHub realized Dependabot was a useful thing to charge an upcharge for (now under the umbrella known as GitHub Advanced Security). I believe GitHub auto-opted in a lot of personal accounts with "significant" public repos (anything with a bunch of forks/stars, a package identifier visible in the dependency graphs of the Ruby or npm ecosystems, or any of the things that awarded "badges" like the Mars Rover badge or the Arctic Vault badge). There's a page buried in your Personal Account Settings to turn off that ancient Dependabot option. (I'm on a work machine without access to my personal account at the moment or I'd tell you exactly where to find it.)
thayne•8mo ago
Are these repos forks of projects that already set up dependabot? Or maybe created from templates that included dependabot configuration?
lmm•8mo ago
Nope. My own original projects.
matijs•8mo ago
Could it have been a Dependabot security update? These are different from normal Dependabot updates and do not require `.github/dependabot.yml`.
bravesoul2•8mo ago
I always check the changelogs of the dependencies. I treat a Dependabot PR as seriously as any PR.
bugtodiffer•8mo ago
changelogs, but not the code?
bravesoul2•8mo ago
That's a judgement call. It would be too much to review all code changes of all dependencies, unfortunately.

The corollary of reviewing all code on every dependency update is that you should also review all the code of any new deps you add, including the transformations applied by build processes (what's in the package manager may differ from what's in the repo), and the same for all transitive dependencies.

Same with the language and runtime tooling.

It is too hard to be perfect!

robszumski•8mo ago
How do you scale this besides keeping the dep list short? Are you reading every item or just scanning for words like "deprecated" or "breaking change"?
ImPostingOnHN•8mo ago
How do you prevent exposing yourself to supply chain attacks like the tj-actions/changed-files one [0] if you don't?

I get your question regarding scaling, but that's the job: you can choose to outsource code to 3rd-party libraries, and eternal vigilance is the trade-off.

Assume your 3rd-party dependencies will try to attack you at some point: they could be malicious; they could be hacked; they could be issued a secret court order; they could be corrupted; they could be beaten up until they pushed a change.

Unless you have some sort of contract or other legal protection and feel comfortable enforcing it, behave accordingly.

0: https://www.wiz.io/blog/github-action-tj-actions-changed-fil...

bravesoul2•8mo ago
It's not a huge part of the job to read every item. Looking at code changes in deps though is a whole other thing.
bobbiechen•8mo ago
"Is backward compatibility even possible?" -> https://digitalseams.com/blog/is-backward-compatibility-even...
nextaccountic•8mo ago
> After you finish apologizing to all your users for the recent breakages, you decide to make some performance improvements. Everyone loves faster code, right? This is a safe thing to do and surely nothing could go wrong.

> The new app now calculates the yearly tax summary almost instantaneously. It’s a huge improvement over the previous version, which used to take several seconds. You ship it.

> …and oops, one of your biggest partners has called you to complain. You’ve broken their website, which embeds your app as part of their business management suite. It turns out their code expected your calculation to take at least 5 seconds. Now that it’s faster, users encounter lots of errors and results that don’t make any sense.

> In frustration, you quit your job and return to the construction industry. At least here, no one expects a house upgrade without disruption.

Huge https://xkcd.com/1172/ vibes

esafak•8mo ago
You run CI on your Dependabot PRs, right? If so, how is it any different from doing the same thing manually?
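(For context: "running CI on Dependabot PRs" usually means a plain `pull_request` workflow like the sketch below, with illustrative build steps. Unlike the `pull_request_target` trigger discussed above, it runs in an unprivileged context, without base-repository secrets.)

    name: ci
    on: pull_request               # unprivileged: no base-repo secrets
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4   # checks out the PR merge commit
          - run: npm ci && npm test     # illustrative build-and-test step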
udev4096•8mo ago
I have not seen any serious OSS project auto-merge version bumps. Most of them require manual approval for it, afaik.
robszumski•8mo ago
Curious if auto-merge philosophy changes between libraries and applications. The library definitely has a larger user base to break and a wider matrix of use-cases. IMO, auto-merge is more palatable for an application – do you agree? Especially when you're under SOC2/FedRAMP/etc.
Joker_vD•8mo ago
> Sometimes only the patch version of the library is different so it shouldn't be breaking anything... but it does!

Still have flashbacks from that one time a dependency in our Go project dropped support for go1.18 in a patch version update, and we almost couldn't rebuild the project before Friday evening. Because obviously /s being literally unable to build the dependency is a backwards-compatible change.

duped•8mo ago
> Also, are automated version bumps really such a good thing?

Depends. Do you want to persist the belief that software requires constant maintenance because it's constantly changing? Then yes: automate your version bumps and do it as often as possible.

If you want software to be stable then only update versions when you have a bug.

jerf•8mo ago
Work has been fiddling around with Dependabot and force-enabling it everywhere (not auto-merging, just having it generate the PRs)... my feedback was that it is built on the presumption that All Updates Are Automatically Good, but this is transparently a false statement. In fact, Dependabot, by being so fast and automatic, may actually raise the probability of some project getting malicious code injected into it! Consider the timeline of malicious code injection:

    1. Malicious code is injected into some project.
    2. People have a chance to pick it up and put it into their code.
    3. The malicious code is found, publicized, and people react.
The faster you act after step 1, the more likely you are to pull it into your system before the world reaches step 3. Dependabot maximizes the speed of reaction after step 1. If I'm doing things somewhat more manually, then I'm much more likely to experience the world finding out about a corrupted dependency before I start incorporating it.

Now, just typing it out, it may sound like I'm more freaked out than I actually am. While supply-chain attacks are a problem, are getting worse, and will continue to get worse, they are also still an exotic situation bubbling on the fringe of my awareness, as opposed to something I'm encountering regularly. For a reasonable project, the most likely outcome is that Dependabot enlarging this exposure window will still not have any actual real-world impact, and I'm aware of that. However, it becomes relevant if you are thinking of Dependabot and its workflow as a way of managing security risk, because you imagine updates as likely carrying security improvements and that's your primary purpose for using it (as opposed to other uses, such as keeping your system from slowly falling behind in dependencies until it calcifies and can't be updated without a huge degree of effort, a perfectly reasonable threat to which Dependabot is a sensible response). In that case you also need to consider the ways in which it may actually increase your vulnerability to threats like supply-chain attacks.

And of course, projects do not start out with all their vulnerabilities on day one and then monotonically remove them. Many vulnerabilities are introduced later. For each such vulnerability, there is a first release that includes it, for which treating the update as if it were just a Good Thing was in fact not true, and anyone who pushed it in as quickly as possible made a mistake. Unfortunately, sometimes hard problems are just hard problems.

Though I have wondered about the idea of programming something like Dependabot, but telling it, hey, tell me about known CVEs and security releases, but otherwise, let things cook for 6 months before automatically building a PR for me to update. That would radically reduce this risk I'm outlining here.

(In fact, after pondering, I'm kind of reminded of how Debian and a lot of Linux distros work, with their staged Cutting Edge versus Testing versus Stable versus Long Term Support. Dependabot sort of builds in the presumption that you want that Cutting Edge level of updates... but in many cases, no, I really don't. I'd much rather build with Stable or Long Term Support for a lot of things, and dip into the riskier end of the pool for specific things if I need to.)

zingababba•8mo ago
Dependabot already differentiates between version updates and security updates:

https://docs.github.com/en/code-security/dependabot/dependab...

https://docs.github.com/en/code-security/dependabot/dependab...

jerf•8mo ago
I have only skimmed the docs, and I wouldn't be completely shocked if there's a "wait X weeks/months to notify about non-security updates" option, but I don't know about it if it's there. If it is there, hey, great! It won't be the first time I really wished that X did Y and found out that yes, it already does.
morningsam•8mo ago
Seems like this was requested in 2021 and is currently in beta testing for select ecosystems only: https://github.com/dependabot/dependabot-core/issues/3651
morningsam•8mo ago
>Though I have wondered about the idea of programming something like Dependabot, but telling it, hey, tell me about known CVEs and security releases, but otherwise, let things cook for 6 months before automatically building a PR for me to update.

Renovate can do both of these things already:

https://docs.renovatebot.com/configuration-options/#vulnerab...

https://docs.renovatebot.com/configuration-options/#minimumr...
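As a sketch, a `renovate.json` combining the two (assuming the `minimumReleaseAge` and `vulnerabilityAlerts` options those pages document; values are illustrative) might look like:

    {
      "extends": ["config:recommended"],
      "minimumReleaseAge": "180 days",
      "vulnerabilityAlerts": {
        "enabled": true
      }
    }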

latchkey•8mo ago
I'm going through SOC2 right now, and it essentially requires Dependabot to be enabled. I'll just do the minimum to pass, but it isn't something you can just disable in some cases.
sethhochberg•8mo ago
FWIW: this might be a suggestion of the specific audit team you're working with, or a requirement of one of the "follow our playbook and you'll pass" vendors if you're using one, but SOC 2 on its own doesn't really impose specific technical feature/control requirements like this.

I don’t have the exact exam language in front of me right now but the requirement would be something like “you have some process for learning about, assessing, and mitigating vulnerabilities in software dependencies that you use”.

Enabling an automated scan and version bump tool like dependabot is a common and easy way to prove your organization has those capabilities. But you could implement whatever process you want here and prove that you do it on the schedule you say you do in order to satisfy the audit requirement.

latchkey•8mo ago
True on all counts. But the lowest effort is "just turn on Dependabot", which is what I suspect most of the people trying to get past SOC2 will do (like myself).
gregwebs•8mo ago
I passed SOC2 with dependabot set to only perform security updates
latchkey•8mo ago
Yes, that is about all you need on it.
jt2190•8mo ago
> Sometimes only the patch version of the library is different so it shouldn't be breaking anything... but it does!

OT: Semantic versioning's major flaw is the presumption that the entire chain of package maintainers is extremely diligent about correctly bumping their version numbers. There have certainly been a few projects that are very good about this, but the vast majority are not.

The only solution to this that I know of is to test: Manually exercise the features with the dependencies, write automated tests that check your critical needs from a dependency, or identify and run the tests from the dependency’s test suite that cover your use cases. (Also: Contribute tests that you want in the suite!)

udev4096•8mo ago
Wait, how is it possible for anyone who opens a PR to issue Dependabot commands on the main repository? There should be some kind of authorization in place to prevent that, right? Shouldn't it ignore commands coming from outside users who do not have commit access?
bugtodiffer•8mo ago
It's a fork.
bavarianbob•8mo ago
This is explained here:

> Here's the trick: github.actor does not always refer to the actual creator of the Pull Request. It's the user who caused the latest event that triggered the workflow.
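In workflow terms, the fragile pattern looks roughly like this sketch (the merge step follows the shape of GitHub's documented auto-merge examples; names are illustrative). The `if:` gate keys on whoever caused the latest event, which need not be the author of the pull request:

    on: pull_request_target
    permissions:
      contents: write
      pull-requests: write
    jobs:
      automerge:
        # Fragile: github.actor reflects the latest event's actor, which an
        # attacker can arrange to be dependabot[bot] on their own PR
        if: github.actor == 'dependabot[bot]'
        runs-on: ubuntu-latest
        steps:
          - run: gh pr merge --auto --merge "$PR_URL"
            env:
              PR_URL: ${{ github.event.pull_request.html_url }}
              GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}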

abhisek•8mo ago
Seems like a bit of a forced scenario to me. I have never seen anyone auto-merge Dependabot PRs automatically using GitHub Actions.

Also, pull_request_target is a big red flag in any GHA workflow and is even highlighted in the GHA docs. It's like running untrusted code with all your secrets handed over to it.

woodruffw•8mo ago
> I have never seen anyone auto-merge Dependabot PRs automatically using GitHub Actions.

For better or worse, it's a pattern that GitHub explicitly documents[1].

(An earlier version of this page also recommended `pull_request_target`, hence the long tail of public repositories that use it.)

[1]: https://docs.github.com/en/code-security/dependabot/working-...

phyzome•8mo ago
What a weird and distracting AI-generated header image.
zingababba•8mo ago
Yeah, why is he saying "confused deputy" as he pulls the lever. Sounds like he knows he shouldn't even be in charge!
minitech•8mo ago
And what a weird and distracting AI-generated article. The bold-labelled numbered lists (e.g. “the evil plan”) are especially awkward.
phyzome•7mo ago
Might have been AI-generated, or just poorly written. I found it hard to read either way.
ImPostingOnHN•8mo ago
> So, they created workflows to auto-merge PRs if the creator was Dependabot. Seems safe, doesn't it?

No? In what world would it be safe to merge code, AI-generated or not, which you haven't reviewed, much less do it automatically without you even knowing it happened?

How do you know that you need the changes (whether bug or CVE)? How do you know the code isn't malicious? How do you know your systems are compatible with the change? How do you know you won't need to perform manual work during the migration?

carefulfungi•8mo ago
I think this response is understandable. But it also feels disingenuous. If a dependency update appears to be an automated version bump, and you trust your test base, and you trust the dependency author (did you read the dependency's whole dependency graph before importing it? You probably didn't.), it's only a small step to auto-merge. The horror here is that the bot isn't the author and the version update isn't legitimate. That's on GitHub's presentation and settings, in my opinion.

Relying on a human reviewer, regardless, is a weak guarantee. If your security posture is "Joe shouldn't make mistakes", you still have a weak security posture.

ImPostingOnHN•8mo ago
> ... and you trust the dependency author ... it's only a small step to auto-merge

I disagree for the reasons listed above, but let's focus for a moment on 3rd-party dependencies here, versus trusted ones. Given the numerous scenarios I listed above, it's a huge step from "using a 3rd party library" to "trusting the author of it".

We'd have to start with "...and you don't trust the author", because for most 3rd-party dependencies, the author has given us neither sufficient evidence to do so, nor sufficient recourse if that trust is violated.

> Relying on a human reviewer, regardless, is a weak guarantee.

Relying on countless random 3rd parties not to own you, when they know people are pulling in their code as a dependency, is a far, far worse guarantee. How would that strategy have protected someone against this supply-chain attack?:

https://www.wiz.io/blog/github-action-tj-actions-changed-fil...

carefulfungi•8mo ago
For most projects (that don't comprehensively review their entire dependency graph's codebases), whether you started with foobar-1.0.1 or foobar-1.0.2 is just a coincidence of timing. They downloaded whatever the current version was when first needing foobar. Selecting 1.0.1 with little review but then declaring that all upgrades deserve deep review is a contradiction. I think it is this contradiction that leads people to auto-update dependencies.

The problem isn't auto-merging 1.0.2 - it's the lack of attestation in a PR that appears to be code from the foobar authors who were trusted in version 1.0.1.

ImPostingOnHN•8mo ago
> Selecting 1.0.1 with little review

Why would you choose to give little review to a dependency?

> The problem isn't the auto-merging 1.0.2 - it's the lack of attestation in the PR that appears to be code from the foobar authors who were trusted in version 1.0.1.

The authors were never trusted and never will be. What was trusted was the code at the commit hash from the first time 1.0.1 was tagged, and now a bot is saying you should move away from that trusted code.

Is that a good idea? That depends on, among other things, whether you need to and whether you trust the new code.

The foobar author could have intentionally or unintentionally included an exploit, or they could have had their account hacked and someone else included one on their behalf, or just changed the code behind an existing tag (see my previous comments for a recent example).

I get what you're saying about this being overwhelming. Without it, though, we've seen that the result is just security theater, because your code is only as strong as its weakest link. More eyeballs on a given release/commit also means more people looking out for something nefarious, which is counteracted by a shorter time-since-release. Maybe multiple AI agents will make it easier.

carefulfungi•8mo ago
> Why would you choose to give little review to a dependency?

Given finite developer hours, what activity has the highest security impact per hour - is it reviewing the dependency graph? That's not the choice most projects are making. Maybe they are wrong. Or maybe they know where to spend their time for better impact. I dunno...

I have worked in commercial codebases that vendored 100% of their dependencies (including kernel and driver source) and reviewed large swathes of those dependencies carefully. I'm absolutely not dismissive of this. I think we agree more people should be doing it.

However, over the decades, I've seen very few projects take this approach. Many choose to trust third-party code (naively, as you point out!). If that's the reality, I think we should continue to work on improving provenance, automated signature verification, and other tooling so we can at least better know that if we choose to trust foobar, it's actually foobar who is distributing foobar 1.0.2.

The AI comment is provocative - can future-AI find vulnerabilities better than future-AI can inject hard-to-find vulnerabilities? And how do we know our AI reviewers themselves aren't hacked... a horrible twist on https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref....

snickerbockers•8mo ago
If people are this obsessed with making the little version numbers of their dependencies as big as possible, why don't they just make downloading the dependencies part of the build system? Or better yet, make the user supply their own dependencies and link them in as shared libraries? This problem was solved at least 30 years ago...
TZubiri•8mo ago
Yesterday there was an article on some Package URL proposal, and I wrote a comment about these super-deep supply chains built on free volunteer work, and how it's becoming ever more work just to keep a handle on them. But HN didn't allow me to post it.

When you need to add dependencies that manage packages to avoid version conflicts, when you need to add dependencies that check for vulnerabilities in your dependencies, that's when you know you are in too deep.

And it's not like these things are necessary: for the big majority of systems, those with fewer than 100k users, you can just build systems that run for less than $200 per month. I've started working with an empty requirements.txt and an empty package.json; everything is fine.

Before you call me a dinosaur and claim that I might as well be writing assembly: first, I do depend on an operating system and a programming language, and maybe a database if I need the convenience; there's such a thing as nuance. And second, I'm using cutting-edge tech; these things (Linux, Python, MySQL) have existed for less than 50 years! They are in their infancy!

But man, I'm really banking on the ethos of building on the shoulders of "giants" falling out of fashion soon. I hate to be on the side of the hackers, but it's inevitable that something bad will happen if you keep doing shit like this.

It's like having a friend or group of friends who constantly have orgies with random people, and who develop a huge culture around using different types of condoms and other prophylactics, giving tips on what works and what doesn't, and switching strategies when one of them gets an STD, but still never quite abandoning the idea of having massive orgies.

All the while, the cost of developing software is dropping to zero, almost no one uses the GPL, and, surprise surprise, companies build proprietary systems that are 95% software of the commons. It sucks to compete with companies and programmers that just npm install 1000 things. I'm not sure what I'm banking on; maybe a slew of lawsuits that increases the liability of writing bad software and incentivizes actually understanding the shit that we build?

Gnight HN.

lrvick•8mo ago
Mandate signed commits and signed reviews.

If someone wants to merge a bot PR or any other PR by an untrusted third party, they will have to "adopt" the bot commit as their own, sign the commit locally, and then wait for a second human reviewer to do a signed merge.

Not signing code means it could be tampered with in all sorts of ways. Get a Nitrokey and set up git to sign with it, and have your team do the same.
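One way to wire that up locally is SSH-based commit signing (a sketch; the key path is illustrative, e.g. a FIDO2/Nitrokey-backed `*_sk` key), with enforcement then coming from branch protection that requires signed commits and reviews:

    # Use SSH keys (rather than GPG) for signing
    git config --global gpg.format ssh
    # Point git at the hardware-backed public key (path illustrative)
    git config --global user.signingkey ~/.ssh/id_ed25519_sk.pub
    # Sign every commit by default
    git config --global commit.gpgsign true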

msgodel•8mo ago
Ah the classic "do something to prevent outside organizations from having agency in your project" (pin dependency versions) followed by "delegate the agency to an automation because agency comes with responsibilities."

It reminds me of the GitHub bot someone posted here at the beginning of the year (it was hard to tell if it was satire) that just automatically approved (not merged, just approved) PRs, to get around a technicality in some (supposedly machine-verified) infosec standard that required two human approvals on every PR.