
Weaponizing Dependabot: Pwn Request at its finest

https://boostsecurity.io/blog/weaponizing-dependabot-pwn-request-at-its-finest
111•chha•8mo ago

Comments

woodruffw•8mo ago
The folks at Synaktiv had a nice detailed blog post on this same vector last year[1].

The bottom line with these kinds of things is that virtually nobody should be using `pull_request_target`, even with “trusted” machine actors like Dependabot. It’s a pretty terrible footgun.

[1]: https://www.synacktiv.com/en/publications/github-actions-exp...
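
To make the footgun concrete, here is a minimal sketch of the anti-pattern (the workflow and secret names are illustrative, not from the article): `pull_request_target` runs in the base repository's privileged context, yet this job checks out and executes the fork's code.

    # DANGEROUS sketch -- do not use. pull_request_target exposes the base
    # repo's secrets and a write-scoped GITHUB_TOKEN to a workflow that any
    # fork can trigger by opening a PR.
    name: ci
    on: pull_request_target
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
            with:
              # checking out the PR head turns the privileged trigger into
              # remote code execution for the fork's author
              ref: ${{ github.event.pull_request.head.sha }}
          - run: npm ci && npm test   # runs attacker-controlled install scripts
            env:
              DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}   # hypothetical secret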

gdubya•8mo ago
Fixed link: https://www.synacktiv.com/en/publications/github-actions-exp...
woodruffw•8mo ago
Thanks, I've fixed my comment as well.
radicalexponent•8mo ago
Step one in making a new repo is disabling Dependabot. Automate intentionally or get pwned; your choice.

Also, are automated version bumps really such a good thing? Many times I have wasted hours tracking down a bug that was introduced by bumping a library. Sometimes only the patch version of the library is different so it shouldn't be breaking anything... but it does! It is so much better to update intentionally, test, deploy. Though this does assume you have a modest number of dependencies, which pretty much excludes any kind of server-side JavaScript project.

woodruffw•8mo ago
I don’t disagree about automating intentionally, but it’s worth noting that Dependabot isn’t enabled by default: you have to explicitly configure it.

(The larger problem here isn’t even Dependabot per se, since all Dependabot does is fire PRs off. The problem is that people then try to automate the merging of those PRs, and end up shooting themselves in the foot with GHA’s more general footguns. It also doesn’t help that, until recently, GitHub’s documentation recommended using these kinds of dangerous triggers for automating Dependabot.)

lmm•8mo ago
> Dependabot isn’t enabled by default: you have to explicitly configure it.

Really? Dependabot runs on a number of my repositories without my having consciously enabled it.

woodruffw•8mo ago
> Really? Dependabot runs on a number of my repositories without my having consciously enabled it.

I've never experienced this. Do you have a `.github/dependabot.yml` file in your repository? That's how it's enabled.

(GitHub has muddied the water here a bit by having two related but distinct things with the same name: there's "Dependabot" the subject of this post, and then there's "Dependabot security updates" which are documented separately and appear to operate on a different cycle[1]. I don't know if this latter one is enabled by default or not, but the "normal" one is definitely disabled until you configure it.)

[1]: https://docs.github.com/en/code-security/dependabot/dependab...
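
For concreteness, a minimal `.github/dependabot.yml` that enables version updates looks roughly like this (the ecosystem and cadence are illustrative):

    # .github/dependabot.yml -- without this file, Dependabot version
    # updates do not run on the repository
    version: 2
    updates:
      - package-ecosystem: "npm"   # one entry per ecosystem you want bumped
        directory: "/"
        schedule:
          interval: "weekly"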

lmm•8mo ago
> I've never experienced this. Do you have a `.github/dependabot.yml` file in your repository? That's how it's enabled.

Nope. Example: https://github.com/m50d/tierney/pull/55

woodruffw•8mo ago
I'm at a loss to explain that! My only other guess is that you might have enabled Dependabot at some point further back in history, when it was a third-party integration, before it was directly owned by or integrated into GitHub.

Do you have a Dependabot entry in your account/org-level applications?

lmm•8mo ago
> Do you have a Dependbot entry in your account/org-level applications?

I don't think so. I have no memory of such a thing, and there is no org.

woodruffw•8mo ago
Okay, I have no idea then. I guess perhaps at one point Dependabot was enabled by default for some people, although that strikes me as a bad idea; I can only assume they've since disabled it, since I haven't seen this on any new repository I've made.
WorldMaker•8mo ago
My understanding, and it may be wrong, is that you may be grandfathered into an ancient personal, public-repo opt-out from a brief window of time just after GitHub excitedly announced the first/earliest version of Dependabot, hoping it would clean up some open-source supply-chain attacks, and just before GitHub realized Dependabot was a useful thing to charge an upcharge for (now under the umbrella known as GitHub Advanced Security). I believe GitHub auto-opted-in a lot of personal accounts with "significant" public repos (anything with a bunch of forks/stars, a package identifier visible in the dependency graphs of the Ruby or npm ecosystems, or any of the things that awarded "badges" like the Mars Rover badge or the Arctic Vault badge). There's a page buried in your personal account settings to turn off that ancient Dependabot option. (I'm on a work machine without access to my personal account at the moment, or I'd tell you directly where to find it.)
thayne•8mo ago
Are these repos forks of projects that already set up dependabot? Or maybe created from templates that included dependabot configuration?
lmm•8mo ago
Nope. My own original projects.
matijs•8mo ago
Could it have been a Dependabot security update? These are different from normal Dependabot updates and do not require `.github/dependabot.yml`.
bravesoul2•8mo ago
I always check changelogs of the dependencies. I treat a Dependabot PR as seriously as any PR.
bugtodiffer•8mo ago
changelogs, but not the code?
bravesoul2•8mo ago
That's a judgement call. It would be too much to review all code changes of all dependencies, unfortunately.

The corollary of reviewing all code on all dependency updates is that you should review all the code of any new deps you add, including the transformations by build processes that can make what's in the package manager differ from the source, and the same for all transitive dependencies.

Same with the language and runtime tooling.

It is too hard to be perfect!

robszumski•8mo ago
How do you scale this besides keeping the dep list short? Are you reading every item or just scanning for words like "deprecated" or "breaking change"?
ImPostingOnHN•8mo ago
How do you prevent exposing yourself to supply chain attacks like the tj-actions/changed-files one [0] if you don't?

I get your question regarding scaling, but that's the job: you can choose to outsource code to 3rd-party libraries, and eternal vigilance is the trade-off.

Assume your 3rd-party dependencies will try to attack you at some point: they could be malicious; they could be hacked; they could be issued a secret court order; they could be corrupted; they could be beaten up until they pushed a change.

Unless you have some sort of contract or other legal protection, and feel comfortable enforcing it, behave accordingly.

0: https://www.wiz.io/blog/github-action-tj-actions-changed-fil...

bravesoul2•8mo ago
It's not a huge part of the job to read every item. Looking at code changes in deps though is a whole other thing.
bobbiechen•8mo ago
"Is backward compatibility even possible?" -> https://digitalseams.com/blog/is-backward-compatibility-even...
nextaccountic•8mo ago
> After you finish apologizing to all your users for the recent breakages, you decide to make some performance improvements. Everyone loves faster code, right? This is a safe thing to do and surely nothing could go wrong.

> The new app now calculates the yearly tax summary almost instantaneously. It’s a huge improvement over the previous version, which used to take several seconds. You ship it.

> …and oops, one of your biggest partners has called you to complain. You’ve broken their website, which embeds your app as part of their business management suite. It turns out their code expected your calculation to take at least 5 seconds. Now that it’s faster, users encounter lots of errors and results that don’t make any sense.

> In frustration, you quit your job and return to the construction industry. At least here, no one expects a house upgrade without disruption.

Huge https://xkcd.com/1172/ vibes

esafak•8mo ago
You run CI on your Dependabot PRs, right? If so, how is it any different from doing the same thing manually?
udev4096•8mo ago
I have not seen any serious OSS project auto-merge version bumps. Most of them require manual approval, afaik.
robszumski•8mo ago
Curious if auto-merge philosophy changes between libraries and applications. The library definitely has a larger user base to break and a wider matrix of use-cases. IMO, auto-merge is more palatable for an application – do you agree? Especially when you're under SOC2/FedRAMP/etc.
Joker_vD•8mo ago
> Sometimes only the patch version of the library is different so it shouldn't be breaking anything... but it does!

Still have flashbacks from that one time when some dependency in our Go project dropped support for go1.18 in a patch version update, and we almost couldn't rebuild the project before Friday evening. Because obviously /s being literally unable to build the dependency is a backwards-compatible change.

duped•8mo ago
> Also, are automated version bumps really such a good thing?

Depends. Do you want to persist the belief that software requires constant maintenance because it's constantly changing? Then yes: automate your version bumps and do it as often as possible.

If you want software to be stable then only update versions when you have a bug.

jerf•8mo ago
Work has been fiddling around with Dependabot and force-enabling it everywhere (not auto-merging, just having it generate the PRs)... my feedback was that it is built on the presumption that All Updates Are Automatically Good, but this is transparently a false statement. In fact, Dependabot, by being so fast and automatic, may actually raise the probability of some project getting malicious code injected into it! Consider the timeline of malicious code injection:

    1. Malicious code is injected into some project.
    2. People have a chance to pick it up and put it into their code.
    3. The malicious code is found, publicized, and people react.
The faster you act after step 1, the better the chance that you pull the malicious code into your system before the world reaches step 3. Dependabot maximizes the speed of reaction after step 1. If I'm doing things somewhat more manually, then I'm much more likely to have the world find out about a corrupted dependency before I start incorporating it.

Now, just typing it out may make me sound more freaked out than I actually am. While supply-chain attacks are a problem, and they are getting worse and will continue to get worse, they are also still an exotic situation bubbling on the fringe of my awareness, as opposed to something I'm encountering regularly. For a reasonable project, the most likely outcome is that Dependabot widening this exposure window will still not have any actual real-world impact, and I'm aware of that. However, where this becomes relevant is if you are thinking of Dependabot and its workflow as a way of managing security risk, because you imagine updates as likely carrying security improvements and that's your primary purpose for using it (as opposed to other uses, such as keeping your system from slowly falling behind in dependencies until it calcifies and can't be updated without a huge degree of effort, a perfectly reasonable threat to which Dependabot is a sensible response). In that case, you also need to consider the ways in which it may actually increase your vulnerability to threats like supply-chain attacks.

And of course, projects do not start out with all their vulnerabilities on day one and then monotonically remove them. Many vulnerabilities are introduced later. For each such vulnerability, there is a first release that includes it, for which treating the update as simply a Good Thing was in fact not true, and anyone who pulled it in as quickly as possible made a mistake. Unfortunately, sometimes hard problems are just hard problems.

Though I have wondered about the idea of programming something like Dependabot, but telling it, hey, tell me about known CVEs and security releases, but otherwise, let things cook for 6 months before automatically building a PR for me to update. That would radically reduce this risk I'm outlining here.

(In fact, after pondering, I'm kind of reminded of how Debian and a lot of Linux distros work, with their staged Cutting Edge versus Testing versus Stable versus Long Term Support. Dependabot sort of builds in the presumption that you want that Cutting Edge level of updates... but in many cases, no, I really don't. I'd much rather build with Stable or Long Term Support for a lot of things, and dip into the riskier end of the pool for specific things if I need to.)

zingababba•8mo ago
Dependabot already differentiates between version updates and security updates:

https://docs.github.com/en/code-security/dependabot/dependab...

https://docs.github.com/en/code-security/dependabot/dependab...
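
The split shows up in configuration, too. A sketch based on those docs as I recall them (verify against the links above): setting `open-pull-requests-limit: 0` suppresses routine version-update PRs for an ecosystem, while the separately toggled security updates keep flowing.

    # Sketch: security updates only. "Dependabot security updates" are
    # enabled in the repo's settings; this config disables version-update
    # PRs for the ecosystem.
    version: 2
    updates:
      - package-ecosystem: "pip"   # illustrative
        directory: "/"
        schedule:
          interval: "daily"
        open-pull-requests-limit: 0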

jerf•8mo ago
I have only skimmed the docs, and I wouldn't be completely shocked if there's a "wait X weeks/months to notify about non-security updates" option, but I don't know about it if it's there. If it is there, hey, great! It won't be the first time I really wished that X did Y and found out that yes, it already does.
morningsam•8mo ago
Seems like this was requested in 2021 and is currently in beta testing for select ecosystems only: https://github.com/dependabot/dependabot-core/issues/3651
morningsam•8mo ago
>Though I have wondered about the idea of programming something like Dependabot, but telling it, hey, tell me about known CVEs and security releases, but otherwise, let things cook for 6 months before automatically building a PR for me to update.

Renovate can do both of these things already:

https://docs.renovatebot.com/configuration-options/#vulnerab...

https://docs.renovatebot.com/configuration-options/#minimumr...
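
A sketch of what that might look like in a `renovate.json5`, using the two options from the linked docs (the six-month figure mirrors the comment above):

    // Sketch: raise vulnerability-fix PRs promptly, but let ordinary
    // updates cook for ~6 months before a PR is opened.
    {
      extends: ["config:recommended"],
      minimumReleaseAge: "180 days",
      vulnerabilityAlerts: {
        enabled: true,
      },
    }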

latchkey•8mo ago
I'm going through SOC2 right now, and it essentially requires Dependabot to be enabled. I'll just do the minimal to pass, but it isn't something you can just disable in some cases.
sethhochberg•8mo ago
FWIW: this might be a suggestion of the specific audit team you’re working with or a requirement of one of the “follow our playbook and you’ll pass” vendors if you’re using one of those, but the SOC 2 on its own doesn’t really impose specific technical feature/control requirements like this.

I don’t have the exact exam language in front of me right now but the requirement would be something like “you have some process for learning about, assessing, and mitigating vulnerabilities in software dependencies that you use”.

Enabling an automated scan and version bump tool like dependabot is a common and easy way to prove your organization has those capabilities. But you could implement whatever process you want here and prove that you do it on the schedule you say you do in order to satisfy the audit requirement.

latchkey•8mo ago
True on all counts. But the lowest effort is "just turn on Dependabot", which is what I suspect most of the people trying to get past SOC2 will do (like myself).
gregwebs•8mo ago
I passed SOC2 with dependabot set to only perform security updates
latchkey•8mo ago
Yes, that is about all you need on it.
jt2190•8mo ago
> Sometimes only the patch version of the library is different so it shouldn't be breaking anything... but it does!

OT: Semantic versioning's major flaw is its presumption that the entire chain of package maintainers is extremely diligent about correctly bumping their version numbers. There have certainly been a few projects that are very good about this, but the vast majority are not.

The only solution to this that I know of is to test: Manually exercise the features with the dependencies, write automated tests that check your critical needs from a dependency, or identify and run the tests from the dependency’s test suite that cover your use cases. (Also: Contribute tests that you want in the suite!)

udev4096•8mo ago
Wait, how is it possible for anyone who opens a PR to issue Dependabot commands against the main repository? There should be some kind of authorization in place to prevent that, right? Should it not ignore any commands coming from outside users who do not have commit access?
bugtodiffer•8mo ago
it's a fork
bavarianbob•8mo ago
This is explained here:

> Here's the trick: github.actor does not always refer to the actual creator of the Pull Request. It's the user who caused the latest event that triggered the workflow.
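
In other words, the vulnerable gate looks something like this sketch (job and step names are illustrative): it keys on who caused the latest triggering event, not on whose commits the PR contains, so if Dependabot can be induced to perform that latest action on an attacker's PR, the check passes.

    on: pull_request_target   # privileged context, triggered by untrusted PRs
    permissions:
      contents: write
      pull-requests: write
    jobs:
      automerge:
        runs-on: ubuntu-latest
        # spoofable: github.actor is whoever caused the latest event,
        # which need not be the PR's author
        if: github.actor == 'dependabot[bot]'
        steps:
          - run: gh pr merge --auto --squash "${{ github.event.pull_request.html_url }}"
            env:
              GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}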

abhisek•8mo ago
Seems like a bit of a forced scenario to me. I have never seen anyone auto-merge Dependabot PRs using a GitHub Action.

Also, pull_request_target is a big red flag in any GHA workflow, and it's even highlighted in the GHA docs. It's like running untrusted code with all your secrets handed over to it.

woodruffw•8mo ago
> I have never seen anyone auto-merge Dependabot PRs using a GitHub Action.

For better or worse, it's a pattern that GitHub explicitly documents[1].

(An earlier version of this page also recommended `pull_request_target`, hence the long tail of public repositories that use it.)

[1]: https://docs.github.com/en/code-security/dependabot/working-...
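
A sketch in the spirit of that documented pattern (from memory, so check the linked page; the author-based gate and the patch-only condition here are defensive choices, not necessarily GitHub's exact example):

    name: dependabot-auto-merge
    on: pull_request   # plain trigger: fork code never sees secrets
    permissions:
      contents: write
      pull-requests: write
    jobs:
      automerge:
        runs-on: ubuntu-latest
        # gate on the PR's author, not on github.actor
        if: github.event.pull_request.user.login == 'dependabot[bot]'
        steps:
          - id: metadata
            uses: dependabot/fetch-metadata@v2
            with:
              github-token: "${{ secrets.GITHUB_TOKEN }}"
          - if: steps.metadata.outputs.update-type == 'version-update:semver-patch'
            run: gh pr merge --auto --merge "$PR_URL"
            env:
              PR_URL: ${{ github.event.pull_request.html_url }}
              GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}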

phyzome•8mo ago
What a weird and distracting AI-generated header image.
zingababba•8mo ago
Yeah, why is he saying "confused deputy" as he pulls the lever? Sounds like he knows he shouldn't even be in charge!
minitech•8mo ago
And what a weird and distracting AI-generated article. The bold-labelled numbered lists (e.g. “the evil plan”) are especially awkward.
phyzome•7mo ago
Might have been AI-generated, or just poorly written. I found it hard to read either way.
ImPostingOnHN•8mo ago
> So, they created workflows to auto-merge PRs if the creator was Dependabot. Seems safe, doesn't it?

No? In what world would it be safe to merge code, AI-generated or not, which you haven't reviewed, much less do it automatically without you even knowing it happened?

How do you know that you need the changes (whether bug or CVE)? How do you know the code isn't malicious? How do you know your systems are compatible with the change? How do you know you won't need to perform manual work during the migration?

carefulfungi•8mo ago
I think this response is understandable, but it also feels disingenuous. If a dependency update appears to be an automated version bump, and you trust your test base, and you trust the dependency author (did you read the whole of the dependency's dependency graph before importing it? You probably didn't.), it's only a small step to auto-merge. The horror here is that the bot isn't the author and the version update isn't legitimate. That's on GitHub's presentation and settings, in my opinion.

Relying on a human reviewer, regardless, is a weak guarantee. If your security posture is "Joe shouldn't make mistakes", you still have a weak security posture.

ImPostingOnHN•8mo ago
> ... and you trust the dependency author ... it's only a small step to auto-merge

I disagree for the reasons listed above, but let's focus for a moment on 3rd-party dependencies here, versus trusted ones. Given the numerous scenarios I listed above, it's a huge step from "using a 3rd party library" to "trusting the author of it".

We'd have to start with "...and you don't trust the author", because for most 3rd-party dependencies, the author has given us neither sufficient evidence to do so, nor sufficient recourse if that trust is violated.

> Relying on a human reviewer, regardless, is a weak guarantee.

Relying on countless random 3rd parties to not own you when they know people are pulling in their code as a dependency is a far, far worse guarantee. How would that strategy have protected someone against this supply-chain attack?

https://www.wiz.io/blog/github-action-tj-actions-changed-fil...

carefulfungi•8mo ago
For most projects (that don't comprehensively review their entire dependency graph's codebases), whether you started with foobar-1.0.1 or foobar-1.0.2 is just a coincidence of timing. They downloaded whatever the current version was when first needing foobar. Selecting 1.0.1 with little review but then declaring that all upgrades deserve deep review is a contradiction. I think it is this contradiction that leads people to auto-update dependencies.

The problem isn't the auto-merging 1.0.2 - it's the lack of attestation in the PR that appears to be code from the foobar authors who were trusted in version 1.0.1.

ImPostingOnHN•8mo ago
> Selecting 1.0.1 with little review

Why would you choose to give little review to a dependency?

> The problem isn't the auto-merging 1.0.2 - it's the lack of attestation in the PR that appears to be code from the foobar authors who were trusted in version 1.0.1.

The authors were never trusted and never will be. What was trusted was the code at the commit hash for the first time 1.0.1 got tagged, and now a bot is saying you should move away from that trusted code.

Is that a good idea? That depends on, among other things: if you need to; and if you trust the new code.

The foobar author could have intentionally or unintentionally included an exploit, or they could have had their account hacked and someone else included one on their behalf, or just changed the code behind an existing tag (see my previous comments for a recent example).

I get what you're saying about this being overwhelming. Without it, though, we've seen it's just security theater, because your code is only as strong as its weakest link. More eyeballs on a given release/commit also means more people looking out for something nefarious, which is counteracted by a shorter time-since-release. Maybe multiple AI agents will make it easier.

carefulfungi•8mo ago
> Why would you choose to give little review to a dependency?

Given finite developer hours, what activity has the highest security impact per hour - is it reviewing the dependency graph? That's not the choice most projects are making. Maybe they are wrong. Or maybe they know where to spend their time for better impact. I dunno...

I have worked in commercial codebases that vendored 100% of their dependencies (including kernel and driver source) and reviewed large swathes of those dependencies carefully. I'm absolutely not dismissive of this. I think we agree more people should be doing it.

However, over the decades, I've seen very few projects take this approach. Many choose to trust third-party code (naively, as you point out!). If that's the reality, I think we should continue to work on improving provenance, automated signature verification, and other tooling, so we can at least better know that if we choose to trust foobar, it's actually foobar who is distributing foobar 1.0.2.

The AI comment is provocative - can future-AI find vulnerabilities better than future-AI can inject hard-to-find vulnerabilities? And how do we know our AI reviewers themselves aren't hacked... a horrible twist on https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref....

snickerbockers•8mo ago
If people are this obsessed with making the little version numbers of their dependencies as big as possible, why don't they just make downloading the dependencies part of the build system? Or better yet, make the user supply their own dependencies and link them in as shared libraries? This problem was solved at least 30 years ago...
TZubiri•8mo ago
Yesterday there was an article on some Package URL proposal, and I wrote a comment about these super-deep supply chains built on free volunteer work, and how it's becoming ever more work just to keep a handle on them. But HN didn't allow me to post it.

When you need to add dependencies that manage packages to avoid version conflicts, when you need to add dependencies that check for vulnerabilities in your dependencies, that's when you know you are in too deep.

And it's not like these things are necessary: for the big majority of systems that have less than 100k users, you can just build systems that run for less than $200 per month. I've started working with an empty requirements.txt and an empty package.json, and everything is fine.

Before you call me a dinosaur and claim that I might as well be writing assembly: first, I do depend on an operating system and a programming language, and maybe a database if I need the convenience; there's such a thing as nuance. And second, I'm using cutting-edge tech; these things (Linux, Python, MySQL) have existed for less than 50 years! They are in their infancy!

But man, I'm really banking on the ethos of building on top of the shoulders of "giants" falling soon. I hate to be on the side of hackers, but it's inevitable that something bad will happen if you keep on doing shit like this.

It's like having a friend or group of people that constantly have orgies with random people, and they develop a huge culture around using different types of condoms and prophylactics of some kind or other, giving tips on what works and what doesn't, and switching strategies when one of their friends gets an STD, but still not quite ever abandoning the idea of having massive orgies.

All the while, the cost of developing software is dropping to zero, almost no one uses the GPL, and, surprise surprise, companies build proprietary systems on top of 95% software of the commons. It sucks to compete with companies and programmers that just npm install 1000 things. Not sure what I'm banking on; maybe a slew of lawsuits that increases the liability of writing bad software and incentivizes actually understanding the shit that we build?

Gnight HN.

lrvick•8mo ago
Mandate signed commits and signed reviews.

If someone wants to merge a bot PR or any other PR by an untrusted third party, they will have to "adopt" the bot commit as their own, sign the commit locally, and then wait for a second human reviewer to do a signed merge.

Not signing code means it could be tampered with in all sorts of ways. Get a Nitrokey, set up git to sign with it, and have your team do the same.

msgodel•8mo ago
Ah the classic "do something to prevent outside organizations from having agency in your project" (pin dependency versions) followed by "delegate the agency to an automation because agency comes with responsibilities."

It reminds me of the GitHub bot someone posted here at the beginning of the year (it was hard to tell if it was satire) that just automatically approved (not merged, just approved) PRs to get around a technicality in some (supposedly machine-verified) infosec standard that required two human approvals on every PR.