Nobody has a personality anymore: we are products with labels

https://www.freyaindia.co.uk/p/nobody-has-a-personality-anymore
109•drankl•3h ago•64 comments

Bitchat – A decentralized messaging app that works over Bluetooth mesh networks

https://github.com/jackjackbits/bitchat
47•ananddtyagi•1h ago•30 comments

Building the Rust Compiler with GCC

https://fractalfir.github.io/generated_html/cg_gcc_bootstrap.html
84•todsacerdoti•3h ago•3 comments

Intel's Lion Cove P-Core and Gaming Workloads

https://chipsandcheese.com/p/intels-lion-cove-p-core-and-gaming
57•zdw•2h ago•0 comments

Show HN: I wrote a "web OS" based on the Apple Lisa's UI, with 1-bit graphics

https://alpha.lisagui.com/
247•ayaros•6h ago•78 comments

Centaur: A Controversial Leap Towards Simulating Human Cognition

https://insidescientific.com/centaur-a-controversial-leap-towards-simulating-human-cognition/
12•CharlesW•2h ago•4 comments

I extracted the safety filters from Apple Intelligence models

https://github.com/BlueFalconHD/apple_generative_model_safety_decrypted
253•BlueFalconHD•5h ago•155 comments

Data on AI-related Show HN posts

https://ryanfarley.co/ai-show-hn-data/
218•rfarley04•2d ago•128 comments

Jane Street barred from Indian markets as regulator freezes $566 million

https://www.cnbc.com/2025/07/04/indian-regulator-bars-us-trading-firm-jane-street-from-accessing-securities-market.html
233•bwfan123•11h ago•130 comments

Swedish Campground: "There are too many Apples on the screen!" (1983)

https://www.folklore.org/Swedish_Campground.html
15•CharlesW•1h ago•2 comments

There's a COMPUTER inside my DS flashcart [video]

https://www.youtube.com/watch?v=uq0pJmd7GAA
11•surprisetalk•1h ago•0 comments

Opencode: AI coding agent, built for the terminal

https://github.com/sst/opencode
123•indigodaddy•7h ago•29 comments

Get the location of the ISS using DNS

https://shkspr.mobi/blog/2025/07/get-the-location-of-the-iss-using-dns/
256•8organicbits•12h ago•75 comments

Functions Are Vectors (2023)

https://thenumb.at/Functions-are-Vectors/
149•azeemba•9h ago•79 comments

I don't think AGI is right around the corner

https://www.dwarkesh.com/p/timelines-june-2025
139•mooreds•4h ago•163 comments

A non-anthropomorphized view of LLMs

http://addxorrol.blogspot.com/2025/07/a-non-anthropomorphized-view-of-llms.html
92•zdw•2h ago•74 comments

Backlog.md – Markdown‑native Task Manager and Kanban visualizer for any Git repo

https://github.com/MrLesk/Backlog.md
77•mrlesk•5h ago•16 comments

Lessons from creating my first text adventure

https://entropicthoughts.com/lessons-from-creating-first-text-adventure
26•kqr•2d ago•1 comment

Crypto 101 – Introductory course on cryptography

https://www.crypto101.io/
23•pona-a•4h ago•2 comments

Curzio Malaparte's Shock Tactics

https://www.newyorker.com/books/under-review/curzio-malapartes-shock-tactics
4•mitchbob•3d ago•2 comments

Corrected UTF-8 (2022)

https://www.owlfolio.org/development/corrected-utf-8/
38•RGBCube•3d ago•26 comments

Async Queue – One of my favorite programming interview questions

https://davidgomes.com/async-queue-interview-ai/
89•davidgomes•8h ago•71 comments

Metriport (YC S22) is hiring engineers to improve healthcare data exchange

https://www.ycombinator.com/companies/metriport/jobs/Rn2Je8M-software-engineer
1•dgoncharov•8h ago

Hannah Cairo: 17-year-old teen refutes a math conjecture proposed 40 years ago

https://english.elpais.com/science-tech/2025-07-01/a-17-year-old-teen-refutes-a-mathematical-conjecture-proposed-40-years-ago.html
340•leephillips•10h ago•76 comments

The Broken Microsoft Pact: Layoffs and Performance Management

https://danielsada.tech/blog/microsoft-pact/
33•dshacker•2h ago•14 comments

Mirage: AI-native UGC game engine powered by real-time world model

https://blog.dynamicslab.ai
19•zhitinghu•1d ago•15 comments

Paper Shaders: Zero-dependency canvas shaders

https://github.com/paper-design/shaders
8•nateb2022•2d ago•1 comment

Toys/Lag: Jerk Monitor

https://nothing.pcarrier.com/posts/lag/
46•ptramo•10h ago•37 comments

Why English doesn't use accents

https://www.deadlanguagesociety.com/p/why-english-doesnt-use-accents
64•sandbach•3h ago•60 comments

Collatz's Ant and Σ(n)

https://gbragafibra.github.io/2025/07/06/collatz_ant5.html
24•Fibra•8h ago•3 comments

How to harden GitHub Actions

https://www.wiz.io/blog/github-actions-security-guide
218•moyer•2mo ago

Comments

tomrod•1mo ago
I support places that use GH Actions like it's going out of style. This article is useful.

I wonder how we get out of the morass of supply chain attacks, realistically.

guappa•1mo ago
We use linux distributions.
tomrod•1mo ago
How do apt, dnf, and apk prevent malicious software from getting into repositories?
liveoneggs•1mo ago
never update
photonthug•1mo ago
I can confirm there's real wisdom in this approach, lol. Nothing bad had happened to me for a while so I decided to update that one computer to ubuntu noble and YUP, immediately bricked by some UEFI problem. Ok cool, it's not like 2004 anymore, this will probably be a quick fix.. 3 hours later...
RadiozRadioz•1mo ago
An OS upgrade broke UEFI. Huh? That doesn't sound right.
photonthug•1mo ago
In the newest iteration of a time-honored tradition, grub (and/or whatever distro's treatment of it) has been finding all kinds of ways to break upgrades for 30 years. If you're on the happy path you can probably go a long time without a problem.

But when you're the unlucky one and need to search for a fix, checking hardware/distro/date details in whatever forums or posts, that's when you notice that the problems don't actually ever stop.. it just hasn't happened to you lately.

RadiozRadioz•1mo ago
No that's not what I mean, I mean technologically, UEFI is flashed in your motherboard and there isn't any way for an OS to mess with that. You need to boot from a specially prepared USB with compatible firmware in order to change it. Your problem must have been above UEFI, or an error in your OS that mentioned UEFI.
guappa•1mo ago
There have been buggy implementations where UEFI is in fact NOT flashed to the motherboard and can get removed.

If he has one of those crappy computers it could be, but when I read about it happening it was entirely due to users MANUALLY deleting the UEFI files; it did not happen while upgrading.

So, the story still seems wrong to me.

wongarsu•1mo ago
In principle by having the repository maintainer review the code they are packaging. They can't do a full security review of every package and may well be fooled by obfuscated code or deliberately introduced bugs, but the threshold for a successful attack is much higher than on Github Actions or npm.
KronisLV•1mo ago
It kinda feels like any CI/CD should only be run on the server after one of the maintainers gives it the okay to do so, after reviewing the code. From this, one can also make the assumption that most of the CI (linting, various checks and tests) should all be runnable locally even before any code is pushed.
guappa•1mo ago
It feels to me that CI/CD and builds for release should be completely separate concepts.
guappa•1mo ago
You have a 2nd independent set of eyes looking at the software, rather than "absolutely nobody" like it is if you use npm and friends?
pabs3•1mo ago
Review every single line of source code before use, and bootstrap from source without any binaries.

https://github.com/crev-dev https://bootstrappable.org/ https://lwn.net/Articles/983340/

DGAP•1mo ago
Great article!

I also found this open source tool for sandboxing to be useful: https://github.com/bullfrogsec/bullfrog
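
For context, it's used as a step at the top of a job so that later steps inherit the egress policy. A minimal sketch of what that could look like; the input names (egress-policy, allowed-domains) are from memory of the project's README and should be treated as assumptions, so check the repo before copying:

    steps:
      # Assumed usage of bullfrogsec/bullfrog; verify inputs against its README
      - uses: bullfrogsec/bullfrog@v0
        with:
          egress-policy: block        # "audit" would only log instead of blocking
          allowed-domains: |
            github.com
            *.github.com
      - uses: actions/checkout@v4
      # ... the rest of the job runs with egress restricted to the allowlist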

mstade•1mo ago
It's pretty depressing that such functionality isn't a core feature of GHA. Seems like low hanging fruit.
cedws•1mo ago
I came across this the other day but I couldn’t really grok how it works. Does it run at a higher privilege level than the workflow or the same? Can a sophisticated enough attack just bypass it?
mdaniel•1mo ago
I spent a few seconds clicking into it before the newfound 429 responses from GitHub caused me to lose interest

I believe a sufficiently sophisticated attacker could unwind the netfilter and DNS change, but in my experience every action that you're taking during a blind attack is one more opportunity for things to go off the rails. The increased instructions (especially ones referencing netfilter and DNS changes) also could make it harder to smuggle in via an obfuscated code change (assuming that's the attack vector)

That's a lot of words to say that this approach could be better than nothing, but one will want to weigh its gains against the onoz of having to keep its allowlist rules up to date in your supply chain landscape

fallard•1mo ago
Hey, I'm one of the co-authors of Bullfrog. As you say, a sophisticated and targeted attack could indeed bypass our action. It's meant for blocking mostly opportunistic attacks.

I don't think any egress filtering could properly block everything, given that actions need to interact with GitHub APIs to function and it would always be possible to exfiltrate data to any private repo hosted on GitHub. While some solutions can access the outbound HTTP request payload before it gets encrypted, using eBPF, in order to detect egress to untrusted GitHub orgs/repos, this isn't a silver bullet either because it relies on targeting the specific encryption binaries used by the software/OS. A sophisticated attack could always use separate obscure or custom encryption binaries to evade detection by eBPF-based tools.

So like you say, it's better than nothing, but it's not perfect and there are definitely developer experience tradeoffs in using it.

PS: I'm no eBPF expert, so I'd be happy if someone can prove me wrong on my theory :)

DGAP•1mo ago
Yep, and there's an opt-in to disable sudo, which prevents circumvention. However, this can break some actions, especially ones deployed as Docker images. It also doesn't work with macOS.
vin10•1mo ago
Interesting project, I think I just found a way to crash the sandbox, just reported via an advisory.
kylegalbraith•1mo ago
Glad this got posted. It's an excellent article from the Wiz team.

GitHub Actions is particularly vulnerable to a lot of different vectors, and I think a lot of folks reach for the self-hosted option and believe that closes up the majority of them, but it really doesn't. If anything, it might open more vectors and potentially scarier ones (i.e., a persistent runner could be compromised, and if you got your IAM roles wrong, they now have access to your AWS infrastructure).

When we first started building Depot GitHub Actions Runners [0], we designed our entire system to never trust the actual EC2 instance backing the runner. The same way we treat our Docker image builders. Why? They're executing untrusted code that we don't control.

So we launch a GitHub Actions runner for a Depot user in 2-5 seconds, let it run its job with zero permissions at the EC2 level, and then kill the instance from orbit to never be seen again. We explicitly avoid the persistent runner, and the IAM role of the instance is effectively {}.

For folks reading the Wiz article, this is the line to be thinking about when going the self-hosted route:

> Self-hosted runners execute Jobs directly on machines you manage and control. While this flexibility is useful, it introduces significant security risks, as GitHub explicitly warns in their documentation. Runners are non-ephemeral by default, meaning the environment persists between Jobs. If a workflow is compromised, attackers may install background processes, tamper with the environment, or leave behind persistent malware.

> To reduce the attack surface, organizations should isolate runners by trust level, using runner groups to prevent public repositories from sharing infrastructure with private ones. Self-hosted runners should never be used with public repositories. Doing so exposes the runner to untrusted code, including Workflows from forks or pull requests. An attacker could submit a malicious workflow that executes arbitrary code on your infrastructure.

[0] https://depot.dev/products/github-actions

cedws•1mo ago
I’ve been reviewing the third party Actions we use at work and seen some scary shit, even with pinning! I’ve seen ones that run arbitrary unpinned install scripts from random websites, clone the HEAD of repos and run code from there, and other stuff. I don’t think even GitHub’s upcoming “Immutable Actions” will help if people think it’s acceptable to pull and run arbitrary code.

Many setup Actions don’t support pinning binaries by checksum either, even though binaries uploaded to GitHub Releases can be replaced at will.

I’ve started building in house alternatives for basically every third party Action we use (not including official GitHub ones) because almost none of them can be trusted not to do stupid shit.

GitHub Actions is a security nightmare.
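
One thing that can be done for the binary case today is to record a checksum in your own repo and verify the download in the workflow step itself. A minimal sketch; the URL, filename and hash below are placeholders:

    steps:
      - name: Download and verify a release binary
        run: |
          # Placeholder release URL and filename
          curl -fsSLO https://github.com/example-org/example-tool/releases/download/v1.2.3/example-tool-linux-amd64
          # Placeholder hash, recorded in-repo when the binary was originally reviewed;
          # the job fails here if the uploaded asset has been replaced since then
          echo "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef  example-tool-linux-amd64" | sha256sum -c -
          chmod +x example-tool-linux-amd64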

MillironX•1mo ago
Even with pinning, a common pattern I've seen in one of my orgs is to have a bot (Renovate, I think Dependabot can do this too) automatically update the pinned SHA when a new release comes out. Is that practically any different than just referencing a tag? I'm genuinely curious.
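
(For reference, the Dependabot side of this is a few lines of .github/dependabot.yml; for SHA-pinned actions it bumps the hash and, as far as I know, keeps the trailing version comment in sync:)

    version: 2
    updates:
      - package-ecosystem: "github-actions"
        directory: "/"          # scans .github/workflows in the repo root
        schedule:
          interval: "weekly"
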
wongarsu•1mo ago
I guess you still have some reproducibility and stability benefits. If you look at an old commit you will always know which version of the action was used. Might be great if you support multiple releases (e.g. if you are on version 1.5.6 but also make new point releases for 1.4.x and 1.3.x). But the security benefits of pinning are entirely negated if you just autoupdate the pin.
crohr•1mo ago
I guess the TL;DR is: just use ephemeral runners when self-hosting? There are lots of solutions for that. It would also be nice for GitHub to do something on the security front (allowlists/blocklists of IPs, hosted, etc., or at least just reporting on traffic).
enescakir•1mo ago
The riskiest line in your repo isn’t in "src/", it’s in ".github/workflows/"

Self-hosted runners feel more secure at first since they execute jobs directly on machines you manage. But they introduce new attack surfaces, and managing them securely and reliably is hard.

At Ubicloud, we built managed GitHub Actions runners with security as the top priority. We provision clean, ephemeral VMs for each job, and they're fully isolated using Linux KVM. All communication and disks are encrypted.

They’re fully compatible with default GitHub runners and require just a one-line change to adopt. Bonus: they’re 10× more cost-effective.

https://www.ubicloud.com/use-cases/github-actions
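
(The one-line change is the runs-on label in the workflow; the exact label names come from Ubicloud's docs, so the one below is illustrative only:)

    jobs:
      build:
        # was: runs-on: ubuntu-latest
        runs-on: ubicloud-standard-2   # illustrative label; check Ubicloud's docs for the real ones
        steps:
          - uses: actions/checkout@v4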

Arch-TK•1mo ago
The recommendation is not to interpolate certain things into shell scripts. Don't interpolate _anything_ into shell scripts as a rule. Use environment variables.

This combined with people having no clue how to write bash well/safely is a major source of security issues in these things.
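
A minimal sketch of the difference, using an issue title as a stand-in for any attacker-controlled field:

    # Vulnerable: the ${{ ... }} expression is expanded into the script text before
    # the shell runs, so a crafted issue title becomes shell code
    - run: echo "${{ github.event.issue.title }}"

    # Safer: pass the value through an environment variable so the shell treats it as data
    - run: echo "$ISSUE_TITLE"
      env:
        ISSUE_TITLE: ${{ github.event.issue.title }}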

cedws•1mo ago
Zizmor has a check for this.

https://github.com/woodruffw/zizmor

diggan•1mo ago
> Using Third-Party GitHub Actions

Maybe I'm overly pedantic, but this whole section seems to miss the most obvious way to de-risk using 3rd party Actions: reviewing the code itself. It talks about using popularity, number of contributors and a bunch of other things for "assessing the risk", but it never actually mentions reviewing the action/code itself.

I see this all the time around 3rd party library usage, people pulling in random libraries without even skimming the source code. Is this really that common? I understand for a whole framework you don't have time to review the entire project, but for these small-time GitHub Actions that handle releases, testing and such? Absolute no-brainer to sit down and review it all before you depend on it, rather than looking at the number of stars or other vanity-metrics.

KolmogorovComp•1mo ago
Because reading the code is useless if you can't pin the version, and the article explains well that this is hard to do:

> However, only hash pinning ensures the same code runs every time. It is important to consider transitive risk: even if you hash pin an Action, if it relies on another Action with weaker pinning, you're still exposed.

ratrocket•1mo ago
Depending on your circumstances (and if the license of the action allows it) it's "easy" to fork the action and use your own fork. Instant "pinning".
carlmr•1mo ago
But how does that solve the issue of the forked action not using pinned versions itself?

You need to recursively fork and modify every version of the GHA and do that to its sub-actions.

You'd need something like a lockfile mechanism to prevent this.

ratrocket•1mo ago
Yes, that is completely true -- transitive dependencies are a problem. What I suggested only works in the simplest cases and isn't a great solution, more of a bandaid.
analytically•1mo ago
https://centralci.com/blog/posts/concourse_vs_gha
axelfontaine•1mo ago
This is a great article, with many important points.

One nitpick:

> Self-hosted runners should never be used with public repositories.

Public repositories themselves aren't the issue, pull requests are. Any infrastructure or data mutable by a workflow involving pull requests should be burned to the ground after that workflow completes. You can achieve this with ephemeral runners with JIT tokens, where the complete VM is disposed of after the job completes.

As always the principle of least-privilege is your friend.

If you stick to that, ephemeral self-hosted runners on disposable infrastructure are a solid, high-performance, cost-effective choice.

We built exactly this at Sprinters [0] for your own AWS account, but there are many other good solutions out there too if you keep this in mind.

[0] https://sprinters.sh

cyrnel•1mo ago
This has some good advice, but I can't help but notice that none of this solves a core problem with the tj-actions/changed-files issue: The workflow had the CAP_SYS_PTRACE capability when it didn't need it, and it used that permission to steal secrets from the runner process.

You don't need to audit every line of code in your dependencies and their subdependencies if your dependencies are restricted to only doing the thing they are designed to do and nothing more.

There's essentially nothing nefarious changed-files could do if it were limited to merely reading a git diff provided to it on stdin.

Github provides no mechanism to do this, probably because posts like this one never even call out the glaring omission of a sandboxing feature.

delusional•1mo ago
What would be outside the sandbox? If you create a sandbox that only allows git diff, then I suppose you fixed this one issue, but what about everything else? If you allow the sandbox to be configurable, then how do you configure it without that just being programming?

The problem with these "microprograms" has always been that once you delegate so much, once you are willing to put in that little effort, you can't guarantee anything.

If you are willing to pull in a third party dependency to run git diff, you will never research which permissions it needs. Doing that research would be more difficult than writing the program yourself.

esafak•1mo ago
Where can I read about this? I see no reference in its repo: https://github.com/search?q=repo%3Atj-actions%2Fchanged-file...
cyrnel•1mo ago
Every action gets these permissions by default. The reason we know it had that permission is that the exploit code read from /proc/pid/mem to steal the secrets, which requires some permissions: https://blog.cloudflare.com/diving-into-proc-pid-mem/#access...

Linux processes have tons of default permissions that they don't really need.

abhisek•1mo ago
GitHub Actions by default provides an isolated VM with root privileges to a workflow. I don't think job-level privilege isolation is in its threat model currently, although it does allow job-level scopes for the default GitHub token.

Also, the secrets are accessible only when a workflow is invoked from a trusted trigger, i.e. not from a forked repo. Not sure what else can be done here to protect against a compromised 3rd party action.

cyrnel•1mo ago
People have been running different levels of privileged code together on the same machine ever since the invention of virtual machines. We have lots of lightweight sandboxing technologies that could be used when invoking a particular action such as tj-actions/changed-files that only gives it the permissions it needs.

You may do a "docker build" in a pipeline which does need root access and network access, but when you publish a package on pypi, you certainly don't need root access and you also don't need access to the entire internet, just the pypi API endpoint(s) necessary for publishing.

lmeyerov•1mo ago
Yes, by default things should be sandboxed - no network, no repo writes, ... - and should be easy to add extra caps (ex: safelist dockerhub)

Likewise, similar to modern smartphones asking whether they should remove excess unused privs granted to certain apps, GHA should detect these super common overprovisionings and make it easy for maintainers to flip those configs, e.g. with a "yes" button.

wallrat•1mo ago
Been tracking this project for a while https://github.com/chains-project/ghasum . It creates a verifiable checksum manifest for all actions - still in development but looks very promising.

Will be a good complement to GitHub's Immutable Actions when they arrive.

esafak•1mo ago
https://github.com/features/preview/immutable-actions
esafak•1mo ago
Here is an example in the wild: https://github.com/actions/checkout/actions/workflows/publis...
bob1029•1mo ago
> By default, the Workflow Token Permissions were set to read-write prior to February 2023. For security reasons, it's crucial to set this to read-only. Write permissions allow Workflows to inadvertently or maliciously modify your repository and its data, making least-privilege crucial.

> Double-check to ensure this permission is set correctly to read-only in your repository settings.

It sounds to me like the most secure GH Action is one that doesn't need to exist in the first place. Any time the security model gets this complicated you can rest assured that it is going to burn someone. Refer to Amazon S3's byzantine configuration model if you need additional evidence of this.
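
(For reference, the same least-privilege setting can also be enforced in the workflow YAML itself, at the workflow or job level, so a permissive repository default matters less:)

    permissions:
      contents: read          # default GITHUB_TOKEN can only read the repo

    jobs:
      release:
        runs-on: ubuntu-latest
        # a single job can opt back in to exactly what it needs
        permissions:
          contents: write
        steps:
          - uses: actions/checkout@v4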

ebfe1•1mo ago
After the tj-actions hack, I put together a little tool to go through all of the GitHub Actions in a repository and replace them with the commit hash of the version

https://github.com/santrancisco/pmw

It has a few "features" which allowed me to go through a repository quickly:

- It prompts the user and recommends the hash; it also provides the URL to the current tag/action so you can double-check that the hash value matches and review the code if needed

- Once you accept a change, it will keep that in a JSON file so future uses of that exact version of the action will be pinned as well and won't be reprompted.

- It also lets you ignore version tags for GitHub Actions coming from well-known, reputable organisations (like "actions", which belongs to GitHub), as you may want to keep updating those so you receive hotfixes and security fixes.

This way I have full control of what to pin and what not, and this config file is stored in the .github folder so I can go back, rerun it again and repin everything.

tuananh•1mo ago
renovate can be configured to do that too :)
jquaint•1mo ago
Do you have an example config?

Trying to get the same behavior with renovate :)

tuananh•1mo ago
here's one that i use https://github.com/tuananh/hyper-mcp/blob/main/.github/renov...
loginatnine•1mo ago
This is good, just bear in mind that if you pin the hash of an external composite action and that action pulls in another one without a hash, you're still vulnerable through that transitive dependency.
ebfe1•1mo ago
oh damn - that is a great point! thanks matey!
newman314•1mo ago
I don't know if your tool already does this but it would be helpful if there is an option to output the version as a comment of the form

action@commit # semantic version

Makes it easy to quickly determine what version the hash corresponds to. Thanks.

ebfe1•1mo ago
Yeap - that is exactly what it does ;)

Example:

uses: ncipollo/release-action@440c8c1cb0ed28b9f43e4d1d670870f059653174 #v1.16.0

And for anything that previously had @master, it becomes the following with the hash on the day it was pinned with "master-{date}" as comment:

uses: ravsamhq/notify-slack-action@b69ef6dd56ba780991d8d48b61d94682c5b92d45 #master-2025-04-04

fartbagxp•1mo ago
I've been using https://github.com/stacklok/frizbee to lock down to commit hash. I wonder how this tool compares to that.
remram•1mo ago
Having control is good, but reading all the code yourself seems unrealistic. We need something like crev or cargo-vet.
ebfe1•1mo ago
Yea, hence it prompts you to check the first time, but once you verify the hash for a particular version of an action, it automatically applies that hash to the same version everywhere. Also, you can reuse the same config for all other repos, so it is only tedious the first time; after that it is pretty quick to apply to the rest of the org :)

The tool is indeed meant for a semi-auto flow, to ensure a human eye has looked at the action being used.

MadsRC•1mo ago
Shameless plug: I pushed a small CLI the other day for detecting unpinned dependencies and automatically fixing them: https://codeberg.org/madsrc/gh-action-pin

Works great with commit hooks :P

Also working on a feature to recursively scan remote dependencies for lack of pins, although that doesn’t allow for fixing, only detection.

Very much alpha, but it works.

esafak•1mo ago
Can dependabot pin actions to commits while upgrading them?
duped•1mo ago
I feel like there was a desire from GH to avoid needing a "build" step for actions, so you could write `uses: someones/work` or whatever, `git push`, and see the action run.

But if you think about it, the entire design is flawed. There should be a `gh lock` command you can run to lock your actions to the checksum of the action(s) you're importing, have it apply transitively, and verify those checksums every time your action runs and pulls in remote dependencies.

That's how every modern package manager works - because the alternative is gaping security holes.

newman314•1mo ago
Step Security has a useful tool that aids in securing GitHub Actions here: https://app.stepsecurity.io/securerepo

Disclaimer: no conflict of interest, just a happy user.

RadiozRadioz•1mo ago
GHA Newbie here: what are all these 3rd-party actions that people are using? How complicated is your build / deployment pipeline that you need a bunch of premade special steps for it?

Surely it's simple: use a base OS container, install packages, run a makefile.

For deployment, how can you use pre-made deployment scripts? Either your environment is a bespoke VPS/on-prem setup, in which case you write your deployment scripts anyway, or you use k8s and have no deployment scripts. Where is this strange middle ground where you can re-use random third party bits?

TrueDuality•1mo ago
Can't speak for everyone, but workflows can get pretty crazy in my personal experience.

For example the last place I worked had a mono repo that contained ~80 micro services spread across three separate languages. It also contained ~200 shared libraries used by different subsets of the services. Running the entire unit-test suite took about 1.5 hours. Running the integration tests for everything took about 8 hours and the burn-in behavioral QA tests took 3-4 days. Waiting for the entire test suite to run for every PR is untenable so you start adding complexity to trim down what gets run only to what is relevant to the changes.

A PR would run the unit tests only for the services that had changes included in it. Library changes would also trigger the unit tests in any of the services that depended on them. Some sets of unit tests still required services, some didn't. We used an in-house action that mapped the files changed to relevant sets of tests to run.

When we updated a software dependency, we had a separate in-house action that would locate all the services that use that dependency and attempt to set them to the same value, running the subsequent tests.

Dependency caching is a big one, and frankly GitHub's built-in caching is so incredibly buggy and inconsistent it can't be relied on... So third party there. It keeps going on:

- Associating bug reports to recent changes

- Ensuring PRs and issues meet your compliance obligations around change management

- Ensuring changes touching specific lines of code have specific reviewers (CODEOWNERS is not always sufficiently granular)

- Running vulnerability scans

- Running a suite of different static and lint checkers

- Building, tagging, and uploading container artifacts for testing and review

- Building and publishing documentation and initial set of release notes for editing and review

- Notifying out to slack when new releases are available

- Validating certain kinds of changes are backported to supported versions

Special branches might trigger additional processes like running a set of upgrade and regression tests from previously deployed versions (especially if you're supporting long-term support releases).

That was a bit off the top of my head. Splitting that out of the mono-repo doesn't simplify the problem, unfortunately; it just moves it.

20thr•1mo ago
These suggestions make a lot of sense.

At Namespace (namespace.so), we also take things one step further: GitHub jobs run under a cgroup with a subset of privileges by default.

Running a job with full capabilities requires an explicit opt-in: you need to enable "privileged" mode.

Building a secure system requires many layers of protection, and we believe that the runtime should provide more of these layers out of the box (while managing the impact to the user experience).

(Disclaimer: I'm a founder at Namespace)

gose1•1mo ago
> Safely Writing GitHub Workflows

If you are looking for ways to identify common (and uncommon) vulnerabilities in Action workflows, last month GitHub shipped support for workflow security analysis in CodeQL and GitHub Code Scanning (free for public repos): https://github.blog/changelog/2025-04-22-github-actions-work....

The GitHub Security Lab also shared a technical deep dive and details of vulnerabilities that they found while helping develop and test this new static analysis capability: https://github.blog/security/application-security/how-to-sec...
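
(If you already run CodeQL via the standard init/analyze actions, enabling the workflow analysis should just be a matter of adding the new language to the matrix; the snippet below reflects my understanding, so double-check the changelog post for exact naming:)

    - uses: github/codeql-action/init@v3
      with:
        languages: actions    # enables the GitHub Actions workflow queries
    # ... build/test steps ...
    - uses: github/codeql-action/analyze@v3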

maenbalja•1mo ago
Timely article... I recently learned about self-hosted runners and set one up on a Hetzner instance. Pretty smooth experience overall. If your action contains any SSH commands and you'd like to avoid setting up a firewall with 5000+ rules[0], I would recommend self-hosting a runner to help secure your target server's SSH port.

[0] https://api.github.com/meta

woodruffw•1mo ago
FWIW: Self-hosted runners are non-trivial to secure[1]; the defaults GitHub gives you are not necessarily secure ones, particularly if your self-hosted runner executes workflows from public repositories.

(Self-hosted runners are great for many other reasons, not least of which is that they're a lot cheaper. But I've seen a lot of people confuse GitHub Actions' latent security issues with something that self-hosted runners can fix, which is not per se the case.)

[1]: https://docs.github.com/en/actions/security-for-github-actio...

maenbalja•1mo ago
Hm, that's good to know, thanks for the link. I'm just using the runner for private solo projects atm, so I think my setup will do for now. But I definitely didn't consider the implications of using it on a private project with other contributors, yikes.
goosethe•1mo ago
your article inspired me https://github.com/seanwevans/Ghast still a WIP
colek42•1mo ago
We just built a new version of the witness run action that tracks the who/what/when/where and why of the GitHub actions being used. It provides "Trusted Telemetry" in the form of SLSA and in-toto attestations.

https://github.com/testifysec/witness-run-action/tree/featur...