If someone is actively subverting a control like this, it probably means that the control has morphed from a guardrail into a log across the tracks.
Somewhat in the same vein as AppLocker & co. Almost everyone says you should be using it, but almost no one does, because it takes a massive amount of effort just to understand what "acceptable software" is across your entire org.
Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.
I'm usually on the side of empowering workers, but I believe companies sometimes do have business saying this.
One reason is that much of the software industry has become a batpoop-insane slimefest of privacy (IP) invasion, as well as grossly negligent security.
Another reason is that the company may be held liable for license terms of the software.
Another reason is that the company may be held liable for illegal behavior of the software (e.g., if the software violates some IP of another party).
Every piece of software might expose the company to these risks. And maybe disproportionately so, if software is being introduced by the "I'm gettin' it done!" employee, rather than by someone who sees vetting for the risks as part of their job.
I am pointing out that if every unique binary never before run/approved is blocked, then no developer will be able to build and then run the software they are paid to write, since developing it turns said software into a new and never-before-seen sequence of bits.
OP may not have meant to say that "it's good to have an absolute allowlist of executable signatures and block everything else", but that is how I interpreted the initial claim. I am merely pointing out that such a system would be more than inconvenient: it would make the edit-then-run workflow of writing software nearly impossible.
This is often the case, although I’ve very rarely seen environments as restrictive as what you describe being enforced on developers.
Typically developer user accounts and assigned devices are in slightly less restrictive policy groupings, or are given access to some kind of remote build/test infrastructure.
Of course companies need the option to control what software is run on their infrastructure. There is an endless stream of reasons and examples for that. Up-thread there’s a great example of what happens when you let folks install Oracle software without guardrails. Businesses are of course larger and more complex than their developers and have needs beyond their developers.
What matters here is implementation and policy management. You want those to be balanced between audience needs and business needs.
It’s also worth mentioning that plenty of developers have no clue what they’re doing with computers outside their particular area of expertise.
[1] <https://learn.microsoft.com/en-us/windows/security/applicati...>
No, that's not how things are implemented normally, exactly because they wouldn't work.
I used to work for a gov't contractor. I wrote a ~10 line golang HTTP server, just because at the time golang was still new (this was years ago) and I wanted to try it. Not even 2 minutes later I got a call from the IT team asking a bunch of questions about why I was running that program (the HTTP server, not golang). I agree the practice is dumb, but there are definitely companies who have it set up that way.
Anyway, the IT department spotted it but since I was using SMB it thought it was just another Windows server. No one ever checked up on it despite being plugged into the corporate network.
This was a Fortune 500 company; things have changed a wee bit since then.
The goal isn’t to stop a developer from doing something malicious, but to add a step to the chain for hackers to do something malicious: they need to pwn the developer laptop from the devbox before they can pivot to, eg, internal data systems.
I haven’t worked somewhere we ran code locally in a long, long time. Your IDE is local, but the testing is remote — typically in an environment where you can match the runtime environment more closely (eg, ensuring the same dependencies, access to cloud resources, etc).
Does that mean you will never compile it or build it locally?
Don't 99% of people just use Docker nowadays to get all of that environment matching?
https://itwire.com/guest-articles/guest-opinion/is-an-oracle...
Security practitioners are big fans of application whitelisting for a reason: Your malware problems pretty much go away if malware cannot execute in the first place.
The Australian Signals Directorate for example has recommended (and more recently, mandated) application whitelisting on government systems for the past 15 years or so, because it would’ve prevented the majority of intrusions they’ve investigated.
https://nsarchive.gwu.edu/sites/default/files/documents/5014...
Yet so many receptionists think that the application attached to the email sent by couriercompany@hotmail.com is a reasonable piece of software to run. Curious.
At my work, IT currently has the first and final say on all software, regardless of what it does or who is using it. It's an insane situation. Decisions are being made without any input from anyone even in the department of the users using the software... you know... the ones that actually make the company money...
Maybe your employer’s IT department is in the habit of saying no without a proper attempt to accommodate, which can be a problem, but the solution is not to put the monkeys in charge of the zoo.
At my old job we had upper management demanding exceptions to office modern auth so they could use their preferred email apps. We denied that, there was no valid business justification that outweighed the security risk of bypassing MFA.
We then allowed a single exception to the policy for one of our devs as they were having issues with Outlook’s plaintext support when submitting patches to the LKML. Clear and obvious business justification without an alternative gets rubber stamped.
Security is a balance that can go too far in either direction. Your workstations probably don’t need to be air gapped, and susan from marketing probably shouldn’t be able to install grammarly.
Again, false dichotomy. It's possible to meet in the middle, collaborate and discuss technical requirements. It's just that that rarely happens.
Our software (built by us, with regular code reviews and yearly external security audits, internal-use-only amongst electrical engineers and computer-science folks) regularly gets disabled or removed by IT by accident, without warning, and it's usually a few days before it's re-enabled or can be reinstalled, since the tiny IT dept is forced to rely on external agencies to manage their white-listing software.
Your "monkeys in charge of the zoo" metaphor is in full effect at my workplace, but in this case, the monkeys are IT and their security theater.
You said exactly that.
Again, maybe your IT team is garbage, I don’t really care to litigate your issue with them. I specifically said IT should accommodate requests when possible and not be overzealous when saying no.
What you previously suggested is that stakeholders should give their demands to IT and that IT should figure out how to make it happen. Doesn’t sound like collaboration to me.
In my experience end users and management are very rarely aware of the requirements placed upon IT to ensure the security of company infrastructure when it comes to passing audits, whether that’s for cyber insurance, or CMMC compliance, or whatever else.
It’s plainly obvious that products don’t exist to sell without developers or engineers. But you can’t sell your product to customers if they require SOC and you don’t have it or if your entire infrastructure gets ransomwared.
I’ve had to tell very intelligent and hard working people that if I accommodated their request the government would no longer buy products from our company.
That's fair; I did make it sound pretty one-sided there.
Yeah, but software isn't software.
Like I have a customer with users that just randomly started using VPN software to manage their client sites. VPN software that exposes the user machine directly to uncontrolled networks. This causes risks in both directions, because their clients run things like datacenters and power stations. Increases security risks for their business, and increases security risks for their customers, not to mention liability.
IT should be neutral, but IT done right is guided by best practice. IT is ultimately responsible and accountable for security and function. You can't be responsible and accountable without control, or you exist just to be beaten up when shit goes sideways.
>the ones that actually make the company money...
Making the company money in an uncontrolled fashion is just extra distance to fall. If you ship a fantastic product with a massive supply chain induced vuln that destroys your clients there was no point in making that money in the first place.
This is a lovely take if your business runs exclusively on FOSS on-premise software, but it's a recipe for some hefty bills from software vendors due to people violating licensing conditions.
Agreed.
> and cannot run
I strongly disagree. I think those controls are great for denylists. For example, almost no one needs to run a BitTorrent client on their work laptops. (I said almost. If you’re one of them, make a case to your IT department.) Why allow it? Its presence vastly increases the odds of someone downloading porn (risk: sexual harassment) or warez (risks: malware, legal issues) with almost no upside to the company. I’m ok with a company denylisting those.
I couldn’t care less if you want to listen to Apple Music or Spotify while you work. Go for it. Even though it’s not strictly work-related, it makes happier employees with no significant downside. Want to use Zed instead of VSCode? Knock yourself out. I have no interest in maintaining an allowlist of vetted software. That’s awful for everyone involved. I absolutely don’t want anyone running even a dev version of anything Oracle in our non-Oracle shop, though, and tools to prevent that are welcome.
Good lord.
This is akin to saying "Instead of doing `apt-get install <PACKAGE>`, one can bypass the apt policies by downloading the package and running `dpkg -i <PACKAGE>`."
(But also: in a structural sense, if a system did have `apt` policies that were intended to prevent dependency introduction, then such a system should prevent that kind of bypass. That doesn't mean that the bypass is life-or-death, but it's a matter of hygiene and misuse prevention.)
If it were phrased like this then you would be right: the docs would give a false sense of security and would be misleading. So I went to check, but I didn't find any such assertion in the linked docs (please let me know if I missed it) [0]
So I agree with the commenter above (and GitHub) that "editing the GitHub Action to add steps that download and run a script" is not a fundamental flaw of a system designed to do exactly that: run commands as instructed by the user.
Overall we should always ask ourselves: what's the threat model here? If anyone can edit the GitHub Action, then we can make it do a lot of things, and this "GitHub Action Policy" filter toggle is the least of our worries. The only way to make the CI/CD pipeline secure (especially since the CD part usually has access to the outside world) is to prevent people from editing and running anything they want in it. In the case of GitHub Actions, that means restricting users' access to the repository itself.
[0] https://blog.yossarian.net/2025/06/11/github-actions-policie...
I suppose there's room for interpretation here, but I think an intuitive scan of "Allowing select actions and reusable workflows to run" is that the contrapositive ("not allowed actions and reusable workflows will not run") also holds. The trick in the post violates that contrapositive.
I think people are really getting caught up on the code execution part of this, which is not really the point. The point is that a policy needs to be encompassing to have its intended effect, which in the case of GitHub Actions is presumably to allow large organizations/companies to inventory their CI/CD dependencies and make globally consistent, auditable decisions about them.
Or in other words: the point here is similar to the reason companies run their own private NPM, PyPI, etc. indices -- the point is not to stop the junior engineers from inserting shoddy dependencies, but to know when they do so that remediation becomes a matter of policy, not "find everywhere we depend on this component." Bypassing that policy means that the worst of both worlds happens: you have the shoddy dependency and the policy-view of the world doesn't believe you do.
[1]: https://docs.github.com/en/repositories/managing-your-reposi...
We had a contractor that used some random action to ssh files to the server, and referenced master as the version to boot. First, uploading files and running commands over ssh isn't that difficult, but the action owner could easily add code to send private keys and information to another server.
I am a bit confused on the "bypass" though. Wouldn't the adversary need push access to the repository to edit the workflow file? So, the portion that needs hardening is ensuring the wrong people do not have access to push files to the repository?
On public repositories I could see this being an issue if it's done in a section of the workflow that runs when a PR is created. For private repositories, you should take care with whom you give access.
Those are good practices. I would add that pinning the version (tag) is not enough, as we learned from the tj-actions/changed-files incident; we should pin the commit SHA [0]. GitHub states this in its official documentation [1] as well:
> Pin actions to a full length commit SHA
> Pin actions to a tag only if you trust the creator
[0] https://www.stepsecurity.io/blog/harden-runner-detection-tj-...
[1] https://docs.github.com/en/actions/security-for-github-actio...
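For concreteness, a minimal sketch of the difference (the SHA below is a placeholder, not a real commit):

    steps:
      # Mutable: the v4 tag can be moved to new code after you adopt it
      - uses: actions/checkout@v4
      # Immutable: pinned to a full-length commit SHA, with the tag kept as a comment for humans
      - uses: actions/checkout@<full-40-hex-commit-sha>  # v4.x.x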
I understand it that way, too. But: having company-wide policies in place (regarding actions) might be misunderstood as, or relied on as, a security measure for the company against malicious/sloppy developers.
So documenting or highlighting the behaviour helps the devops folks avoid a false sense of security. Not much more.
That sinking feeling when you search for how to do something and all of the top results are issues that were opened over a decade ago...
It is especially painful trying to use github to do anything useful at all after being spoiled by working exclusively from a locally hosted gitlab instance. I gave up on trying to get things to cache correctly after a few attempts of following their documentation, it's not like I'm paying for it.
Was also very surprised to see that the recommended/suggested default configuration that runs CodeQL had burned over 2600 minutes of actions in just a day of light use, nearly doubling the total I had from weeks of sustained heavy utilization. Who's paying for that??
My main problem with the policy and how it's implemented at my job is that the ones setting the policies aren't the ones impacted by them, and never consult people who are. Our security team tells our GitHub admin team that we can't use 3rd party actions.
Our GitHub admin team says sure, sounds good. They don't care, because they don't use actions, and in fact they don't deliver anything at all. The security team also delivers nothing, so they don't care. Combined, these teams' crowning achievement is buying GitHub Enterprise and moving it back and forth between cloud and on-prem 3 times in the last 7 years.
As a developer, I'll read the action I want to use, and if it looks good I just clone the code and upload it into our own org/repo. I'm already executing a million npm modules in the same context that do god knows what. If anyone complains, it's getting hit by the same static/dynamic analysis tools as the rest of the code and dependencies.
My company has a similar whitelist of actions, with a list of third-party actions that were evaluated and rejected. A lot of the rejected stuff seems to be some sort of helper to make a release, which pretty much has a blanket suggestion to use the `gh` CLI already on the runners.
Seems like policies are impossible to enforce in general on what can be executed, so the only recourse is to limit secret access.
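(For reference, the blanket "just use the gh CLI" pattern looks roughly like this; the trigger and permissions are illustrative, and gh is preinstalled on GitHub-hosted runners.)

    on:
      push:
        tags: ['v*']          # illustrative trigger
    jobs:
      release:
        runs-on: ubuntu-latest
        permissions:
          contents: write     # lets the built-in GITHUB_TOKEN create the release
        steps:
          - uses: actions/checkout@v4   # pin by SHA in real use, per the advice elsewhere in the thread
          - name: Create the release with the preinstalled gh CLI
            run: gh release create "$GITHUB_REF_NAME" --generate-notes
            env:
              GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}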
Is there a demonstration of this being able to access/steal secrets of some sort?
1. BigEnterpriseOrg central IT dept click the tick boxes to disable outside actions because <INSERT SECURITY FRAMEWORK> compliance requires not using external actions [0]
2. BigBrainedDeveloper wants to use ExternalAction, so uses the method documented in the post because they have a big brain
3. BigEnterpriseOrg is no longer compliant with <INSERT SECURITY FRAMEWORK> and, more importantly, the central IT dept have zero idea this is happening without continuously inspecting all the CI workflows for every team they support and signing off on all code changes [1]
That's why someone else's point of "you're supposed to fork the action into your organisation" is a solution, if disabling local `uses:` is added as an option in the tick boxes -- the central IT dept has visibility over what's being used and by whom if BigBrainedDeveloper can ask for ExternalAction to be forked into the BigEnterpriseOrg GH organisation. Central IT's involvement is then just reviewing the codebase, forking it, and maintaining updates (rough sketch after the footnotes).
NOTE: This is not a panacea against all things that go against <INSERT SECURITY FRAMEWORK> compliance (downloading external binaries etc). But it would be an easy gap getting closed.
----
[0]: or something, i dunno, plenty of reasons enterprise IT depts do stuff that frustrates internal developers
[1]: A sure-fire way to piss off every single one of your internal developers.
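To make the forking suggestion concrete, a rough sketch of what workflows would reference afterwards (the org/action names are the hypotheticals from above; the SHA is a placeholder):

    steps:
      # The audited, org-owned fork, pinned to a reviewed commit:
      - uses: BigEnterpriseOrg/ExternalAction@<reviewed-commit-sha>
    # ...and the org-level actions policy can then stay on its most restrictive
    # "only actions from this organization" style of setting.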
The author relates to exactly that: "ineffective policy mechanisms are worse than missing policy mechanisms, because they provide all of the feeling of security through compliance while actually incentivizing malicious forms of compliance."
And I totally agree. It is so abundant. "Yes, we are in compliance with all the strong password requirements, strictly speaking there is one strong password for every single admin user for all services we use, but that's not in the checklist, right?"
Anyone who can write code to the repo can already do anything in GitHub actions. This security measure was never designed to mitigate against a developer doing something malicious. Whether they clone another action into the repo or write custom scripts themselves, I don’t see how GitHub’s measures could protect against that.
(The point is not directly malicious introductions: it's supply chain risk in the form of engineers introducing actions/reusable workflows that are themselves malleable/mutable/subject to risk. A policy that claims to do that should in fact do it, or explicitly document its limitations.)
The same guard helps prevent accidents (not maliciousness) and security breaches. If code somehow gets onto our systems, but we prevent most outbound connections, exfiltrating is much harder.
Yes, people do code review, but stuff slips through. See, e.g., Google switching one of their core libs to do mkdir by shelling out to `mkdir -p` (tada! every invocation had better understand shell escaping rules). That made it through code review. People are imperfect; telling your network "no outbound connections (except for this small list)" is much closer to perfect.
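In the Actions context specifically, the same deny-outbound-by-default idea can be applied at the runner level; a sketch using step-security/harden-runner (the endpoint list is illustrative, input names as I recall them):

    steps:
      - uses: step-security/harden-runner@v2    # pin by SHA in real use
        with:
          egress-policy: block
          allowed-endpoints: >
            github.com:443
            api.github.com:443
            registry.npmjs.org:443
      # Later steps in this job can only reach the listed endpoints; other outbound traffic is blocked.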
The idea is that the organization does not trust these third-parties, therefore they disable their access.
However this solution bypasses those lists by cloning open-source actions directly into the runner. At that point it’s just running code, no different from if the maintainers wrote a complex action themselves.
The dumb thing is GitHub offers “action policies” pretending they actually do something.
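For concreteness, my reading of the shape of the bypass the post describes (repo name, path, and input are placeholders):

    steps:
      # The policy never evaluates "some-vendor/blocked-action" because it arrives as plain git content...
      - run: git clone --depth 1 https://github.com/some-vendor/blocked-action ./.tmp/blocked-action
      # ...and local-path actions fall outside the allowlist check:
      - uses: ./.tmp/blocked-action
        with:
          some-input: value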
A more ideal approach would be to expose a simple REST API or webhook that lets the repo owner integrate external tooling better suited to enforcing status checks.
I would much rather write CI/CD tooling in something like python or C# than screw around with yaml files and weird shared libraries of actions. You can achieve something approximating this right now, but you would have to do it by way of GH Actions to some extent.
PRs are hardly latency sensitive, so polling a REST API once every 60 seconds seems acceptable to me. This is essentially what we used to do with Jenkins, except we'd just poll the repo head instead of some weird API.
Most people opt for it for convenience. There's a balance you can strike between all the yaml and shared actions, and running your own scripts.
That... has existed for years? https://docs.github.com/en/rest?apiVersion=2022-11-28
That was the only thing available before github actions. That was also the only thing available if you wanted to implement the not rocket science principle before merge queues.
It's hard to beat free tho, especially for OSS maintainership.
And GHA gives you concurrency that you'd otherwise have to maintain an orchestrator (or a completely bespoke solution) for: just create multiple jobs or workflows.
And you don't need to deal with tokens to send statuses with. And you get all the logs and feedback in the git interface rather than having to BYO again. And you can actually have PRs marked as merged when you rebased or squashed them (a feature request which is now in middle school: https://github.com/isaacs/github/issues/2)
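(The "just create multiple jobs" point, roughly -- job names and commands are arbitrary:)

    jobs:
      lint:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: make lint
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: make test
    # lint and test run concurrently on separate runners, with no orchestrator to maintain.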
> PRs are hardly latency sensitive, so polling a REST API once every 60 seconds seems acceptable to me.
There is nothing to poll: https://docs.github.com/en/webhooks/types-of-webhooks
If you work for an org with restrictive policy but not restrictive network controls, anyone at work could stand up a $5 VPS and break the network control. Or a Raspberry Pi at home and DynDNS. Or a million others.
Don't be stupid and think that a single security control means you don't need to do defense in depth.
That way we were still tracking the individual commits which we approved as a team.
Now there is an interesting dichotomy. On one hand PMs want us to leverage GitHub Actions to build out stuff more quickly using pre-built blocks, but on the other hand security has no capacity or interest to whitelist actions (not to mention that the whitelist is limited to 100 actions, as per the article).
That said, even pinning GitHub Actions to a commit SHA isn't perfect for container actions, as they can refer to a Docker tag, and the contents of that tag can be changed: https://docs.github.com/en/actions/sharing-automations/creat...
E.g. I publish an action with code like
    runs:
      using: 'docker'
      image: 'docker://optionoft/actions-tool:v3.0.0'
You use the action and pin it to the SHA of this commit. I get hacked, and a hacker publishes a new version of optionoft/actions-tool:v3.0.0.
You wouldn't even get a Dependabot update PR.
> The check works by looking for unpinned dependencies in Dockerfiles, shell scripts, and GitHub workflows which are used during the build and release process of a project.
Does it detect an unpinned reference (e.g. a Docker tag) inside a pinned dependency?
Optionally, you can tell your action to reference the docker image by sha256 hash also, in which case it's effectively immutable.
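e.g., in the action's metadata (the digest is a placeholder):

    runs:
      using: 'docker'
      image: 'docker://optionoft/actions-tool@sha256:<64-hex-digest>'   # immutable reference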
OTOH, if in addition to restricting to a whitelist of actions you completely forbid ad hoc shell commands (i.e., `run:` blocks), now you have something that can be made secure.
I was planning to do this myself. GitLab for dev work proper. GitHub push mirror on `main` for gen-pop access (releases/user issue reporting).
(yes it is a security issue (as it defeats a security policy) but I hope it remains unfixed because it's a stupid policy)
https://dlthub.com/docs/walkthroughs/deploy-a-pipeline/deplo...
It's the easiest way for many startups to get people to try out your software for free.
That's like saying it's a security flaw in the Chrome store that you could enable dev mode, copy the malware and run it that way.
Obviously it's impossible to block all ways of "bypassing" the policy. If you are a developer who has already been entrusted with the ability to make your GitHub Actions workflows run arbitrary code, then OF COURSE you can make it run the code of some published action, even if it's just by manual copy and paste. This fact doesn't need documenting because it's trivially obvious that it could not possibly be any other way.
Nor does it follow from this that the existence of the policy and the limited automatic enforcement mechanism is pointless and harmful. Instead of thinking of the enforcement mechanism as a security control designed to outright prevent a malicious dev from including code from a malicious action, instead think of it more like a linting rule: its purpose is to help the developer by bringing the organisation's policy on third party actions to the dev's attention and pointing out that what they are trying to do breaks it.
If they decide to find a workaround at that point (which of course they CAN do, because there's no feasible way to constrain them from doing so), that's an insubordination issue, just like breaking any other policy. Unless his employer has planted a chip in his brain, an employee can also "bypass" the sexual harassment policy "in the dumbest way possible" - just walk up to Wendy from HR and squeeze her tits! There is literally no technical measure in place to make it physically impossible for him do so. Is the sexual harassment policy therefore also worse than nothing, and is it a problem that the lack of employee brain chips isn't documented?
The problem of audit of third-party code is real. Especially because of the way GitHub allows embedding it in users' code: it's not centralized, doesn't require signatures / authentication.
But, I think, the real security-minded approach here should be at the container infrastructure level. I.e. security policies should apply to things like container network in the way similar to security groups in popular cloud providers, or executing particular system calls, or accessing filesystem paths.
Restrictions on the level of what actions can be mentioned in the "manifest" are just a bad approach that's not going to stop anyone.
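As a sketch of what that could look like (assuming self-hosted runners on Kubernetes; names and CIDRs are illustrative), an egress policy enforced at the container-network layer rather than in the workflow manifest:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: ci-runner-egress
      namespace: ci
    spec:
      podSelector:
        matchLabels:
          app: actions-runner       # applies only to the runner pods
      policyTypes:
        - Egress
      egress:
        - to:                       # allow DNS lookups
            - namespaceSelector: {}
          ports:
            - protocol: UDP
              port: 53
        - to:                       # allow HTTPS only to an approved egress proxy range
            - ipBlock:
                cidr: 10.0.42.0/24
          ports:
            - protocol: TCP
              port: 443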
GitHub has a page on this:
https://securitylab.github.com/resources/github-actions-prev...
Or as an intuitive framing: if you can understand the value of branch protection and secret pushing policies for helping your junior engineers, the same holds for your CI/CD policies.
Clearly in an ideal world runners would be hermetic. But I think the presence of other sources of non-hermeticity doesn't justify a poorly implemented policy feature on GitHub's part.
“We only allow actions published by our organization and reusable workflows”

and
“We only allow actions published by our organization and reusable workflows OR ones that are manually downloaded from an outside source”
are very very different policies