There are so many third-party actions where the docs or examples reference the master branch. A quick malicious push and they can presumably exfiltrate data from a ton of repositories.
(Even an explicit tag is vulnerable because it can just be moved still, but master branch feels like not even trying)
Why do CI/CD systems need access to secrets? I would argue they need access to APIs, and privileges to perform specific API calls. But there is absolutely nothing about calling an API that fundamentally requires that the caller know a secret.
I would argue that a good CI/CD system should not support secrets as a first-class object at all. Instead steps may have privileges assigned. At most there should be an adapter, secure enclave style, that may hold a secret and give CI/CD steps the ability to do something with that secret, to be used for APIs that don’t support OIDC or some other mechanism to avoid secrets entirely.
GitHub actually is doing something right here. You can set it up as a trusted identity provider in AWS, and then use GitHub to assume a role in your AWS account. And from there, you can get access to credentials stored in Secrets Manager or SSM.
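A minimal sketch of that pattern, assuming a role that has already been configured to trust GitHub's OIDC provider (the role ARN and secret name are hypothetical):

permissions:
  id-token: write   # lets the job request an OIDC token from GitHub
  contents: read

steps:
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/ci-deploy   # hypothetical role
      aws-region: us-east-1
  # Later steps hold only short-lived AWS credentials; no long-lived
  # secret is stored in GitHub at all.
  - run: aws secretsmanager get-secret-value --secret-id my-app/prod

The role's trust policy can also scope which repository and branch are allowed to assume it, and the credentials expire on their own.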
- name: Retrieve keystore for apk signing
  env:
    KEYSTORE: ${{ secrets.KEYSTORE }}
  run: echo "$KEYSTORE" | base64 --decode > /home/runner/work/keystore.pfk

GitHub should instead let you store that key as a different type of secret such that a specific workflow step can sign with it. Then a compromised runner VM could possibly sign something that shouldn't be signed, but could not exfiltrate it.
Even better would be to be able to have a policy that the only thing that can be signed is something with a version that matches the immutable release that’s being built.
CI/CD does not exist in a vacuum. If you had CI/CD entirely integrated with the rest of the infrastructure, it might be possible to do, say, an app deploy without passing creds to user code (say, have platform APIs it can call to do the deployment, instead of the typical "install the client, get the creds, run k8s/ssh/whatever else is needed for deploy").
But that's a high level of integration that's very environment-specific, without all that many positives (so what if you don't need creds: the pipeline still has permission to make a lot of mess if it gets hijacked), and it's a lot, lot more code to write vs. "run a container and pass it some env vars", which has become the standard.
Of course the general purpose task runner that both run on does need to support secrets
Only the CI part needs to build anything; it needs little else, and in a coherent setup it's the only part that builds at all.
https://docs.github.com/en/actions/how-tos/secure-your-work/...
I.e. no prod access by editing the workflow definition and pushing it to a branch.
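Concretely, that's done by gating the deploy job behind a protected environment; a minimal sketch (the job name and deploy script are hypothetical):

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # secrets are only released under this environment's rules
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh   # hypothetical deploy script

Protection rules on the environment can require reviewers or restrict which branches may target it, so pushing an edited workflow to a random branch doesn't get you the production secrets.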
On the one hand, CD workflows are less exposed than CI workflows. You only deploy code that has made it through your review and CI processes. In a non-continuous deployment model, you only deploy code when you decide to. You are not running your CD workflow on a third-party pull request.
On the other hand, the actual CD permission is a big deal. If you leak a credential that can deploy to your k8s cluster, you are very, very pwned. Possibly in a manner that is extremely complex to recover from.
I also admit that I find it rather surprising that so many workflows have a push model of deployment like this. My intuition for how to design a CD-style system would be:
1. A release is tagged in source control.
2. Something consumes that release tag and produces a production artifact. This might be some sort of runner that checks out the tagged release, builds it, and produces a ghcr image. Bonus points if that process is cleanly reproducible and more bonus points if there's also an attestation that the release artifact matches the specified tag and all the build environment inputs. (I think that GitHub Actions can do this, other than the bonus points, without any secrets; see the sketch after this list.)
3. Something tells production to update to the new artifact. Ideally this would trigger some kind of staged deployment. Maybe it's continuous, maybe it needs manual triggering. I think that, in many production systems, this could be a message from the earlier stages that tells an agent with production privileges to download and update. It really shouldn't be that hard to make a little agent in k8s or whatever that listens to an API call from a system like GitHub Actions, authenticates it using OIDC, and follows its deployment instructions.
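For the attestation in step 2, GitHub's provenance action can run without any stored secrets; a minimal sketch (the build command and artifact path are assumptions):

permissions:
  id-token: write      # signs the attestation with the workflow's OIDC identity
  attestations: write  # allows storing the attestation on GitHub
  contents: read

steps:
  - uses: actions/checkout@v4
  - run: make release   # hypothetical build producing dist/app.tar.gz
  - uses: actions/attest-build-provenance@v1
    with:
      subject-path: dist/app.tar.gz   # the artifact being attested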
P.S. An attested-reproducible CD build system might be an interesting startup idea.
...but I saw that anti-pattern of "just add a step that does the deploy after CI in same" often enough that I think it might be the most common way to do it.
There is if you pay for API access, surely?
How is that not secrets management?
Or: the deployment service knows the identity of the instance, so its secret is its private key
Or, how PyPI does it: the deployment service coordinates with the trusted CI/CD service to learn the identity of the machine (like its IP address, or a trusted assertion of which repository it’s running on), so the secret is handled in however that out-of-band verification step happens. (PyPI communicates with Github Actions about which pipeline from which repository is doing the deployment, for example)
It’s still just secrets all the way down
But how does the metadata server know that the CI instance is allowed to access the secret? Especially when the CI/CD system is hosted by a third party. It needs to present some form of credentials. The CI system may also need permissions or credentials for a private repository of packages or artifacts needed in the build process.
For me, a CI/CD system needs two things: Secret management and the ability to run Bash.
As for deploying from a trusted service without managing credentials, PyPI calls this "trusted publishing": https://docs.pypi.org/trusted-publishers/
From the docs:
1. Certain CI services (like GitHub Actions) are OIDC identity providers, meaning that they can issue short-lived credentials ("OIDC tokens") that a third party can strongly verify came from the CI service (as well as which user, repository, etc. actually executed);
2. Projects on PyPI can be configured to trust a particular configuration on a particular CI service, making that configuration an OIDC publisher for that project;
3. Release automation (like GitHub Actions) can submit an OIDC token to PyPI. The token will be matched against configurations trusted by different projects; if any projects trust the token's configuration, then PyPI will mint a short-lived API token for those projects and return it;
4. The short-lived API token behaves exactly like a normal project-scoped API token, except that it's only valid for 15 minutes from time of creation (enough time for the CI to use it to upload packages).
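In workflow terms, a trusted-publishing release job needs no stored credentials at all; a minimal sketch (the build steps are assumptions, the publish action and permission are the documented pattern):

permissions:
  id-token: write   # lets the job mint the OIDC token it trades with PyPI

steps:
  - uses: actions/checkout@v4
  - run: python -m pip install build && python -m build
  # No username/password/API token configured: the action exchanges the
  # OIDC token for a short-lived, project-scoped PyPI token behind the scenes.
  - uses: pypa/gh-action-pypi-publish@release/v1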
You have to add your GitHub repository as a "trusted publisher" to your PyPI packages.
Honestly the whole workflow bothers me -- how can PyPI be sure it's talking to GitHub? What if an attacker could mess with PyPI's DNS? -- but it's how it's done.
I keep meaning to write a partially federated CI tool that uses Prometheus for all of its telemetry data but never get around to it. I ended up carving out a couple other things I’d like to be part of the process as a separate app because I was still getting panopticon vibes and some data should just be private.
Those tests will need creds to access third party database endpoints.
I don't really understand what you mean by "secure enclave style"? How would that be different?
I suppose I would make an exception for license keys. Those have minimal blast radii if they leak.
Your approach boils down to “let's give each step its own access to its own hardware-protected secrets, but developers shouldn't otherwise have access”.
Which is a great way to “support secrets,” just like the article says.
Let’s just call it secret support.
I agree with your suggestion that capabilities-based APIs are better, but CI/CD needs to meet customers where they’re at currently, not where they should be. Most customers need secrets.
Pedantically, I'd say maybe it's more fair to say they shouldn't have access to long-lived secrets and should only use short-lived values.
The "I" stands for Integration so it's inevitable CI needs to talk to multiple things--at the very least a git repo which most cases requires a secret to pull.
This all seems right, but the reality is that people will put secrets into CI/CD, and so the platform should provide an at least passably secure mechanism for them.
(A key example being open source: people want to publish from CI, and they’re not going to set up additional infrastructure when the point of using third-party CI is to avoid that setup.)
Because you need to be able to sign/notarize with private keys and deploy to cloud environments. Both of these require secrets known to the runner.
This works well for _most_ things. There are some issues with doing docker-in-docker for volume mapping, but they're mostly trivial. We're using taskfiles to run tasks, so I can just rely on it for that. It also has built-in support for nice output grouping ( https://taskfile.dev/docs/reference/schema#output ) that GitHub Actions can parse.
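For reference, that grouping is just a top-level setting in the Taskfile; a minimal sketch (the task name and command are hypothetical):

# Taskfile.yml
version: '3'
output: group   # wrap each task's output in begin/end grouping markers
tasks:
  test:
    cmds:
      - go test ./...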
Pros:
1. Ability to run things in parallel.
2. Ability to run things _locally_ in a completely identical environment.
3. It's actually faster!
4. No vendor lock-in. Offramp to github runners and eventually local runners?
Cons:
It often takes quite a while to understand how actions work when you want to run them in your own environment. For example, how do you get credentials to access the GitHub Actions cache and then pass them to Docker? Most of the documentation just says: "Use this GitHub Action and stop worrying your pretty little head about it".
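For the Docker case specifically, the answer appears to be that the runner exposes ACTIONS_CACHE_URL and ACTIONS_RUNTIME_TOKEN to steps, and you can forward them to buildx yourself; a hedged sketch (crazy-max/ghaction-github-runtime just re-exports those values as env vars):

steps:
  - uses: crazy-max/ghaction-github-runtime@v3   # exports ACTIONS_CACHE_URL / ACTIONS_RUNTIME_TOKEN
  - run: |
      docker buildx build \
        --cache-from "type=gha,url=$ACTIONS_CACHE_URL,token=$ACTIONS_RUNTIME_TOKEN" \
        --cache-to "type=gha,url=$ACTIONS_CACHE_URL,token=$ACTIONS_RUNTIME_TOKEN,mode=max" \
        .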
My biggest concern with it is that it’s somehow the de facto industry standard. You could do so much better with relatively small investments, but MS went full IE6 with it… and now there’s a whole generation of young engineers who don’t know how short their end of the stick actually is since they never get to compare it to anything.
Personally I've just retired a laptop and I'm planning to turn it into a little home server. I think I'm gonna try spinning up Woodpecker on there, I'm curious to see what a CI system people don't hate is like to live with!
steps:
  - name: backend
    image: golang
    commands:
      - go build
      - go test
  - name: frontend
    image: node
    commands:
      - npm install
      - npm run test
      - npm run build
Yes, it's easy to read and understand, and it's container based, so it's easy to extend. I could probably intuitively add on to this. I can't say the same for GitHub, so it has that going for it.

But the moment things start to get a little complex, that's when the waste starts happening. Eventually you're going to want to _do_ something with the artifacts being built, right? So what does that look like?
Immediately that's when problems start showing up...
- You'll probably need a separate workflow that defines the same thing again, only this time combining the artifacts into a Docker image or a package.
- I am only now realizing that Woodpecker is a fork of Drone. This was a huuuge issue in Drone. We ended up using Starlark to generate our Drone YAML because it lacked any kind of reusability, and that was a big headache.
- If I were to only change a `frontend` file or a `backend` file, then I'm probably going to end up wasting time and compute rebuilding the same artifacts over and over.
- GitHub's free component honestly hurts itself here. I don't have to care about waste if it's mostly free anyways.
- Running locally using the local backend... looks like a huge chore. In Drone this was basically impossible.

I really wish someone would take a step back and really think about the problems being solved here and where the current tooling fails us. I don't see much effort being put into the things that really suck about GitHub Actions (at least for me): legibility, waste, and the feedback loop.
By adding one file to your git repo, you get cross-platform build & test of your software that can run on every PR. If your code is open source, it's free(ish) too.
It feels like a weekend project that a couple people threw together and then has been held together by hope and prayers with more focus on scaling it than making it well designed.
I'm from a generation who had to use VSS for a few years. The sticks are pretty long these days, even the ones you get from github.
I just had trauma!
I will say that SourceSafe had one advantage: You could create "composite" proxy workspaces.
You could add one or two files from one workspace, and a few from another, etc. The resulting "avatar" workspace would act like they were all in the same workspace. It was cool.
However, absolutely everything else sucked.
I don't miss it.
(Git has octopus merges, jj just calls them “merge commits” even though they may have more than two parents)
Git has the concept of "atomic repos." Repos are a single unit, including all files, branches, tags, etc.
Older systems basically had a single repo, with "lenses" into sections of the repo (usually called "workspaces," or somesuch. VSS called them something else, but I can't remember).
I find the atomic repo thing awkward; especially wrt libraries. If I include a package, I get the whole kit & kaboodle, including test harnesses and whatnot. My libraries tend to have a lot more testing code than library code.
Also, I would love to create a "dependency repo," that aggregates the exported parts of the libraries that I'm including into my project, pinned at the required versions. I guess you could say package managers are that, but they are kind of a blunt instrument. Since I eat my own dog food, I'd like to be able to write changes into the dependency, and have them propagate back to their home repo, which I can sort of do now, if I make it a point to find the dependency checkout, make a change, then push that change, but it's awkward.
But that seems crazy complex (and dangerous), so I'm OK with the way things work now.
Both git and jj have sparse checkouts these days, it sounds like you’d be into that
Do you vendor the libraries you use? Python packages typically don’t include the testing or docs in wheels uploaded to PyPI, for instance
These days in Pythonland, it’s typical to use a package manager with a lockfile that enforces build reproducibility and SHA signatures for package attestation. If you haven’t worked with tools like uv, you might like their concepts (or you might be immediately put off by their idea of hermetically isolated environments idk)
You can see most of my stuff in GH. You need to look at the organizations, as opposed to my personal repos: https://github.com/ChrisMarshallNY#browse-away
Thanks for the heads-up. I'll give it a gander.
In a centralized VCS there are viable CI/CD options like 'check the compiler binaries in' or even 'check the whole builder OS image in', which git is simply not able to handle by design and needs extensions to work around. Git winning the mindshare battle made these a bit forgotten, but they were industry standard a couple decades ago.
We moved from VSS to SVN, and it took a little encouraging for the person who had set up our branching workflow using that VSS feature to be happy losing it if that freed us from VSS.
> actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744
Positive example: https://github.com/codecov/codecov-action/blob/96b38e9e60ee6... Negative example: https://github.com/armbian/build/blob/54808ecff253fb71615161...
If I write actions/setup-python@v1, I'm expecting the action to run with the v1 tag of that repository. If I rerun it, I expect it to run with the v1 tag of that repository...which I'm aware may not be the same if the tag was updated.
If I instead use actions/setup-python@27b31702a0e7fc50959f5ad993c78deac1bdfc29 then I'm expecting the action to run with that specific commit. And if I run it again it will run with the same commit.
So, whether you choose the tag or the commit depends on whether you trust the repository or not, and if you want automatic updates. The option is there...isn't it?
That's the mistaken assumption that breaks things. People don't usually expect that it's an arbitrary, modifiable reference; they expect it to be the same version they picked when they created the file (i.e. that a tag is just a human-friendly name for a commit).
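Hence the convention of pinning the SHA but keeping the human-readable version in a trailing comment, so reviewers and update tooling can still see what it's supposed to be (the SHA-to-tag mapping below is illustrative, not verified):

steps:
  - uses: actions/setup-python@27b31702a0e7fc50959f5ad993c78deac1bdfc29   # v1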
Mind you, CI does always involve a surprising amount of maintenance. Update churn is real. And Macs are still much more fiddly to treat as "cattle" machines.
Current job is using blacksmith to save on costs, but the reality of it is that this caching layer only adds costs in some of our projects
I get its use, especially in large companies, and I also get the culture leading up to it being widely used, but I can't help but chuckle a bit at the problems we cause for ourselves in this industry.
- Using the commit SHA of a released action version is the safest for stability and security.
- If the action publishes major version tags, you should expect to receive critical fixes and security patches while still retaining compatibility. Note that this behavior is at the discretion of the action's author.
So you can basically implement your own lock file, although it doesn't work for transitive deps unless those are specified by SHA as well, which is out of your control. And there is an inherent trade-off in terms of having to keep abreast of critical security fixes and update your hashes, which might count as a charitable explanation for why using hashes is less prevalent.
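One way to soften that trade-off (a sketch, not a full lock-file substitute): pin by SHA and let Dependabot bump the pins, since it treats GitHub Actions as an update ecosystem:

# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "github-actions"   # covers `uses:` references in workflows
    directory: "/"
    schedule:
      interval: "weekly"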
This is not true for stability in practice: the action often depends on a specific Node version (which may not be supported by the runner at some point) and/or a versioned API that becomes unsupported. I've had better luck with @main.
So in other words the strategy in the docs doesn't actually address the issue
Sure you can implement it yourself for direct dependencies and decide to only use direct dependencies that also use commit sha pinning, but most users don’t even realize it’s a problem to begin with. The users who know often don’t bother to use shas anyway.
Or GitHub could spend a little engineer time on a feasible lock file solution.
I say this as somebody who actually likes GitHub Actions and maintains a couple of somewhat well-used actions in my free time. I use sha pinning in my composite actions and encourage users to do the same when using them, but when I look at public repos using my actions it’s probably 90% using @v1, 9% @v1.2 and 1% using commit shas.
[0] Actions was the first Microsoft-led project at GitHub — from before the acquisition was even announced. It was a sign of things to come that something as basic as this was either not understood or swept under the rug to hit a deadline.
I maintain an R package that is quite stable and is widely used. But every month or so, the GHA on one of the R testing machines will report an error. The messages being quite opaque, I typically spend a half hour trying to see if my code is doing something wrong. And then I simply make a calendar item to recheck it each day for a while. Sure enough, the problems always go away after a few days.
This might be specific to R, though.
Why not just build the workflows themselves as docker images? I guess running other docker images in the workflow would then become a problem.
Because it's clear to write and read. You don't want your CI/CD logic to end up being spaghetti because a super ninja engineer decided they can do crazy stuff just because they can. Same reason why it's a bad idea to create your infrastructure directly in a programming language (unless creating infrastructure is a core part of your software).
> Why not just build the workflows themselves as docker images? I guess running other docker images in the workflow would then become a problem.
That's how Drone CI handled it. GitLab kind of does the same, where you always start as a docker image, and thus if you have a custom one with an entrypoint, it does whatever you need it to.
YAML is fine for data, but inevitably stuff like workflows end up tacking on imperative features to a declarative language.
With HCL you can have conditions and types without the madness-enabling flexibility of a full programming language.
I really really want to use dagger, but I don’t think there’s organizational interest in it.
Also, using the dagger github action should make the transition easier I suppose: https://github.com/dagger/dagger-for-github
I guess the best solution is to just write custom scripts in whatever language one prefers and just call those from the CI runner. Probably missing out on some fancy user interfaces but at least we'd no longer be completely locked into GHA...
Harsh, given GitHub makes it very easy to set up attestations for artifact (like build & SBOM) provenance.
That said, Zizmor (static analyser for GitHub Actions) with Step Security's Harden Runner (a runtime analyser) [0] pair nicely, even if the latter is a bit of an involved setup.
[0] https://github.com/step-security/harden-runner
> The fix is a lockfile.
Hopefully; SLSA's future directions draft includes a hermetic build process as a requirement: https://slsa.dev/spec/v1.2/future-directions
I have a little launcher for that which helps: https://github.com/7mind/mudyla
I'm pretty sure it contains the exact line of it being "deeply confused about being a package manager".
Well... not Pip!
Pip has been a flag bearer for Python packaging standards for some time now, so that alternatives can implement standards rather than copy behavior. So first a lock file standard had to be agreed upon which finally happened this year: https://peps.python.org/pep-0751/
Now it's a matter of the maintainers, who are currently all volunteers donating their spare time, fully implementing support. Progress is happening, but it is a little slow because of this.
For those who can still escape the lock-in, this is probably a good occasion to point to Forgejo, an open-source alternative that also has CI actions: https://forgejo.org/2023-02-27-forgejo-actions/ It is used by Codeberg: https://codeberg.org/
However, as noted in the article, Forgejo's implementation currently has all the same "package manager" problems.
When you have a multi-platform image, the actual per-platform images are usually not tagged. No point.

But that doesn't mean that they are unused.

So on GitHub Actions, when you upload a multi-platform image, the per-platform images show up in the untagged list. And you can delete them, breaking the multi-platform image, as it now points to blobs that don't exist anymore.
The main problem, which this article touches, is that GHA adds a whole new dimension of dependency treadmill. You now have a new set of upstreams that you have to keep up to date along with your actual deployment upstreams.
If you do, please submit a "show HN." I'd love to use it.
GitHub actions has some rough edges around caching, but all the packaging is totally unimportant and best avoided.
I hope that Codeberg will become more mainstream for FOSS projects.
I hope another competent player, besides GitLab and Bitbucket, will emerge in the corporate paid space.
Has anyone been bitten by a breaking change from an action update mid-pipeline?
The vast majority of users use GitHub-hosted runners. If you don't trust GitHub, you have bigger problems than whether the correct code for an action is downloaded.
Anyway, software is so complicated that at some level, you need to trust something because it's impossible to personally comprehend and audit all code.
So, you still need to trust git. You still need to trust your OS. You still need to trust the hardware. You just don't have enough minutes in your life to go down through all those levels and understand it well enough to know that there's nothing malicious in there.
Like, what did one expect?
I just converted our old parrot Travis runners to GitHub Actions. There I had constant trouble with Travis's 15m timeouts, 10 years ago. With the new GitHub Actions I can run the full tests (which was not possible with Travis) in 3 minutes. About 8x faster hardware.
mhitza•2mo ago
Actions is one thing, but after all these years the new fine-grained access tokens still aren't supported across all the product endpoints (and the granularity is wack), which is more telling about their lack of investment in maintenance.
Cthulhu_•2mo ago
(we run a private gitlab instance and a merge request can spawn hundreds of jobs, that's a lot of potential Gitlab credits)
anentropic•2mo ago
https://github.com/actions/create-release
coryrc•2mo ago
i.e. from https://github.com/actions/cache/?tab=readme-ov-file#note
captn3m0•2mo ago
> Instead of writing bespoke scripts that operate over GitHub using the GitHub API, you describe the desired behavior in plain language. This is converted into an executable GitHub Actions workflow that runs on GitHub using an agentic "engine" such as Claude Code or Open AI Codex. It's a GitHub Action, but the "source code" is natural language in a markdown file.
kokada•2mo ago
Edit: ok, looking at the examples it makes more sense. The idea is to run specific actions that are probably not well automated, like generating and keeping documentation up-to-date. I hope people don't use it to automate things like CI runs though.
everfrustrated•2mo ago
They will occasionally make changes if it aligns with a new product effort driven from within the org.
Saying they're dropping support is a stretch, especially as very few people actually pay for their Support package anyway... (Yes, they do offer it as a paid option to Enterprise customers.)
999900000999•2mo ago
GitHub actions more or less just work for what most people need. If you have a complex setup, use a real CI/CD system.
999900000999•2mo ago
GitHub Actions are really for just short scripts. Don't take your Miata off road.
999900000999•2mo ago
It's a bit bloated, but it's free and works.
kakwa_•1mo ago
This stuff is a nightmare to manage, and with large code bases/products, you need a dedicated "devops" just to babysit the thing and avoid it becoming a liability for your devs.
I'm actually looking forward to our migration to GHEC from on-prem just because GitHub Actions, as shitty as they are, are far less of a headache than Jenkins.
999900000999•2mo ago
I get the vibe it was never intended to seriously compete with real CI/CD systems.
But then people started using it as such, thus this thread is full of complaints.
bastardoperator•2mo ago
https://github.com/jenkinsci/jenkins/tree/master/.github/wor...
servercobra•2mo ago
Only downside is they never got back to us about their startup discount.
bksmithconnor•2mo ago
could you shoot me your GH org so I can apply your startup discount? feel free to reach out to support@blacksmith.sh and I'll get back to you asap. thanks for using blacksmith!
kylegalbraith•2mo ago
[0] https://depot.dev/
Ygg2•2mo ago
What if GH actions is considered legacy business in favour of LLMs?
blibble•2mo ago
and switch everyone to the dumpster fire that is Azure DevOps
and if you thought GitHub Actions was bad...
fuzzy2•2mo ago
From my perspective, Azure Pipelines is largely the same as GitHub Actions. I abhor this concept of having abstract and opaque “tasks”.
WorldMaker•2mo ago
Microsoft claims Azure DevOps still has a roadmap, but it's hard to imagine that the real roadmap isn't simply "Wait for more VPs in North Carolina to retire before finally killing the brand".
re-thc•2mo ago
> and switch everyone to the dumpster fire that is Azure DevOps
The other way around. Azure DevOps is 1/2 a backend for Github these days. Github re-uses a lot of Azure Devops' infrastructure.
everfrustrated•2mo ago
The GitHub Actions runner source code is all dotnet. GitHub was a Ruby shop.
Normal_gaussian•2mo ago
GitHub also runs a free tier with significant usage.
There are ~1.4b paid instances of Windows 10/11 desktop; and ~150m Monthly active accounts on GitHub, of which only a fraction are paid users.
Windows is generating something in the region of $30b/yr for MS, and GitHub is around $2b/yr.
MS have called out that Copilot is responsible for 40% of revenue growth in GitHub.
Windows isn't what developers buy, but it is what end users buy. There are a lot more end users than developers. Developers are also famously stingy. However, in both products the margin is in the new tech.
tonyhart7•2mo ago
But GitHub pairs well with MS's other core products, like Azure and the VS/VSC department.

MS has a good chance at vertical integration over how software gets written, from scratch to production. If they can somehow bundle everything into an all-in-one membership, like a Google One sub, I think they have a good chance.
Hamuko•2mo ago
I guess Bitbucket is cheaper but you'll lose the savings in your employees bitching about Bitbucket to each other on Slack.
nevon•1mo ago
Now for the people who were operating Bitbucket, I'm sure it's a relief.
miohtama•2mo ago
These include
- https://circleci.com/
- https://www.travis-ci.com/
- Gitlab
Open source:
- https://concourse-ci.org/ (discussed in the context of Radicle here https://news.ycombinator.com/item?id=44658820 )
- Jenkins
- etc.
Anyone can complain as much as they want, but unless they put the money where their mouth is, it's just noise from lazy people.
input_sh•2mo ago
What that type of section usually means is "there's someone from Microsoft that signed up for our service using his work account", sometimes it means "there's some tiny team within Microsoft that uses our product", but it very rarely (if ever) means "the entire company is completely reliant on our product".
rjzzleep•2mo ago
Here we are talking about one of the world's most valuable companies, which gets all sorts of perks, benefits and preferential treatment from various entities and governments around the globe, and somehow we have to be grateful when they deliver garbage while milking the business they bought.
ironmagma•2mo ago
And besides that, a lot of people on here do pay for Github in the first place.
rjzzleep•2mo ago
How did we go in 20 years from holding these companies to account when they'd misbehave to acting as if they are poor damsels in distress whenever someone points out a flaw?
hexbin010•2mo ago
They hired a ton of people on very very good salaries
nsoqm•2mo ago
The opposite, to be lazy and to continue giving them money whilst being unhappy with what you get in return, would actually be more like defending the companies.
ImPostingOnHN•2mo ago
The opposite we see here: to not criticize them; to blame Microsoft's failure on the critics; and even to discourage any such criticism, are actually more like defending large companies.
miohtama•2mo ago
This especially includes governments and other institutional buyers.
thrdbndndn•2mo ago
Their size or past misbehaviors shouldn't be relevant to this discussion. Bringing those up feels a bit like an ad hominem. Whether criticism is valid should depend entirely on how GitHub Actions actually works and how it compares to similar services.
wizzwizz4•2mo ago
If the past misbehaviours are exactly the same shape, there's not all that much point re-hashing the same discussion with the nouns renamed.
tonyhart7•2mo ago
You better thank god for MS for being lazy and incompetent, the last thing we want for big tech is being innovative and have a stronger monopoly
drdec•2mo ago
Honestly I think the problem is more a rosy view of the past versus any actual change in behavior. There have always been defenders of such companies.
CamouflagedKiwi•2mo ago
I used Travis rather longer ago, it was not great. Circle was a massive step forward. I don't know if they have improved it since but it only felt useful for very simplistic workflows, as soon as you needed anything complex (including any software that didn't come out of the box) you were in a really awkward place.
olafmol•2mo ago
For some examples of more advanced usecases take a look: https://circleci.com/blog/platform-toolkit/
Disclaimer: i work for CircleCI.
CamouflagedKiwi•1mo ago
Also, honestly, I don't care about any of those features. The main thing I want is a CI system that is fast and customisable and that I don't have to spend a lot of time debugging. I think CircleCI is pretty decent in that regard (the "rerun with SSH" thing is way better than anything else I've seen) but it doesn't seem to be getting any better over time (e.g. caching is still very primitive and coarse-grained).
weakfish•2mo ago
So I’m part of the problem? Me specifically?
klausa•2mo ago
Just refuse to do my job because I think the tools suck?
ChrisMarshallNY•2mo ago
I used to work for a Japanese company, and one of their core philosophies was “Don’t complain, unless you have a solution.” In my experience, this did not always have optimal outcomes: https://littlegreenviper.com/problems-and-solutions/
NamlchakKhandro•2mo ago
Don't waste your time
gabrielgio•2mo ago
Once I'm in charge of budget decisions at my company, I'll make sure that none of it goes to any MS or Atlassian product. Until then I'll keep complaining.
XCabbage•2mo ago
(I find it extremely sketchy from a competition law perspective that Microsoft, as the owner of npm, has implemented a policy banning npm publishers from publishing via competitors to GitHub Actions - a product that Microsoft also owns. But they have; that is the reality right now, whether it's legal or not.)
woodruffw•2mo ago
(It can also be extended to arbitrary third party IdPs, although the benefit of that is dependent on usage. But if you have another CI/CD provider that you’d like to integrate into PyPI, you should definitely flag it on the issue tracker.)
IshKebab•2mo ago
Github Actions is actually one of the better CI options out there, even if on an absolute scale it is still pretty bad.
As far as I can tell nobody has made a CI system that is actually good.
rileymichael•2mo ago
really surprised there are no others though. dagger.io was in the space but the level of complexity is an order of magnitude higher
wnevets•2mo ago
That isn't gonna get better anytime soon.
"GitHub Will Prioritize Migrating to Azure Over Feature Development" [1]
[1] https://thenewstack.io/github-will-prioritize-migrating-to-a...
phantasmish•2mo ago
Retrofitting that into "cloud" bullshit is such a bad idea.
theamk•2mo ago
Using bare metal requires competent Unix admins, and the Actions team is full of javascript clowns (see: the decision to use dashes in environment variable names; the lack of any sort of shell-quoting support in templates; keeping logs next to binaries in self-hosted runners). Perhaps they would be better off using infra someone else maintains.
dijit•2mo ago
So does running VMs in a cloud provider.
Except now we call them "DevOps" or "SRE" and pay them 1.5-2x.
(as a former SRE myself, I'm not complaining).
gregoryl•1mo ago
We had a critical outage because they deprecated Windows 2019 agents a month earlier than scheduled. MS support had the gall to both blame us for not migrating sooner, and refuse to escalate for 36 hours!
drysart•1mo ago
The initial banners and warning emails about it went out well ahead of the original EOL timeline; and again as the extended EOL drew close.
If you were caught off guard by the brownout period, it's your devops team that's to blame, not Microsoft; and Microsoft was absolutely right to blame you for not migrating sooner. They gave you an extra 6 months to do it because you should have had all this done back in the first half of the year.
(If you want to blame Microsoft for anything here, blame them for not having a comprehensive tool to identify all your windows-2019 pipelines and instead just relying on "just go look at the latest pipeline runs page and hope everything's run recently enough to be on that".)