I do find that some things just work locally and fail in real actions and vice versa. For the most part it’s made it easier to move the bar forward though.
I've tried twice now to get it working, pulling down many GBs of images and installing stuff and then getting stuck in some obscure configuration or environment issue. I was even running Linux locally which I figured would be the happiest path.
I'm not eager to try again, and unless there's a CI that's very slow or GH actions that need to be updated often for some reason, I feel like it's better to just brute-force it - waiting for CI to run remotely.
It’s slow and arduous work to inject at the right point and find out what went wrong. But it’s miles better than act. act is a noble idea, and I hope it can meet its target, but it falls short currently.
ssh-action gets you on the host, and lets you quickly establish what is happening and implement fixes. It’s top notch!
(I’m unaffiliated, just a big fan of ssh-action).
Last week I moved from a custom server back to GH Pages + GH Actions, and broke a feature that depended on an out dir existing after one build phase and before another. I have no idea how to fix it. It's probably simple, I'm just burnt out.
https://github.com/ChristopherHX/runner-image-blobs/pkgs/con...
I’m pretty sure GH actions don’t run the latest Ruby version.
Granted, all this time I’ve always only used rbenv-managed Ruby versions (and rvm before that). I’ve long disliked/avoided Python because its ecosystem lacked any equivalent, but I know “uv” has gained popularity there, and perhaps Mise is good in Python land as well.
On my team, Nix seems to work pretty well for encapsulating most of the interesting things CI jobs do. We don't use virtualization to run local versions of our jobs, and we don't run into architecture mismatch issues.
I use dagger to read these .env/mise env vars and inject dummy values into the test container. Production is taken care of with a secrets manager.
Issues or discussions related to providing support/coverage/compatibility/workarounds for podman are closed with a terse message. Unusual for an open source project.
Also, understandably, there is no macOS support and I use macOS on GHA for iOS builds (another place I have to debug that this wouldn’t cover).
For simple use cases that won't matter, but if you have complex GitHub Actions you're bound to find varying behavior. That can lead to frustration when you're debugging bizarre CI failures.
If you're asking "but, why?!": it's because I really wanted to know what versions of everything I could expect in GHA, and I detest the "git commit -amdummy && git push" stupidity, so I guess the answer is "because I'm wired that way".
I guess it would be nice to have a tool to convert a Gitlab YAML to Docker incantations, but I've never needed to do it often enough for it to be a problem.
Agreed. I’m thankful for tools like act, but there really should be an officially supported way to run gh actions locally.
CI must be local-first and platform-agnostic.
How do you figure that? I'd buy "lock people into the platform," but in that way GitHub Issues has been the moat for far longer than Actions has.
I'm open to your suggestion, but I struggle to get on board with the idea that Microsoft thinks CI compute billing is the future.
Which GitHub Actions context variables are emulated or customizable in act, like github.event, github.actor, or secrets?
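From what I remember of the CLI you can fake most of these yourself (flags from memory, double-check `act --help`):

    # -e supplies the payload that becomes github.event, -a sets github.actor,
    # and -s exposes values under secrets.* (event.json is one you write by hand)
    act pull_request -e event.json -a some-user -s MY_TOKEN=dummy-value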
My alternative was to have a dedicated repository where I can spam the Git and workflow run histories as much as I need to experiment with workflows.
From the outside, pixi looks like a way to replace (conda + pip) with (not-conda + uv). It’s like uv-for-conda, but also uses uv internally.
Better task running is cool, but it would be odd to use pixi if you don’t otherwise need conda stuff. And extra super duper weird if you don’t have any python code!
Our commands for CI are all just one-liners that go to wrappers that pin all our dependencies.
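Roughly this shape, to give a flavor (a made-up example, not our actual script):

    #!/usr/bin/env bash
    # ci/lint -- hypothetical wrapper; CI and developers both run just "./ci/lint".
    set -euo pipefail
    # The tool version is pinned here, in one place, so local runs match CI exactly.
    exec uvx ruff@0.6.9 check .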
Lately I've been working with a lot of cross-platform Bash scripts that run natively on macOS, WSL, and Linux servers, with little to no consideration for the differences. It's been good!
I want to be able to use this project, I really do. But it’s just not yet there, and this isn’t on Nektos. Nektos is, as best I understand it, trying to approximate GHA, but it’s not easy.
I wonder if there is a proposal to support code-based actions. Config-based CI needs to die.
nu is my default shell. Note that I am not talking about dagger shell. https://dagger.io/blog/a-shell-for-the-container-age-introdu...
I’ve been a long time user, but I’ve run into tons of problems with act over the last year and am wondering if there are better alternatives out there.
Despite the name, act is really only for the latter. You can try to test a local action by putting it in a workflow and calling that, but if you do a checkout in your workflow that will overwrite the mount of your local code into the act container, meaning you’ll get the version from the remote branch instead. Depending on the action, you may not be able to comment out the checkout step while testing.
it works out very well.
Is Dagger usable yet? Is there still hope for Earthly? Are general purpose workflow systems winning any time soon, or are we still afraid of writing code? Any new systems out there that cover all the totally basic features like running locally, unlike Github Actions?
When they did, I said "fuck it" and just started distributing a Nix flake with wrappers for all the tools I want to run in CI so that at least that part gets handled safely and is easy to run locally.
The worst and most annoying stuff to test is the CI platforms' accursed little pseudo-YAML DSLs. I still would like that piece.
devenv.sh tasks might be a good basis for building on it. I think maybe bob.build also wants to be like this
I also want to say that this approach has largely spared my team the pain many users seem to have with Docker on aarch64 Macs. Nix works well on both aarch64 and x86_64, and doesn't require emulation to run. This is really more appropriate for running development tools locally, I think.
- cd into repo
- `flox activate` -> puts you into a subshell with your desired tools, environment variables, services, and runs any setup scripts you've defined
- You do your work
- `exit` -> you're back in your previous shell
Setting up and managing an environment is also super easy:
- cd into project directory
- `flox init` -> creates an "environment"
- `flox install python312` -> installing new packages is very simple
- `flox edit` -> add any environment variables, setup scripts, services in an editor
- `flox activate` -> get to work
The reason we call these "environments" instead of "developer environments" is that what we provide is a generalization of developer environments, so they're useful in more than just local development contexts. For example, you can use Flox to replace Homebrew by creating a "default" environment in your home directory [2]. You can also bundle an environment up into a container [3] to fit Flox into your existing deployment tools, or use Flox in CI [4].
All that stuff I described is free, but we have some enterprise features in development that won't be, and I think people are going to find those very appealing.
[1] https://flox.dev
[2] https://flox.dev/docs/tutorials/migrations/homebrew/
[3] https://flox.dev/docs/reference/command-reference/flox-conta...
dagger is the only code-based solution. It works, but it does have some rough edges since it has a much bigger surface area and is constantly growing.
This is what Earthfiles look like: https://docs.earthly.dev/docs/earthfile
Jenkins is going away too, but, like Windows XP, people often reach back for it.
I mostly like Jenkins though, idk why people so desperately want to always move to something new. I guess I've never been on the maintenance side of running Jenkins for thousands of engineers though.
If these systems thought a moment about the developer experience, they'd be magical, but they do not.
https://github.com/jenkinsci/jenkins/tree/master/.github/wor...
Managing the underlying infra is painful, and it remains a security liability even when not making obvious mistakes like exposing it to any open network.
And good luck if you're having that fun at a windows shop and aren't using a managed instance (or 5).
How so? I've been maintaining an instance for a decade, and it really doesn't seem that bad. We update plugins about monthly; it's largely clicking a couple of buttons in the UI. Updating Jenkins itself is an apt update. Groovy takes a bit to grok, sure, but LLMs help a lot here. The UX isn't that bad, IMHO, though I can see why some would say it is. We've switched over almost entirely to using a couple of runners, Docker, and Jenkinsfiles, which works great. We do still run deploys directly on the Jenkins box, largely because we need to run them single-threaded so they don't step on each other with load balancer rotations.
I can give you a few examples of where it falls short:
- security: there are constant CVEs about anything and everything in Jenkins
- upgrade paths: if your company uses lots of plugins, the resulting spaghetti is of Italian proportions
- if said company is in a Windows-only infra, on prem, and they still decided to use Jenkins, then good luck doing anything. Try putting an agent on a non-system disk, for example: Windows paths aren't handled, and you find yourself passing very specific "pre" commands that your master sends over SSH.
- Said SSH connection can be lost due to a variety of things, each with its own combination of parameters to tweak when invoking Jenkins.
- While we're at it, SSO isn't exactly supported in Windows environments. There are two external plugins you can try, one created because the other doesn't work, and even then good luck with that.
- At scale you end up having to be at least minimally interested in GC tuning, as Jenkins runs within the JVM
- UI: normal "Views" are not informative, and a bunch of custom views need to be made but rarely work with all sorts of plugins that people using your CI can consider crucial (say, parametrized cronjobs)
- Using the functionality Jenkins offers to "install tooling". Try to get it to use a certain version of Node in a pipeline. Any typical current CI solution makes that a straightforward, extensible task, but in Jenkins you have to configure an archaic, barely-working "tooling" area in your system settings, and it barely works beyond the most basic tools.
- Having to maintain enterprise-level Groovy libraries. Good luck. It's Groovy, but not exactly Groovy. It's all inside the JVM, but wrapped in Jenkins's own abstractions on top of it.
- Good luck monitoring lots of agents and doing typical tasks with them. Maintenance windows are slowly coming in, and monitoring sort of kinda works via plugins...
I've maintained instances in small companies and larger ones. With less custom stuff and with a lot more. Compiling C++, C#, Java/Kotlin, Objective-C/Swift. Building webapps, iOS SDKs, Android SDKs, and native Windows apps.
Jenkins can do everything if you bend it the way you want with some plugins and custom code. That's a strength nothing else offers, but by and large it becomes a weakness of your CI in the long run. Being opinionated isn't always pleasant, but sometimes it's what keeps a system less complex, easier to maintain, more secure, and easier to use.
/me puts on tinfoil hat
https://gitea.com/gitea/act -> https://gitea.com/gitea/act_runner
https://code.forgejo.org/forgejo/act -> https://code.forgejo.org/forgejo/runner
Earthly was amazing. The exact same setup in CI and locally. They're reviving it with a community effort, but I'm not sure if it'll live
(My war story:) I stopped using GHAs after an optimistic attempt to save myself five keystrokes ‘r’ ‘s’ ‘p’ ‘e’ ‘c’ led to 40+ commits and seeing the sunrise, but still no successful test run via GHA. Headless browsers can be fragile, but the cost-benefit ratio of using GHA was horrible, at least for an indie dev.
Your action can be empty and actions generate webhook events.
Do whatever you want with the webhook.
The trap and tradeoff is that the thirtieth time you've done that is when you realize you've screwed yourself and the organization by building a Byzantine, half-baked DAG with a very sketchy security story that you can't step through, run locally, or test.
Pour one out, I guess, but it's okay since I previously was super angry at it for languishing in the uncanny valley of "hello world works, practically nothing else does" -- more or less like nektos/act and its bazillions of forks
There are optimizations you’ll want (caching downloaded dependencies etc); if you wait to make those after your build is CI-agnostic you’ll be less tempted by vendor specific shortcuts.
Usually means more code - but, easier to test locally. Also, swapping providers later will be difficult but not “spin up a team and spend 6 months” level.
https://github.com/Cloudef/zig-aio/blob/master/flake.nix#L25...
https://github.com/Cloudef/zig-budoux/blob/master/flake.nix#...
The actual GA workflow file is pretty simple: https://github.com/Cloudef/zig-aio/blob/master/.github/workf...
It’s like the dark times before free and open source compilers.
When are we going to push back and say enough is enough!?
CI/CD desperately needs something akin to Kubernetes to claw back our control and ability to work locally.
Personally, I’m fed up with pipeline development inner loops that involve a Git commit, push, and waiting around for five minutes with no debugger, no variable inspector, and dumping things to console logs like I’m writing C in the 1980s.
You and I shouldn’t be reinventing these wheels while standing inside the tyre shop.
Turns out CI/CD is not an easy problem. I built a short-lived CI product before containers were really much of a thing ...you can guess how well that went.
Also, I'll take _any_ CI solution, closed or open, that tries to be the _opposite_ of the complexity borg that is k8s.
It's inevitable that things will be more difficult to debug once you're using a third party asynchronous tool as part of the flow.
- You may need something to connect the dots between code changes and containers. It's not always possible to build everything on every change, especially in a multi/mono-repo setup.
- You probably need some way to connect container outcomes back to branch protection rules. Again, if you are building everything, every time, it's pretty simple, but less so otherwise.
- You likely want to have some control over the compute capacity on which the actions run, both for speed and cost control. And since caching matters, some compute solutions are better than others.
I don't think GitHub Actions solves any of these problems well, but neither do containers on their own.
CI/CD is one of the topics that is barely solved or usable ...
In my experience, GitHub Actions work quite well, especially for smaller projects or when you can avoid using public templates and actions. While shortcuts like third-party libraries or actions can be convenient, they often come with trade-offs.
For scenarios where having your own runners and build images is beneficial, I would recommend GitLab CI.
Jenkins is a great choice if you need to run builds on dedicated hardware or within a virtualized environment.
When discussing the challenges of CI/CD, I often notice that many issues stem from complex chains of dependencies within the software or product itself. For example, consider a release pipeline that typically takes six hours to complete but fails after four hours. In such cases, having to restart the entire pipeline instead of just the failed job can be quite inefficient.
Additionally, debugging can become much easier when you have your own build images on runners, as you can run them locally for testing and troubleshooting.
What specific aspects of CI/CD do you find broken or challenging?
This is annoying when the code you write is native or reaches out to native stuff. Then all of a sudden your container that builds perfectly on the Mac doesn't build on Linux anymore (and vice versa; looking at you, gcc on Debian).
GitHub just (January) got Linux ARM runners, but they're not available for private repositories yet.
And their Mac runners do not come with Docker, and it cannot be installed.
I would guess that just `alias docker="docker --platform=linux/amd64"` type thing would go a long way toward evicting those platform mismatches[1]. I would also guess there's a way to switch off the binfmt "helper" in colima such that trying to run arm64 in the VM would immediately puke rather than silently helping you
1: or, of course, don't push production containers from your workstation, but the years have taught me that I'm talking to myself
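(For what it's worth, Docker also reads an env var that gets you the same default without shadowing the binary:)

    # Every build/run defaults to amd64 unless --platform overrides it;
    # same effect as the alias, minus the wrapper.
    export DOCKER_DEFAULT_PLATFORM=linux/amd64
    docker build -t myapp .   # placeholder image name; builds linux/amd64 even on an arm64 Mac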
> alias docker="docker --platform=linux/amd64"
I think that on its own breaks a Rust project which uses openssl with the vendored feature.
?????
If you have a Docker runner on macOS, you're just running a Linux runner in a VM. So just register a Linux runner in a VM, right?
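(Registering one is just the stock self-hosted runner dance; the URL and token below are placeholders you get from the repo's settings page:)

    # inside the Linux VM, after unpacking the actions-runner release tarball
    ./config.sh --url https://github.com/OWNER/REPO --token <registration-token>
    ./run.sh   # or install it as a service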
GitHub Actions' macOS runners do not come with Docker installed, because of licensing issues: https://github.com/actions/runner-images/issues/17
Then there is the issue that the macOS runners are virtual instances, and pass-through virtualization has not been enabled:
https://github.com/douglascamata/setup-docker-macos-action#:...
My action just calls `make build test` but it is convenient to have it hooked to GH for automated testing on PRs
I gave up on GH Actions a while ago and transitioned to bash scripts in Docker containers. I can reproduce stuff locally, test it, debug it, etc.
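The whole thing is roughly this shape (image and paths are placeholders, not my actual setup):

    # the entire CI step, runnable identically on a laptop or in a runner
    docker run --rm \
      -v "$PWD":/src -w /src \
      ubuntu:24.04 \
      bash ci/test.sh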
I don't get how GH Actions still doesn't have a tool that looks and works like a Jupyter notebook. That would be fantastic.
Apart from that it's been quite good!
You can specify different runners to use. The default images are a compromise to keep size down. There is a very large image that tries to include everything you might want. I would suggest trying that if you don’t mind the very large (15GB IIRC) image.
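e.g., if memory serves, you map ubuntu-latest onto the catthehacker "full" image (check the act docs for the current image names):

    # use the very large "full" image instead of the slim default
    act -P ubuntu-latest=catthehacker/ubuntu:full-latest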
https://nektosact.com/usage/index.html?highlight=Secret#secr...