Small scale: a project is small enough to run the build and tests locally, but you still want a consistent environment and to avoid "works on my machine" problems.
Large scale: a project is so large that you need to leverage remote, distributed computing to run everything with a reasonable feedback loop, ideally under 10 minutes.
The opposite ends of the spectrum warrant different solutions. For small scale, actually being able to run the whole CI stack locally is ideal. For large scale, it's not feasible.
> A CI system that’s a joy to use, that sounds like a fantasy. What would it even be like? What would make using a CI system joyful to you?
I spent the past few years building RWX[1] to make a CI system joyful to use for large scale projects.
- Local CLI to read the workflow definitions locally and then run remotely. That way you can test changes to workflow definitions without having to commit and push.
- Remote breakpoints to pause execution at any point and connect via ssh, which is necessary when running on remote infrastructure.
- Automatic content-based caching with sandboxed executions, so that you can skip all of the duplicative steps that large scale CI otherwise would. Sandboxing ensures that the cache never produces false positives.
- Graph-based task definitions, rather than the 1 job : 1 VM model. This results in automatic and maximum parallelization, with no redundancy in setup for each job.
- The graph-based model also provides an improved retry experience and more flexibility in resource allocation. For example, one task in the DAG can crank up CPU and memory without having to allocate those resources for downstream tasks (steps, in other platforms).
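To make the graph model concrete, here's a toy sketch in Python. This is illustrative only, not RWX's actual configuration syntax: each task declares its dependencies and its own resource needs, and anything whose dependencies are satisfied runs in parallel.

```python
# Illustrative only: a toy task graph, not RWX's real configuration format.
# Each task declares its dependencies and its own resource requirements, so
# independent tasks run in parallel and only the heavy task pays for a big machine.
from concurrent.futures import ThreadPoolExecutor

tasks = {
    "checkout":    {"deps": [],              "cpu": 1,  "run": lambda: print("clone repo")},
    "deps":        {"deps": ["checkout"],    "cpu": 2,  "run": lambda: print("install deps")},
    "lint":        {"deps": ["deps"],        "cpu": 1,  "run": lambda: print("lint")},
    "unit":        {"deps": ["deps"],        "cpu": 4,  "run": lambda: print("unit tests")},
    "integration": {"deps": ["deps"],        "cpu": 16, "run": lambda: print("integration tests")},
    "report":      {"deps": ["lint", "unit", "integration"], "cpu": 1, "run": lambda: print("report")},
}

def run_graph(tasks):
    """Run tasks wave by wave: every task whose deps are done runs concurrently."""
    done = set()
    with ThreadPoolExecutor() as pool:
        while len(done) < len(tasks):
            ready = [name for name, t in tasks.items()
                     if name not in done and all(d in done for d in t["deps"])]
            list(pool.map(lambda name: tasks[name]["run"](), ready))
            done.update(ready)

run_graph(tasks)
# In a real system the per-task "cpu" value would select the machine size for
# that task alone, instead of one size for an entire job.
```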
We've made dozens of other improvements to the UX for projects with large build and test workflows. Big engineering teams love the experience.
[1] https://rwx.com
A custom lazy, typed functional language that doesn't differentiate between expressions and "builds" would be much better. Even better if you add "contexts", i.e. implicit tags on values for automatic dependency inference. Also do serializable bytecode, and close over the dependencies of thunks efficiently, like Unison does, for great distributed builds.
And it would be pretty easy to add a debugger to this system, same logic as doing "import"
Nix gets somewhat close, but it misses the mark by separating the eval and build phases. Its terrible documentation, 1332432 ways to do the same thing, the improperly separated nix/nixpkgs divide, and nixpkgs being horribly, yet still insufficiently, abstracted don't help either.
Also, I'm not sure why you posted this comment here, as there is nothing that prevents you from writing a Radicle CI adapter that can handle huge repositories. You can reference the bare git repo stored in the Radicle home, so you just need to be able to store the repo itself.
kuehle•7h ago
locally running CI should be more common
apwell23•7h ago
goku12•6h ago
Besides all that, this is not at all what the author and your parent commenter are discussing. They are saying that the practice of triggering and running CI jobs entirely locally should be more common, rather than having to rely on a server. We do have CI runners that work locally, but CI job management is still done largely from servers.
apwell23•6h ago
Yes, this is what I was talking about. If there are lots of tests that are not practical to run locally, then they are bad tests, no matter how useful one might think they are. The only good tests are the ones that run fast. It is also a sign that the code itself is bad if you are forced to write tests that interact with the outside world.
For example, you can extract logic into a presentation layer and write unit tests for that, instead of mixing UI and business logic and writing browser tests for it. There are also well-known patterns for this, like "model view presenter" (rough sketch below).
I would rather put my effort into this than trying to figure out how to run tests that launch databases, browsers, call APIs, start containers, etc. Everywhere I've seen these kinds of tests, they've contributed to the "it sucks to work on this code" feeling, and bad vibes are the worst thing that can happen to a codebase.
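Something like this, with hypothetical names, just to show the shape: the presenter holds the logic, the view stays a dumb shell, and the test needs no browser or database.

```python
# Hypothetical example of the "model view presenter" split:
# the presenter is pure logic, so it can be unit-tested in milliseconds.
from dataclasses import dataclass

@dataclass
class Order:
    total_cents: int
    shipped: bool

class OrderPresenter:
    """Formats an order for display; no UI, network, or database involved."""
    def status_label(self, order: Order) -> str:
        return "Shipped" if order.shipped else "Processing"

    def display_total(self, order: Order) -> str:
        return f"${order.total_cents / 100:.2f}"

def test_presenter():
    p = OrderPresenter()
    assert p.status_label(Order(1999, shipped=True)) == "Shipped"
    assert p.display_total(Order(1999, shipped=False)) == "$19.99"
```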
NewJazz•6h ago
andrewaylett•6h ago
I'd really recommend against mocking dependencies for most tests, though. Don't mock what you don't own, but do make sure you test each abstraction layer appropriately.
apwell23•6h ago
8n4vidtmkvmk•6h ago
apwell23•6h ago
What if the API changes all of a sudden in production? What about cases where the API stays the same but the content of the response is all wrong? How do tests protect you from that?
Edit: these are not hypothetical scenarios. Wrong responses are way more common than schema breakages, and upstream tooling is often pretty good at catching schema breakages.
Wrong responses often cause way more havoc than schema breakages, because you get an alert for schema failures in the app anyway.
chriswarbo•4h ago
For your example, the best place to invest would be in that API's own test suite (e.g. sending its devs examples of usage that we rely on); but of course we can't rely on others to make our lives easier. Contracts can help with that, to make the API developers responsible for following some particular change notification process.
Still, such situations are hypothetical; whereas the sorts of integration tests that the parent is describing are useful for keeping our deployments from immediately blowing up.
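To make the "contract" idea concrete, here's a rough sketch with hypothetical field names: a consumer-side check listing only the fields and types we actually rely on, which could double as the "examples of usage" we hand to the API's developers.

```python
# Hypothetical consumer-side contract: only the fields and types we depend on.
# The same check could be handed to the API's developers to run in their own
# suite, so a breaking change becomes their problem before it becomes ours.
CONTRACT = {
    "id": int,
    "status": str,
    "total_cents": int,
}

def check_order_contract(order_response: dict) -> None:
    """Assert that an order response still satisfies what our code relies on."""
    for field, expected_type in CONTRACT.items():
        assert field in order_response, f"missing field: {field}"
        assert isinstance(order_response[field], expected_type), f"wrong type for {field}"

# Run against a recorded example, or against the provider's staging environment:
check_order_contract({"id": 123, "status": "shipped", "total_cents": 1999})
```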
girvo•5h ago
andrewaylett•6h ago
Although I'd slightly rephrase that to "if you don't change anything, you should end up running pretty much the same code locally as in CI".
GitHub Actions is really annoying for this, as it has no supported local mode. Act is amazing but insufficient: the default runner images are huge, so you can't use the same environment, and it isn't officially supported.
Pre-commit, on the other hand, is fantastic for this kind of issue: you can run it locally, and it'll fairly trivially run the same checks in CI as it does locally. You want it to be fast, though, so in practice I normally wind up having pre-commit run only cacheable checks locally, and I exclude any build and test hooks from CI because I run those as separate CI jobs.
I did release my own GHA action for pre-commit (https://github.com/marketplace/actions/cached-pre-commit), because the official one doesn't cache very heavily and the author prefers folk to use his competing service.
maratc•6h ago
E.g. if your build process is simply invoking `build.sh`, it should be trivial to run exactly that in any CI.
ambicapter•6h ago
maratc•5h ago
0x457•3h ago
esafak•5h ago
maratc•5h ago
However, there are stateless VMs and stateless BMs too.
BeeOnRope•4h ago
maratc•4h ago
esafak•5h ago
__MatrixMan__•5h ago
Most tests have a step where you collect some data and another step where you make assertions about that data. Normally, that data only ever lived in a variable, so it is not kept around for later analysis. All you get when you're viewing a failed test is logs with either an exception or a failed assertion. That's not enough to tell the full story, and I think it contributes to the frustration you're talking about.
I've been playing with the idea that all of the data generation should happen first (since it's the slow part) and get added to a commit (overwriting data from the previous CI run), and then all of the assertions run afterwards (this part is typically very fast).
So when CI fails, you can pull the updated branch and either:
- rerun the assertions without bothering to regenerate the data (faster, and useful if the fix is changing an assertion)
- diff the new data against data from the previous run (often instructive about the nature of the breakage)
- regenerate the data and diff it against whatever caused CI to fail (useful for knowing that your change will indeed make CI happy once more)
Most tools are uncomfortable with using git to transfer state from the failed CI run to your local machine so that you can just rerun the relevant parts locally, which means there's some hackery involved, but when it works out it feels like a superpower.
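A minimal sketch of the shape I mean, with a hypothetical file layout: the slow capture step writes snapshots into the repo, and the assertions only ever read what was committed.

```python
# Hypothetical layout: generate() captures slow data into tests/data/ (which CI
# commits), and the assertion functions read that snapshot without regenerating it.
import json
from pathlib import Path

DATA_DIR = Path("tests/data")

def generate():
    """Slow part: talk to the real system and snapshot what it said."""
    DATA_DIR.mkdir(parents=True, exist_ok=True)
    response = {"status": "ok", "items": [1, 2, 3]}  # stand-in for a real call
    (DATA_DIR / "inventory.json").write_text(json.dumps(response, indent=2))

def test_inventory_not_empty():
    """Fast part: assertions read the committed snapshot only."""
    data = json.loads((DATA_DIR / "inventory.json").read_text())
    assert data["status"] == "ok"
    assert data["items"], "inventory should never be empty"

if __name__ == "__main__":
    generate()
```

CI would run generate() first, commit tests/data/, then run the assertions; locally you can pull that branch, diff the data, and rerun just the assertion part.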
Norfair•4h ago
gsaslis•4h ago
And yet, that's technically not CI.
The whole point of starting to use automation servers as an integration point was to avoid the "it works on my machine" drama. (I've watched at least 5 seasons of it; they were all painful!)
+1 on running the test harness locally though (where feasible) before triggering the CI server.
popsticl3•3h ago
everforward•2h ago
Managing those lifetimes is annoying, especially when it needs to work on desktops too. On the server side, you can do things like spin up a VM that CI runs in, use Docker in the VM to make dependencies in containers, and then delete the whole VM.
That's a lot of tooling to replicate locally, though, and even then the run is local but wrapped in so many abstractions that it might as well be running in the cloud.
globular-toast•23m ago