I'm still rocking my good old Jenkins machine, which to be fair took me a long time to set up, but it has been rock solid ever since, will never cost me much, and will never be shut down.
But I can definitely see the appeal of GitHub Actions, etc.
God help you, and don't even bother with the local emulators/mocks.
At $dayjob they recently set up git runners. The effort I'm currently working on has the OS dictated to us (long story, don't ask): CentOS 7.
The runners don't support it. There is an effort to move to Ubuntu 22.04; the runners don't support that either.
I’m setting up a Jenkins instance.
This times a million.
Use a real programming language with a debugger. YAML is awful and Starlark isn’t much better.
I was with you until you said "Starlark". Starlark is a million times better than YAML in my experience; why do you think it isn't?
And I want it all as a SaaS!
One workaround that I have briefly played with but haven't tried in anger: GitLab lets a job dynamically generate a pipeline definition and trigger it as a downstream (child) pipeline: https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#d...
So you can have your build system construct its DAG and then convert that into a generated pipeline YAML that runs the actual commands (which may be on different platforms, machines, etc.).
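A minimal sketch of the generator half of that idea, assuming a hypothetical `build-tool` CLI that knows the DAG and a made-up `child-pipeline.yml` filename:

```bash
#!/usr/bin/env bash
# generate-child-pipeline.sh -- hypothetical sketch, not a real tool:
# emit one child-pipeline job per build target. A parent job would run
# this, publish child-pipeline.yml as an artifact, and trigger it
# as a downstream pipeline.
set -euo pipefail

# 'build-tool' and its subcommands are placeholders for whatever build
# system actually owns the DAG; assume it prints one target per line.
TARGETS=$(./build-tool list-targets)

{
  echo "stages: [build]"
  while read -r target; do
    cat <<EOF
build-${target}:
  stage: build
  script:
    - ./build-tool build ${target}
EOF
  done <<< "${TARGETS}"
} > child-pipeline.yml
```

The parent `.gitlab-ci.yml` would then have one job that runs this script and saves `child-pipeline.yml` as an artifact, plus a `trigger:` job that includes that artifact as the child pipeline.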
FWIW GitHub also allows creating CI definitions dynamically.
(Of course, this is only possible because I can build software in a bash shell. Basically: if you're using bash already, you don't need a foreign CI service - you just need to replace yourself with a bash script.)
I've got one for updating repos and dealing with issues, one for setting up the resources and assets required prior to builds, one for doing the build, another for packaging, another for signing and notarization, and finally one more for delivering the signed, packaged, built software to the right places for testing. That last stage also runs the automated tests, reports issues, logs the results, and informs the right folks through the PM system.
And this all integrates with our project management software (some projects use Jira, some use Redmine), since CLI interfaces to the PM systems are easy to obtain and set up. If a dev wants to skip a stage in the build pipeline, they can: all of this can be wrapped up very nicely in a Makefile/CMakeLists.txt rig, or even just a 'build-dev.sh vs. build-prod.sh' mentality.
And the build server always runs the full build/integration workflow according to the modules, so we can be sure we'll have the latest and greatest builds available whenever a dev goes on vacation or whatever.
And all this with cross-platform, multi-architecture targets: the same bash scripts, incidentally, run on Linux, macOS and Windows, and produce the same artefacts for the relevant platform: macOS=.pkg, Windows=.exe, Linux=.deb(.tar).
It's a truly wonderful thing to onboard a developer: they don't need a Jenkins login or a GitHub account to monitor actions, and so on. They just use the same build scripts, which are already a key part of the repo, push to the repo when they're ready, and let the build servers spit out the product on a network share for distribution within the group.
This works with both Debug and Release configs, and each dev can have their own configuration (by modifying the bash scripts, or rather the env.sh module) and build target settings, even if they use an IDE as their front end for development. (Edit: /bin/hostname is your friend, devs. Use it to identify yourself properly!)
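To give a rough sketch of the shape of this (every script name here except env.sh is a placeholder, not the real thing):

```bash
#!/usr/bin/env bash
# build-all.sh -- sketch of a top-level driver in the spirit described above.
# Each stage is an ordinary script a dev can run (or skip) by hand.
set -euo pipefail

source ./env.sh                    # per-developer settings; hostname-based overrides live here

./01-update-repos.sh               # pull sources, sync issue state
./02-fetch-assets.sh               # resources/assets needed before the build
./03-build.sh "${CONFIG:-Release}" # Debug or Release
./04-package.sh                    # .pkg / .exe / .deb depending on platform
if [[ "${SIGN:-yes}" == "yes" ]]; then
  ./05-sign-notarize.sh
fi
./06-deliver.sh                    # copy artefacts to the share, run tests, notify the PM system
```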
Of course, this all lives on well-maintained and secure hardware, not the cloud; it could theoretically be moved to the cloud, but there's just no need for it.
I'm convinced that the CI industry is mostly snake oil being sold to technically incompetent managers. Of course, I feel that way about a lot of software services these days, but really, to do CI properly you need tooling and methodology that just doesn't seem to be taught any more. Proper tooling seems to have been replaced with the ideal of 'just pay someone else to solve the problem and leave management alone'.
But with adequate methods you can build your own CI system and be very productive with it, without much fuss, and I say this with a wide vista of different stacks in mind. The key thing is to force yourself into a 'developer workstation + build server' mentality from the very beginning, and NEVER let yourself ship software from your dev machine.
(EDIT: call me a grey-beard, but get off my lawn: if you're shipping your code off to someone else [GitHub Actions, grrr...] to build artefacts for your end users, you probably haven't read Ken Thompson's "Reflections on Trusting Trust" deeply or seriously enough. Pin it to your forehead until you do!)
Maybe the problem with CI is that it's over there. As soon as it stops being something that I could set up and run quickly on my laptop over and over, the frog is already boiled.
The comparison to build systems is apt. I can and occasionally do build the database that I work on locally on my laptop without any remote caching. It takes a very long time, but not too long, and it doesn't fail with the error "people who maintain this system haven't tried this."
The CI system, forget it.
Part of the problem, maybe the whole problem, is that we could get it all working and portable and optimized for non-blessed environments, but it still will only be expected to work over there, and so the frog keeps boiling.
I bet it's not an easy problem to solve. Today's grand unified solution might be tomorrow's legacy tar pit. But that's just software.
Oh the DSL doesn't support what I need it to do.
Can I just have some templating or a little bit of places to put in custom scripts?
Congratulations! You now have a Turing-complete system. And yes, per the article, that means you can mine cryptocurrency.
Ansible, Terraform, Maven, Gradle.
The unfortunate fact is that these IT domains (builds and CI) sit at the junction of two famously slippery slopes:
1) configuration
2) workflows
These two slippery slopes are famous for demos of how clean and simple they are, and how easy it is to do anything you need them to do.
In the demo.
And sure it might stay like that for a little bit.
But inevitably... script soup.
Build the software inside of containers (or VMs, I guess): a fresh environment for every build, any caches or previous build artefacts explicitly mounted.
Then, have something like this, so those builds can also be done locally: https://docs.drone.io/quickstart/cli/
Then you can stack as many turtles as you need, such as build scripts that get executed as part of your container build, with Maven or whatever else you need inside of there.
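A rough sketch of that first step, with the image tag, cache path, and build command as stand-ins:

```bash
#!/usr/bin/env bash
# ci-build.sh -- sketch of "fresh environment per build": the container is
# thrown away afterwards, and the only state that survives is what gets
# mounted in explicitly (a dependency cache and an output directory).
# Image tag, paths, and build command are illustrative only.
set -euo pipefail

mkdir -p .cache/maven out

docker run --rm \
  -v "$PWD:/src:ro" \
  -v "$PWD/.cache/maven:/root/.m2" \
  -v "$PWD/out:/out" \
  maven:3.9-eclipse-temurin-21 \
  bash -c 'cp -a /src /build && cd /build && mvn -B package && cp target/*.jar /out/'
```

The source is mounted read-only and copied inside the container, so the build can never scribble over your working tree; the Maven cache is the one explicitly shared bit of state.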
I have never seen a system with documentation as awful as Jenkins, with plugins as broken as Jenkins, with behaviors as broken as Jenkins. Groovy is a cancer, and the pipelines are half-assed, unfinished, and incompatible with most things.
However, with time you can get a very good feel for these CI systems, their strong and weak points, and basically learn how to use them in the simplest way possible in a given situation. Many problems I saw IRL were just the result of an overly complex design.
My conclusion was that this is near 100% a design taste and business model problem. That is, to make progress here will require a Steve Jobs of build systems. There's no technical breakthroughs required but a lot of stuff has to gel together in a way that really makes people fall in love with it. Nothing else can break through the inertia of existing practice.
Here are some of the technical problems. They're all solvable.
• Unifying local/remote execution is hard. Local execution is super fast. The bandwidth, latency and CPU speed issues are real. Users have a machine on their desk that compared to a cloud offers vastly higher bandwidth, lower latency to storage, lower latency to input devices and if they're Mac users, the fastest single-threaded performance on the market by far. It's dedicated hardware with no other users and offers totally consistent execution times. RCE can easily slow down a build instead of speeding it up and simulation is tough due to constantly varying conditions.
• As Gregory observes, you can't just do RCE as a service. CI is expected to run tasks devs aren't trusted to do, which means there has to be a way to prove that a set of tasks executed in a certain way even if the local tool driving the remote execution is untrusted, along with a way to prove that to others. As Gregory explores the problem he ends up concluding there's no way to get rid of CI and the best you can do is reduce the overlap a bit, which is hardly a compelling enough value prop. I think you can get rid of conventional CI entirely with a cleverly designed build system, but it's not easy.
• In some big ecosystems like JS/Python there aren't really build systems, just a pile of ad-hoc scripts that run linters, unit tests and Docker builds. Such devs are often happy with existing CI because the task DAG just isn't complex enough to be worth automating to begin with.
• In others like Java the ecosystem depends heavily on a constellation of build system plugins, which yields huge levels of lock-in.
• A build system task can traditionally do anything. Making tasks safe to execute remotely is therefore quite hard. Tasks may depend on platform specific tooling that doesn't exist on Linux, or that only exists on Linux. Installed programs don't helpfully offer their dependency graphs up to you, and containerizing everything is slow/resource intensive (also doesn't help for non-Linux stuff). Bazel has a sandbox that makes it easier to iterate on mapping out dependency graphs, but Bazel comes from Blaze which was designed for a Linux-only world inside Google, not the real world where many devs run on Windows or macOS, and kernel sandboxing is a mess everywhere. Plus a sandbox doesn't solve the problem, only offers better errors as you try to solve it. LLMs might do a good job here.
But the business model problems are much harder to solve. Developers don't buy tools, only SaaS, yet they also want to be able to do development fully locally. Because throwing a CI system up on top of a cloud is so easy, it's a competitive space and the possible margins just don't seem that big. Plus, there is no way to market to devs at a reasonable cost. They block ads, don't take sales calls, and some just hate the idea of running proprietary software locally on principle (none hate it in the cloud), so the only thing that works is making the clients open source and then trying to saturate the open source space with free credits in the hope of gaining attention for a SaaS. But giving compute away for free comes at a staggering cost that can eat all your margins. The whole dev tools market has this problem far worse than other markets do, so why write software for devs at all? If you want to sell software to artists or accountants, it's much easier.
Putting too much responsibility in the CI environment makes life as a developer (or anyone responsible for maintaining the CI process) more difficult. It's far superior to have consistent use of a build system that can be executed the same way on your local machine as in your CI environment. I suppose this is the mess you find yourself in when you have other teams building your pipelines for you?
I recently spent a day trying to get a GH Actions build going but got frustrated and just wrote my own console app to do it. Polling git, tracking a commit hash and running dotnet build is not rocket science. Putting this agent on the actual deployment target skips about 3 boss fights.
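The whole agent is roughly this, sketched here as a shell loop rather than a console app (branch name, deploy path, and polling interval are placeholders):

```bash
#!/usr/bin/env bash
# poll-and-build.sh -- poll the remote, and when the branch head moves,
# build and publish. Runs directly on the deployment target.
set -euo pipefail

BRANCH=main
LAST=""

while true; do
  git fetch origin "$BRANCH" --quiet
  HEAD=$(git rev-parse "origin/$BRANCH")
  if [[ "$HEAD" != "$LAST" ]]; then
    git checkout --detach "$HEAD"
    if dotnet build -c Release && dotnet publish -c Release -o /srv/app; then
      LAST="$HEAD"          # only advance on success so failures get retried
    fi
  fi
  sleep 60
done
```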
Need AWS, Azure or GCP deployment? Ever thought about putting it on bare metal yourself? If not, why not? Because it's not best practice? Nonsense. The answer with these things is: it depends. If your app doesn't have that many users, you can get away with it, especially if it's a B2B or internal app.
It's also too US-centric. The idea of scalability applies less in most other countries.
https://dagger.io/
For example, there is a quick start, so I skip that and click on "core concepts". That just redirects to quick start. There's no obvious reference or background theory.
If I were going to trust something like this, I'd want to know the underlying theory and what guarantees it is trying to make. For example, what is included in a cache key, so that I know which changes will cause a new invocation and which ones will not.