As a result, we ended up building something that's sort of a "meta-controller programming language" on top of Kubernetes called Koreo. It provides a solution for configuration management and resource orchestration in Kubernetes by basically letting you program controllers. We've been using Koreo for a while now to build internal developer platform capabilities for our commercial product and our clients, and we recently open sourced it to share it with the community.
Koreo has some similarities to configuration languages like KCL, Jsonnet, etc. since it is a means of configuration management (e.g. you can define base configurations, apply overlays and point patches, and so forth). Where it really diverges, though, is that Koreo provides a unified approach to config management and resource orchestration. This means you can start to treat Kubernetes resources as "legos" to build pretty sophisticated workflows. For instance, the output of one resource can be used as the input to another resource (see the sketch below). This isn't really possible with Helm, even with `lookup`, because `lookup` requires the resource to already be in-cluster in order to reference it.
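To make that concrete, here's a rough sketch of the kind of workflow this enables. The field names below are illustrative rather than the exact Koreo schema (check the docs for that); the point is that one step's observed output can be referenced directly as an input to a later step:

```yaml
# Illustrative sketch only; not the literal Koreo schema.
# One step materializes a bucket, and a later step consumes the
# bucket's observed ARN as an input, which plain Helm templating
# can't do unless the resource already exists in-cluster.
kind: Workflow
metadata:
  name: bucket-and-consumer
spec:
  steps:
    - label: bucket
      ref:
        kind: ResourceFunction
        name: s3-bucket
      inputs:
        name: =inputs.serviceName + "-data"
    - label: workload
      ref:
        kind: ResourceFunction
        name: deployment-with-bucket
      inputs:
        # The output of the bucket step feeds the workload step.
        bucketArn: =steps.bucket.arn
```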
This is why we refer to Koreo as a meta-controller programming language: it effectively lets you program and compose Kubernetes controllers into cohesive platforms, whether built-in controllers (think Deployment or StatefulSet), off-the-shelf ones such as AWS ACK or GCP's Config Connector, or custom operators. It lets you build or combine controllers without actually needing to implement an operator yourself. Through this lens, Koreo is really more akin to Crossplane, but without some of the limitations such as Providers and cluster-scoped managed resources.
It seems crazy, and maybe it is, but I've found working in Koreo to be surprisingly fun since it kind of turns Kubernetes primitives into legos you can easily piece together and reuse to build some pretty cool automated workflows. You can learn more about the motivation and thinking behind it here: https://theyamlengineer.com
techpineapple•1w ago
But I'm not sure I feel the advantage of this indirection. It feels confusing to me that the applied resource will be different from what's in VCS, and the code feels super heavy for what you're getting. I've looked at this and Crossplane, and can't quite grok why this is better than doing it in a classical programming language. But I think I'm wrong, can you help me understand?
robertkluin•1w ago
Everything is in-code and designed to have a "proper" SDLC: code reviews, approvals, merges. It is meant to be used in GitOps workflows.
For the more nuanced questions, here's some background: I like Kustomize and KPT a lot. In _my_ opinion, they should be your starting point; they are clean tools that are easy to reason about. They do not work as well when you have more complexity, and they're very painful if you've got _dynamic_ values or values you need to inject programmatically (think Helm's values.yaml).
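For context on where that line falls: a plain Kustomize overlay handles the static case nicely; it's only when the patched values have to be computed or injected at deploy time (the values.yaml-style problem) that it starts to hurt. A typical overlay, for reference:

```yaml
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: api
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5   # fine when static; painful when this must be computed per environment
```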
The next important item to note: Koreo's relative value is lower if you're building highly bespoke one-offs or you do not care about having standard resource configurations / application architectures. The value is not zero, but there are lighter solutions and you should consider them instead.
Koreo is meant to model application architectures and resource capabilities. Using your example, you can build a BucketResource. That BucketResource will then ensure that S3 Buckets follow your company standards, including things like automatically handling the IAM setup and permissions for the service that uses the bucket. That lets you define required capabilities: an S3 Bucket is always tagged with the owner service and product domain; in production environments, buckets must have lifecycle rules specifying a minimum 30-day retention; in development environments, lifecycle rules are optional. The developers then only need to specify that their workload uses an S3 Bucket and it will be configured based on your company standards. But we have designed it so that you can decide how much abstraction is right for your needs: you can directly expose the full, underlying API or you can abstract it further.
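As a rough illustration of the developer-facing surface (the kind and field names here are hypothetical, just to show the shape): the developer claims a bucket with a couple of inputs, and the platform-owned workflow fills in tagging, lifecycle rules, and IAM according to the standards above.

```yaml
# Hypothetical developer-facing claim; the actual schema is whatever
# your platform team chooses to expose through Koreo.
apiVersion: platform.example.com/v1
kind: BucketResource
metadata:
  name: orders-archive
spec:
  owner: orders-service
  domain: commerce
  environment: production   # implies the 30-day minimum retention rule
  # IAM bindings for the owning service are added automatically;
  # the developer never specifies them here.
```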
Effectively, it gives you an "easy" solution for building a PaaS that implements _your_ standards and opinionation.
Our original versions were implemented directly in Go and Python. The issue is that iterating on the application models was much, much slower. This approach lets us rapidly implement new capabilities and features, and even expose unique or experimental architectures to only certain application domains.