Any tool that encourages templating on top of YAML, in a way that prevents the use of tools like yamllint on the result, is a bad tool. Ansible learned this lesson much earlier and changed the syntax of playbooks so that their YAML passes lint.
Additionally, K8s core developers don't like it either, and keep inventing alternatives like Kustomize that have better designs.
DevOps has more friction for tooling changes because of the large blast radius.
Helm, and a lot of devops tooling, is fundamentally broken.
The core problem is that it is a templating language and not a full programming language, or at least a proper DSL.
This leads us to the mess we are in today. Here is a fun experiment: go open 10 Helm charts and compare them. You will find the same copy-paste bullshit everywhere.
Helm simply does not provide powerful enough tools to develop proper abstractions, which leads to massive sprawl when defining our infrastructure, and ultimately to the DevOps nightmare we have all found ourselves in.
I have developed complex systems in Pulumi and other CDKs: 99% of the text just GOES AWAY and everything is way more legible.
You are not going to create a robust solution with a weak templating language. You are just going to create more and more sprawl.
Maybe the answer is a CDK that outputs helm charts.
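To make "the text goes away" concrete, here is a rough sketch of that kind of abstraction in plain Go using the upstream k8s API types (the WebService type, field names, and image are invented for illustration); the rendered YAML could just as well be written out as a generated chart:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// WebService holds the handful of knobs that actually differ between services;
// everything else is boilerplate that lives in one place instead of in every chart.
type WebService struct {
	Name     string
	Image    string
	Replicas int32
	Port     int32
}

// Deployment expands the small spec into a full apps/v1 Deployment.
func (w WebService) Deployment() *appsv1.Deployment {
	labels := map[string]string{"app": w.Name}
	return &appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: w.Name, Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: &w.Replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  w.Name,
						Image: w.Image,
						Ports: []corev1.ContainerPort{{ContainerPort: w.Port}},
					}},
				},
			},
		},
	}
}

func main() {
	// Render to YAML, exactly what a template would have produced by hand.
	out, err := yaml.Marshal(WebService{Name: "api", Image: "example/api:1.2.3", Replicas: 3, Port: 8080}.Deployment())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Every additional service then becomes a three-field value instead of another copy of the same templates.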
I liked KRO's model a lot, but stringly typed text templating at the scale of thousands of services doesn't work; it's not fun when you need to make a change. I kinda like jsonnet plus the Google CLI whose name escapes me right now, and the abstraction the Grafana folks built too, but ultimately I decided to roll my own thing and leaned heavily into type safety for this. It's ideal. With any luck I can open source it. There are a few similar ideas floating around now; Scala Yaga is one.
buster•1h ago
Also, please fix the "default" helm chart template; it's a nightmare of options and values that no beginner understands. Make it basic and simple.
Nowadays I would very much prefer to just use Terraform for Kubernetes deployments, especially if you use Terraform anyway!
nullwarp•1h ago
bigstrat2003•1h ago
phyrog•1h ago
I'd love something that works more like Kustomize but with the other benefits of Helm charts (packaging, distribution via OCI, more straightforward value interpolation than overlays and patches, ...). So far none have ticked all my boxes.
glotzerhotze•22m ago
https://fluxcd.io/flux/components/helm/helmreleases/#post-re...
phyrog•6m ago
honkycat•1h ago
Helm is not good enough to develop abstractions with. So go the opposite way: keep it stupid simple.
Pairing Helm with Kustomize can help a lot as well. You do most of the templating in the Helm chart, but you have an escape hatch if you need extra patches.
cogman10•1h ago
A single-purpose chart for your project is generally a lot easier to grok and consume than a chart that tries to cover everything that could be done.
I think the likes of "kustomize" are probably a saner route to go down. But our entire infrastructure is already Helm, so it's hard to switch that all out.
bigstrat2003•26m ago
dev_l1x_be•48m ago
verdverm•1h ago
Nowadays I'm using CUE in front of TF & k8s, in part because I have workloads that need a bit of both and share config. I emit tf.json and YAML as needed from a single source of truth.
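As a rough illustration of that single-source-of-truth idea (sketched in Go rather than CUE, with an invented namespace example), one value can fan out into both a tf.json document for the Terraform kubernetes provider and a plain manifest:

```go
package main

import (
	"encoding/json"
	"fmt"

	"sigs.k8s.io/yaml"
)

func main() {
	// Single source of truth: one namespace definition shared by both outputs.
	name, team := "team-a", "a"
	labels := map[string]string{"team": team}

	// main.tf.json for the Terraform kubernetes provider
	// (nested blocks become nested JSON objects in Terraform's JSON syntax).
	tf := map[string]any{
		"resource": map[string]any{
			"kubernetes_namespace": map[string]any{
				team: map[string]any{
					"metadata": map[string]any{"name": name, "labels": labels},
				},
			},
		},
	}

	// namespace.yaml for anything that consumes plain manifests.
	manifest := map[string]any{
		"apiVersion": "v1",
		"kind":       "Namespace",
		"metadata":   map[string]any{"name": name, "labels": labels},
	}

	tfJSON, _ := json.MarshalIndent(tf, "", "  ")
	nsYAML, _ := yaml.Marshal(manifest)
	fmt.Println(string(tfJSON))
	fmt.Println(string(nsYAML))
}
```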
mkroman•1h ago
I've been trying to apply CUE to my work, but the tooling just isn't there for much of what I need yet. It also seems really short-sighted that it is implemented in Go, which is notoriously bad for embedding.
hvenev•1h ago
verdverm•54m ago
CUE was a fork of the Go compiler (Marcel was on the Go team at the time and wanted to reuse much of the infra within the codebase).
Also, so much of the k8s ecosystem is in Go that it was a natural choice.
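It also means embedding CUE from Go itself is trivial; a minimal sketch using CUE's Go API (the #Service schema here is invented for illustration):

```go
package main

import (
	"fmt"

	"cuelang.org/go/cue"
	"cuelang.org/go/cue/cuecontext"
)

func main() {
	ctx := cuecontext.New()

	// A tiny schema plus concrete values; CUE unifies and validates them.
	v := ctx.CompileString(`
		#Service: {
			name:     string
			replicas: int & >0 & <=10
		}
		svc: #Service & {name: "api", replicas: 3}
	`)
	if err := v.Err(); err != nil {
		panic(err)
	}

	// Pull a validated, typed value back out of the configuration.
	replicas, err := v.LookupPath(cue.ParsePath("svc.replicas")).Int64()
	if err != nil {
		panic(err)
	}
	fmt.Println("replicas:", replicas)
}
```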
candiddevmike•1h ago
verdverm•56m ago
timiel•1h ago
I’d love to dig a bit.
Aeolun•1h ago
jadbox•1h ago
lxe•1h ago
Alir3z4•45m ago
I'm still using it without a single issue (except when it messes up the iptables rules).
I still confidently upgrade Docker across all the nodes, workers, and managers, and it just works. Not once has it caused an issue.
lxe•42m ago
Cyphus•11m ago
lxe•6m ago
Cyphus•13m ago
pphysch•57m ago
K8s isn't for running containers; it's for implementing complex distributed systems: tenancy/isolation, dynamic scaling, and no-downtime service models.
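Concretely, "dynamic scaling" and "no-downtime service models" map onto things like a HorizontalPodAutoscaler and a rolling-update strategy. A minimal sketch of both, expressed as typed Go API objects (names and thresholds are invented for illustration):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	maxUnavailable := intstr.FromInt32(0)
	maxSurge := intstr.FromInt32(1)
	minReplicas := int32(3)
	cpuTarget := int32(70)

	// No-downtime rollout: never drop below the desired replica count while updating.
	strategy := appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			MaxUnavailable: &maxUnavailable,
			MaxSurge:       &maxSurge,
		},
	}

	// Dynamic scaling: scale a Deployment between 3 and 20 replicas on CPU utilization.
	hpa := autoscalingv2.HorizontalPodAutoscaler{
		TypeMeta:   metav1.TypeMeta{APIVersion: "autoscaling/v2", Kind: "HorizontalPodAutoscaler"},
		ObjectMeta: metav1.ObjectMeta{Name: "api"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: "api",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 20,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &cpuTarget,
					},
				},
			}},
		},
	}

	strategyYAML, _ := yaml.Marshal(strategy)
	hpaYAML, _ := yaml.Marshal(hpa)
	fmt.Println(string(strategyYAML))
	fmt.Println(string(hpaYAML))
}
```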
lxe•1h ago
I know that there are solutions like CDK and SST that attempt this, but because the underlying mechanisms are not native to those solutions, it's simply not enough, and the resulting interfaces are still way too brittle and complex.
JohnMakin•14m ago
dev_l1x_be•48m ago
pests•44m ago
JamesSwift•25m ago
buster•44s ago
If you've used Helm + Terraform before, you'll have no problem understanding the Terraform kubernetes provider (as opposed to the helm provider).