frontpage.

How Mark Zuckerberg unleashed his inner brawler

https://www.ft.com/content/a86f5ca3-f841-4cdc-9376-304e085c4cfd
1•raythanwho•1m ago•1 comments

Why do we need DNSSEC?

https://howdnssec.works/why-do-we-need-dnssec/
1•gpi•2m ago•0 comments

Why many TWS earbuds are capped to 128 kbit/s AAC?

https://btcodecs.valdikss.org.ru/aac-lq/
1•f311a•2m ago•0 comments

Ask HN: Tips for hiring? It has been difficult

2•aprdm•2m ago•0 comments

The Dyad Language Toolchain

https://github.com/DyadLang/dyad-lang
1•KenoFischer•5m ago•0 comments

Show HN: We released a vibe coding platform

https://databutton.com/
2•martolini•5m ago•2 comments

Meta AI prompts are in a live, public feed

https://pluralistic.net/2025/06/19/privacy-breach-by-design/#bringing-home-the-beacon
1•almost-exactly•6m ago•0 comments

The Art of Bijective Combinatorics

https://www.viennot.org/abjc.html
1•doubledamio•6m ago•0 comments

The OpenAI Files

https://twitter.com/robertwiblin/status/1935353770981884022
1•MrBuddyCasino•6m ago•0 comments

Mutually Assured Mediocrity

https://staysaasy.com/saas/2025/02/17/judge.html
1•thisismytest•6m ago•0 comments

The inability to count correctly: Debunking Kyber-512 security calculation (2023)

https://blog.cr.yp.to/20231003-countcorrectly.html
1•RA2lover•6m ago•0 comments

Can All Knowledge Be Mined? A Formal Framework for φ^∞ Consequence Closure

https://www.researchgate.net/publication/392826629_ph_Consequence_Mining_Formal_Foundations_and_Collapse_Dynamics
1•WASDAai•7m ago•1 comments

To Conquer the Primary Energy Consumption Layer of Our Entire Civilization

https://terraformindustries.wordpress.com/2025/04/03/to-conquer-the-primary-energy-consumption-layer-of-our-entire-civilization/
1•waynenilsen•9m ago•0 comments

The End of YouTubers [video]

https://www.youtube.com/watch?v=C7I1-H5FUiY
2•geerlingguy•9m ago•0 comments

'What are the bathrooms like at the White House?' (2014)

https://www.independent.co.uk/news/world/americas/what-are-the-bathrooms-like-at-the-white-house-9155249.html
1•dannyphantom•10m ago•0 comments

Aiming at the Dollar, China Makes a Pitch for Its Currency

https://www.nytimes.com/2025/06/18/business/china-dollar-renminbi.html
1•sandwichsphinx•12m ago•0 comments

Interactive, Time-Travel Debugger for TLA+

https://github.com/will62794/spectacle
1•jwww55556•12m ago•0 comments

It's pretty easy to get DeepSeek to talk dirty

https://www.technologyreview.com/2025/06/19/1119066/ai-chatbot-dirty-talk-deepseek-replika/
1•gnabgib•13m ago•0 comments

GUI Actor: Coordinate-Free Visual Grounding for GUI Agents

https://github.com/microsoft/GUI-Actor
1•BiteCode_dev•14m ago•0 comments

Favorite Things Publishers Are Doing

https://stonemaiergames.com/your-favorite-things-publishers-are-doing-2024-2025/
1•Tomte•14m ago•0 comments

Ask HN: Should development of AI be nationalized?

1•michaelsbradley•14m ago•0 comments

Show HN: Free dashboard for technical analysis signals (forex, crypto, stocks)

https://signal-matrix.com/
1•Calibiri•15m ago•0 comments

Balatro pair strategy (an LLM odyssey)

https://shawn.dev/2025/06/balatro-pair-strategy.html
1•sartak•17m ago•0 comments

The 'OpenAI Files' will help you understand how Sam Altman's company works

https://www.theverge.com/openai/688783/the-openai-files-will-help-you-understand-how-sam-altmans-company-works
2•ecommerceguy•19m ago•0 comments

Breaking the Sorting Barrier for Directed Single-Source Shortest Paths

https://arxiv.org/abs/2504.17033
1•gametorch•22m ago•0 comments

US seeks time in tower dump data grab case after judge calls it unconstitutional

https://www.theregister.com/2025/06/19/us_tower_grab_appeal/
2•rntn•22m ago•0 comments

2048 with only 64 bits of state

https://github.com/izabera/bitwise-challenge-2048
1•todsacerdoti•22m ago•0 comments

The Best DRAMs for Artificial Intelligence

https://semiengineering.com/the-best-drams-for-ai/
1•rbanffy•25m ago•0 comments

16B accounts exposed in one of the largest data breaches in history

https://www.tomshardware.com/tech-industry/cyber-security/16-billion-accounts-exposed-in-one-of-the-largest-data-breaches-in-history-enormous-data-haul-holds-two-accounts-for-every-human-alive
7•ericzawo•26m ago•2 comments

WhatsApp security questioned as Israel remains the only known actor to hack it

https://www.thenationalnews.com/future/technology/2025/06/19/whatsapp-security-questioned-as-israel-remains-the-only-known-actor-to-hack-it/
1•LAC-Tech•31m ago•0 comments

What Would a Kubernetes 2.0 Look Like

https://matduggan.com/what-would-a-kubernetes-2-0-look-like/
48•Bogdanp•5h ago

Comments

zdw•2h ago
Related to this, a 2020 take on the topic from the MetalLB dev: https://blog.dave.tf/post/new-kubernetes/
jauntywundrkind•1h ago
152 comments on A Better Kubernetes, from the Ground Up, https://news.ycombinator.com/item?id=25243159
zug_zug•2h ago
Meh, imo this is wrong.

What Kubernetes is missing most is a 10-year track record of simplicity/stability. What it needs most to thrive is a better reputation for being hard to foot-gun yourself with.

It's just not a compelling business case to say "Look at what you can do with Kubernetes, and you only need a full-time team of 3 engineers dedicated to this technology at the cost of a million a year to get bin-packing to the tune of $40k."

For the most part Kubernetes is becoming the common tongue, despite all the chaotic plugins and customizations that interact with each other in a combinatoric explosion of complexity/risk/overhead. A 2.0 would be what I'd propose if I were trying to kill Kubernetes.

candiddevmike•2h ago
Kubernetes is what happens when you need to support everyone's wants and desires within the core platform. The abstraction facade breaks and ends up exposing all of the underlying pieces because someone needs feature X. So much of Kubernetes' complexity is YAGNI (for most users).

Kubernetes 2.0 should be a boring pod scheduler with some RBAC around it. Let folks swap out the abstractions if they need it instead of having everything so tightly coupled within the core platform.

echelon•2h ago
Kubernetes doesn't need a flipping package manager or charts. It needs to do one single job well: workload scheduling.

Kubernetes clusters shouldn't be bespoke and weird, with behaviors that change based on what flavor of plugins you added. That is antithetical to the principle of the workloads you're trying to manage. You should be able to headshot the whole thing with ease.

Service discovery is just one of many things that should be a different layer.

KaiserPro•1h ago
> Service discovery is just one of many things that should be a different layer.

Hard agree. It's like Jenkins: a good idea, but it's not portable.

sitkack•2h ago
Kubernetes is when you want to sell complexity, because complexity makes money and naturally gets you vendor lock-in even while being ostensibly vendor neutral. Never interrupt the customer while they are foot-gunning themselves.

Swiss Army Buggy Whips for Everyone!

wredcoll•1h ago
Not really. Kubernetes is still wildly simpler than what came before, especially accounting for the increased capabilities.
cogman10•1h ago
Yup. Having migrated from a puppet + custom scripts environment and terraform + custom scripts. K8S is a breath of fresh air.

I get that it's not for everyone, I'd not recommend it for everyone. But once you start getting a pretty diverse ecosystem of services, k8s solves a lot of problems while being pretty cheap.

Storage is a mess, though, and something that really needs to be addressed. I typically recommend people wanting persistence to not use k8s.

mdaniel•1h ago
> Storage is a mess, though, and something that really needs to be addressed. I typically recommend people wanting persistence to not use k8s.

I have come to wonder if this is actually an AWS problem, and not a Kubernetes problem. I mention this because the CSI controllers seem to behave sanely, but they are only as good as the requests being fulfilled by the IaaS control plane. I secretly suspect that EBS just wasn't designed for such a hot-swap world.

Now, I posit this because I haven't had to run clusters in Azure or GCP to know if my theory has legs.

I guess the counter-experiment would be to forego the AWS storage layer and try Ceph or Longhorn, but no company I've ever worked at wants to blaze trails there, so they just build up institutional tribal knowledge about treating PVCs with kid gloves.

KaiserPro•1h ago
the fuck it is.

The problem is k8s is both an orchestration system and a service provider.

Grid/batch/tractor/cube are all much, much simpler to run at scale. Moreover, they can support complex dependencies (but mapping storage is harder).

but k8s fucks around with DNS and networking, and disables swap.

Making a simple deployment is fairly simple.

But if you want _any_ kind of ci/cd you need flux, any kind of config management you need helm.

JohnMakin•1h ago
> But if you want _any_ kind of ci/cd you need flux, any kind of config management you need helm.

Absurdly wrong on both counts.

jitl•1h ago
K8s has swap now. I am managing a fleet of nodes with 12TB of NVMe swap each. Each container gets (memory limit / node memory) * (total swap) as its swap limit. There's no way to specify swap demand in the pod spec yet, so it needs to be managed “by hand” with taints or some other correlation.
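
A rough sketch of that per-container calculation, using the formula as described above (the node sizes below are made-up examples):

```python
# Sketch of the proportional swap limit described above:
#   swap_limit = (container memory limit / node memory) * node total swap
# All sizes below are hypothetical examples.

GiB = 1024 ** 3

def container_swap_limit(memory_limit: int, node_memory: int, node_swap: int) -> int:
    """Each container gets a share of node swap proportional to its memory limit."""
    return int(memory_limit / node_memory * node_swap)

# A pod limited to 32 GiB on a 512 GiB node with 12 TiB of NVMe swap:
print(container_swap_limit(32 * GiB, 512 * GiB, 12 * 1024 * GiB) / GiB)  # 768.0
```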
selcuka•2h ago
> Let folks swap out the abstractions if they need it instead of having everything so tightly coupled within the core platform.

Sure, but then one of those third party products (say, X) will catch up, and everyone will start using it. Then job ads will start requiring "10 years of experience in X". Then X will replace the core orchestrator (K8s) with their own implementation. Then we'll start seeing comments like "X is a horribly complex, bloated platform which should have been just a boring orchestrator" on HN.

dijit•2h ago
Honestly; make some blessed standards easier to use and maintain.

Right now, running K8S on anything other than cloud providers and toys (k3s/minikube) is a disaster waiting to happen unless you're a really seasoned infrastructure engineer.

Storage/state is decidedly not a solved problem, debugging performance issues in your longhorn/ceph deployment is just pain.

Also, I don't think we should be removing YAML; we should instead get better at using it as an ILR (intermediate language representation) and generating the YAML that we want, instead of trying to do some weird in-place generation (Argo/Helm templating). Kubernetes sacrificed a lot of simplicity to be eventually consistent with manifests, and our response was to ensure we use manifests as little as possible, which feels incredibly bizarre.

Also, the design of k8s networking feels like it fits ipv6 really well, but it seems like nobody has noticed somehow.

lawn•1h ago
k3s isn't a toy though.
dijit•1h ago
* Uses Flannel bi-lateral NAT for SDN

* Uses local-only storage provider by default for PVC

* Requires entire cluster to be managed by k3s, meaning no freebsd/macos/windows node support

* Master TLS/SSL Certs not rotated (and not talked about).

k3s is very much a toy - a nice toy though, very fun to play with.

zdc1•1h ago
I like YAML since anything can be used to read/write it. Using Python / JS / yq to generate and patch YAML on-the-fly is quite nifty as part of a pipeline.

My main pain point is, and always has been, helm templating. It's not aware of YAML or k8s schemas and puts the onus of managing whitespace and syntax onto the chart developer. It's pure insanity.

At one point I used a local Ansible playbook for some templating. It was great: it could load resource template YAMLs into a dict, read separately defined resource configs, and then set deeply nested keys in said templates and spit them out as valid YAML. No helm `indent` required.
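
A minimal Python sketch of that structural approach (the file name, keys, and values here are invented for illustration): load the template as data, set the nested keys, and dump it back out, so the output is always valid YAML and no `indent` filters are involved.

```python
# Sketch: patch a resource template structurally instead of via text templating.
# "deployment-template.yaml" and the keys below are hypothetical.
import yaml  # pip install pyyaml

def set_path(obj, path, value):
    """Set a deeply nested key like 'spec.template.spec.containers.0.image'."""
    keys = path.split(".")
    for key in keys[:-1]:
        obj = obj[int(key) if key.isdigit() else key]
    last = keys[-1]
    obj[int(last) if last.isdigit() else last] = value

with open("deployment-template.yaml") as f:
    manifest = yaml.safe_load(f)

overrides = {
    "spec.replicas": 3,
    "spec.template.spec.containers.0.image": "registry.example.com/app:1.2.3",
}
for path, value in overrides.items():
    set_path(manifest, path, value)

print(yaml.safe_dump(manifest, sort_keys=False))  # always syntactically valid YAML
```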

pm90•1h ago
yaml is just not maintainable if you're managing lots of apps for, e.g., a midsize company or larger. Upgrades become manual/painful.
jcastro•2h ago
For the confusion around verified publishing, this is something the CNCF encourages artifact authors and their projects to set up. Here are the instructions for verifying your artifact:

https://artifacthub.io/docs/topics/repositories/

You can do the same with just about any K8s related artifact. We always encourage projects to go through the process but sometimes they need help understanding that it exists in the first place.

Artifacthub is itself an incubating project in the CNCF; ideas around making this easier for everyone are always welcome, thanks!

(Disclaimer: CNCF Staff)

calcifer•46m ago
> We always encourage projects to go through the process but sometimes they need help understanding that it exists in the first place.

Including ingress-nginx? Per OP, it's not marked as verified. If even the official components don't bother, it's hard to recommend it to third parties.

johngossman•2h ago
Not a very ambitious wishlist for a 2.0 release. Everyone I talk to complains about the complexity of k8s in production, so I think the big question is: could you do a 2.0 with sufficient backward compatibility that it could be adopted incrementally, and make it simpler? Back compat almost always means complexity increases, as the new system does its new things and all the old ones.
herval•19m ago
The question is always what part of that complexity can be eliminated. Every “k8s abstraction” I’ve seen to date either only works for a very small subset of stuff (eg the heroku-like wrappers) or eventually develops a full blown dsl that’s as complex as k8s (and now you have to learn that job-specific dsl)
mrweasel•1h ago
What I would add is "sane defaults", as in unless you pick something different, you get a good enough load balancer/network/persistent storage/whatever.

I'd agree that YAML isn't a good choice, but neither is HCL. Ever tried reading Terraform, yeah, that's bad too. Inherently we need a better way to configure Kubernetes clusters and changing out the language only does so much.

IPv6, YES, absolutely. Everything Docker, container, and Kubernetes should have been IPv6-only internally from the start. Want IPv4? That should be handled by a special-case ingress controller.

zdw•1h ago
Sane defaults is in conflict with "turning you into a customer of cloud provider managed services".

The longer I look at k8s, the more I see it as "batteries not included" around storage, networking, etc., with the result being that the batteries come with a bill attached from AWS, GCP, etc. K8s is less an open source project and more a way to encourage dependency on these extremely lucrative gap-filler services from the cloud providers.

JeffMcCune•1h ago
Except you can easily install Calico, Istio, and Ceph on used hardware in your garage and get an experience nearly identical to every hyperscaler, using entirely free open source software.
zdw•1h ago
Having worked on on-prem K8s deployments, yes, you can do this. But getting it to production grade is very different than a garage-quality proof of concept.
mdaniel•1h ago
I think OP's point was: how much of that production-grade woe is the fault of Kubernetes versus, sure, it turns out booting up a PaaS from scratch is hard as nails? I think that k8s's pluggable design also blurs that boundary in most people's heads. I can't think of the last time the control plane shit itself, whereas everyone and their cousin has a CLBO story for the component controllers installed on top of k8s.
zdw•5m ago
CLBO?
mdaniel•4m ago
Crash Loop Back Off
tayo42•1h ago
> where k8s is basically the only etcd customer left.

Is that true? Is no one really using it?

I think one thing k8s would need is some obvious answer for stateful systems (at scale, not MySQL at a startup). I think there are some ways to do it? Where I work basically everything is on k8s, and then all the databases are on their own crazy special systems to support; they insist it's impossible and costs too much. I work in the worst of all worlds now supporting this.

re: comments about k8s should just schedule pods. Mesos with Aurora or Marathon was basically that. If people wanted that, those would have done better. The biggest users of Mesos switched to k8s.

haiku2077•1h ago
I had to go deep down the etcd rabbit hole several years ago. The problems I ran into:

1. etcd did an fsync on every write and required all nodes to complete a write before reporting it as successful. This was not configurable, and it is a far higher guarantee than most use cases actually need - most Kubernetes users are fine with snapshot + restore of an older version of the data. But it really severely impacts performance.

2. At the time, etcd had a hard limit of 8GB. Not sure if this is still there.

3. Vanilla etcd was overly cautious about what to do if a majority of nodes went down. I ended up writing a wrapper program to automatically recover from this in most cases, which worked well in practice.

In conclusion there was no situation where I saw etcd used that I wouldn't have preferred a highly available SQL DB. Indeed, k3s got it right using sqlite for small deployments.

nh2•1h ago
For (1), I definitely want my production HA databases to fsync every write.

Of course configurability is good (e.g. for automated fast tests you don't need it), but safe is a good default here, and if somebody sets up a Kubernetes cluster, they can and should afford enterprise SSDs where fsync of small data is fast and reliable (e.g. 1000 fsyncs/second).

haiku2077•1h ago
> I definitely want my production HA databases to fsync every write.

I didn't! Our business DR plan only called for us to restore to an older version with short downtime, so fsync on every write was a reduction in performance for no actual business purpose or benefit.

> if somebody sets up a Kubernetes cluster, they can and should afford enterprise SSDs where fsync of small data is fast and reliable

At the time one of the problems I ran into was that public cloud regions in southeast asia had significantly worse SSDs that couldn't keep up. This was on one of the big three cloud providers.

1000 fsyncs/second is a tiny fraction of the real world production load we required. An API that only accepts 1000 writes a second is very slow!

Also, plenty of people run k8s clusters on commodity hardware. I ran one on an old gaming PC with a budget SSD for a while in my basement. Great use case for k3s.
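
That number is easy to sanity-check on your own hardware; here is a rough sketch (small synchronous writes, roughly etcd-entry sized; results vary wildly by disk and filesystem):

```python
# Sketch: measure how many small write+fsync cycles this disk sustains per second.
import os
import time

PATH = "fsync-test.bin"
payload = b"x" * 512  # a small, metadata-sized record

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
count, start, duration = 0, time.monotonic(), 3.0
while time.monotonic() - start < duration:
    os.write(fd, payload)
    os.fsync(fd)  # force the write to stable storage, as etcd does per write
    count += 1
os.close(fd)
os.remove(PATH)

print(f"{count / duration:.0f} fsyncs/second")
```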

dilyevsky•1h ago
1 and 2 can be overridden via flag. 3 is practically the whole point of the software
haiku2077•1h ago
With 3 I mean that in cases where there was an unambiguously correct way to recover from the situation, etcd did not automatically recover. My wrapper program would always recover from those situations. (It's been a number of years and the exact details are hazy now, though.)
dilyevsky•49m ago
If the majority of quorum is truly down, then you’re down. That is by design. There’s no good way to recover from this without potentially losing state so the system correctly does nothing at this point. Sure you can force it into working state with external intervention but that’s up to you
dilyevsky•1h ago
That is decisively not true. A number of very large companies use etcd directly for various needs
rwmj•1h ago
Make there be one, sane way to install it, and make that method work if you just want to try it on a single node or single VM running on a laptop.
mdaniel•1h ago
My day job makes this request of my team right now, and yet when trying to apply this logic to a container and cloud-native control plane, there are a lot more devils hiding in those details. Use MetalLB for everything, even if NLBs are available? Use Ceph for storage even if EBS is available? Definitely don't use Ceph on someone's 8GB laptop. I can keep listing "yes, but" items that make doing such a thing impossible to troubleshoot because there's not one consumer.

So, to circle back to your original point: rke2 (Apache 2) is a fantastic, airgap-friendly, intelligence-community-approved distribution, and it pairs fantastically with Rancher Desktop (also Apache 2). It's not the Kubernetes part of that story which is hard, it's the "yes, but" part of the lego build.

- https://github.com/rancher/rke2/tree/v1.33.1%2Brke2r1#quick-...

- https://github.com/rancher-sandbox/rancher-desktop/releases

fatbird•1h ago
How many places are running k8s without OpenShift to wrap it and manage a lot of the complexity?
jitl•50m ago
I’ve never used OpenShift nor do I know anyone irl who uses it. Sample from SF where most people I know are on AWS or GCP.
raincom•24m ago
OpenShift exists so IBM and Red Hat can milk the license and support contracts. There are other vendors that sell k8s: Rancher, for instance. SUSE bought Rancher.
Melatonic•1h ago
MicroVM's
geoctl•1h ago
I would say k8s 2.0 needs:

1. gRPC/proto3-based APIs, to make controlling k8s clusters easier from any programming language, not just (practically) Golang as is the case currently. This could even make dealing with k8s controllers easier and more manageable, even though it admittedly might complicate things on the API server side when it comes to CRDs.

2. PostgreSQL, or a pluggable storage backend, by default instead of etcd.

3. A clear identity-based, L7-aware, ABAC-based access control interface that can be implemented by CNIs, for example.

4. Applying userns by default.

5. An easier pluggable per-pod CRI system where microVMs and container-based runtimes can easily co-exist based on the workload type.
jitl•1h ago
All the APIs, including CRDs, already have a well-described, public, introspectable OpenAPI schema you can use to generate clients. I use the TypeScript client generated and maintained by the Kubernetes organization. I don't see what advantage adding a binary serialization wire format has. I think gRPC makes sense when there are savings to be had with latency, multiplexing, streams, etc., but for control-plane things like Kubernetes it doesn't seem necessary to me.
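
As an illustration of how introspectable that is, the schema can be pulled straight from a running cluster (a sketch; it shells out to kubectl and assumes a reachable cluster):

```python
# Sketch: fetch the API server's OpenAPI schema and look up one model definition.
import json
import subprocess

raw = subprocess.run(
    ["kubectl", "get", "--raw", "/openapi/v2"],
    check=True, capture_output=True, text=True,
).stdout

schema = json.loads(raw)

# Definitions cover built-in types and registered CRDs; print the Deployment entry.
for name in sorted(schema["definitions"]):
    if name.endswith("apps.v1.Deployment"):
        print(name)  # e.g. io.k8s.api.apps.v1.Deployment
```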
geoctl•22m ago
I haven't used CRDs myself for a few years now (probably since 2021), but I still remember developing CRDs was an ugly and hairy experience to say the least, partly due to the flaws of Golang itself (e.g. no traits like in Rust, no macros, no enums, etc...). With protobuf you can easily compile your definitions to any language with clear enum, oneof implementations, you can use the standard protobuf libraries to do deepCopy, merge, etc... for you and you can also add basic validations in the protobuf definitions and so on. gRPC/protobuf will basically allow you to develop k8s controllers very easily in any language.
dilyevsky•23m ago
1. The built-in types are already protos. IMO gRPC wouldn't be a good fit; it would actually make the system harder to use.

2. This can already be achieved today via kine[0].

3. Couldn't you build this today via a regular CNI? Cilium NetworkPolicies and others basically do this already.

4 and 5 probably don't require 2.0; they can easily be added within the existing API via a KEP (cri-o already does userns configuration based on annotations)

[0] - https://github.com/k3s-io/kine

pm90•1h ago
Hard disagree with replacing yaml with HCL. Developers find HCL very confusing. It can be hard to read. Does it support imports now? Errors can be confusing to debug.

Why not use protobuf, or similar interface definition languages? Then let users specify the config in whatever language they are comfortable with.

geoctl•1h ago
You can very easily build and serialize/deserialize HCL, JSON, YAML or whatever you can come up with outside Kubernetes from the client-side itself (e.g. kubectl). This has actually nothing to do with Kubernetes itself at all.
dilyevsky•1h ago
Maybe you know this but Kubernetes interface definitions are already protobufs (except for crds)
cmckn•1h ago
Sort of. The hand-written go types are the source of truth and the proto definitions are generated from there, solely for the purpose of generating protobuf serializers for the hand-written go types. The proto definition is used more as an intermediate representation than an “API spec”. Still useful, but the ecosystem remains centered on the go types and their associated machinery.
dilyevsky•53m ago
Given that I can just take generated.proto and ingest it in my software, then marshal any built-in type and apply it via the standard k8s API, why would I even need all the boilerplate crap from apimachinery? Perfectly happy with the existing REST-y semantics - full gRPC would be going too far.
darkwater•1h ago
I totally dig the HCL request. To be honest, I'm still mad at GitHub, which initially used HCL for GitHub Actions and then ditched it for YAML when Actions went stable.
carlhjerpe•58m ago
I detest HCL, the module system is pathetic. It's not composable at all and you keep doing gymnastics to make sure everything is known at plan time (like using lists where you should use dictionaries) and other anti-patterns.

I use Terranix to make config.tf.json which means I have the NixOS module system that's composable enough to build a Linux distro at my fingertips to compose a great Terraform "state"/project/whatever.

It's great to be able to run some Python to fetch some data, dump it in JSON, read it with Terranix, generate config.tf.json and then apply :)

jitl•45m ago
What’s the list vs dictionary issue in Terraform? I use a lot of dictionaries (maps in tf speak), terraform things like for_each expect a map and throw if handed a list.
carlhjerpe•34m ago
Internally a lot of modules cast dictionaries to lists of the same length because the keys of the dict might not be known at plan time or something. The Terraform AWS VPC module does this internally for many things.

I couldn't tell you exactly, but modules always end up either not exposing enough or exposing too much. If I were to write my module with Terranix, I could easily replace any value in any resource from the module I'm importing using `resource.type.name.parameter = lib.mkForce "overriddenValue";` without having to expose that parameter in the module "API".

The nice thing is that it generates "Terraform"(config.tf.json) so the supremely awesome state engine and all API domain knowledge bound in providers work just the same and I don't have to reach for something as involved as Pulumi.

You can even mix Terranix with normal HCL, since config.tf.json is valid in the same project as HCL. A great way to get started is to generate your provider config and other things where you'd otherwise reach for Terragrunt and friends. Then you can start making options that make resources at your own pace.

The Terraform LSP sadly doesn't read config.tf.json yet, so you'll get warnings about undeclared locals and such, but for me it's worth it. I generally write tf/tfnix with the provider docs open, and the languages (Nix and HCL) are easy enough to write without a full LSP.

https://terranix.org/ says it better than me, but by doing it with Nix you get programmatic access to the biggest package library in the world to use at your discretion (build scripts to fetch values from weird places, run impure scripts with null_resource or its replacements) and an expressive functional programming language where you can do recursion and stuff; you can use derivations to run any language to transform strings with ANY tool.

It's like Terraform "unleashed" :) Forget "dynamic" blocks, bad module APIs and hacks (while still being able to use existing modules too if you feel the urge).

mdaniel•1h ago
> Allow etcd swap-out

From your lips to God's ears. And, as they correctly pointed out, this work is already done, so I just do not understand the holdup. Folks can continue using etcd if it's their favorite, but mandating it is weird. And I can already hear the butwhataboutism yet there is already a CNCF certification process and a whole subproject just for testing Kubernetes itself, so do they believe in the tests or not?

> The Go templates are tricky to debug, often containing complex logic that results in really confusing error scenarios. The error messages you get from those scenarios are often gibberish

And they left off that it is crazypants to use a textual templating language for a whitespace-sensitive, structured file format. But, just like the rest of the complaints, it's not that we don't already have replacements; it's that the network effect is very real and very hard to overcome.

That barrier of "we have nicer things, but inertia is real" applies to so many domains, it just so happens that helm impacts a much larger audience

jonenst•1h ago
What about kustomize and kpt? I'm using them (instead of helm), but:

* kpt is still not 1.0

* both kustomize and kpt require complex setups to programmatically generate configs (even for simple things like replicas = replicas * 2)

jitl•58m ago
I feel like I’m already living in the Kubernetes 2.0 world because I manage my clusters & its applications with Terraform.

- I get HCL, types, resource dependencies, data structure manipulation for free

- I use a single `tf apply` to create the cluster, its underlying compute nodes, related cloud stuff like S3 buckets, etc; as well as all the stuff running on the cluster

- We use terraform modules for re-use and de-duplication, including integration with non-K8s infrastructure. For example, we have a module that sets up a Cloudflare ZeroTrust tunnel to a K8s service, so with 5 lines of code I can get a unique public HTTPS endpoint protected by SSO for whatever is running in K8s. The module creates a Deployment running cloudflared as well as configures the tunnel in the Cloudflare API.

- Many infrastructure providers ship signed well documented Terraform modules, and Terraform does reasonable dependency management for the modules & providers themselves with lockfiles.

- I can compose Helm charts just fine via the Helm terraform provider if necessary. Many times I see Helm charts that are just “create namespace, create foo-operator deployment, create custom resource from chart values” (like Datadog). For these I opt to just install the operator & manage the CRD from terraform directly, or via a thin Helm pass-through chart that just echoes whatever HCL/YAML I put in from Terraform values.

Terraform’s main weakness is orchestrating the apply process itself, similar to k8s with YAML or whatever else. We use Spacelift for this.

moomin•53m ago
Let me add one more: give controllers/operators a defined execution order. Don’t let changes flow both ways. Provide better ways for building things that don’t step on everyone else’s toes. Make whatever replaces helm actually maintain stuff rather than just splatting it out.
recursivedoubts•48m ago
please make it look like old heroku for us normies
dzonga•44m ago
I thought this would be written along the lines of an LLM going through your code and spinning up a Railway file, then say having tf for the few manual dependencies etc. that can't be easily inferred.

& get automatic scaling out of the box etc. - a more simplified flow rather than wrangling yaml or hcl

in short, imagine if k8s was a 2-3 (max 5) line docker-compose-like file

singularity2001•23m ago
More like wasm?