
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
594•klaussilveira•11h ago•176 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
901•xnx•17h ago•545 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
22•helloplanets•4d ago•17 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
95•matheusalmeida•1d ago•22 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
28•videotopia•4d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
203•isitcontent•11h ago•24 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
199•dmpetrov•12h ago•91 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
313•vecti•13h ago•137 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
353•aktau•18h ago•176 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
355•ostacke•17h ago•92 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
459•todsacerdoti•19h ago•231 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
24•romes•4d ago•3 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
259•eljojo•14h ago•155 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
80•quibono•4d ago•19 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
392•lstoll•18h ago•266 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
7•bikenaga•3d ago•1 comment

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
53•kmm•4d ago•3 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
3•jesperordrup•1h ago•0 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
235•i5heu•14h ago•178 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
46•gfortaine•9h ago•13 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
122•SerCe•7h ago•103 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
136•vmatsiiako•16h ago•60 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
68•phreda4•11h ago•12 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
271•surprisetalk•3d ago•37 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
25•gmays•6h ago•7 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1044•cdrnsf•21h ago•431 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
13•neogoose•4h ago•9 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
171•limoce•3d ago•92 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
60•rescrv•19h ago•22 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
89•antves•1d ago•66 comments

How I think about Kubernetes

https://garnaudov.com/writings/how-i-think-about-kubernetes/
92•todsacerdoti•1mo ago

Comments

NewJazz•1mo ago
Love the HN title mod here lol
stavros•1mo ago
HN removes a "how" if the post starts with it, sometimes making it hilarious.
dkdcio•1mo ago
I’ve seen this a few times now, what’s the context/background on why this is done?
NewJazz•1mo ago
I think maybe they check for "How to [...]" and other variations? "How I broke TLS 1.3" -> "I broke TLS 1.3"
dkdcio•1mo ago
yeah but why?
JCattheATM•1mo ago
Over-engineering to solve a problem that doesn't exist, thereby making one.
dkdcio•1mo ago
sure…what’s the motivation? is this seriously that difficult of a question to answer? this “solution” was put in place for a reason, what’s the reason? does nobody actually know?
NewJazz•1mo ago
Someone was annoyed by certain cliche titles I guess. They also remove numbers.

So if you post "1850 Bloody Island Massacre" it removes the number. But it is meant to remove eg. "13 reasons why you should impeac[...]" B/c apparently the number of reasons why is too much of a cliche or bother? Idk.

Email dang haha

zem•1mo ago
yeah, really wish they would fix that one!
frisovv•1mo ago
Tbh the missing how is probably why I followed the link. And I appreciated the post, so net positive outcome here.
DonHopkins•1mo ago
Otherwise you might confuse it with the HN mod tomhow.

Same reason they remove "dang" if the post starts with it, like the discussion about "Dang! Who ate the middle out of the daddy longlegs".

https://ifunny.co/picture/dang-who-ate-the-middle-out-of-the...

websiteapi•1mo ago
I always wonder if things can be simpler. When you think of a really simple DB you think of SQLite. What's the really simple K8s? Even doing a single-node deployment these days seems complicated, with Prometheus, Grafana, etc. etc. docker/podman compose up with quadlets and all of this stuff just seems so eh.

I really like the idea of something like Firebase, but it never seems to work out or just move the complexity to the vendor, which is fine, but I like knowing I can roll my own.

eyeris•1mo ago
Big question is which feature subset you want to replicate.

Kubernetes means everything to everyone. At its core, I think it’s being able to read/write distributed state (which doesn’t need to be etcd) and being able for all the components (especially container hosts) to follow said state. But the ecosystem has expanded significantly beyond that.

jauntywundrkind•1mo ago
IMO this is what keeps people from building systems that might challenge Kubernetes. Everyone wants to say Kubernetes is too complex, so we built something that does much less. I respect that! But I think it usually fails to grok what Kubernetes is and why it's such an interesting and vital rallying point, one that's so thoroughly captured our systems-making. Let's look at the premise:

> That’s why I like to think of Kubernetes as a runtime for declarative infrastructure with a type system.

You can go build a simple way to deploy containers or ship apps, but you are missing what I think allows Kubernetes to be such a big tent: a core platform useful for so many. Kubernetes works the same for all types, for everything you want to manage. It's the same desired-state management + autonomic systems patterns, whatever you are doing. An extensible platform with a very simple common core.

There are other takes and other tries, but managing desired state for any kind of type is a huge win that allows many people to find their own uses for kube; that is absolutely the cornerstone of its popularity.

If you do want less, the one project I'd point to that is Kubernetes without the Kubernetes complexity is KCP. It's just the control plane. It doesn't do anything at all. This to me is much simpler. It's not finding a narrowly defined use case to focus on, it's distilling the general system down to its simplest parts. Rebuilding a good simple bespoke app-container launching platform around KCP would be doable, and would maintain the overarching principles that make Kube actually interesting.

I seriously think there is something deeply rotten with our striving for simplicity. I know we've all been burned, and there's so often we want to throw up our hands, and I get it. But the way out is through. I'd rather dance the dance & try to scout for better further futures, than reject & try to walk back.
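The "desired state for any kind of type" idea above can be sketched with a minimal CustomResourceDefinition; the group and kind names here are hypothetical, not from any real project:

```yaml
# Hypothetical CRD: once registered, "Widget" objects get the same
# desired-state storage, kubectl verbs, and watch semantics as Pods.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
```

A controller watching `widgets.example.com` then reconciles actual state toward each object's spec — the same pattern whatever the type happens to be.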

zsoltkacsandi•1mo ago
Everything in infrastructure is a set of trade-offs that work in both directions.

If you want better monitoring, metrics, availability, orchestration, logging, and so on, you pay for it with time, money, and complexity.

If you can't justify that cost, you're free to use simpler tools.

Just because everyone sets up a Kubernetes / Prometheus / ELK stack to host a web app that would happily run on a single VPS doesn't mean you need to do the same, or that nowadays this is the baseline for running something.

vbezhenar•1mo ago
Of course things can be simpler.

Remove abstractions like CNI, CRI, just make these things built-in.

Remove unnecessary things like Ingress; you can always just deploy nginx or whatever reverse proxy directly. Also probably remove persistent volumes, they add a lot of complexity.

Use some automatically working database, not separate etcd installation.

Get rid of the control plane. Every node should be both control plane and worker node. Or maybe 3 worker nodes should be the control plane; whatever, the deployer should not have to think about it.

Add stuff that everyone needs: centralised log storage, centralised metric scraping and storage, some simple web UI, central authentication. It's reimplemented in every Kubernetes cluster.

The problem is that it won't be taken seriously enough, and people will choose Kubernetes over simpler solutions.

NewJazz•1mo ago
Some people want their k8s logs to be centralized with non k8s logs. Standardizing log storage seems like a challenging problem. Perhaps they could add built in log shipping. But even then, the transfer format needs to be specified.

Adding an IdP is pretty standard in k8s... What do you want to actually do differently?

vbezhenar•1mo ago
I want to add users via manifests, so these users could use logins/passwords/pubkeys, and that's out of the box, without installing dex, keycloak or delegating to other systems.

Think about Linux installation. I don't need to add IDP to create unix users for various people.

Right now it's super complicated in Kubernetes and even requires third-party extensions for kubectl.

NewJazz•1mo ago
You can create service accounts and tokens... Although long lived tokens are discouraged, that's as simple as it gets.

Sorry I think you're in the minority here. Most people don't want what you are talking about, they want to use SSO. Even with plain Linux machines, they want SSO.

vbezhenar•1mo ago
Service accounts can't belong to groups, so they are super inconvenient for human operators. You can't just create a group "developers", assign roles for this group, and add service accounts to it. You must assign roles for every user in every namespace, etc.

Having SSO is fine as long as it's built-in. Installing and configuring separate SSO software is not fine.
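For reference, the group-based RBAC being asked for does exist for user identities, but only when group membership is asserted by the authenticator (OIDC/SSO) — which is exactly the external piece in dispute. A sketch, with a hypothetical group and namespace:

```yaml
# Hypothetical RoleBinding: grants the built-in "edit" ClusterRole to
# everyone in the "developers" group within one namespace. Group
# membership itself must come from the configured authenticator.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-edit
  namespace: my-app
subjects:
  - kind: Group
    name: developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

Service accounts can't be listed as members of such a group, which is the limitation the comment above describes.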

bigstrat2003•1mo ago
> What's the really simple K8s?

It's k3s. You drop a single binary onto the node, run it, and you have a fully functional one-node k8s cluster.

welliebobs•1mo ago
You can find even more simplicity in Talos Linux[1]. Drop an ISO onto a USB, run a handful of commands[2] to generate and apply a configuration, and you've got a cluster up and running.

[1] https://www.talos.dev/

[2] https://docs.siderolabs.com/talos/v1.12/getting-started/gett...

hbogert•1mo ago
That's just making it easier, not simpler. I think the parent means a real drop-in replacement that is simpler than k8s.
welliebobs•1mo ago
Sure, but I was mostly concerning myself with the comment I replied to, having used K3s often enough myself.
NewJazz•1mo ago
What about k0s? Also single binary.

https://k0sproject.io/

mdhb•1mo ago
And microk8s from Canonical.
whytevuhuni•1mo ago
> What's the really simple K8s?

I think K8s couples two concepts: the declarative-style cluster management, and infrastructure + container orchestration. Keep CRDs, remove everything else, and implement the business-specific stuff on top of the CRD-only layer.

This would give something like DBus, except cluster-wide, with declarative features. Then, container orchestration would be an application you install on top of that.

Edit: I see a sibling mentioned KCP. I’ve never heard of it before, but I think that’s probably exactly what I’d like.

KronisLV•1mo ago
In ascending order of functionality and how much complexity you need:

  - Docker Compose running on a single server
  - Docker Swarm cluster (typically multiple nodes, can be one)
  - Hashicorp Nomad or K3s or other light Kubernetes distros
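The simplest rung of that ladder can be sketched as a minimal Compose file; image names and ports here are hypothetical:

```yaml
# Minimal single-server setup: one monolith, one database.
services:
  web:
    image: ghcr.io/example/monolith:latest
    ports:
      - "8080:8080"
    restart: unless-stopped
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

Each rung up the ladder trades some of this brevity for scheduling, failover, and multi-node networking.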
liampulles•1mo ago
One of the issues is that the open-source Helm charts (or whatever) for something like Grafana do not come out of the box with good config. I spent a significant amount of time a little while ago reading blogs to get Grafana to use up-to-date indexing algorithms, better settings, etc.

Considering these companies make money when you use their hosted solution, this is not surprising, and it just goes to show TANSTAAFL.

EdwardDiego•1mo ago
I like to use operators for intra-cluster infra, they tend to offer a "sorta-managed" experience. I'll use a Helm chart deployed by ArgoCD to provision the operator, then go from there - mainly because I try to limit Helm usage as much as possible.
liampulles•1mo ago
It's a tradeoff, because operators will use some of your cluster's resources of course, but I get you.
zsoltkacsandi•1mo ago
> Thinking of Kubernetes as a runtime for declarative infrastructure instead of a mere orchestrator results in very practical approaches to operate your cluster.

Unpopular opinion, but the source of most of the problems I've seen with infrastructures using Kubernetes came from exactly this kind of approach.

Problems usually come when we use tools to solve things that they weren't made for. That is why - in my opinion - it is super important to treat a container orchestrator as a container orchestrator.

szundi•1mo ago
It would help if you told us why you don't like this approach.
zsoltkacsandi•1mo ago
It's right there:

> the source of most of the problems I've seen with infrastructures using Kubernetes came from exactly this kind of approach

But some more concrete stories:

Once, while I was on call, I got paged because a Kubernetes node was running out of disk space. The root cause was the logging pipeline. Normally, debugging a "no space left on device" issue in a logging pipeline is fairly straightforward, if the tools are used as intended. This time, they weren't.

The entire pipeline was managed by a custom-built logging operator, designed to let teams describe logging pipelines declaratively. The problem? The resource definitions alone were around 20,000 lines of YAML. In the middle of the night, I had to reverse-engineer how the operator translated that declarative configuration into an actual pipeline. It took three days and multiple SREs to fully understand and fix the issue. Without such declarative magic, an issue like this usually takes about an hour to solve.

Another example: external-dns. It's commonly used to manage DNS declaratively in Kubernetes. We had multiple clusters using Route 53 in the same AWS account. Route 53 has a global API request limit per account. When two or more clusters tried to reconcile DNS records at the same time, one would hit the quota. The others would partially fail, drift out of sync, and trigger retries - creating one of the messiest cross-cluster race conditions I've ever dealt with.

And I have plenty more stories like these.

antonvs•1mo ago
You mention a questionably designed custom operator and an add-on from a SIG. This is like blaming Linux for the UI in Gimp.
jbaiter•1mo ago
It's not like logging setups outside of k8s can't be a horror show too. Like, have you ever had to troubleshoot an rsyslog-based ELK setup? I'll forever have nightmares from debugging RainerScript mixed with the declarative config, and having to read the source code to find out why all of our logs were getting dropped in the middle of the night.
NewJazz•1mo ago
I'd also argue the whole external DNS thing could have happened with any dynamic DNS automation... And yes it is a completely optional add-on!
zsoltkacsandi•1mo ago
> a questionably designed custom operator

This is the logging operator, the most used logging operator in the cloud native ecosystem (we built it).

> This is like blaming Linux for the UI in Gimp.

I never blamed anything, read my comment again. I only pointed out that problems arise when you use something for something it is not built for. Like a container orchestrator managing infrastructure (DNS, logging pipelines). That is why I wrote that "it is super important to treat a container orchestrator as a container orchestrator". Not a logging pipeline orchestrator, or a control plane for Route 53 DNS.

This has nothing to do with Kubernetes, but with the people who choose to do everything with it (managing the whole infrastructure).

NewJazz•1mo ago
I feel like the author has a good grasp of the Kubernetes design... What about the approach is problematic? And why don't you think that is how Kubernetes was designed to be used?
zsoltkacsandi•1mo ago
I wrote some personal stories below in this thread as a response to another user.
k8ssskhltl•1mo ago
But then you need two different provisioning tools, one for infra in k8s, and one for infra outside k8s. Or perhaps using non-native tools or wrappers.
zsoltkacsandi•1mo ago
> But then you need two different provisioning tools, one for infra in k8s, and one for infra outside k8s.

Yes, and 99% of the companies do this. It is quite common to use Terraform/AWS CDK/Pulumi/etc to provision the infrastructure, and ArgoCD/Helm/etc to manage the resources on Kubernetes. There is nothing wrong with it.

antonvs•1mo ago
Kubernetes is explicitly designed to do what the article describes. In that respect the article is just describing what you can find in the standard Kubernetes docs.

> it is super important to treat a container orchestrator as a container orchestrator.

Which products do you think are only “container orchestrators”? Even Docker Compose is designed to achieve a desired state from a declarative infrastructure definition.

zsoltkacsandi•1mo ago
> Which products do you think are only “container orchestrators”? Even Docker Compose is designed to achieve a desired state from a declarative infrastructure definition.

The way something describes the desired state (declaratively, for example) has nothing to do with whether it is a container orchestrator or not.

If you open the Kubernetes website, do you know what is the first thing you will see? "Production-Grade Container Orchestration". Even according to their own docs, Kubernetes is a container orchestrator.

btown•1mo ago
One approach, if "dang it, someone/I needed to use kubectl during the outage, how do we get gitops (or poor man's gitops) back in place to match reality?", is to loop - agentically or artisanally - trying simple gitops configurations (or diffs to the current gitops configurations) until a dry-run diff against your live configuration results in no changes.

For instance, with Helm, I've had success using Helmfile's diffs (which in turn use https://github.com/databus23/helm-diff) to do this.

There's more of a spectrum between these than you think, in a way that can be agile for small teams without dedicated investment in gitops. Even with the messes that can occur, I'd take it over the Heroku CLI any day.

blackjack_•1mo ago
Yes, there is a term for a system that handles a declarative state of infrastructure and does reconciliation against the current state: a control plane. We have been talking about control planes in devops/SRE for a number of years now! Welcome to the conversation.
anymouse123456•1mo ago
The allure of declarative approaches to complex problem solving has finally been worn down to nothing for me and Kubernetes was the last straw, nearly 10 years ago.

The mental gymnastics required to express oneself in yaml, rather than, say, literally anything else, invariably generates a horror show of extremely verbose boilerplate, duplication, bloat, delays and pain.

If you're not Google, please for the love of god, please consider just launching a monolith and database on a Linux box (or two) in the corner and see how beautifully simple life can be.

They'll hum along quietly serving many thousands of actual customers and likely cost less to purchase than a single month (or at worst, quarter) of today's cloud-based muggings.

When you pay, you'll pay for bandwidth and that's real value that also happens to make your work environment more efficient.

themgt•1mo ago
> If you're not Google, please for the love of god, please consider just launching a monolith and database on a Linux box (or two) in the corner and see how beautifully simple life can be.

You can literally get a Linux box (or two) in the corner and run:

  curl -sfL https://get.k3s.io | sh -
  cat <<EOF | kubectl apply -f -
  ...(json/yaml here)
  EOF
How am I installing a monolith and a database on this Linux box without Kubernetes? Be specific, just show the commands for me to run. Kubernetes that will work for ~anything. HNers spend more tokens complaining about the complexity than it takes to set it up.

> The mental gymnastics required to express oneself in yaml, rather than, say, literally anything else

Like, brainfuck? Like bash? Like Terraform HCL puppet chef ansible pile-o-scripts? The effort required to output your desired infrastructure's definition as JSON shouldn't really be that gargantuan. You express yourself in anything else but it can't be dumped to JSON?

hbogert•1mo ago
I'm saying this as Kubernetes certified service provider:

Just because you can install it with 1 command doesn't mean it's not complex, it's just made easier, not simpler.

anymouse123456•1mo ago
Yah, also there is a huge difference between a minimal demo and actual, recommended, canonical deployments.

I’ve seen teams waste many months refining k8s deployments only to find that local development isn’t even possible anymore.

This massive investment often happens before any business value has been uncovered.

My assertion, having spent 3 decades building startups, is that these big co infra tools are functionally a psyop to squash potential competitors before they can find PMF.

themgt•1mo ago
When you're comparing Kubernetes "recommended, canonical deployments" to "just launching a monolith and database on a Linux box (or two) in the corner" the latter is obviously going to seem simpler. The point is the k8s analogue of that isn't actually complicated. If you've seen teams waste months making it complicated, that was their choice.
anymouse123456•1mo ago
No argument here.

If you’re running things differently and getting tons of value with little investment, kudos! Keep on keeping on!

What I’ve seen is that the vast majority of teams that pick up k8s also drink the micro service kool-aid and build a mountain of bullshit that costs far more than it creates.

paddw•1mo ago
> Thinking of Kubernetes as a runtime for declarative infrastructure instead of a mere orchestrator results in very practical approaches to operate your cluster.

This is a pretty good definition.

I think part of the challenge is the evolution of K8s over time sometimes makes it feel less like a coherent runtime and more like a pile of glue amalgamated from several different components all stuck together. That and you will have to be aware of how those abstractions stick together with the abstractions from your cloud provider, etc...

NewJazz•1mo ago
Sometimes the interfaces are clear and consistent (loadbalancers, for example) and the line between, e.g., an AWS NLB and your ingress gateway, is clear-cut.

Other times there is a significant degree of portability pains and sparse feature matrices (CSI, IME).

tbrownaw•1mo ago
It's an application server for multi-part containerized applications, like Tomcat is an application server for applications that can be turned into .war files.
liampulles•1mo ago
On the use of GitOps for k8s, I think it makes sense for application workloads, and less sense for raw infrastructure definitions (unless you are running at such a scale that your infrastructure is often scaled like an application).

For my infrastructure definition repo, I will apply it in my terminal with kubectl, watch, and then merge the PR/commit to master. I often need to do this progressively just so I can roll back if I see resource consumption or other issues; it would be quite dangerous to let the CI pipeline apply everything and then for me to try to change declarations while the control plane API is totally starved for resources.

Also (and maybe this is me not doing "proper devops", I don't care), I will often want to tinker a bit with the declaration, trying a bunch of little changes, and then committing once all is satisfactory. That "dev loop" is less productive if I have to wait for a CI pipeline at every step.

thiagoeh•1mo ago
This more direct interaction by using kubectl happens on a development cluster? Or is it on the same that runs production workloads?
liampulles•1mo ago
Both, though I will get it working in dev first (and yes, I've tended to separate clusters by environment).
EdwardDiego•1mo ago
When you say infrastructure in this context, are you referring to the actual K8s cluster infra?

Or is it what I tend to call "intra-cluster infra" - DBs / Prometheus / Kafka etc. Infra that support apps?

liampulles•1mo ago
Sort of both - I tend to use a managed Kubernetes cluster (like EKS), so the infra repo has YAML for operators to deal with logging, monitoring, storage, secret management, etc. Anything that is not an application workload.
oogali•1mo ago
I sometimes joke that Kubernetes is a mass experiment in teaching people how to write Go via YAML.

The giant nested YAML you come across is the input (pre-deserialization)/output (post-serialization) for the declared types:

https://github.com/kubernetes/api/blob/master/core/v1/types....

Fortunately, or unfortunately, I am the only person that finds humor in this.

hbogert•1mo ago
Writing Go in YAML and forgetting everything else we learned in software engineering: proper IDEs, being able to make abstractions, not copy-pasting, structured templating rather than string-based templating... should I go on?
postalrat•1mo ago
I think many people have the wrong idea of what a container is (or I do) and make it sound more exotic than it is. Sure, they have some level of isolation, but for someone learning this stuff it's better to think of them as just a process, like all the other processes running on your computer. And Kubernetes as a system that runs and networks processes running on multiple computers.
NewJazz•1mo ago
Well, a group of processes... with some separation between them. Like, you can't listen on localhost in one pod and call it from another pod, but with ordinary processes on one host you can. Same thing with UDS.
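The "group of processes" view corresponds to a multi-container Pod: containers in one Pod share a network namespace and reach each other over localhost, while other Pods cannot. A sketch, with hypothetical names and images:

```yaml
# Two containers in one Pod share the loopback interface; traffic from
# other Pods must use the Pod IP or a Service instead.
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo
spec:
  containers:
    - name: app
      image: nginx:1.27
    - name: sidecar
      image: curlimages/curl:latest
      # Reaches nginx over the shared localhost, then idles.
      command: ["sh", "-c", "sleep 5 && curl -s http://localhost:80 && sleep 3600"]
```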
postalrat•1mo ago
Yea, they have forms of isolation, like all processes. Trying to explain all that just adds complexity.
NewJazz•1mo ago
Ignoring it isn't helpful IMO.

Traditional Unix processes don't have isolation mechanics equivalent to that of containers/namespaces.

wlonkly•1mo ago
The author repeats:

> that runtime continuously works to make the infrastructure match your intent.

The flipside of that is that the infrastructure, at any given time, might not match your intent, or might be continuously working to try to match your intent, which means the state of the infrastructure often does not match the state of its configuration, which is hell during an incident.

It makes me wonder if declarative, converging systems are actually what we want, or if they're what we ended up with, or if all the alternatives are just worse.