It comes down to how much you use Kubernetes. At my company, just about everything is in Kubernetes except for databases, which are hosted by Azure. Having random VMs means dealing with Ansible, SSH keys, and SOC 2 compliance annoyances, so the overall effort to get VMs running may be higher than running the workload on Kubernetes, even if that means putting in some extra hacks.
At the end of the day, k8s only takes care of scheduling containers and provides a super basic networking proxy layer for convenience. But there's absolutely nothing in k8s that requires you to use that proxy layer, or any other network overlay.
You can easily set up pods that directly expose their ports on the node they're running on, and have k8s Services just provide the IPs of the nodes running the associated pods as a list. Then either rely on clients to handle multiple addresses themselves (picking an address at random and failing over to another random address if needed), configure k8s DNS to provide DNS round robin, or put an NLB or something in front of it all.
Everyone uses network overlays with k8s because they make it easy for services in k8s to talk to other services in k8s. But there's no requirement to force all your external inbound traffic through that layer. You can just use k8s to manage the nodes and collect the metadata upstream clients need to connect directly to services running on those nodes, with nothing but the container layer between the client and the running service.
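To make it concrete, here's a rough sketch of that setup (the names and image are placeholders I made up, not anything from the article): broker pods bind straight to the node's network, and a headless Service means cluster DNS just hands out the node IPs for clients to pick from.

    # Sketch only: pods share the node's network namespace, so clients
    # reach them at the node IP with no overlay hop in between.
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: mqtt-broker
    spec:
      selector:
        matchLabels: {app: mqtt-broker}
      template:
        metadata:
          labels: {app: mqtt-broker}
        spec:
          hostNetwork: true          # pod IP == node IP
          containers:
          - name: broker
            image: eclipse-mosquitto:2
            ports:
            - containerPort: 1883
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mqtt-broker
    spec:
      clusterIP: None                # headless: DNS returns pod (= node) IPs
      selector: {app: mqtt-broker}
      ports:
      - port: 1883

Clients then resolve the Service name and either pick an address themselves or lean on DNS round robin, as described above.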
I'm confused about one point. A k8s Service sends traffic to pods matching the selector that are in "Ready" state, so wouldn't you accomplish HA without the pseudocontroller by just putting both pods in the Service? The Mosquitto bridge mechanism is bi-directional so you're already getting data re-sync no matter where a client writes.
Edit: I'm also curious whether you could use a headless service and an init container on the secondary to set up the bridge to the primary by selecting the IP that isn't its own.
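Something like this is what I'm picturing as the init container on the secondary (all names are hypothetical, just to sketch the idea):

    # Hypothetical init container: look up the headless Service, drop our
    # own IP, and point the mosquitto bridge config at whatever peer is left.
    initContainers:
    - name: configure-bridge
      image: busybox:1.36
      command: ["sh", "-c"]
      args:
      - |
        SELF_IP="$(hostname -i)"
        # crude output parsing for the sketch; a real version would be more careful
        PEER_IP="$(nslookup mosquitto-headless \
            | awk '/^Address/ {print $NF}' | grep -v "$SELF_IP" | tail -n1)"
        {
          echo "connection to-primary"
          echo "address ${PEER_IP}:1883"
          echo "topic # both 0"
        } >> /mosquitto/config/bridge.conf
      volumeMounts:
      - name: config
        mountPath: /mosquitto/config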
I'm not sure how fast that would be; the extra controller container is needed for the almost-instant failover.
To answer your second question (why not an init container in the secondary): with a separate controller we can scale it up over multiple nodes. Otherwise, if the node where the (fairly stateless) controller runs goes down, we'd have to wait until k8s schedules another pod instead of failing over almost instantly. A rough sketch of what I mean is below.
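Roughly something like this (all names and the image are placeholders, not the actual manifests):

    # Sketch: spare controller replicas pinned to different nodes, so
    # losing one node doesn't mean waiting for a reschedule.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mosquitto-failover-controller
    spec:
      replicas: 2
      selector:
        matchLabels: {app: failover-controller}
      template:
        metadata:
          labels: {app: failover-controller}
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels: {app: failover-controller}
                topologyKey: kubernetes.io/hostname
          containers:
          - name: controller
            image: example/failover-controller:latest   # placeholder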
zrail•5h ago
Edit: it's not a paywall. It's the standard BSL with a 4 year Apache revert. I personally have zero issue with this.
zrail•4h ago
Projects can add additional license grants to the base BSL. EMQX, for example, adds a grant for commercial production use of single-node installations, as well as production use for non-commercial applications.
jpgvm•4h ago
A lot more work than Mosquitto, but obviously HA/distributed, with some tradeoffs w.r.t. features. Worth it if you want to run Pulsar anyway for other reasons.