frontpage.

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•1m ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
1•mkyang•3m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•12m ago•0 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•17m ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•17m ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
1•ambitious_potat•23m ago•0 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•23m ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
1•irreducible•24m ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•26m ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•30m ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•42m ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•47m ago•1 comment

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•53m ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
1•alexjplant•54m ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
3•akagusu•54m ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•57m ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•1h ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•1h ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
7•DesoPK•1h ago•3 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•1h ago•1 comment

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
35•mfiguiere•1h ago•20 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
3•meszmate•1h ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long" (Sonnet 73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•1h ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•1h ago•1 comment

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•1h ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•1h ago•1 comment

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
5•gmays•1h ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•1h ago•1 comment

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•1h ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•1h ago•1 comment

PG Auto Upgrade – Docker (and K8s) container to auto upgrade your database

https://github.com/pgautoupgrade/docker-pgautoupgrade
27•justinclift•5mo ago
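
For readers new to the project: the image is intended as a drop-in replacement for the official postgres image that detects a data directory from an older major version and runs pg_upgrade on it before starting. A minimal sketch of how one might invoke it, assuming the pgautoupgrade/pgautoupgrade image on Docker Hub and the standard postgres-image conventions (the tag and host path here are illustrative):

# If /srv/pgdata holds a cluster from an older major version, the
# container upgrades it in place, then starts PostgreSQL as usual.
docker run --name pg \
  -e POSTGRES_PASSWORD=changeme \
  -v /srv/pgdata:/var/lib/postgresql/data \
  pgautoupgrade/pgautoupgrade:17-bookworm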

Comments

justinclift•5mo ago
Supports Kubernetes and Bitnami images now too. :)
mdaniel•5mo ago
Relevant if you didn't already see it: https://news.ycombinator.com/item?id=44608856

Fuck them

curt15•5mo ago
So do people containerize databases in production these days? I thought a couple of years ago DBs were the classic example of applications that don't belong in containers.
qskousen•5mo ago
I would also like to know this; just a couple of days ago, someone with a decade of K8s experience told me that databases should live outside the cluster.
Atotalnoob•5mo ago
Generally, yes.

Unless you have a dedicated team to run that for you.

Crunchy Data is a good starting point.

5Qn8mNbc2FNCiVV•5mo ago
Well, CloudNativePG exists and it works really, really well. If at some point you can afford to have someone manage your databases separately from your applications, you can think about putting them outside the cluster, but I'd wager that by then you'll have enough experience running your DB with an operator that you can just keep running it in the cluster.
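
For anyone who hasn't used it, CloudNativePG is driven through a Cluster custom resource. A minimal sketch of the kind of manifest it accepts, assuming the operator is already installed (the name and storage size are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-example
spec:
  instances: 3   # one primary plus two replicas; failover is handled by the operator
  storage:
    size: 10Gi
EOF
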
imcritic•5mo ago
Depends on the scale. Something small is okay to keep in containers. If you want to push performance to the limits, you will definitely run your DBMS outside a container.
tekno45•5mo ago
To host it in an orchestrator, your cluster has to be more available than your DB.

You want 3 9s of availability for your DBs, maybe more.

Then you need 4 9s for your cluster/orchestrator.

If your team can build that cluster, then it makes more sense to put everything under one roof than to develop a whole new infrastructure with the same level of reliability or more.
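
For a rough sense of those budgets: 3 9s (99.9%) allows about 8.8 hours of downtime per year, while 4 9s (99.99%) allows about 53 minutes. A quick back-of-the-envelope check (plain shell; the numbers are just the availability targets above):

for a in 0.999 0.9999; do
  awk -v a="$a" 'BEGIN { printf "%s -> %.0f minutes/year of allowed downtime\n", a, (1 - a) * 365.25 * 24 * 60 }'
done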

GauntletWizard•5mo ago
This is a persistent myth that is just flat-out wrong. Your k8s cluster orchestrator does not need to be online very often at all. The kube proxies will gladly keep proxying traffic based on the last state they knew. Your containers will continue to run. Hiccups, or outright outages, in the kube API server do not cause downtime, unless you are using certain terrible, awful, no good, very bad proxies within the cluster (Istio, Linkerd).
tekno45•5mo ago
Your CONTROL PLANE going down doesn't immediately cause an outage.

But if your workloads stop and can't be started on the same node, you've got a degradation, if not an outage.

GauntletWizard•5mo ago
Yes, but that's workloads || operator, not workloads && operator - you don't need four nines for your control plane just to keep your workloads alive. Your control plane can be significantly less reliable than your workloads, and the workloads will keep serving fine.

In practice, it's so cheap to keep your operator running redundantly that it will probably have more nines than your workloads, but it doesn't need to.

tekno45•5mo ago
You're assuming a static cluster.

In my world scaling is required. Meaning new nodes and new pods. Meaning you need a control plane.

Even in development, no control plane means no updates.

In production, no scaling means I'm going to have a user-facing issue at the next traffic spike.

GauntletWizard•5mo ago
I am 100% certain I live more in that world than you; you can check my resume if you want to get into a dick waving contest.

What I'm saying is that the two probabilities are independent, possibly correlated, but not dependent. You need some number of nines in your control plane for scaling operations. You need some number of nines in your control plane for updates. These are very few, and they don't overly affect the serving plane, so long as the serving plane is itself resilient to the errors that happen even when a control plane is running, like sudden node failure.

Proper modeling of these failure conditions is not as simple as multiplying probabilities. The chance of a failure in your serving path goes up as the gaps between periods of control-plane readiness grow. You calculate (really, only ever guesstimate, but you can get good information for those guesses) the probability of a failure in the serving plane (incl. increases in traffic to the point of overload) before the control plane has had a chance to take action again, and you worry about the MTTF and MTTR of the control plane more than its "reliability" - you can have a control plane with 75% or less "uptime" by action failure rate that still takes actions on a regular cadence, and never notice.

You can build reliable infrastructure out of unreliable components. The control plane itself is an unreliable component, and you can serve traffic at massive scale with control planes faulty or down completely - without affecting serving traffic. You don't need more nines in your control plane than in your serving cluster - that is the only point I am addressing/contesting. You can have many, many fewer and still be doing just fine.
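
One way to make the "not just multiplying probabilities" point concrete: what matters is the chance that a serving-plane failure lands inside a control-plane outage window, which depends on the failure rate and the window length rather than on two uptime percentages. A toy calculation with made-up numbers (one node failure per 30 days on average, a 2-hour control-plane outage):

awk 'BEGIN {
  lambda = 1 / (30 * 24)   # assumed node-failure rate, per hour
  T = 2                    # assumed control-plane outage length, hours
  printf "P(a failure goes unrepaired during the outage) = %.4f\n", 1 - exp(-lambda * T)
}'

With these numbers the probability is about 0.003, which is why a control plane with modest uptime but a short time to repair rarely shows up in serving-path availability.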

lukaslalinsky•5mo ago
What alternatives do you have? No matter which system you are using, database failovers require external coordination. We are talking about PostgreSQL, so that normally means something like Patroni with an external service (unless you mean something manual). I find it easier to manage just one such service, Kubernetes, and to use it both for running the database process and for coordinating failovers via Patroni.
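
For context, Patroni can use Kubernetes itself as the coordination store, which is what makes the "one service to manage" setup possible. A minimal sketch of such a config, assuming the pod can reach the API server; the names, namespace, and paths are illustrative:

cat > /etc/patroni.yml <<'EOF'
scope: pg-cluster          # cluster name used for leader election
name: pg-0                 # unique per member
kubernetes:
  namespace: databases
  labels:
    application: patroni
restapi:
  listen: 0.0.0.0:8008
  connect_address: pg-0.databases.svc:8008
postgresql:
  listen: 0.0.0.0:5432
  connect_address: pg-0.databases.svc:5432
  data_dir: /var/lib/postgresql/data
EOF
patroni /etc/patroni.yml
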
fillest•5mo ago
Services should be decoupled from OS distro dependencies as much as possible; otherwise you will be bitten at an unexpected moment (e.g. when upgrading your distro packages) by a problem like this: https://wiki.postgresql.org/wiki/Locale_data_changes

This can be solved by building statically (or using something like Nix) or by at least using containers.
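
The practical symptom behind that wiki page is a glibc collation version changing underneath an existing cluster, which can silently invalidate indexes on text columns. A quick way to check for a mismatch (the first query needs PostgreSQL 15+; connection details omitted):

# Collation version recorded for each database vs. what glibc now provides
psql -c "SELECT datname, datcollate, datcollversion FROM pg_database;"

# Per-collation view, available on older versions as well
psql -c "SELECT collname, collversion, pg_collation_actual_version(oid)
         FROM pg_collation WHERE collversion IS NOT NULL;"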

lukaslalinsky•5mo ago
I do, but I take a very cautious approach. I run a custom image with PostgreSQL and Patroni on Kubernetes, no operator, and each replica has its own StatefulSet tied to a specific node. There is very little automation, but it's still better than running PostgreSQL outside of Kubernetes. I get the benefits of simplified monitoring, log handling, and request routing, while still having very static resource assignments.
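
A sketch of the "one StatefulSet per replica, pinned to a node" pattern described above; everything here (names, image, node label value, sizes) is illustrative rather than taken from the commenter's setup:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pg-replica-a
spec:
  replicas: 1
  serviceName: pg-replica-a
  selector:
    matchLabels: { app: pg, member: replica-a }
  template:
    metadata:
      labels: { app: pg, member: replica-a }
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-a   # ties this member to one specific node
      containers:
      - name: postgres
        image: registry.example.com/postgres-patroni:17   # hypothetical custom image
        volumeMounts:
        - { name: data, mountPath: /var/lib/postgresql/data }
  volumeClaimTemplates:
  - metadata: { name: data }
    spec:
      accessModes: ["ReadWriteOnce"]
      resources: { requests: { storage: 50Gi } }
EOF

Keeping each StatefulSet at exactly one replica makes the node-to-member mapping static, which is the "very little automation" trade-off being described.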