frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
510•klaussilveira•8h ago•141 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
848•xnx•14h ago•507 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
61•matheusalmeida•1d ago•12 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
168•isitcontent•9h ago•20 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
171•dmpetrov•9h ago•77 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
282•vecti•11h ago•127 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
64•quibono•4d ago•11 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
340•aktau•15h ago•165 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
228•eljojo•11h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
333•ostacke•14h ago•90 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
425•todsacerdoti•16h ago•221 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
4•videotopia•3d ago•0 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
365•lstoll•15h ago•253 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
35•kmm•4d ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
11•romes•4d ago•1 comment

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
12•denuoweb•1d ago•1 comment

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
85•SerCe•4h ago•66 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
214•i5heu•11h ago•160 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
59•phreda4•8h ago•11 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
35•gfortaine•6h ago•9 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
16•gmays•4h ago•2 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
123•vmatsiiako•13h ago•51 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
160•limoce•3d ago•80 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
258•surprisetalk•3d ago•34 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1022•cdrnsf•18h ago•425 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
53•rescrv•16h ago•17 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
44•lebovic•1d ago•13 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
14•denysonique•5h ago•1 comment

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
98•ray__•5h ago•49 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
81•antves•1d ago•59 comments

Benchmarking Postgres 17 vs. 18

https://planetscale.com/blog/benchmarking-postgres-17-vs-18
184•bddicken•3mo ago

Comments

alberth•3mo ago
Am I interpreting the data correctly in that, if you're running on NVMe, it's just so fast that it doesn't make a difference which mode you pick?
cientifico•3mo ago
That was the same conclusion I got from playing with the graphs.

I concluded that the better I/O planning is only worth it for "slow" I/O in 18.

Pretty sure it will bring a lot of learnings. Postgres devs are pretty awesome.

anarazel•3mo ago
Afaict nothing in this benchmark will actually use AIO in 18. As of 18 there are AIO reads for seq scans, bitmap scans, vacuum, and a few other utility commands, but the queries being run here should normally be planned as index range scans. We're hoping to get the work for using AIO for index scans into 19, but it could end up in 20; it's nontrivial.

It's also worth noting that the default for data checksums has changed, with some overhead due to that.
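For concreteness, a minimal sketch of inspecting the two 18 changes mentioned above (io_method and on-by-default data checksums are real 18 changes; the commands assume a stock install and are illustrative, not from the post):

    # io_method selects the AIO implementation in 18: sync | worker | io_uring
    psql -c "SHOW io_method;"        # 'worker' on a default install
    psql -c "SHOW data_checksums;"   # 'on' by default with 18's initdb

    # Changing io_method requires a restart:
    psql -c "ALTER SYSTEM SET io_method = 'io_uring';"
    pg_ctl restart -D "$PGDATA"

    # To match 17's old default and skip the checksum overhead, opt out at initdb time:
    initdb --no-data-checksums -D /path/to/new/cluster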

mebcitto•3mo ago
That explains why `sync` and `worker` have such similar results in almost all runs. The benchmarks from Tomas Vondra (https://vondra.me/posts/tuning-aio-in-postgresql-18/) showed some significant differences.
nopurpose•3mo ago
Then the io_uring mode's underperformance is even more curious.
anarazel•3mo ago
It is. I tried to repro it without success.

I wonder if it's just being executed on different VMs with slightly different performance characteristics. I can't tell from the wording in the post whether all the runs for one test are executed on the same VM or not.

ozgune•3mo ago
If the benchmark doesn't use AIO, why is there a performance difference between PG 17 and 18 in the blog post (sync, worker, and io_uring)?

Is it because remote storage in the cloud always introduces some variance and the benchmark just picks that up?

For reference, anarazel gave a presentation at pgconf.eu yesterday about AIO, and mentioned that remote cloud storage always introduces variance, making benchmark results hard to interpret. His solution was to introduce synthetic latency on local NVMes for benchmarks.

p_zuckerman•3mo ago
Thanks for posting this interesting article! Do we know if the timescale extension is available as well?
travisgriggs•3mo ago
As in timescaledb? Or something else…?
p_zuckerman•3mo ago
Yes, as in timescaledb. Sorry for not being specific.
samlambert•3mo ago
We are working on it.
rastignack•3mo ago
Is there now a way to avoid double buffering and use direct I/O in PostgreSQL?

Has anybody seriously benchmarked this?

I don't think io_uring would make a difference with this setting, but I'm curious, as it's the default for Oracle and Sybase.

hans_castorp•3mo ago
Direct I/O is being worked on, but is not yet available.

See e.g. here: https://www.cybertec-postgresql.com/en/postgresql-18-and-bey...
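There is, however, a developer-only toggle: debug_io_direct, added in Postgres 16. It is explicitly not meant for production (there's no read-ahead or write clustering behind it yet), but it lets you experiment. A sketch:

    # Developer setting only; expect worse performance for normal workloads.
    psql -c "ALTER SYSTEM SET debug_io_direct = 'data';"   # also accepts 'wal', 'wal_init'
    pg_ctl restart -D "$PGDATA"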

DicIfTEx•3mo ago
I was expecting `pg_dumpall` to get the `--format` option in v18,[0] but at the moment the docs say it's still only available in the development branch.[1]

Is anyone familiar with Postgres development able to give an update on the state of the feature? Is it planned for a future (18 or 19) release?

[0]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit...

[1]: https://www.postgresql.org/docs/devel/app-pgdump.html#:~:tex...

anarazel•3mo ago
The docs for 18 also show it; what gives you the impression it's not available in 18?
DicIfTEx•3mo ago
Ah, my mistake: I linked to the docs for `pg_dump` (which has long had the `--format` option) rather than `pg_dumpall` (which lacks it).

Before Postgres 18 was released, the docs listed `--format` as an option for `pg_dumpall` in the upcoming version 18 (e.g. Wayback Machine from Jun 2025: https://web.archive.org/web/20250624230110/https://www.postg... ). The relevant commit is from Apr 2025 (see link [0] in my original comment). But now all mention has been scrubbed, even from the devel branch docs.

anarazel•3mo ago
It got reverted for now: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit...
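In the meantime, the usual workaround is combining pg_dumpall for globals with per-database pg_dump in a non-plain format. A sketch (database name invented):

    # Globals (roles, tablespaces) still come from pg_dumpall:
    pg_dumpall --globals-only > globals.sql

    # Each database can use pg_dump's directory format, which supports parallelism:
    pg_dump -Fd -j 4 -f mydb.dump mydb

    # Restore (assumes the target database already exists):
    psql -f globals.sql postgres
    pg_restore -d mydb -j 4 mydb.dump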
cheema33•3mo ago
The primary lesson I learned here was this:

If you care about performance, don't use network storage.

If you are using a local NVMe disk, it doesn't much matter whether you're on Postgres 17 or 18: performance is about the same, and significantly faster than network storage.

saxenaabhi•3mo ago
But that's ephemeral and non-redundant.

Am I correct that using local disk on any VPS has durability concerns?

CodesInChaos•3mo ago
Using a single disk has durability concerns. But I don't see why VPS vs dedicated server should matter much.
inapis•3mo ago
Sure, to an extent. And if you run a mission-critical application, definitely.

But most applications run fine from local storage and can tolerate some downtime; they might even benefit from the improved performance. You can also address the durability and disaster-recovery concerns by setting up RAID/ZFS and maintaining proper backups.

fabian2k•3mo ago
Databases like Postgres have well established ways to handle that. And if you're setting up the DB yourself, you absolutely need to do backups anyway. And a replica on a different server.
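As a sketch of those well-established ways (hostname, user, and slot name invented), a streaming replica can be seeded with a single pg_basebackup call:

    # -R writes primary_conninfo and standby.signal so the copy starts as a standby;
    # -C -S creates a replication slot so the primary retains the WAL the standby needs.
    pg_basebackup -h primary.example.com -U replicator \
      -D /var/lib/postgresql/18/standby -R -X stream -C -S standby1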
saxenaabhi•3mo ago
Backups don't alleviate durability concerns, and neither do async read replicas.

I think the only way it could work is if I implemented sync replication like PlanetScale, but that's arduous.

XCSme•3mo ago
On some providers (e.g. Hetzner), dedicated servers come by default with two disks in RAID 1, so they're a lot less likely to fail (unless the datacenter burns down).
whizzter•3mo ago
You have a call from France, some company called OVH on the line!
BonoboIO•3mo ago
And your backup goes up in flames too.

I would never, ever trust OVH with any important data or servers. We saw how they secured their datacenters: it took three hours to cut the power while the datacenter was burning.

jascha_eng•3mo ago
yeh, PlanetScale loves to flex how fast they are, but the main reason they're fast is that they run with one less layer of abstraction than any other cloud provider, and this does in fact have trade-offs.
samlambert•3mo ago
What is wrong with running without lots of abstractions? We are clear about the downsides. The results are clear, you can see the customers love it. We run insane amounts of state safely on ephemeral compute. It's a flex. All I've seen from Timescale people is qqing. Write some code or be quiet.
jascha_eng•3mo ago
I'm not criticizing your engineering approach at all. Running everything on one box has its merits, as your benchmarks show, but it's just not apples to apples; there are other trade-offs, and I'm simply appreciating that the community calls that out.

Also, hey, this is HN, not Twitter; I think we can be a bit more civilized. Not a good look, imo, for a CEO to get this upset over a harmless comment.

samlambert•3mo ago
We run 3 nodes, not 1. And your comment is not in isolation; we get constant shade from Timescale people when we don't even think about you.
samlambert•3mo ago
we have mitigated the durability concerns in multiple ways.
sgarland•3mo ago
Yes, it’s the ephemerality that’s the biggest issue. Enterprise-grade SSDs are quite reliable, and typically have PLP so even in the event of a sudden power loss, any queued writes that the drive has accepted - and thus ack’d the fsync() - will be written. Presumably you’d be running some kind of redundancy, likely some flavor of RAID or zRAID (assuming purely local storage here, not a distributed system like Ceph, nor synchronous replication).

But in the cloud, if the physical server backing your instance dies, or even if someone accidentally issues a shutdown command, you don’t get that same drive back when the new instance comes up. So a problem that is normally solved by basic local redundancy suddenly becomes impossible, and thus you must either run synchronous or semi-sync replication (the latter is what PlanetScale Metal does), accepting the latency hit from distributed storage, or asynchronous replication and accept some amount of data loss, which is rarely acceptable.
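For reference, this is roughly what synchronous or quorum-based semi-sync looks like in vanilla Postgres (standby names invented); commits then wait for at least one listed standby to confirm, so losing the primary's local NVMe loses no acknowledged writes:

    psql -c "ALTER SYSTEM SET synchronous_standby_names = 'ANY 1 (standby_a, standby_b)';"
    psql -c "ALTER SYSTEM SET synchronous_commit = 'on';"   # 'remote_write' is a cheaper middle ground
    psql -c "SELECT pg_reload_conf();"                      # a reload is enough, no restart needed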

samlambert•3mo ago
Agreed on these trade offs. We do both synchronous and semi-synchronous depending on Postgres or MySQL.
pas•3mo ago
... sounds like a trivial job for bare-metal instances.

And the fact that EC2 local-NVMe encryption keys are ephemeral is nice against leaks, but it isn't a necessity on other clouds (and it's not great for resumability, which can really downgrade business-continuity scores). For all the money they ask, I'd expect them to keep the data reasonably secure even across reboots.

BonoboIO•3mo ago
Or even a simple bare-metal server that just runs databases on redundant NVMe SSDs.
rcrowley•3mo ago
Yes, a single disk in a VPS or cloud provider has durability concerns. That's why EBS and products like it that pretend to be a single disk are actually several. Instead of relying on multiple block devices, though, we create that redundancy at a higher level by relying on multiple MySQL or Postgres servers for durability, each with a local NVMe drive for performance.
rcrowley•3mo ago
RAID isn't the answer, either, for the record. In AWS and GCP, the CPU or RAM blowing up will cost you access to that local NVMe drive, too, no matter how much RAID you throw at it.
samlambert•3mo ago
Correct. Network storage is flexible for a variety of use cases; that's why PlanetScale supports both.
jackdoe•3mo ago

    > IOPS: 3,000
    > IOPS: 300,000 for $551 per month
the cloud is ridiculous.

just for reference: with 4 consumer NVMes in RAID 10 on PCIe x16 you can easily do 3M IOPS for a one-time cost of around $1000.

in my current job we constantly have to rethink db queries/design because of cloud IOPS, and because we have no control over the RDS page cache or NUMA.

every time I am woken up at night because a seemingly normal query suddenly goes beyond our IOPS budget and the WAL starts thrashing, I seriously question my choices.

the whole cloud situation is just ridiculous.
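For anyone wanting to sanity-check IOPS claims like these on their own hardware, fio is the standard tool; a sketch (file path invented, and the numbers depend heavily on drive, queue depth, and job count):

    fio --name=randread --filename=/mnt/nvme/testfile --size=8G \
        --rw=randread --bs=4k --direct=1 --ioengine=io_uring \
        --iodepth=64 --numjobs=8 --time_based --runtime=30 --group_reporting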

Hrun0•3mo ago
But now you need someone to deal with the hardware.
jackdoe•3mo ago
oh no! this is proven to be impossible, no man can tell a computer what to do

lspci is only written in the old alchemy books, in the whispers of the thrice great Hermes.

PS: I have personally put down actual fires in a datacenter, and I prefer it to this 3000 IOPS crap.

layoric•3mo ago
Working at IT places in the late 2000s, it was still pretty commonplace for there to be server rooms. Even for a large org with multiple sites hundreds of km apart, you could manage it with a pretty small team. And it is a lot easier to build resilient applications now than it was back then, from what I remember.

Cloud costs are getting large enough that I've got one foot out the door and a long-term plan to move back to our own servers, spending the money we save on people. I can only see cloud getting more expensive, not less.

ralusek•3mo ago
And it’ll be so good and cheap that you’ll figure “hell, I could sell our excess compute resources for a fraction of AWS.” And then I’ll buy them, you’ll be the new cloud. And then more people will, and eventually this server infrastructure business will dwarf your actual business. And then some person in 10 years will complain about your IOPS pricing, and start their own server room.
hylaride•3mo ago
There is currently a bit of an early shift back to physical infra. Some of this is driven by costs (1), some by geopolitical concerns, and some by performance. However, dealing with physical equipment does require a different (old-fashioned, but somewhat atrophied) set of skills and costs that companies need to take on.

(1) It is shocking how much of the move to the cloud was driven by accountants wanting opex instead of capex; they are now concerned with actual cash flow and are thinking of going back. The cloud is really good at serving web content and storing gobs of data, but once you start wanting to crunch numbers or move that data around, it gets expensive fast.

unregistereddev•3mo ago
In some orgs the move to the cloud was driven by accountants. In my org it was driven by lawyers. With GDPR on the horizon and murmurs of other data privacy laws that might (but didn't) require data to be stored in that customer's jurisdiction, we needed to host in additional regions.

We had a couple rather large datacenters, but both were in the US. The only infrastructure we had in the EU was one small server closet. We had no hosting capacity in Brazil, China, etc. Multi-region availability drove us to the cloud - just not in the "high availability" sense of the term.

mbesto•3mo ago
> I can only see cloud getting even more expensive, not less.

When you have three major hyperscalers competing for your dollars, this is basically not true and not how markets work... unless they start colluding on prices.

We've already seen price reductions across the three major providers thanks to this competition.

lossolo•3mo ago
You don't need to; just rent dedicated servers. Still 20-50x cheaper, problem solved.
jaza•3mo ago
You don't pay for RDS because you care about IOPS. You pay for it because you want backups and replication to be somebody else's problem. And because you (by which I mean probably the MBA management above you, rather than you yourself) care about it being an opex rather than capex cost, a lot more than you care about how much the cost is. And because ISO audit boxes get ticked.
thyristan•3mo ago
If you want your own hardware to be OPEX, just do leasing. Every enterprise hardware seller will make you an offer for that.
benjiro•3mo ago
> You pay for it because you want backups and replication to be somebody else's problem.

Or you just use something like CockroachDB or YugabyteDB, which auto-replicate, auto-rebalance if a node goes down, and have built-in support for backups to and from S3...

Or, if you're a bit more hands-on, Multigres ( https://github.com/multigres/multigres ) seems to be approaching completion, from the guy who made Vitess for MySQL.

The idea that managing hardware and software is hard is silly, yet people (mostly managers, it seems) think it's the best solution.

manacit•3mo ago
I wouldn't say it's approaching completion - it looks like it's in the very early stages of development according to the repo. I don't see any evidence they've gotten as far as running a single query through it.

Even when it's done, it's going to be a lot of work to run. Sure, it's not guaranteed to be hard, but if it's not your core business and you're making money, having someone else run it gives you time to focus on what matters.

makkes•3mo ago
Comparing monthly cloud costs with one-time hardware purchase costs completely ignores the latter's long-term costs: people, replacement parts, power, housing, accessories. While I do believe you can run your own hardware much cheaper, there's a lot to consider before making the decision.
vbezhenar•3mo ago
Most clouds I've used allow you to create a VM with a local disk, and that might be cheaper than a network disk.
jackdoe•3mo ago
This is the $500-per-month option they describe in the post (the magnificent 300,000 IOPS); the network disk is $1,500 for 16,000 IOPS.

Not to mention you then have all the issues people are discussing here: managing your own backups, snapshots, replication, etc.

cowsandmilk•3mo ago
Where are the error bars? I don’t get why people run all these tests and don’t give me an idea of standard deviation or whether the differences are actually statistically significant.
novoreorx•3mo ago
The charts look beautiful; I wonder which library they use.
miklosz•3mo ago
Seems it's Recharts.
nodesocket•3mo ago
I'm currently running PostgreSQL in Docker containers using bitnami/postgresql:17.6.0-debian-12-r4. As I understand it, Bitnami is no longer supporting or updating their Docker containers. Any recommendations on an upgrade path to PostgreSQL 18 in Docker?

A quick glance at swapping to the official postgres container shows POSTGRESQL_DATABASE becomes POSTGRES_DB. The other issue is the volume mount path, currently /bitnami/postgresql.

makkes•3mo ago
Either do a proper upgrade with backup/restore or use `PGDATA`[1] and `pg_upgrade`[2].

[1] https://hub.docker.com/_/postgres#pgdata

[2] https://www.postgresql.org/docs/current/upgrading.html#UPGRA...
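A sketch of the backup/restore path off Bitnami (container names and credentials invented; the official image uses POSTGRES_USER / POSTGRES_PASSWORD / POSTGRES_DB, and check the image docs for the data volume path, historically /var/lib/postgresql/data):

    # Dump everything from the old Bitnami container:
    docker exec old-pg pg_dumpall -U myuser > all.sql

    # Start a fresh official Postgres 18 container:
    docker run -d --name new-pg \
      -e POSTGRES_USER=myuser -e POSTGRES_PASSWORD=secret -e POSTGRES_DB=mydb \
      -v pgdata18:/var/lib/postgresql/data postgres:18

    # Restore into it:
    docker exec -i new-pg psql -U myuser -d postgres < all.sql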

samlambert•3mo ago
While this post is here, I'd like to call out that Vitess for Postgres is coming: https://www.neki.dev/
fourseventy•3mo ago
I'm literally in the middle of upgrading my prod DB to PG 18. It's about 6 TB with a few thousand queries per second. Should I be considering running in 'worker' mode instead of 'io_uring'?
parthdesai•3mo ago
Why would you migrate your prod db if you aren't sure of all the changes and which config params to use?
spprashant•3mo ago
Upgrades have enough risks as it is, so I would keep the number of variables low. Once upgraded and stable, you can replicate to a secondary instance with io_method switched and test there before switching over.
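A sketch of that test loop on a writable clone (paths and DB name invented):

    # On the clone: flip io_method and restart, then replay a representative load.
    echo "io_method = worker" >> "$PGDATA/postgresql.conf"
    pg_ctl restart -D "$PGDATA"
    pgbench -i -s 100 mydb              # initialize test tables (or replay your own workload)
    pgbench -c 32 -j 8 -T 300 -S mydb   # -S: select-only, read-heavy sanity check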