RDS uses block replication. Aurora uses its own SAN replication layer.
DMS maybe?
Useful for metric ingestion. Not useful for bank ledgers or whatever.
But they only officially released it as open source to the community last month: https://aws-news.com/article/2025-06-09-announcing-open-sour...
I don't think that is used for cross-region replication.
This is not a way to get better performance or scalability in general.
zknill•2h ago
https://github.com/aws/pgactive/tree/main/docs
ForHackernews•1h ago
As I understand it, this is a wrapper on top of Postgres' native logical replication features. Writes are committed locally and then published via a replication slot to subscriber nodes. You have ACID guarantees locally, but not across the entire distributed system.
https://www.postgresql.org/docs/current/logical-replication....
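For anyone who hasn't used it, the native building blocks look roughly like this (a minimal sketch; the table name and connection string are made up):

    -- On the publishing node:
    CREATE PUBLICATION orders_pub FOR TABLE orders;

    -- On the subscribing node (changes arrive via a replication slot):
    CREATE SUBSCRIPTION orders_sub
        CONNECTION 'host=node-a.example.com dbname=app user=replicator'
        PUBLICATION orders_pub;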
gritzko•1h ago
It all feels like they expect developers to sift through the conflict log and resolve things manually. If a transaction did not go through on some of the nodes, what are the others doing in the meantime? What if they cannot roll it back safely?
Such a rabbit hole.
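To make the rabbit hole concrete, the failure mode is a write-write conflict like this (illustrative sketch; as I understand it, pgactive inherits BDR-style last-update-wins resolution by default):

    -- Both nodes start in sync: accounts(id = 1, balance = 100).

    -- On node A, before replication catches up:
    UPDATE accounts SET balance = balance + 50 WHERE id = 1;  -- A now has 150

    -- Concurrently on node B:
    UPDATE accounts SET balance = balance - 30 WHERE id = 1;  -- B now has 70

    -- When the changes cross-replicate, each node detects an
    -- UPDATE/UPDATE conflict. With timestamp-based last-update-wins,
    -- both nodes converge on whichever version committed later (150
    -- or 70); the combined intent (120) is silently lost.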
zknill•1h ago
Given this is targeted at replication of postgres nodes, perhaps the nodes are deployed across different regions of the globe.
By using active-active replication, all the participating nodes are capable of accepting writes, which simplifies the deployment and querying of postgres (you can read and write to your region-local postgres node).
Now that doesn't mean that all the reads and writes will be on conflicting data. Take the regional example: perhaps the majority of the writes affecting one region's data are made _in that region_. In this case, the region-local postgres would be performing all the conflict resolution locally, and sharing the updates with the other nodes.
The reason this simplifies things is that you can treat all your postgres connections as if they were just a single postgres. Writes are fast, because they are accepted in the local region, and reads are replicated without you having to run a dedicated read-replica.
Ofc you're still going to have to design around the conflict resolution (i.e. writes for the same data issued against different instances), and the possibility of stale reads as the data is replicated cross-node. But for some applications, this design might be a significant benefit, even with the extra things you need to do.
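As a concrete sketch (schema and region names invented): the same table exists on every node, pgactive replicates it in both directions, and the app keeps writes region-affine so conflicts stay rare:

    CREATE TABLE user_profiles (
        user_id      bigint PRIMARY KEY,
        home_region  text   NOT NULL,    -- e.g. 'eu-west-1'
        display_name text
    );

    -- App servers in eu-west-1 talk to the eu-west-1 node, almost
    -- always for users whose home_region is 'eu-west-1':
    UPDATE user_profiles
       SET display_name = 'Ada'
     WHERE user_id = 42;    -- user 42's home region is eu-west-1

    -- Reads are region-local too; no dedicated read replica needed.
    SELECT display_name FROM user_profiles WHERE user_id = 42;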
zozbot234•38m ago
It doesn't look like you'd need multi-master replication in that case? You could simply partition tables by site and rely on logical replication.
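Something like this sketch (names invented): each site owns one table, publishes it, and subscribes to the others' tables read-only, so no write ever conflicts:

    -- On site A:
    CREATE TABLE readings_site_a (id bigint PRIMARY KEY, value double precision);
    CREATE PUBLICATION pub_site_a FOR TABLE readings_site_a;

    -- Mirror of site B's table, populated only via replication
    -- (pub_site_b is the publication site B creates on its side):
    CREATE TABLE readings_site_b (id bigint PRIMARY KEY, value double precision);
    CREATE SUBSCRIPTION sub_site_b
        CONNECTION 'host=site-b.example.com dbname=app'
        PUBLICATION pub_site_b;

    -- Queries see the combined data:
    CREATE VIEW readings AS
        SELECT * FROM readings_site_a
        UNION ALL
        SELECT * FROM readings_site_b;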
ForHackernews•18m ago
There's a requirement that during outages each site continues operating independently and might need to make writes to data "outside" its normal partition. By having active-active replication, the hope is that the whole thing recovers "automatically" (famous last words) to a consistent state once the network comes back.
shermantanktop•1h ago
For someone who has these requirements out of the gate, another datastore might be better. But if someone is already deeply tied to Postgres, and perhaps doing their own half-assed version of this, this option could be great.
whizzter•36m ago
Ideal? Not entirely, but it should still give most of the query benefits of regular SQL and allows one to benefit from good indexes (the proper indexes of an SQL database will also help contain the costs of an updated data model).
I think this is more interesting for someone building something social media like perhaps rather than anything involving accounting.
zozbot234•35m ago
In principle you could use CRDTs to end up with a "not quite random" outcome that simply takes the conflict into account - it doesn't really attempt to "resolve" it. That's quite good for some cases.
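The grow-only counter is the classic example (sketch; table and node names invented). Each node only ever writes its own row, so active-active replication can't produce a conflicting update, and the "merge" is just an aggregate:

    CREATE TABLE page_views (
        node_id text   PRIMARY KEY,   -- one row per replica
        n       bigint NOT NULL DEFAULT 0
    );

    -- On the node named 'eu-west-1', recording one view:
    INSERT INTO page_views (node_id, n) VALUES ('eu-west-1', 1)
    ON CONFLICT (node_id) DO UPDATE SET n = page_views.n + 1;

    -- Any node can read the converged total:
    SELECT sum(n) AS total_views FROM page_views;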
okigan•1h ago
Seems the same is playing out in Postgres with this extension; maybe it will take another 20 years.