I’ve been experimenting with an idea that combines a database and a message bus into one system — built specifically for Edge AI and real-time applications that need to scale across 100+ nodes.
Most databases write to a WAL (Write-Ahead Log) for recovery.
UnisonDB treats the log as the database itself — making replication, streaming, and durability all part of the same mechanism.
Every write is:
* Stored durably (WAL-first design)
* Streamed instantly (no separate CDC or Kafka)
* Synced globally across replicas
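To make the "one write, three outcomes" idea concrete, here is a toy sketch in Go. It is not UnisonDB's actual API (the types `LogStore`, `Record`, `Put`, and `Subscribe` are all hypothetical names I made up for illustration); it just shows the shape of a log-native write path, where an append-only log is the source of truth and both the index and the subscriber streams are derived from it:

```go
package main

import "fmt"

// Record is one entry in the append-only log.
type Record struct {
	Offset int
	Key    string
	Value  string
}

// LogStore is a toy log-native store: the log is the source of truth;
// the index and the subscriber streams are both derived from it.
type LogStore struct {
	log   []Record       // stands in for the durable WAL
	index map[string]int // key -> log offset (stands in for the B+Tree)
	subs  []chan Record  // live subscribers (stand in for replication streams)
}

func NewLogStore() *LogStore {
	return &LogStore{index: make(map[string]int)}
}

// Put appends to the log first, then updates the index, then fans out
// to subscribers -- one write, three outcomes.
func (s *LogStore) Put(key, value string) int {
	rec := Record{Offset: len(s.log), Key: key, Value: value}
	s.log = append(s.log, rec)  // 1. stored durably (WAL-first)
	s.index[key] = rec.Offset   // 2. queryable immediately
	for _, ch := range s.subs { // 3. streamed, no separate CDC pipeline
		ch <- rec
	}
	return rec.Offset
}

// Get reads through the index into the log.
func (s *LogStore) Get(key string) (string, bool) {
	off, ok := s.index[key]
	if !ok {
		return "", false
	}
	return s.log[off].Value, true
}

// Subscribe returns a buffered channel that receives every future write.
func (s *LogStore) Subscribe() <-chan Record {
	ch := make(chan Record, 16)
	s.subs = append(s.subs, ch)
	return ch
}

func main() {
	s := NewLogStore()
	updates := s.Subscribe()
	s.Put("sensor/1", "42")
	v, _ := s.Get("sensor/1")
	fmt.Println(v)               // prints 42
	fmt.Println((<-updates).Key) // prints sensor/1
}
```

The point of the sketch is that nothing here is a second pipeline: the read path and the stream path both hang off the same append.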
It’s built in Go and uses a B+Tree storage engine on top of a streaming WAL, so edge nodes can read locally while syncing in real time with upstream hubs.
No external brokers, no double-pipeline — just a single source of truth that streams.
Writes on one node replicate like a message bus, yet remain queryable like a database — instantly and durably.
GitHub: github.com/ankur-anand/unisondb
Deployment Topologies
UnisonDB supports multiple replication setups out of the box:
* Hub-and-Spoke – for edge rollouts where a central hub fans out data to 100+ edge nodes
* Peer-to-Peer – for regional datacenters that replicate changes between each other
* Follower/Relay – for read-only replicas that tail logs directly for analytics or caching
Each node maintains its own offset in the WAL, so replicas can catch up from any position without re-syncing the entire dataset.
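A minimal sketch of that catch-up mechanic, again in Go and again with invented names (`WAL`, `Replica`, `ReadFrom`, `CatchUp` are illustrative, not UnisonDB's API): each replica remembers only an integer offset, and resuming is just reading the log from that position forward.

```go
package main

import "fmt"

// Entry is one WAL record.
type Entry struct {
	Offset int
	Data   string
}

// WAL is an append-only log that replicas can read from any offset.
type WAL struct {
	entries []Entry
}

func (w *WAL) Append(data string) {
	w.entries = append(w.entries, Entry{Offset: len(w.entries), Data: data})
}

// ReadFrom returns every entry at or after the given offset, so a
// replica that fell behind fetches only what it missed.
func (w *WAL) ReadFrom(offset int) []Entry {
	if offset >= len(w.entries) {
		return nil
	}
	return w.entries[offset:]
}

// Replica tracks its own position in the hub's WAL.
type Replica struct {
	offset int
	data   []string
}

// CatchUp pulls everything past the replica's offset and advances it.
func (r *Replica) CatchUp(w *WAL) int {
	missed := w.ReadFrom(r.offset)
	for _, e := range missed {
		r.data = append(r.data, e.Data)
	}
	r.offset += len(missed)
	return len(missed)
}

func main() {
	hub := &WAL{}
	edge := &Replica{}

	hub.Append("a")
	hub.Append("b")
	fmt.Println(edge.CatchUp(hub)) // prints 2: first sync pulls both entries

	hub.Append("c")
	fmt.Println(edge.CatchUp(hub)) // prints 1: only the new entry, no full re-sync
}
```

Because the offset lives on the replica, the hub stays stateless about its followers, and any number of edge nodes can tail the same log from different positions.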
UnisonDB’s goal is to make log-native databases practical for both the core and the edge — combining replication, storage, and event propagation in one Go-based system.
I’m still exploring how far this log-native approach can go. Would love to hear your thoughts, feedback, or any edge cases you think might be interesting to test.