Also, is there a minimum number of nodes required for PayG (not on-prem)?
I had to write off Xata (now called Xata Lite) in the past because of its complicated pricing and plans, while not being sure it did what I needed. But the new pricing is understandable, and with Neon's acquisition I want to know my options.
EDIT: I missed this section at the bottom since I was already off reading other parts of the website:
> Are you looking for a simple, serverless, no-frills Postgres hosting for your side project, prototype, non-profit or vibe coded app? Xata Lite offers a generous free tier and per-storage pricing.
> Are you a Startup, Scaleup, or Enterprise that is running Postgres at scale? Then the new Xata Postgres platform brings you all the benefits outlined by this blog post.
Hmm, as much as I hate being lumped in with "vibe coded apps," I think Xata would only want me on Xata Lite. I'm not running Postgres "at scale". I do want no-frills Postgres hosting, but Xata Lite's pricing is annoying and hard to guesstimate.
The app has since moved to Prisma's Postgres hosting and it works like a charm; the only thing that changed is the DB.
/s
Just to make sure: I suppose this is on the Xata Lite free tier, not on the Postgres-at-scale platform this blog post talks about.
I'm curious whether split-brain cases have already been experienced. At scale, they're bound to happen: https://github.com/cloudnative-pg/cloudnative-pg/issues/7407
From my understanding after looking into it, Xata+SimplyBlock is expected to use the ReadWriteOnce persistent volume access mode, which means the volume can only be attached to one node.
I think this solves the split-brain problem because any new postgres read-write pod on another node will fail to attach the volume, but it means no high availability is possible if that node fails. At least, I think that's how Kubernetes handles it; I couldn't find much explaining the failure modes of persistent volumes, but I don't see many other solutions.
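For concreteness, here's a minimal sketch of what such a claim could look like, written with the official `kubernetes` Python client. The claim name and storage class are hypothetical; I'm only guessing at Xata's actual setup, but the `ReadWriteOnce` access mode is the point under discussion:

```python
# Sketch of a ReadWriteOnce PVC using the official `kubernetes` Python client.
# A ReadWriteOnce volume can be mounted read-write by a single node at a time,
# so a second postgres pod scheduled on another node fails to attach it.
from kubernetes import client

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="pg-data"),  # hypothetical claim name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],          # single-node attach
        storage_class_name="simplyblock-csi",    # assumed storage class name
        resources=client.V1ResourceRequirements(
            requests={"storage": "100Gi"}
        ),
    ),
)
```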
At Neon, we solve this issue by having our storage nodes run a consensus protocol with the postgres node. If a new postgres node comes online, the two will contend for multi-paxos leadership. I assume the loser will crash-backoff to reset its in-memory tables, so there's no inconsistency if it tries to reclaim leadership later and wins. In the normal case, with no split brain and one leader, multi-paxos has low overhead for WAL commits.
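To illustrate the fencing idea (a toy sketch only, not Neon's actual code): each candidate proposes a strictly higher epoch, and a node only grants a promise for an epoch above anything it has already promised, so at most one leader can hold a majority at a time:

```python
# Toy sketch of epoch-based leader fencing among storage nodes.
# A candidate must win promises from a majority before its WAL
# writes are accepted; a split-brain peer with a stale epoch loses.
from dataclasses import dataclass

@dataclass
class StorageNode:
    promised_epoch: int = 0  # highest epoch this node has promised

    def promise(self, epoch: int) -> bool:
        # Grant the promise only for a strictly higher epoch (fencing).
        if epoch > self.promised_epoch:
            self.promised_epoch = epoch
            return True
        return False

def elect(candidate_epoch: int, nodes: list[StorageNode]) -> bool:
    # Leadership requires promises from a majority of storage nodes.
    votes = sum(node.promise(candidate_epoch) for node in nodes)
    return votes > len(nodes) // 2

nodes = [StorageNode() for _ in range(3)]
assert elect(1, nodes)       # first postgres node becomes leader
assert not elect(1, nodes)   # a contender reusing the same epoch loses
assert elect(2, nodes)       # ...and must retry with a higher epoch to win
```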
paulryanrogers•8mo ago