To take a case in point where durability is irrelevant - people building caches in Postgres (so as to have only one datastore and not need Redis as well). Not a big deal if the cache blows up - just force everyone to log in again. Would love to see the vendor reduce complexity on their end and pass the savings through to the customer.
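This is also the kind of cache where Postgres's UNLOGGED tables fit naturally - they skip the WAL entirely, so writes are cheaper, and the table is simply truncated after a crash, which is exactly the durability tradeoff a session cache can accept. A minimal sketch (table and column names invented for illustration):

    -- Unlogged tables skip the write-ahead log: faster writes, but the
    -- table is truncated after a crash - acceptable for a cache.
    CREATE UNLOGGED TABLE session_cache (
        session_id  text PRIMARY KEY,
        payload     jsonb NOT NULL,
        expires_at  timestamptz NOT NULL
    );

    -- Upsert a session entry.
    INSERT INTO session_cache (session_id, payload, expires_at)
    VALUES ('abc123', '{"user_id": 42}', now() + interval '30 minutes')
    ON CONFLICT (session_id)
    DO UPDATE SET payload = EXCLUDED.payload, expires_at = EXCLUDED.expires_at;

    -- Evict expired entries periodically (e.g. from a cron job).
    DELETE FROM session_cache WHERE expires_at < now();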
edit: per your other reply re. using replication to handle resizing - maybe be upfront with customers that single-node discounts come with additional latency / downtime, and then for resizing you could break connections, take a backup, and restore the backup on a resized node?
If your or another customer's workload grows and needs to size up, we launch three whole new database servers of the appropriate size (whether that's more CPU+RAM, more storage, or both), restore the most recent backups there, catch up on replication, and then orchestrate changing the primary.
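Under the hood the moving parts are standard Postgres primitives. Roughly (heavily simplified - the real orchestration handles many more edge cases; pg_promote needs Postgres 12+):

    -- On the current primary: check how far behind the new replica is.
    SELECT client_addr,
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
    FROM pg_stat_replication;

    -- Once writes are paused and lag_bytes reaches ~0, on the replica:
    SELECT pg_promote();  -- returns true once promotion completes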
Downtime when you resize typically amounts to needing to reconnect, i.e. it's negligible.
Would be curious to know what the underlying AWS EC2 instance is.
Is each DB on a dedicated instance?
If not, are there per-customer IOPS bounds?
From what I can tell, the 'Metal' offering runs on nodes with directly attached NVMe rather than network-attached storage. That means there isn't a per-customer IOPS cap; they actually market it as 'unlimited I/O' because you hit CPU before saturating the disk. The new $50 M-class clusters are essentially smaller versions of those nodes with adjustable CPU and RAM in AWS and GCP.
Re: EC2 shapes, it's not a shared EBS volume but a dedicated instance with local storage. But you'll still want to monitor capacity, since the storage doesn't autoscale.
Also, this pricing makes high-throughput Postgres accessible for indie projects, which is pretty neat.
Just want to add that you don't necessarily need to invest in fancy disk-usage monitoring, as we always display it in the app, and we start emailing database owners at 60% full to make sure no one misses it.
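If you'd rather spot-check from inside the database too, the standard size functions report the same numbers (a quick sketch; 'mydb' is a placeholder):

    -- Total on-disk size of a database, human-readable.
    SELECT pg_size_pretty(pg_database_size('mydb'));

    -- Largest tables, including indexes and TOAST data.
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 10;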
So in the M-10 case, wouldn't this actually be somewhat misleading, as I imagine hitting "1/8 vCPU" wouldn't be difficult at all?
You can get a lot more out of that CPU allocation with the fast I/O of a local NVMe drive than with the slow I/O of an EBS volume.
> $50
Looks like US only. Choosing Europe is +$10, Australia is +$20. Wouldn't this introduce additional latency, among other issues?
If you aren’t hosting the app in the same AWS/GCP region, then I still have the same question.
Yes and no. In my AWS account I can explicitly pick an AZ (us-east-2a, us-east-2b, or us-east-2c), but Availability Zones are not consistent between AWS accounts.
See https://docs.aws.amazon.com/ram/latest/userguide/working-wit...
I ask because we see it more often than not, and for that situation sharding the workload is the best answer. Why have one MySQL instance responding to requests when you could have 2, 4, 8 ... 128 MySQL instances responding as a single database? You also get the ability to vertically scale each of the shards in that database as needed.
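The routing idea is conceptually small: hash the sharding key and use the result to pick which instance serves the query. A toy illustration in MySQL syntax (this is not any particular proxy's real hash function - Vitess, for example, uses its own vindex functions):

    -- Deterministically map a sharding key to one of 8 shards.
    -- In practice a proxy layer does this per query.
    SELECT CRC32('customer:12345') % 8 AS shard_id;

Every shard carries the same schema but holds only the rows whose keys hash to it, which is how N instances can answer as one logical database.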
edit: my bad, that's the price for 256GB RAM.
The reality is that most databases are tiny as shit, and most apps can tolerate the massive latency that the cloud-provider DBs offer.
It's why it's sorta funny that we're rediscovering that non-network-attached storage is faster.
That's $54,348/year, not including the cost of benefits or stock compensation. Let's say you reserve 20% of that for benefits, and that comes out to $43,478.40 in salary.
Besides the benefit of not needing the management / communication overhead of hiring somebody, do you know any DBAs willing to take a full-time job for $43,478.40 in salary?
Also, this is a shared server, not a truly dedicated one like you’d get with bare-metal providers. So calling it "Metal" might be a misleading marketing trick, but if you want someone to always blame and don’t mind overpaying for that comfort, then the managed option might be the right thing.
Apparently there are people who find this offering compelling. The lack of value is quite stunning to me.
- Aurora storage scales with your needs, meaning that you don't need to worry about running out of space as your data grows.
- Aurora will auto-scale CPU and memory based on the needs of your application, within the bounds you set. It does this without any downtime, or even dropping connections. You don't have to worry about choosing the right CPU and memory up-front, and for most applications you can simply adjust your limits as you go. This is great for applications that are growing over time, or for applications with daily or weekly cycles of usage.
The other Aurora option is Aurora DSQL. The advantages of picking DSQL are:
- A generous free tier to get you going with development.
- Scale-to-zero and scale-up, on storage, CPU, and memory. If you aren't sending any traffic to your database it costs you nothing (except storage), and you can scale up to millions of transactions per second with no changes.
- No infrastructure to configure or manage, no updates, no thinking about replicas, etc. You don't have to understand CPU or memory ratios, think about software versions, think about primaries and secondaries, or any of that stuff. High availability, scaling of reads and writes, patching, etc. is all built-in.
You're still sharing NVMe I/O, CPU, memory bandwidth, etc. Not having a VM isn't really the point. (EDIT: and could have been done with non-metal AWS instances with direct-attached NVMe anyway)
How do cross-data-center nodes work?
*Edit:* It also fails to load other pages if you have JavaScript or XHR disabled.
It feels like it went from "professional Stripe-level design that you admire and that inspires you" to just "hard-to-read black website", not sure what for.
(not fully functional) https://web.archive.org/web/20240811142248/https://planetsca...
asking for a friend that liked this space