Maybe one could transform slow code from high-level languages to a low-level language via LLMs in the future. That could be a nice performance boost for those who don't have Amazon's engineers and budgets.
I think you are describing a compiler?
1) Kotlin code --> Java byte code --> JVM execution (slow)
vs
2) Kotlin code --> Rust/Zig code --> Rust/Zig compiler --> native execution (fast)
A compiler is involved in both cases, but I was thinking of 2), where slower code in a high-level language is converted to code in another language whose compiler is known to produce fast-running code.
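To make the idea concrete, here is a toy sketch of what such a source-to-source step might produce. Both the Kotlin original (shown as a comment) and the Rust translation are hypothetical, hand-written examples, not actual LLM output.

```rust
// Hypothetical Kotlin source the transformation would start from:
//
//   fun totalByUser(orders: List<Order>): Map<String, Long> =
//       orders.groupBy { it.userId }
//             .mapValues { (_, os) -> os.sumOf { it.amountCents } }
//
// A plausible hand-sketched Rust equivalent of that function:

use std::collections::HashMap;

struct Order {
    user_id: String,
    amount_cents: i64,
}

fn total_by_user(orders: &[Order]) -> HashMap<String, i64> {
    let mut totals = HashMap::new();
    for order in orders {
        // entry() does a single hash lookup per order.
        *totals.entry(order.user_id.clone()).or_insert(0) += order.amount_cents;
    }
    totals
}

fn main() {
    let orders = vec![
        Order { user_id: "a".into(), amount_cents: 250 },
        Order { user_id: "b".into(), amount_cents: 100 },
        Order { user_id: "a".into(), amount_cents: 50 },
    ];
    let totals = total_by_user(&orders);
    assert_eq!(totals["a"], 300);
    println!("{totals:?}");
}
```

The interesting part is not this trivial function but that the output stays readable source code a team can review and keep evolving.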
This is one of the areas I'm most excited for LLM developer tooling. Choosing a language, database, or framework is a really expensive up-front decision for a lot of teams, made when they have the least information about what they're building, and very expensive to take back.
If LLM-powered tools could take 10-100x off the cost of these migrations, it would significantly reduce the risk of early decisions, and make it a ton easier to make software more reliable and cheaper to run.
It's very believable to me that, even with today's model capabilities, 10-100x is achievable.
Today, experts like the ones above could create base transformations from high-level languages and frameworks to low-level languages and frameworks, with all of it exposed via LLM interfaces.
One could ask why do all this instead of generating a fast binary directly from the high-level code. But generating a textual transformation gives developers the opportunity to understand, tweak, and adjust the transformed code, which generating a binary directly would not.
Probably looks a lot like
Pseudocode -> C -> Assembly
Although the first is easier to run tests on and compare the outputs.
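A hedged sketch of what makes that step checkable: run the old and new implementations over the same inputs and compare outputs. Both function bodies below are hypothetical stand-ins (in practice the reference would be the existing Kotlin service, exercised over FFI, a shared test harness, or recorded traces).

```rust
// Differential-test sketch: "reference" stands in for the behavior of the old
// implementation, "transpiled" is the candidate produced by the
// source-to-source step. Names and logic are hypothetical.

fn reference_discount(subtotal_cents: u64) -> u64 {
    if subtotal_cents >= 10_000 { subtotal_cents / 10 } else { 0 }
}

fn transpiled_discount(subtotal_cents: u64) -> u64 {
    match subtotal_cents {
        s if s >= 10_000 => s / 10,
        _ => 0,
    }
}

fn main() {
    // Sweep a range of inputs; a real harness would add property-based
    // testing and replay of production traffic.
    for subtotal in (0..50_000u64).step_by(37) {
        assert_eq!(
            reference_discount(subtotal),
            transpiled_discount(subtotal),
            "divergence at subtotal={subtotal}"
        );
    }
    println!("implementations agree on sampled inputs");
}
```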
> But after a few weeks, it compiled and the results surprised us. The code was 10x faster than our carefully tuned Kotlin implementation – despite no attempt to make it faster. To put this in perspective, we had spent years incrementally improving the Kotlin version from 2,000 to 3,000 transactions per second (TPS). The Rust version, written by Java developers who were new to the language, clocked 30,000 TPS.
I feel like there is more to this, like some kind of bottleneck: memory footprint, some IO overhead?
> Our conclusion was to rewrite our data plane entirely in Rust.
The point is well taken: figuring it out is not worth it if you can just rewrite, or if you have a greenfield project.
> These extension points are part of Postgres’ public API, allowing you to modify behavior without changing core code
Also interesting. So PostgreSQL has evolved to the point that it has a stable API for extensibility? That's great for the project: maintain a modular design and some stable APIs, and you can let people mix and match and reduce duplication of effort.
Not across major versions, no. I seriously doubt we will ever make promises around that. It would hamper development way too much.
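For illustration only, this is the rough shape of an extension built against that public API, here via the pgrx Rust bindings (my choice of bindings, not something from the article); packaging, version pinning, and `CREATE EXTENSION` scripting are omitted.

```rust
// Minimal shape of a Postgres extension written against the public extension
// API using the pgrx bindings. `cargo pgrx new` generates this scaffolding;
// the extension and function names here are made up.

use pgrx::prelude::*;

::pgrx::pg_module_magic!();

// Exposes a SQL-callable function without touching Postgres core code:
//   CREATE EXTENSION my_ext;
//   SELECT shout('hello');  -- returns 'HELLO'
#[pg_extern]
fn shout(input: &str) -> String {
    input.to_uppercase()
}
```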
Blocking vs. non-blocking IO could explain these numbers.
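A back-of-envelope sketch of that argument, with entirely hypothetical numbers: under a blocking, thread-per-request model the throughput ceiling is worker threads divided by per-request wall time, no matter how little CPU each request actually needs.

```rust
// Throughput ceiling for a thread-per-request, blocking-IO service.
// All numbers are placeholders, not measurements of the actual system.

fn max_tps(worker_threads: u32, request_wall_ms: f64) -> f64 {
    worker_threads as f64 * 1000.0 / request_wall_ms
}

fn main() {
    // Suppose each request spends 1 ms on CPU and 19 ms blocked on IO.
    let cpu_ms = 1.0;
    let io_wait_ms = 19.0;

    // Blocking model: a thread is pinned for the full 20 ms.
    let blocking = max_tps(64, cpu_ms + io_wait_ms); // ~3,200 TPS

    // Non-blocking model: IO waits overlap, so the same 64 workers are
    // limited only by the CPU time per request.
    let nonblocking = max_tps(64, cpu_ms); // ~64,000 TPS

    println!("blocking ceiling: {blocking:.0} TPS, non-blocking ceiling: {nonblocking:.0} TPS");
}
```

With those made-up inputs the ceiling shifts by roughly the same order of magnitude as the 3,000 to 30,000 TPS jump described above, which is the only point of the sketch.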
Reasonable people will look at whether the cost makes it worthwhile.
I understand that AWS did one TPC-C 95/5 read/write benchmark and got 700k transactions for 100k DPUs, but that’s not nearly enough context.
There either needs to be a selection of other benchmark-based pricing (especially for a primarily 50/50 read/write load), actual information on how a DPU is calculated or a way to return DPU per query executed, not just an aggregate CloudWatch figure.
We were promised DSQL pricing similar to DynamoDB and insofar as it’s truly serverless (i.e. no committed pricing) they’ve succeeded, but one of the best parts of DynamoDB is absolute certainty on cost, even if that can sometimes be high.
That depends on whether it's On Demand or Provisioned, even if they recently added On Demand limits.
It then just becomes a modeling problem allowing you to determine your costs upfront during design. That’s one of the most powerful features of the truly serverless products in AWS in my opinion.
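A minimal sketch of that modeling exercise; every rate and price below is a placeholder assumption, not an actual DSQL or DynamoDB figure.

```rust
// Shape of the upfront cost model for a usage-billed ("truly serverless")
// datastore: with a per-request unit cost you can estimate the bill at
// design time. All inputs are hypothetical.

fn monthly_cost_usd(
    requests_per_second: f64,
    units_per_request: f64, // e.g. measured DPUs, or RCUs/WCUs per request
    price_per_million_units_usd: f64,
) -> f64 {
    let seconds_per_month = 60.0 * 60.0 * 24.0 * 30.0;
    let units_per_month = requests_per_second * units_per_request * seconds_per_month;
    units_per_month / 1_000_000.0 * price_per_million_units_usd
}

fn main() {
    // Placeholder inputs: 500 req/s, 2 units per request, $8 per million units.
    let estimate = monthly_cost_usd(500.0, 2.0, 8.0);
    println!("estimated monthly cost: ${estimate:.2}");
}
```

The catch the parent comments point out is the middle parameter: without a documented way to get units-per-request, the model has an unknown in it.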
Would you be willing to pay more for certainty? E.g. rent the full server at peak + 20% and run at 15% utilization some of the time? Provisioned capacity or pre-committed spend seem like reasonable, but perhaps more costly, ways to get certainty.
https://brooker.co.za/blog/2025/04/17/decomposing.html (includes talk)
https://brooker.co.za/blog/2024/12/03/aurora-dsql.html
https://brooker.co.za/blog/2024/12/04/inside-dsql.html
https://brooker.co.za/blog/2024/12/05/inside-dsql-writes.htm...
https://brooker.co.za/blog/2024/12/06/inside-dsql-cap.html
https://brooker.co.za/blog/2024/12/17/occ-and-isolation.html
Marbling4581•1d ago
At launch, they limited the number of affected tuples to 10,000, including tuples in secondary indexes. They recently changed this limit to:
> A transaction cannot modify more than 3,000 rows. The number of secondary indexes does not influence this number. This limit applies to all DML statements (INSERT, UPDATE, DELETE).
There are a lot of other (IMO prohibitive) restrictions listed in their docs (a batching sketch for the row limit follows the link below).
https://docs.aws.amazon.com/aurora-dsql/latest/userguide/wor...
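One practical consequence of the 3,000-row DML limit quoted above is that bulk writes have to be chunked client-side. A hedged sketch using the Rust `postgres` crate; the crate choice, the 2,500 batch size, the table, and the connection string are all assumptions, and a real DSQL connection needs TLS plus an IAM auth token rather than `NoTls`.

```rust
// Chunk a bulk delete so each statement stays under the per-transaction
// DML row limit.

use postgres::{Client, Error, NoTls};

fn purge_expired(client: &mut Client) -> Result<u64, Error> {
    let mut total = 0u64;
    loop {
        // Each statement runs in its own implicit transaction and touches at
        // most 2,500 rows, staying under the 3,000-row ceiling.
        let affected = client.execute(
            "DELETE FROM events
             WHERE id IN (SELECT id FROM events WHERE expires_at < now() LIMIT 2500)",
            &[],
        )?;
        total += affected;
        if affected == 0 {
            break;
        }
    }
    Ok(total)
}

fn main() -> Result<(), Error> {
    // Placeholder connection string; see the DSQL docs for real auth setup.
    let mut client = Client::connect("host=localhost user=postgres dbname=app", NoTls)?;
    let purged = purge_expired(&mut client)?;
    println!("purged {purged} rows");
    Ok(())
}
```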
EwanToo•1d ago
https://aws.amazon.com/blogs/aws/amazon-aurora-dsql-is-now-g...
mjb•1d ago
Most of the limitations you can see in the documentation are things we haven't gotten to building yet, and it's super helpful to know what folks need so we can prioritize the backlog.
avereveard•1d ago
Note: it doesn't have to be pg_vector, pg_trgm, or PostGIS; even just the index component, even as a clean-room implementation, would make this way more useful.
tigy32•23h ago
edit from the launch: "With today’s launch, we’ve added support for AWS Backup, AWS PrivateLink, AWS CloudFormation, AWS CloudTrail, AWS KMS customer managed keys, and PostgreSQL views."