Seems to be another commercial cloud-hosted thing offering a Postgres API? https://dbdb.io/db/cedardb
Successor to Umbra, I believe.
I know somebody (quite talented) working there. It's likely to kick ass in terms of performance.
But it's hard to get people to pay for a DB these days.
CedarDB is the commercialization of Umbra, the database system from the TUM group led by professor Thomas Neumann. Umbra is the successor to the in-memory system HyPer, so this is the third generation of the system Neumann came up with.
Umbra/CedarDB isn't a completely new way of doing database stuff, but basically a combination of several things that rearchitect the query engine from the ground up for modern systems: a query compiler that generates native code, a buffer pool manager optimized for multi-core machines, push-based DAG execution that divides work into small batches ("morsels"), and in-memory Adaptive Radix Trees (never used in a database before, I think).
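For anyone unfamiliar with the morsel idea, here's a rough sketch of how morsel-driven work division behaves (purely illustrative Python, nothing like CedarDB's actual C++ engine, which uses real threads rather than a GIL-bound pool):

    # Workers repeatedly grab small fixed-size ranges ("morsels") of the input
    # from a shared counter, so work stays balanced across cores even when
    # some morsels are more expensive than others. Toy sketch only.
    import itertools
    import threading
    from concurrent.futures import ThreadPoolExecutor

    MORSEL_SIZE = 10_000

    def parallel_scan(table, predicate, num_workers=4):
        next_start = itertools.count(0, MORSEL_SIZE)   # shared work counter
        lock = threading.Lock()
        results = []

        def worker():
            local = []
            while True:
                with lock:
                    start = next(next_start)
                if start >= len(table):
                    break
                morsel = table[start:start + MORSEL_SIZE]
                local.extend(row for row in morsel if predicate(row))
            with lock:
                results.extend(local)

        with ThreadPoolExecutor(num_workers) as pool:
            for _ in range(num_workers):
                pool.submit(worker)
        return results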
It also has an advanced query planner that embraces the latest theoretical advances in query optimization, especially techniques to unnest complex subqueries and to optimize plans for queries with a ton of joins. The TUM group has published some great papers on this.
The part of Umbra I found interesting was the buffer pool, so that's where I focused most of my attention when reading through.
I believe Umbra is heavily BTree based, just like its cousin LeanStore.
One of its specific innovations is its buffer pool, which uses virtual memory overcommit and variable-size pages to squeeze better performance out of page management.
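As I understand it, the trick is to reserve a huge virtual address range up front and let the OS back pages with RAM only when they're touched, with pages coming in power-of-two size classes. A minimal sketch (assumes Linux-style overcommit; hypothetical names, not Umbra's actual code):

    import mmap

    VIRTUAL_SIZE = 1 << 35                 # reserve 32 GiB of virtual address space
    region = mmap.mmap(-1, VIRTUAL_SIZE)   # anonymous mapping; physical pages are
                                           # only allocated once they are written

    def buffer_at(offset, size_class):
        # size classes double from a 64 KiB base, all living in one address
        # space, so a page can move to a bigger class without shuffling
        # fixed-size frames around (toy illustration of the idea only)
        size = (64 * 1024) << size_class
        return memoryview(region)[offset:offset + size]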
The talk at https://www.youtube.com/watch?v=pS2_AJNIxzU is delightful.
My understanding is that the research projects LeanStore & Umbra -- and now, I assume, the product CedarDB, based on the people involved, etc. -- are systems based on two observations: a) existing on-disk systems aren't built with the characteristics of NVMe/SSD drives in mind, and b) RAM prices up to this year were not dropping at the rate they were in the early 2010s, so pure in-memory databases were no longer so competitive, which makes it important to look at how we can squeeze performance out of systems that perform paging. And of course in the last 6 months this has become extremely relevant with the massive spike in RAM prices.
That and the query compilation stuff, I guess, which I know less about.
I am currently writing an on-disk B+tree implementation inspired by LMDB's blazing fast memory-mapped CoW approach, so I'm quite keen to learn what makes TUM's stuff fast.
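For anyone curious, the copy-on-write part boils down to something like this (toy Python sketch, ignores node splits and persistence, nothing like LMDB's actual C code):

    import bisect

    def child_index(node, key):
        return bisect.bisect_right(node["keys"], key)

    def cow_insert(node, key, value):
        # Never modify an existing page: write a new leaf, then new copies of
        # every interior node on the path up to a new root. The old root still
        # describes a consistent snapshot, which is what makes readers cheap.
        if node["kind"] == "leaf":
            entries = dict(node["entries"])
            entries[key] = value
            return {"kind": "leaf", "entries": entries}
        idx = child_index(node, key)
        children = list(node["children"])
        children[idx] = cow_insert(children[idx], key, value)
        return {"kind": "interior", "keys": node["keys"], "children": children}

    # usage: old_root remains a valid read-only snapshot after the insert
    # old_root = {"kind": "leaf", "entries": {}}
    # new_root = cow_insert(old_root, "k", "v")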
https://github.com/leanstore/leanstore
Very different approach from LMDB's mmap though. Not limited to a single writer. Explicit buffer pool. The big difference from Umbra is fixed-size pages.
I always wondered how good these planners are in practice. The Neumann/Moerkotte papers are top notch (I've implemented several of them myself), but a planner is much more than its theoretical capabilities; you need so much tweaking and tuning to make anything work well, especially in the cost model. Does anyone have any Umbra experience and can say how well it works for things that are not DBT-3?
SQLite has no compression support, MySQL/MariaDB have page-level compression which doesn't work great and which I've never seen anyone enable in production, and Postgres has per-value compression, which is good for extremely long strings but useless for short ones.
There are just so many string columns where values and substrings get repeated so much, whether you're storing names, URLs, or just regular text. And I have databases I know would be reduced in size by at least half.
Is it just really really hard to maintain a shared dictionary when constantly adding and deleting values? Is there just no established reference algorithm for it?
It still seems like it would be worth it even if it were something you had to manually set. E.g. wait until your table has 100,000 values, build a dictionary from those, and the dictionary is set in stone and used for the next 10,000,000 rows too unless you rebuild it in the future (which would be an expensive operation).
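Roughly like this, as a sketch (hypothetical helper names, not any database's real API):

    from collections import Counter

    def build_dictionary(sample_values, max_entries=65536):
        # freeze the most frequent values from the first N rows into codes
        common = Counter(sample_values).most_common(max_entries)
        return {value: code for code, (value, _) in enumerate(common)}

    def encode(value, dictionary):
        # later rows that miss the frozen dictionary are stored verbatim
        code = dictionary.get(value)
        return ("ref", code) if code is not None else ("raw", value)

    # dictionary = build_dictionary(first_100k_values)   # built once
    # stored = [encode(v, dictionary) for v in next_10m_values]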
1. It complicates and slows down updates, which typically matter more in OLTP than OLAP.
2. It's generally bad for high-cardinality columns, so you need to track cardinality to make decisions, which further complicates things.
3. Lastly, additional operational complexity (like the table-maintenance scheme you described in the last paragraph) could reduce system reliability, and they might decide it's not worth the price or against their philosophy.
A global column dictionary adds more complexity than usual. Now you are touching more pages than just the index pages and the data page. The dictionary entries are sorted, so you need to worry about page expansion and contraction. They sidestep those problems by making the dictionary immutable, presumably building it up front by scanning all the data.
Not sure why using FSST is better than using a standard compression algorithm to compress the dictionary entries.
Storing the strings themselves as dictionary IDs is a good idea, as they can be processed quickly with SIMD.
I believe the reason is that FSST allows access to individual strings in the compressed corpus, which is required for fast random access. This is more important for OLTP than OLAP, I assume. More standard compression algorithms, such as zstd, might decompress very fast, but I don't think they allow that.
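A toy illustration of the difference (this is not real FSST, which learns a 256-entry symbol table per column and uses an escape byte, but it shows why individual values stay decodable):

    # Frequent substrings are replaced by one-byte codes, so any single value
    # can be decoded on its own; a block compressor like zstd would force you
    # to decompress a whole block to read one value.
    SYMBOLS = {0x01: b"http://", 0x02: b"www.", 0x03: b".com"}

    def decode_one(value: bytes) -> bytes:
        out = bytearray()
        for byte in value:
            out += SYMBOLS.get(byte, bytes([byte]))
        return bytes(out)

    # decode_one(b"\x01\x02example\x03") == b"http://www.example.com"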
Enums? Foreign key to a table with (id bigint generated always as identity, text text) ?
> I have databases I know would be reduced in size by at least half.
Most people don't employ these strategies because storage is cheap and compute time is expensive.
In a mini-PC that I assembled recently, with a medium-priced CPU (an Arrow Lake H Core Ultra 5), the cost of storage was 60% of the total cost of the computer, and it was only 60% because I decided to buy much less memory than I would have bought last summer (i.e. I bought 32 GB of DRAM + 3 TB of SSD, while I wanted double those amounts, but then the price would have become unacceptable).
Moreover, I bought the mini-PC and the memory in Europe; the exact same computer model from ASUS, with the same memory, costs about 35% more in the USA on Newegg, and a similar ratio holds for other products, so US customers are likely to be even more affected by this great increase in the cost of storage.
I’d use a SMALLINT (or in MySQL, a TINYINT UNSIGNED) for a lookup table. The bytes add up in referencing tables.
> Most people don't employ these strategies because storage is cheap and compute time is expensive.
Memory isn’t cheap. If half of your table is low-cardinality strings, you’re severely reducing the rows per page, causing more misses, slowing all queries.
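Back-of-envelope for the rows-per-page point (illustrative numbers only):

    page_size = 8192              # e.g. an 8 KiB heap page
    row_raw = 40 + 80             # 40 B of fixed columns + ~80 B of strings
    row_dict = 40 + 8             # same row with the strings as small codes
    print(page_size // row_raw)   # ~68 rows per page
    print(page_size // row_dict)  # ~170 rows per page, so far fewer misses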
Data was compressed in interior and exterior trees, where interior trees were the data structure inside blocks (similar to B-tree block contents), and exterior trees were the structure between blocks (similar to B-tree block pointers, but it didn't use a B-tree, it was something more exotic for performance).
Each node provided compression context for its children nodes, while also being compressed itself using its parent's context.
As you can imagine, the compression contexts had to be tiny because they were everywhere. But you can compress compression contexts very well :-)
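To give a feel for the parent-as-context idea, here's a loose stand-in using zlib preset dictionaries (the design described above is custom; this only shows the general shape):

    import zlib

    def compress_child(child_bytes: bytes, parent_bytes: bytes) -> bytes:
        # the parent's bytes act as the shared compression context
        c = zlib.compressobj(zdict=parent_bytes)
        return c.compress(child_bytes) + c.flush()

    def decompress_child(blob: bytes, parent_bytes: bytes) -> bytes:
        d = zlib.decompressobj(zdict=parent_bytes)
        return d.decompress(blob) + d.flush()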
Using compression in this way removed a few special cases that are typically required. For example there was no need for separate large BLOB storage, handling of long keys, or even fixed-size integers, because they fell out naturally from the compression schema instead.
The compression algorithms I explored had some interesting emergent properties that weren't explicitly coded, although they were expected by design. Some values encoded to close to zero bits, so a million rows could take less than a kilobyte if the pattern in the data allowed. Sequence processing behaved similarly to loops over fast run-length encodings, without that being actually coded and without any lengths in the stored representation. Fields and metadata could be added to records and ranges everywhere without taking space when not used, not even a single bit for a flag, which meant adding any number of rarely-used fields and metadata was effectively free, and it could be done near instantly across a whole database.
Snapshots were also nearly free, with deltas emerging naturally from the compression context relationships, allowing fast history and branches. Finally, it was able to bulk-delete large time ranges and branches of historical data near instantly.
The design had a lot of things going for it, including IOPS performance for random and sequential access, fast writes as well as reads, and excellent compression.
I'd like to revive the idea when I have time. I'm thinking of adding a neural network this time to see how much it might improve compression efficiency, or perhaps implementing a filesystem with it to see how it behaves under those conditions.
an insightful video on the topic: https://www.youtube.com/watch?v=dO4TPJkeaaU
But string interning is what they're doing, isn't it?
> Dictionary compression is a well-known and widely applied technique. The basic idea is that you store all the unique input values within a dictionary, and then you compress the input by substituting the input values with smaller fixed-size integer keys that act as offsets into the dictionary. Building a CedarDB dictionary on our input data and compressing the data would look like this:
That's string interning!!
Is interning just too old a concept now and it has to be rediscovered/reinvented and renamed?
It's not old yet; it'll be old once it's been renamed two or three times...
Interning:
1: "foo"
2: "bar"
my_string = "foo" // stored as ref->1
my_other_string = "foobarbaz" // not found & too long to get interned, stored as "foobarbaz"
Dictionary compression:
1: "foo"
2: "bar"
my_string = "foo" // stored as ref->1
my_other_string = "foobarbaz" // stored as ref->1,ref->2,"baz" (or ref->1,ref->2,ref->3 and "baz" is added to the dict) my_string = "foo" // stored as ref->1
Only if you explicitly intern the string. Interning can be expensive because:
- the runtime has to check whether the string already is interned.
- you're permanently creating the string, so if it isn't reused later, your memory usage needlessly goes up.
Both get more expensive the more strings you intern.
I think interning is a hack that very rarely, if ever, should be used. It likely won't work well for large strings, as these tend to be unique, and use cases where it helps for short strings are often better handled by using enums or symbols, or by using a custom set of strings. If you do the latter, you have more control over memory usage; you can do things such as removing the least recently used strings, or ditching the entire cache when you're done needing it (prime example: parsing a large XML file with many repeated nodes).
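Something like this for the custom-set variant (hypothetical helper, not any runtime's real intern table):

    from collections import OrderedDict

    class InternPool:
        def __init__(self, capacity=10_000):
            self.capacity = capacity
            self.pool = OrderedDict()

        def intern(self, s: str) -> str:
            if s in self.pool:
                self.pool.move_to_end(s)       # mark as recently used
                return self.pool[s]
            if len(self.pool) >= self.capacity:
                self.pool.popitem(last=False)  # evict least recently used
            self.pool[s] = s
            return s

    # pool = InternPool()            # e.g. while parsing a large XML file
    # tag = pool.intern(raw_tag)     # ditch the whole pool when you're done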
Nowadays lots of runtimes will GC interned strings to avoid this. That can mean churn in the interning table, but it avoids mis-estimations bloating the process too much.
Their own cherry-picked example benchmarks show that enabling FSST frequently brings no benefit to query runtimes, and sometimes runtimes are actually negatively affected. Despite this hard empirical evidence against FSST, they decided to enable it across the board? Not even "hey, we built this cool thing which may or may not improve your workloads, here's how to opt in", but pushing straight-up performance regressions onto their customers with no way of opting out?
Why would I ever trust my data to engineers who are more interested in their academically-interesting resume-driven-development than in actual customer outcomes?
The paper suggests that you could rework string matching to work on the compressed data but they haven't done it.