Yes, this is by design. SQL is a great general purpose query language for read-heavy variable-length string workloads, but TigerBeetle optimizes for write-heavy transaction processing workloads (essentially debit/credit with fixed-size integers) and specifically with power law contention, which kills SQL row locks.
I spoke about this specific design decision in depth at Systems Distributed this year:
What's it like compared to MVCC?
I'll watch your talk properly at some point and see if it makes sense to me after that. :)
https://www.postgresql.org/docs/current/mvcc-intro.html
Asking because needing a lock for changing a row isn't the only approach that can be taken.
Transactions atomically process one or more Transfers, keeping Account balances correct. Accounts are also records, with core fields such as debits_posted, credits_posted, etc.
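Roughly, the data model is two kinds of fixed-size records: Accounts and Transfers. A sketch of their shape (field names follow the TigerBeetle docs, but these dataclasses are only an illustration, not the real client API):

from dataclasses import dataclass

# Illustration only: the shape of TigerBeetle's fixed-size records.
# The real records are 128-byte binary structs, not Python objects,
# and amounts/balances are plain integers (no strings, no floats).

@dataclass
class Account:
    id: int
    ledger: int              # which ledger (e.g. currency) the account belongs to
    code: int                # user-defined account type
    debits_pending: int
    debits_posted: int
    credits_pending: int
    credits_posted: int

@dataclass
class Transfer:
    id: int
    debit_account_id: int    # account to debit
    credit_account_id: int   # account to credit
    amount: int              # integer units, e.g. cents
    ledger: int
    code: int

You submit Transfers in batches; TigerBeetle applies them atomically and keeps the debit/credit balances on both Accounts consistent.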
This gives a good idea of what TigerBeetle might be good for and what it might not be. For anything where latency/throughput and accuracy really, really matters, it could be worth the effort to make your problem fit.
Which databases? SQLite is the one I can think of, but it's designed for that use-case. Others start as single node but will replicate to other nodes, either as master-slave or master-master.
But yes. Postgres remains an amazing choice, especially with modern hardware, until you also have the money available to tackle said write throughput issue.
Or a central bank switch may only have 4 major banks.
Or a system like UPI may have 85% of transactions flowing through 2 hubs. Say 50% through PhonePe. And say 35% through Google Pay.
https://learn.microsoft.com/en-us/azure/azure-sql/database/h...
But honestly, if double-entry really becomes a thing, I foresee traditional DBMSs absorbing it just like they did with vector and object databases, capturing the long tail of the market.
a) what Tigerbeetle data looks like in practice? Assuming it doesn't look like a regular table
b) how you use it, if you can't write sql queries?
c) separately, curious what double-entry would look like for stocks, tickets etc. E.g. I'm a venue and I have 1000 tickets in inventory & deferred revenue.. each time I sell a ticket I turn that inventory to cash and the deferred into a performance liability? Or no entries at all until a ticket is sold? Something else?
Errr yes. Without much sweat really.
Just because something started ~30 years ago doesn't mean it hasn't updated with the times, and doesn't mean it was built on bad foundations.
Within a single machine, yeah, relational dbs still work like a charm.
If you were to design an OLGP DBMS like Postgres today, it would look radically different. Same is true for OLTP.
Databases have not been bottlenecked on storage bandwidth in a long time but most databases are designed as if this was still the case. Optimizing for memory bandwidth, the current bottleneck, leads to substantially different architectures than are commonly deployed.
But our customers need separation of concerns in their architecture. While they could put the cash in the general purpose filing cabinet, they actually want the separation of concerns between OLGP system of reference (string database, i.e. PG) in the control plane, and OLTP system of record (integer/counting database, i.e. TB) in the data plane.
But for anyone tempted by Oracle, do remember that the upfront, agreed licence costs are only a fraction of the true price:
You’ll need someone who actually knows Oracle - either already in place or willing to invest a serious amount of time learning it. Without that, you’re almost certainly better off choosing a simpler database that people are comfortable with.
There’s always a non-zero risk of an Oracle rep trying to “upsell” you. And by upsell, I mean threatening legal action unless you cough up for additional bits a new salesperson has suddenly decided you’re using. (A company I worked with sold Oracle licences and had a long, happy relationship with them - until one day they went after us over some development databases. Someone higher up at Oracle smoothed it over, but the whole experience was unnerving enough.)
Incidental and accidental complexity: I’ve worked with Windows from 3.1 through to Server 2008, with Linux from early Red Hat in 2001 through to the latest production distros, plus a fair share of weird and wonderful applications, everything from 1980s/90s radar acquisition running on 2010 operating systems through a wide range of in house, commercial and open source software and up to modern microservices — and none of it comes close to Oracle’s level of pain.
Edit: Installing Delphi 6 with 14 packages came close. It took me 3 days when I had to find every package scattered on disks in shelves and drawers and across ancient web pages plus posted as abandonware on SourceForge, but I guess I could learn to do that in a day if I had to do it twice a month. Oracle consistently took me 3 days - if I did everything correctly on the first try and didn't have to start from scratch.
I especially remember one particular feature that was really useful and really easy to enable in Enterprise Manager, but that would cost you at least $10000 at next license review (probably more if you had licensed it for more cores etc).
What I wrote about above wasn't us changing something or using a new feature but some sales guy at their side re-interpreting what our existing agreement meant. (I was not in the discussions, I just happened to work with the guys who dealt with it and it is a long time ago so I cannot be more specific.)
You can run a hell of a lot off of a small fleet of beefy servers fronted with a load balancer, all pointing to one DB cluster.
Without much sweat for general purpose workloads.
But transaction processing tends to have power law contention that kills SQL row locks (cf. Amdahl’s Law).
We put a contention calculator on our homepage to show the theoretical best case limits and they’re lower than one might think: https://tigerbeetle.com/#general-purpose-databases-have-an-o...
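Back-of-the-envelope version of that bound, with illustrative numbers rather than the calculator's exact model:

# Illustrative arithmetic only (made-up inputs, not the calculator's exact formula).
# A hot row can process at most one lock-holding transaction at a time.
contention = 0.10      # say 10% of transactions touch the same hot row
lock_hold_s = 0.010    # row lock held across ~10 ms of app/network round trips

hot_row_tps = 1 / lock_hold_s                  # ~100 TPS serialized on that row
overall_ceiling = hot_row_tps / contention     # ~1,000 TPS for the whole system
print(hot_row_tps, overall_ceiling)

The ceiling comes from the serialized hot path, not from how many cores or replicas you add.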
In fact large real world systems are not limited to 100-1000 TPS, or even 10 kTPS as the calculator tries to suggest. That's not because Amdahl's law is wrong; the numbers you're plugging in are just wildly off, so the conclusions are equally nonsensical.
There might be some specific workloads where you saw those numbers, and your DB might be a good fit for this particular niche, but you shouldn't misrepresent general purpose workloads to try to prop up your DB. Claiming that SQL databases are limited to "100-1000 TPS" is unserious; it is not conducive to your cause.
> Without much sweat for general purpose workloads.
But writing that traditional SQL databases cannot go above these "100-1000 TPS" numbers due to Amdahl's law is going to raise some eyebrows.
I don't think that's controversial. Amdahl's law applies to all software. It's not a peculiar feature of SQL databases. The comment is well-contextualized, in my view, but reasonable minds may disagree.
TFA contextualizes this better:
> This gets even worse when you consider the problem of hot rows, where many transactions often need to touch the same set of “house accounts”.
I think this is very clear; I don't know why you're saying that TigerBeetle is trying to make a generic claim about general workloads
The comment you're replying to explicitly states that this isn't true for general workloads
It still holds up basically the whole of the internet.
in most cases SQL is good enough for 90% of workloads.
Plus micropayments, of course :-)
But we couldn't get rid of it because it papered over something important.
config defined in YAML.
when I woke up today, I didn't really expect to be convinced that we live in a relatively good timeline, but...
The combustion engine's fundamental design is pretty damn good and that it had to be updated to handle unleaded gasoline isn't a knock (pun... intended?) against that design.
Sometimes I feel like we software engineers have the worst memory of any engineers.
Actually, load balancers are a great example. The number of times I’ve seen a team re-implementing HAProxy, but poorly, is entirely too high. The reasoning is always something along the lines of “we need this extremely specific feature.” OK, and have you even read the manual for HAProxy? Or if that feature truly doesn’t exist, did you consider implementing that separately?
Some months ago I was re-enlightened when Anders Hejlsberg (creator of C# and TypeScript) explained why they chose Go for reimplementing the TypeScript compiler, instead of using any of those languages or something like Rust.
The way they defined the problem, the tests they did, and how they justify their pick is how these kinds of decisions should be made if we want to call ourselves engineers.
The relational model has shown itself to be exactly the flexible and powerful model that Codd said it was, even in its relatively debased form in SQL.
In fact the potential of it as a universal and flexible data model that abstracts away storage still remains to be fully unlocked.
Now as for existing SQL databases, yes, many of them were built on foundational assumptions about the nature of memory and secondary storage that no longer hold true.
Many of us in this industry still have our heads stuck in the past world of spinny spinny magnetic disks on platters with heads and cylinders and grinding noises. Real world hardware has moved on. We have a couple orders of magnitude higher IOPS generally available than we did just 10-15 years ago.
So I'm excited about products like CedarDB and others (Umbra before it, etc) that are being built on a foundation from day one of managing with hybrid in-memory/disk.
Throwing out SQL is not really the recipe for "performance", and the lesson of separating the storage system from the data model has been key since the 1960s.
I am willing to grant that a specialized transaction/ledger system like TigerBeetle might have its place as an optimization strategy in a specific industry/domain, but we should not make the mistake of characterizing this as a general problem with the relational model and data storage broadly.
We've discovered hacks to work around the limitations of SQL, so you can maintain performance with sufficient hackiness, but you have to give up "purity" to get there. Worse is better applies, I suppose, but if we were starting over today the design would be very different.
Throwing out SQL isn't strictly required, but like Rust isn't strictly required when you already have C, sometimes it is nice to take a step back and look at how you can improve upon things. Unfortunately, NoSQL turning into a story about "document databases" instead of "A better SQL" killed any momentum on that front.
SQL was the first serious attempt to translate this into a working system, and it's full of warts, but the underlying model it is reaching towards retains its elegance.
But most importantly the principle of not tying the structure of data to its storage model is absolutely key. Not least because we can iteratively and flexibly improve the storage model over time, and not be stuck with decisions that tie us to one particular structure which is the problem that 1960s "network" and "hierarchical" databases (and modern day "key value" or "NoSQL" databases) cause.
- Slow code writing.
- DST
- No dependencies
- Distributed by default in prod
- Clock fault tolerance with optimistic locking
- Jepsen claimed that FDB has more rigorous testing than they could do.
- New programming language, Flow, for testing.
You probably could solve the same problems with FDB, but TigerBeetle I imagine is more optimized for its use case (I would hope...).
AFAIK - the only reason FDB isn't massively popular is because no one has bothered to write good layers on top. I do know of a few folks writing SQS, DynamoDB, and SQLite layers.
I started writing this comment:
> It seems interesting, but considering what it's for, why aren't the hyperscalers using it?
And while writing it I started searching for FoundationDB and found this:
> https://github.com/<<apple>>/foundationdb
Ah, all right :-p
Interesting that they didn't release it with an SQL client; is there no way to make it compatible? Even with extensions to SQL, I imagine it would be great for a lot of use cases.
Edit: ah, it's more of a key-value store.
It's still maintained by a sizable team at Apple. GH stats show that the activity is much lower now than it was 3 years ago, but there are about 10 people who contribute on a steady, regular basis, which is honestly better than 99% of open source projects out there.
The only reason is Apple. They liked the product that was released in 2013 so much they bought the whole company, and all other FoundationDB users were abandoned and were forced to drop it.
Who would trust a database that can be yanked out from under you at any moment? Though a lot of products have license terms like this, only a handful were ever discontinued so abruptly. It's under the Apache license now, but the trust is not coming back.
* We use Cloudflare Workers. The TigerBeetle client isn't supported there. It might work using Cloudflare Containers, but then the reason we use Cloudflare is for the Workers. --> https://github.com/tigerbeetle/tigerbeetle/issues/3177
* TigerBeetle doesn't support any auth. It means the containing server (e.g. a VPS) must restrict access by IP. Problem is, serverless doesn't have a fixed IP. --> https://github.com/tigerbeetle/tigerbeetle/issues/3073
* spawning 1000 workers all opening a connection to a db,
* solved by service/proxy in front of db,
* proxy knows how to reach db anyway, let's do private network and not care about auth
C'mon folks, the least you can do is put a guide for adding an auth proxy or auth layer on your site.
Particularly since you don't use HTTP (can't easily tell from the docs, I'm assuming), folks are going to be left wondering "well, how the hell do I add an auth proxy without HTTP?" and just put it on the open internet...
TigerBeetle is our open source contribution. We want to make a technical contribution to the world. And we have priorities on the list of things we want to support, and support properly, with high quality, in time.
At the same time, it's important I think that we encourage ourselves and each other here, you and I, to show respect to projects that care about craftsmanship and doing things properly, so that we don't become entitled and take open source projects and maintainers in general for granted.
Let's keep it positive!
Wait, is it open source?? Since when? I always thought it was proprietary
Our view is that this kind of infrastructure is simply too valuable, too critical, not to be open source.
Since "interesting" is the very last thing that anyone sane wants in their accounting/financial/critical-stuff database.
Never understood why we turn those off. An assert failing in prod is an assert that I desperately want to know about.
(That "never understood" was rhetorical).
What's trivial for a very small list may be a no-go for gigabyte-sized lists.
It’s only O(n), but if I check that assertion in my binary search function then it might as well have been linear search.
void process_list(List aList)
in(aList.isSorted, "List must be sorted!")
do
{
// do something
}
However, the O(n) cost of calling isSorted() impacts the overall cost of process_list() on every call, hence why the way contracts are executed is usually configurable in languages that have them. Several factors have to be taken into account regarding the performance impact even when they are cleverly written, so in many cases they can only be fully turned on in debug builds.
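The same idea in Python terms, as a minimal sketch: the O(n) precondition runs while asserts are enabled, and python -O strips assert statements entirely, so it costs nothing in a release run.

import bisect

def is_sorted(xs):
    # O(n) precondition; only evaluated when asserts are enabled.
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def find(xs, target):
    assert is_sorted(xs), "list must be sorted"   # removed under python -O
    i = bisect.bisect_left(xs, target)            # the O(log n) actual work
    return i if i < len(xs) and xs[i] == target else -1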
It seems to be used for stuff like this, though I'm yet to really look into it properly.
We captured these considerations in the internal docs here: https://github.com/tigerbeetle/tigerbeetle/blob/0.16.60/src/...
Yup.
But nowadays an extra comparison is no biggie, especially if the compiler can hint at which way it's likely to go.
Rather, I would say that the most interesting database is the fastest and safest.
Or as Jim Gray always put it: “correct and fast”.
cf. Durability and the Art of Consensus: https://www.youtube.com/watch?v=tRgvaqpQPwE
It's not novel. That's how hardware (ASIC) testing has been done forever. The novelty is applying it to software.
> TigerBeetle’s VOPR is the single largest DST cluster on the planet. It runs on 1,000 CPU cores
Only if you exclude hardware; otherwise basically every chip design company has a cluster bigger than this.
We typically hear from companies that TigerBeetle is the easier part of their stack. But to be fair, they may have a cleaner architecture.
Did you contact our solutions team for assistance with your business logic modelling? If not, please get in touch! solutions@tigerbeetle.com
I think the paragraph about multi-node is a bit misleading. Contrary to what cloud native folk will tell you, a single beefy DB, well-tuned and with a connection pooler, can serve a dizzying amount of QPS just fine. At a former employer, during a maintenance period, I once accidentally had all traffic pointed to our single MySQL 8 RDS instance, instead of sharing it between its read replicas. That was somewhere around 80-90K QPS, and it didn’t care at all. It wasn’t even a giant instance - r6i.12xlarge - we just had a decent schema, mostly sane queries, and good tuning on both ProxySQL and MySQL. At peak, that writer and two .8xlarge read replicas handled 120K QPS without blinking.
A DB hosted on a server with node-local NVMe (you know, what used to be normal) will likely hit CPU limits before you saturate its I/O capabilities.
For redundancy, all RDBMS designed for networked activity have some form of failover / hot standby capability.
My other mild criticism is in the discussion on TigerBeetle’s consensus: yes, it seems quite clever and has no other dependencies, but it’s also not trying to deal with large rows. When you can fit 8,190 transactions into a 1 MiB packet that takes a single trip to be delivered, you can probably manage what would be impossible for a traditional RDBMS.
None of this should be taken as belittling their accomplishment; I remain extremely impressed by their product.
edit: from the horse's mouth is better https://news.ycombinator.com/item?id=45437046
To be fair, it's been something like 10 years, IIRC.
The database in question was MySQL 8, running on plain old enterprise ssds (RAID 10)
The workload was processing transactions (financial payments)
The database schema was ... Let's call it questionable, with pretty much no normalization because "it's easier when we look at it for debugging", hence extremely long rows with countless updates to the same row throughout the processing, roughly 250-500 writes per row per request/transaction from what I recall. And the application was an unholy combination of a PHP+Java monolith, linked via RPC and transparent class sharing.
DB IO was _never_ the problem, no matter how high QPS got. I can't quote an exact number, but it was definitely a lot higher than what this claims (something like 40-50k on average "load" days like pre-Christmas etc).
Not sure how they're getting this down to ~250 qps; it sounds completely implausible.
Heck, I can do single row non-stop updates with >1k qpm on my desktop on a single nvme drive - and that's not even using raid.
Contention is what can be parallelized, right?
So with roughly 100-200 requests/s you end up with 1-0.5 contention if I understood that right.
That moves me even further towards agarlands points, though - if I plug that into the equation, I end up with >50k qps.
The numbers used create an insanely distorted idea of real-world performance.
Oracle has lock free reservations on numeric columns: https://oracle-base.com/articles/23/lock-free-reservations-2...
Isn't that the point? They're saying to separate out the transaction workload from other workloads. They're not saying they'll replace your OLGP db; you move transactionally important data into another db.
It's something similar that we see with another db: https://turbopuffer.com/
One question in case folks who work there see this:
This is the most technically impressive Zig project I've seen so far. Do you have a blog post detailing your perspective on Zig? i.e., what design decisions of ziglang helped you in a massive way, what were the warts, and in general any other thoughts too?
I believe that was more like the norm 25+ years ago, before Google and Facebook brought the "move fast and break things" mentality to the software industry.
I hope TigerBeetle gets more recognition. Worth reading its Jepsen report as well. https://news.ycombinator.com/item?id=44199592
To be clear, TB's pretty young, only 3, but Jepsen-tested and already migrating some of the largest brokerages, wealth managements and exchanges in various countries. I'm excited to see what can be done in 27.
”Slow is smooth, and smooth is fast”
That said it's good to make sure you're building for requirements that exist. Engineers have a habit of inventing requirements and causing delays unnecessarily. Building something and placing it in the hands of users so they can give you feedback which you can react to is far more valuable than building products in a bubble.
But I've always felt the way they treat normal OLTP (they call it OLGP) seems unfair. For example, comparisons using clearly sub-optimal interactive SQL transactions for financial workloads, like locking rows rather than using condition checks at commit time, because "that's how OLTP was intended to be used when it was designed ~50(?) years ago".
In their cited https://tigerbeetle.com/#performance the lowest the slider can go is 1% contention. Do you think Stripe has 1% contention directly on an OLTP DB? Definitely not.
You can build systems that _expect contention_, and elegantly handle it at REALLY high throughput. These systems protect the DB from contention, so you can continue to scale. From talking to folks working on these systems, I roughly know the kinds of transactional (financial) throughput of DBs like Stripe's and other systems - they have _many_ more zeros behind them than their performance comparison page proposes they could possibly have at even 0.01% contention.
Their marketing largely ignores this fact, and treats everyone like they just slam the DB with junior-engineer-designed interactive transactions. Most developers (I hope) are smarter than that if they're working at a payments company. There's even the title "payments engineer" for the kind of person who thinks about scaling contention and correctness all day.
TigerBeetle is great, but I find the pattern of being quite misleading about other OLTPs off putting.
Stripe runs on top of MongoDB, which is horrifying in its own right, but in any case comparing them to a shop running an RDBMS is apples to oranges.
I've worked on systems that ran entirely in memory, and continued running during kexec. You can't do syscalls. So yeah :)
Do you think it would be more fair to suggest that OLTP workloads have 0% contention? That debit/credit sets never intersect?
In practice, we're seeing that our customers are already some of the largest brokerages, wealth managements or exchanges, in their jurisdictions, even national projects, and all of them with decades of Postgres etc. experience, some even running installations with 200 Postgres/MySQL machines backing their sharded core ledger.
They're not junior engineers. For example, they know about stored procedures, but the problem of concurrency runs deeper, into the storage engine itself.
At least for our customers, for the seasoned payments engineers, OLTP contention is a killer. They're tired of scaling outside the DBMS, and the complexity of expensive reconciliation systems. Several of them are even leaving their Chief Architect positions and starting TB-related startups—I know of at least 4 personally, from fintech brands you will recognize, and probably use.
I hope the spirit of our talks, trying to get the big ideas across, to make people aware of contention and Amdahl's Law, is clear, and that you take it in good faith, to the extent that we show it ourselves.
I don't think they have 0% contention, and I agree that contention is the bane of a dev's existence, but I don't think it's as cut and dry as only looking at Amdahl's.
Specialized systems have done really well to handle contention: My go-to example is LMAX. I think I remember a talk where the devs said they were pulling 6M orders per second, and ofc the stock market has an aggressive power law. If you design around contention, you can still push silly high perf (as you've shown).
FWIW I think we both agree that Postgres is not the best DB for this either :P
Could you give an example of how the same transaction could be written poorly with "locking rows" and then more optimally with "using condition checks at commit time"?
vs. "lock this row so it can't change while I subtract 50 from 50,000,000,000,000..." you get the point
Also, aren't both of those going to have to lock the row to modify it? Even if you don't explicitly take out a lock on a row, the DBMS will do its own concurrency control to provide transaction isolation, which will require a write lock for the row.
Edit: Oh are you saying the "poorly written" case is an interactive transaction where the logic happens in the application code and involves multiple round trips while holding a lock? Instead of a transaction (or maybe even stored proc) where the logic happens in the DB, so less time holding the lock. Less contention. Ah ok. Yeah interactive transactions like that aren't great.
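A rough sketch of the contrast, assuming a PostgreSQL-style driver (psycopg2-ish %s placeholders) and a hypothetical accounts(id, balance) table:

def withdraw_interactive(conn, acct, amount):
    # Pattern 1: interactive transaction. The row lock taken by FOR UPDATE is
    # held across application logic and network round trips until commit.
    cur = conn.cursor()
    cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE", (acct,))
    (balance,) = cur.fetchone()
    if balance >= amount:                      # decision made in the application...
        cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                    (amount, acct))            # ...while the lock is still held
    conn.commit()                              # lock released only here

def withdraw_conditional(conn, acct, amount):
    # Pattern 2: fold the condition into the write. The row lock is held only
    # inside the database for the duration of a single statement.
    cur = conn.cursor()
    cur.execute("UPDATE accounts SET balance = balance - %s "
                "WHERE id = %s AND balance >= %s",
                (amount, acct, amount))
    conn.commit()
    return cur.rowcount == 1                   # False: insufficient funds, no change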
For anyone who wants to play the walking sim (excuse the pun!): https://sim.tigerbeetle.com
[1] - https://spacetimedb.com/ [2] - https://www.youtube.com/watch?v=kzDnA_EVhTU&
It says: “TigerBeetle uses a single core by design and uses a single leader node to process events. Adding more nodes can therefore increase reliability, but not throughput.”
How does this work with multi-region? Does this mean that regardless of where in the world your users might live, they need to make a request to the leader node for a write to occur?
Is distribution purely for redundancy?
Apparently there is a game about it https://tigerbeetle.com/blog/2023-07-11-we-put-a-distributed...
https://softwareengineeringdaily.com/2024/09/12/building-a-f...
The resistance to a seemingly obvious correction seems a little shady. Expecting people to dig around a site to find out your position of financial interest is not reasonable.
Such behavior is disappointing, especially from a top HN account holder.
It's too bad, because otherwise there are some interesting ideas, but TFA is lacking in good faith.
* Got paid a referral fee if I signed up
* Owned shares in the company
* Even was roommates with the founders/CEO but failed to mention it
I would trust that person less going forward.
If you want to be perceived as trustworthy, then you shouldn't say things that you have a hidden interest in saying.
Our investors (Spark, Amplify, Coil) are different to most, in that our partners are all highly technical.
Engineers, coders, CTOs who read the same research papers and attend the same technical conferences (CIDR, VLDB, SIGMOD, HYTRADBOI etc.).
In fact, that's how we met.
It’s on said investment company’s website under the tag “Portfolio Spotlight”.
Use that precious mind space for understanding basic logic instead.
- single entry point, near-zero deps
- CI runs locally and is tested: one command to run tests, coverage, lint, etc.
- property/snapshot/swarm testing, I love writing simulations now and letting the assertions crash
- fast/slow split + everything is deterministic with a seed (rough sketch below)
- explicit upper bounds + pool of resources. I still dynamically allocate but it makes code simpler to reason about
Thanks to the TB team for the videos and docs they've been putting out lately.
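On the "deterministic with a seed" point, here's a minimal sketch of the pattern (purely illustrative, nothing like TB's actual VOPR): every random choice flows from one seed, and the seed is printed so any assertion failure can be replayed exactly.

import random
import sys

def simulate(seed: int, steps: int = 10_000) -> None:
    rng = random.Random(seed)       # all randomness comes from one seed
    model: list[int] = []           # trivially-correct reference model
    system: list[int] = []          # stand-in for the thing under test
    for step in range(steps):
        x = rng.randrange(1_000)
        if rng.random() < 0.5:
            model.append(x)
            system.append(x)
        elif model:
            model.pop()
            system.pop()
        # Let the assertion crash; the printed seed makes the failure replayable.
        assert model == system, f"divergence at step {step}, seed {seed}"

if __name__ == "__main__":
    seed = int(sys.argv[1]) if len(sys.argv) > 1 else random.randrange(2**32)
    print("seed:", seed)
    simulate(seed)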
It is just too risky, as there is a hard limit to scalability, and while it might look like it is high enough for the foreseeable future, what am I supposed to do once I reach this limit? A financial database has to be planned with at least 15-20 years of growth in mind.
For all of Rust’s talk about “safety”, assuming the above assertion is true, then perhaps Zig is a better choice to augment or replace C in embedded, safety-critical software engineering, where static memory allocation is a core principle.
Some points:
In general, in very tangible terms, what are the real benefits of choosing TigerBeetle over another distributed database? What are the target use cases? Most of the article pontificates about academic details in a way that puts the cart before the horse. (i.e., all these details don't matter when a traditional database with traditional backups is "good enough" and comes with no technology risk.)
Who's using TigerBeetle? For what kind of applications?
https://news.ycombinator.com/item?id=45436926 states, "TigerBeetle doesn't support any auth". Poor security is an unprofessional oversight. For all the ambitious things TFA describes, it's shocking that they overlooked "secure by design."
The above-linked post is the only top-level post in this thread that claims real-world experience with TigerBeetle.
But, here are some quotes from the article that don't pass scrutiny:
> Traditional databases assume that if disks fail, they do so predictably with a nice error message. For example, even SQLite’s docs are clear that:
SQLite is a file on disk. It's not something that would work in the same space as TigerBeetle. Furthermore, this statement needs proof: Do you mean to say that Oracle, MSSQL, MariaDB, Postgres, etc., can't detect when a file is corrupted?
> All in all, you’re looking at 10-20 SQL queries back and forth, while holding row locks across the network roundtrip time, for each transaction.
These can often be solved with stored procedures. In this case, the problem lies somewhere between programmers implementing suboptimal solutions, and databases being too hard to work with.
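Roughly, a sketch of what that looks like (PostgreSQL-flavored, psycopg2-style calls, hypothetical accounts(id, balance) table): the debit/credit logic lives server-side, so no row lock is held across client round trips.

def install_transfer_procedure(conn):
    # One-time setup: all the conditional debit/credit logic runs inside the DB.
    with conn.cursor() as cur:
        cur.execute("""
            CREATE OR REPLACE FUNCTION transfer(from_id bigint, to_id bigint, amt bigint)
            RETURNS boolean LANGUAGE plpgsql AS $$
            BEGIN
                UPDATE accounts SET balance = balance - amt
                 WHERE id = from_id AND balance >= amt;
                IF NOT FOUND THEN
                    RETURN false;              -- insufficient funds, nothing changed
                END IF;
                UPDATE accounts SET balance = balance + amt WHERE id = to_id;
                RETURN true;
            END;
            $$;
        """)
    conn.commit()

def transfer(conn, from_id, to_id, amount):
    # One round trip per transfer instead of 10-20 interactive statements.
    with conn.cursor() as cur:
        cur.execute("SELECT transfer(%s, %s, %s)", (from_id, to_id, amount))
        ok = cur.fetchone()[0]
    conn.commit()
    return ok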
> Instead of investing in the technology of 30 years ago – when the most popular relational databases today were built
Don't assume that because technology is old, that it's bad. New databases come with higher, not lower, risk.
> They say databases take a decade to build.
Prove it. No really, prove it.
> Biodigital jazz.
It seems like someone is getting so obsessed with the code that they're forgetting the purpose and use case.
---
IMO:
Figure out the niche where TigerBeetle excels and traditional databases fall flat. Is it because your query model is easier to program with? Is it because, in a particular situation, TigerBeetle is more performant or cheaper? Is there a niche where existing databases corrupt data more than is tolerable?
Once TigerBeetle excels in a niche, expand outward. (This is the basic plan in "Crossing the Chasm." https://en.wikipedia.org/wiki/Crossing_the_Chasm)
Otherwise, this smells like an academic exercise to experiment with different programming techniques, but with no tangible deliverable that has a demonstrated market.
Transaction processing at scale. The world doesn't need another string database. TigerBeetle is an integer database designed for (double-entry) counting, even under extreme write contention.
See also our 1000x talk going into what only TB can do (and why stored procedures still suffer from concurrency control in the internal storage engine): https://www.youtube.com/watch?v=yKgfk8lTQuE
> Do you mean to say that Oracle, MSSQL, MariaDB, Postgres, etc., can't detect when a file is corrupted?
Yes. cf. https://www.usenix.org/conference/atc20/presentation/rebello and https://www.usenix.org/conference/fast18/presentation/alagap...
Also read the Jepsen report on TigerBeetle (to see the new storage fault injectors that Kyle Kingsbury added): https://jepsen.io/analyses/tigerbeetle-0.16.11
> with no tangible deliverable that has a demonstrated market.
TigerBeetle is already being integrated into national payment systems. We also have a few enterprise customers. Some pretty large brokerages, wealth managements, exchanges and energy utilities. Granted, the company is only 3 years old, so we still have some market to demonstrate.
Is there an extension for SQLite, or a simple command-line tool, that does that?
I know about the ledger CLI tool, but it's a bit much since it is a full-fledged double-entry accounting system.
(TBH, I'm pretty sure the answer is no - if you want super-accurate time, I think you buy your own very expensive NTP server [1].)
Two gaps I keep hearing in this thread that could unblock adoption:
1. Reference architectures: serverless (Workers/Lambda) patterns, auth/VPN/stunnel/WireGuard blueprints, and examples for “OLGP control plane + TB data plane.”
2. Scaling roadmap: the single-core, single-leader design is philosophically clean—what’s the long-term story when a shard/ledger outgrows a core or a region’s latency budget?
Also +1 to publishing contention-heavy, real-world case studies (e.g., “fee siphon to a hot account at 80–90% contention”) with end-to-end SLOs and failure drills. That would defuse the “100–1,000 TPS” debate and make the tradeoffs legible next to Postgres, FDB, and Redis.