This is the correct explanation. The purpose is to detect partial writes, not to detect arbitrary data corruption. If detecting corruption was the goal, then checksumming the WAL without also checksumming the database itself would be fairly pointless.
In fact, it's not accurate to say "SQLite does not do checksums by default, but it has checksums in WAL mode." SQLite always uses checksums for its journal, regardless of whether that's a rollback journal or a write-ahead log. [1]
For the purpose of tolerating and recovering from crashes/power failures, writes to the database file itself are effectively idempotent. It doesn't matter if only a subset of the DB writes are persisted before a crash, and you don't need to know which ones succeeded, because you can just roll all of them forward or backward (depending on the mode). But for the journal itself, distinguishing partial journal entries from complete ones matters.
No matter what order the disk physically writes out pages, the instant when the checksum matches the data is the instant at which the transaction can be unambiguously said to commit.
Imagine the power goes out while sqlite is in the middle of writing a transaction to the WAL (before the write has been confirmed to the application). What do you want to happen when power comes back, and you reload the database?
If the transaction was fully written, then you'd probably like to keep it. But if it was not complete, you want to roll it back.
How does sqlite know if the transaction was complete? It needs to see two things:
1. The transaction ends with a commit frame, indicating the application did in fact perform a `COMMIT TRANSACTION`.
2. All the checksums are correct, indicating the data was fully synced to disk when it was committed.
If the checksums are wrong, the assumption is that the transaction wasn't fully written out. Therefore, it should be rolled back. That's exactly what sqlite does.
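To make that recovery rule concrete, here is a rough sketch in Python. The frame fields are invented for illustration; the real WAL frame header also carries salt values and a two-word checksum, and commit frames are marked by a nonzero database-size field, but the decision logic has the same shape:

```python
import zlib

def frames_to_replay(frames):
    """frames: list of dicts with hypothetical fields 'page_data' (bytes),
    'checksum' (running CRC stored when the frame was written), and
    'is_commit' (True for the last frame of a transaction)."""
    running = 0
    last_good_commit = -1
    for i, frame in enumerate(frames):
        running = zlib.crc32(frame["page_data"], running)
        if frame["checksum"] != running:
            break                          # torn or partial write: stop scanning
        if frame["is_commit"]:
            last_good_commit = i           # everything up to here is replayable
    return frames[: last_good_commit + 1]  # later frames are simply dropped
```

Frames past the last point where the chain is intact and a commit frame was seen are ignored, which is exactly how the unfinished transaction gets "rolled back".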
This is not "data loss", because the transaction was not ever fully committed. The power failure happened before the commit was confirmed to the application, so there's no way anyone should have expected that the transaction is durable.
The checksum is NOT intended to detect when the data was corrupted by some other means, like damage to the disk or a buggy app overwriting bytes. Myriad other mechanisms should be protecting against those already, and sqlite is assuming those other mechanisms are working, because if not, there's very little sqlite can do about it.
https://imaginovation.net/case-study/cree/
At least what I could turn up with a quick web search.
> Local devices also have a characteristic which is critical for enabling database management software to be designed to ensure ACID behavior: When all process writes to the device have completed, (when POSIX fsync() or Windows FlushFileBuffers() calls return), the filesystem then either has stored the "written" data or will do so before storing any subsequently written data.
Say more? I've heard people say that ZFS is somewhat slower than, say, ext4, but I've personally had zero issues running postgres on zfs, nor have I heard any well-reasoned reasons not to.
> What filesystems in the wild typically provide for this is weaker than what is advisable for a database, so databases should bring their own implementation.
Sorry, what? Just yesterday matrix.org had a post about how they (using ext4 + postgres) had disk corruption which led to postgres returning garbage data: https://matrix.org/blog/2025/07/postgres-corruption-postmort...
The corruption was likely present for months or years, and postgres didn't notice.
ZFS, on the other hand, would have noticed during a weekly scrub and complained loudly, letting you know a disk had an error, letting you attempt to repair it if you used RAID, etc.
Stuff like that post is exactly why I run postgres on ZFS.
If you've got specifics about what you mean by "databases should bring their own implementation", I'd be happy to hear it, but I'm having trouble thinking of any sorta technically sound reason for "databases actually prefer it if filesystems can silently corrupt data lol" being true.
Ext4 uses 16-/32-bit CRCs, which is very weak for storage integrity in 2025. Many popular filesystems for databases are similarly weak. Even if they have a strong option, the strong option is not enabled by default. In real-world Linux environments, the assumption that the filesystem has weak checksums is usually true.
Postgres has (IIRC) 32-bit CRCs but they are not enabled by default. That is also much weaker than you would expect from a modern database. Open source databases generally do not have a good track record of providing robust corruption detection, and neither do the filesystems they often run on. It is a systemic problem.
ZFS doesn't support features that high-performance database kernels use and is slow, particularly on high-performance storage. Postgres does not use any of those features, so it matters less if that is your database. XFS has traditionally been the preferred filesystem for databases on Linux and Ext4 will work. Increasingly, databases don't use external filesystems at all.
In fairness, 32-bit CRCs were the standard 20+ years ago. That is why all the old software uses them and CPUs have hardware support for computing them. It is a legacy thing that just isn't a great choice in 2025.
EDIT: It seems they're opt-in for PostgreSQL, too: https://www.postgresql.org/docs/current/checksums.html
The bad news is, most databases don't do checksums by default.
Redundantly performing the same performance-intensive tasks on multiple layers makes latency less predictable and just generally wastes resources.
This is a major reason databases implement their own checksums. Unfortunately, many open source databases have weak or non-existent checksums too. It is sort of an indefensible oversight.
At 32 bits you're well into the realm of tail risks which include things like massive solar flares or the data center itself being flattened in an explosion or natural disaster.
Edit: I just checked a local drive for concrete numbers. It's part of a btrfs array. Relevant statistics since it was added are 11k power on hours, 24 TiB written, 108 TiB read, and 32 corruption events at the fs level (all attributable to the same power failure, no corruption before or since). I can't be bothered to compute the exact number but at absolute minimum it will be multiple decades of operation before I would expect even a single corruption event to go unnoticed. I'm fairly certain that my house is far more likely to burn down in that time frame.
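A rough back-of-envelope for those numbers, assuming each corruption event independently has a 2^-32 chance of slipping past a 32-bit checksum:

```python
# 32 corruption events in ~11k power-on hours, each with an assumed ~2**-32
# chance of fooling a 32-bit checksum.
events_per_hour = 32 / 11_000
p_undetected = 2.0 ** -32
hours_until_one_expected_miss = 1 / (events_per_hour * p_undetected)
print(hours_until_one_expected_miss / (24 * 365), "years")   # roughly 1.7e8 years
```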
Because filesystems, too, mainly use them to detect inconsistencies introduced by partial or reordered writes, not random bit flips. That's also why most file systems only have them on metadata, not data.
One possible instance of that is a database providing its own data checksumming, but another perfectly valid one is running one that does not on a lower layer with a sufficiently low data corruption rate.
On the other hand, I've heard people recommend running Postgres on ZFS so you can enable on-the-fly compression. This increases CPU utilization on the postgres server by quite a bit and read latency of uncached data a bit, but it decreases the necessary write IOPS a lot. And as long as the compression happens largely in parallel (which it should, if your database has many parallel queries), it's much easier to throw more compute threads at it than to increase the write speed of a drive.
And after a certain size, you start to need atomic filesystem snapshots to be able to get a backup of a very large and busy database without everything exploding. Even the more efficient backup strategies from replicas already struggle on some systems, and we are at our wits' end trying to create proper backups and archives without reducing the backup frequency to weeks. ZFS has mature mechanisms, and zfs send can move this data around with limited impact on the production dataflow.
For Postgres specifically you may also want to look at using hot_standby_feedback, as described in this recent HN article: https://news.ycombinator.com/item?id=44633933
We also have decently sized clusters with very active data on them, and rather spicy recovery targets. On some of them, a full backup from the sync standby takes 4 hours, we need to pull an incremental backup at most 2 hours afterwards, but the long-term archiving process needs 2-3 hours to move the full backup to the archive. This is the point at which filesystem snapshots (admittedly of the pgbackrest repo) become necessary to meet SLOs as well as keep the system functioning.
We do all of the high-complexity, high-throughput things recommended by postgres, and it's barely enough on the big systems. These things are getting to the point of needing a lot more storage and network bandwidth.
Btrfs is a better choice for sqlite, haven’t seen that issue there.
Which you can do on a per dataset ('directory') basis very easily:
zfs set sync=disabled mydata/mydb001
* https://openzfs.github.io/openzfs-docs/man/master/7/zfsprops...
Meanwhile all the rest of your pools / datasets can keep the default POSIX behaviour.
If your job is to make sure your file system and your database (SQLite, Pg, My/MariaDB, etc.) are tuned together, and you don't tune them, then you should be called into a meeting. Or at least the no-fault RCA should bring up remediation methods to make sure it's part of the SOP so that it won't happen again.
The alternative the GP suggests is using Btrfs, which I find even more irresponsible than your non-tuning situation. (Heck, if someone on my sysadmin team suggested we start using Btrfs for anything I would think they were going senile.)
You cannot have SQLite keep your data and run well on ZFS unless you make a zvol and format it as btrfs or ext4 so they solve the problem for you.
The latest comment seems to be a nice summary of the root cause, with earlier in the thread pointing to ftruncate instead of fsync being a trigger:
>amotin
>I see. So ZFS tries to drop some data from pagecache, but there seems to be some dirty pages, which are held by ZFS till them either written into ZIL, or to disk at the end of TXG. And if those dirty page writes were asynchronous, it seems there is nothing that would nudge ZFS to actually do something about it earlier than zfs_txg_timeout. Somewhat similar problem was recently spotted on FreeBSD after #17445, which is why newer version of the code in #17533 does not keep references on asynchronously written pages.
Might be worth testing zfs_txg_timeout=1 or 0
What you're describing sounds like a bug specific to whichever OS you're using that has a port of ZFS.
I've encountered this bug both on illumos, specifically OpenIndiana, and Linux (Arch Linux).
> [...] The checkpoint does not normally truncate the WAL file (unless the journal_size_limit pragma is set). Instead, it merely causes SQLite to start overwriting the WAL file from the beginning. This is done because it is normally faster to overwrite an existing file than to append.
Without the checksum, a new WAL entry might cleanly overwrite an existing longer one in a way that still looks valid (e.g. "A|B" -> "C|B" instead of "AB" -> "C|data corruption"), at least without doing an (expensive) scheme of overwriting B with invalid data, fsyncing, and then overwriting A with C and fsyncing again.
In other words, the checksum allows an optimized write path with fewer expensive fsync/truncate operations; it's not a sudden expression of mistrust of lower layers that doesn't exist in the non-WAL path.
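A toy illustration of why the leftover frame can't masquerade as valid after the WAL is rewritten from the start. This folds the per-reset salt into the CRC seed for brevity; the real format instead stores salt copies in each frame header and chains the checksum from the WAL header, but the effect is the same:

```python
import zlib

def write_frames(datas, salt):
    """Build a WAL-like frame list where each frame stores the running
    checksum, seeded by a salt that changes on every WAL reset."""
    frames, running = [], salt
    for data in datas:
        running = zlib.crc32(data, running)
        frames.append({"data": data, "checksum": running})
    return frames

old = write_frames([b"frame A", b"frame B"], salt=1)   # WAL before the checkpoint
new = write_frames([b"frame C"], salt=2)               # WAL rewritten from the start
on_disk = new + old[len(new):]                         # stale frame B is still there

# A reader walking the chain with the new salt accepts C and rejects B,
# even though B looks like a perfectly well-formed frame on its own.
running = 2
valid = []
for frame in on_disk:
    running = zlib.crc32(frame["data"], running)
    if frame["checksum"] != running:
        break
    valid.append(frame["data"])
print(valid)   # [b'frame C'] -- the leftover frame fails the chain and is ignored
```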
If you stop at the first failure, the database is restored to the last good state. That's the best outcome that can be achieved under the circumstances. Some data could be lost, but there wasn't anything sensible you could do with it anyway.
I would like it to raise an error and then provide an option to continue or stop. Since continuing is the default, we need a way to opt in to stopping on checksum failure.
Not all checksum errors are impossible to recover from. Also, as the post mentions, it could be that only some unimportant pages are corrupt.
My main complaint is that it doesn't give developers an option.
If what we're really interested in is the log part of a write ahead log - where we could safely recover data after a corruption, then a better tool might be just a log file, instead of SQLite.
Attempt to recover! Again, not all checksum errors are impossible to recover from. I hold the view that even if there is a 1% chance of recovery, we should attempt it. This may be done by SQLite, an external tool, or even manually. Since WAL corruption issues are silent, we cannot do that now.
There is a small demo in the post. In it, I corrupt an old frame that is not needed by the database at all. Now, one approach would be to continue the recovery and then present both states: one where the WAL is dropped, and another showing whatever we have recovered. If I had such an option, I would almost always pick the latter.
You do have backups, right?
Instead of that, I'd prefer for it to fail fast
It seems like you're focusing on a very specific failure mode here.
Also, what if the data corruption error happens during the write to the actual database file (i.e. at WAL checkpointing time)? That's still 50% of all your writes, and there's no checksum there!
I do see your point of wanting an option to refuse to delete the WAL so a developer can investigate it and manually recover... But the typical user probably wants the database to come back up in a consistent, valid state if power is lost. They do not want the database to refuse to operate because it found uncommitted transactions in a scratchpad file...
As a SQL-first developer, I don't pick apart write-ahead logs trying to save a few bytes from the great hard drive in the sky, I just want the database to give me the current state of my data and never be in an invalid state.
Yes, that is a very valid choice. Hence, I want databases to give me an option, so that you can choose to ignore the checksum errors and I can choose to stop the app and try to recover.
If you attempt to do the former in a system that by design uses checksums only for the latter, you'll actually introduce corrupted data from some non-corrupted WAL files.
1. How does the database know that?
2. In your example Alice gets the money from nowhere. What if another user had sent the money to Alice and that frame got corrupted? Then you've just created 10,000,000 from nowhere.
At the very least, rolling back to a good point gives you an exact moment in time from which transactions can be applied. Your example is very contrived; in a database where several transactions can be happening at once, doing a partial recovery will destroy the consistency of the database.
I've written more about this here: https://news.ycombinator.com/item?id=44673991
Yes! But I am happy to accept that overhead with the corruption detection.
As I see it, either you have a lower layer you can trust, and then this would just be extra overhead, or you don't, in which case you'll also need error correction (not just detection!) for the database file itself.
The checksums are not going to fail unless there was disk corruption or a partial write.
In the former, thank your lucky stars it was in the WAL file and you just lose some data but have a functioning database still.
In the latter, you didn't fsync, so it couldn't have been that important. If you care about not losing data, you need to fsync on every transaction commit. If you don't care enough to do that, why do you care about checksums, it's missing the point.
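For example, in Python's sqlite3 (these are real pragmas; `app.db` is just a placeholder path), asking for full per-commit durability looks like this:

```python
import sqlite3

conn = sqlite3.connect("app.db", isolation_level=None)  # autocommit; explicit BEGIN/COMMIT
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA synchronous=FULL")    # fsync the WAL on every transaction commit

conn.execute("BEGIN")
conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT OR REPLACE INTO kv VALUES ('greeting', 'hello')")
conn.execute("COMMIT")                     # durable once this call returns
conn.close()
```

With `synchronous=NORMAL` in WAL mode, SQLite only syncs at checkpoints, so a power cut can drop the last few committed transactions without any corruption being involved at all.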
I wonder what that code would look like. My sense is that it’ll look exactly like the code that would run as if the transactions never occurred to begin with, which is why the SQLite design makes sense.
For example, I have a database of todos that sync locally from the cloud. The WAL gets corrupted. The WAL gets truncated the next time the DB is opened. The app logic then checks the last update timestamp in the DB and syncs with the cloud.
I don’t see what the app would do differently if it were notified about the WAL corruption.
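The resync step in that example might look something like this sketch, where the todos schema and fetch_updates_since are hypothetical:

```python
import sqlite3

def resync(conn: sqlite3.Connection, fetch_updates_since):
    """fetch_updates_since is a hypothetical cloud API yielding
    (id, title, updated_at) tuples newer than the given timestamp."""
    (last_seen,) = conn.execute(
        "SELECT COALESCE(MAX(updated_at), 0) FROM todos").fetchone()
    with conn:  # one transaction for the whole batch; commits on success
        for todo_id, title, updated_at in fetch_updates_since(last_seen):
            conn.execute(
                "INSERT OR REPLACE INTO todos (id, title, updated_at) VALUES (?, ?, ?)",
                (todo_id, title, updated_at))
```

The same loop runs whether the WAL was truncated or not, which is the point: the app only cares about the last consistent state it can see.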
> I want to correct errors that the DB wizard who implemented SQLite chose not to
When there's a design decision in such a high profile project that you disagree with, it's either
1. You don't understand why it was done like this.
2. You can (and probably will) submit a change that would solve it.
If you find yourself in the situation of understanding, yet not doing anything about it, you're the Schrödinger's developer: you're right and wrong until you collapse the wave function by putting money on it.
It's very rarely an easy to fix mistake.
SQLite is not open to contribution - https://www.sqlite.org/copyright.html
> 1. You don't understand why it was done like this.
sure, I would like to understand it. That's why the post!
> In order to keep SQLite completely free and unencumbered by copyright, the project does not accept patches. If you would like to suggest a change and you include a patch as a proof-of-concept, that would be great. However, please do not be offended if we rewrite your patch from scratch.
Propose it.
Merkle hashes would probably be better.
google/trillian adds Merkle hashes to table rows.
sqlite-parquet-vtable would work around broken WAL checksums.
sqlite-wasm-http is almost a replication system
Re: "Migration of the [sqlite] build system to autosetup" https://news.ycombinator.com/item?id=41921992 :
> There are many extensions of SQLite; rqlite (Raft in Go,), cr-sqlite (CRDT in C), postlite (Postgres wire protocol for SQLite), electricsql (Postgres), sqledge (Postgres), and also WASM: sqlite-wasm, sqlite-wasm-http, dqlite (Raft in Rust),
> awesome-sqlite
From "Adding concurrent read/write to DuckDB with Arrow Flight" https://news.ycombinator.com/item?id=42871219 :
> cosmos/iavl is a Merkleized AVL tree. https://github.com/cosmos/iavl
/? Merkle hashes for sqlite: https://www.google.com/search?q=Merkle+hashes+for+SQlite
A git commit hash is basically a Merkle tree root, as it depends upon the previous hashes before it.
Merkle tree: https://en.wikipedia.org/wiki/Merkle_tree
(How) Should merkle hashes be added to sqlite for consistency? How would merkle hashes in sqlite differ from WAL checksums?
- there is an official checksum VFS shim, but I have never used it and don't know how good it is. The difference between it and the WAL checksum is that it works on a per-page level, and you seem to need to run the checksum checks manually and then decide yourself what to do
- checksums (as used by the SQLite WAL) aren't meant for backup, redundancy or data recovery (there are error correction codes focused on recovering a limited number of flipped bits, but they have far more overhead than the kind of checksum used here)
- I also believe SQLite should indicate such checksum errors (e.g. so that you might engage out-of-band data recovery, i.e. fetch a backup from somewhere), but I'm not fully sure how you would integrate it in a backward-compatible way? Like return it as an error which otherwise acts like a SQLITE_BUSY??
Data in the WAL should be considered to be of "reduced durability".
- Said person was apparently employed due to his good understanding of databases and distributed systems concepts (there's a HN thread about how he found an issue in the paper describing an algorithm); yet makes fundamental mistakes in understanding what the WAL does and how it's possible not to "partly" apply a WAL.
- Said person expects a SQL database to expose WAL level errors to the user breaking transactional semantics (if you want that level of control, consider simpler file-based key-value stores that expose such semantics?)
- Said person maligns SQLite as being impossible to contribute to; whereas the actual project only mentions that they may rewrite the proposed patch to avoid copyright implications.
- Said person again maligns SQLite as "limping along" in the face of disk errors (while making the opposite claim a few paragraphs ago); while ignoring that the checksum VFS exists when on-disk data corruption is a concern.
> yet makes fundamental mistakes in understanding what the WAL does and how it's possible not to "partly" apply a WAL.
Please provide a citation for where I said that. You can't always partly apply a WAL, but there are very valid cases where you can do that to recover. Recovery doesn't have to be automatic. It can be done by SQLite, by some recovery tool, or with manual intervention.
> - Said person maligns SQLite as being impossible to contribute; whereas the actual project only mentions that they may rewrite the proposed patch to avoid copyright implications.
Please provide a citation for where I said that. Someone asked me to send a patch to SQLite; I linked them to SQLite's page.
Without mentioning the exact set of cases where recovery is possible and it isn't, going "PSA: SQLite is unreliable!!1one" is highly irresponsible. I think there's quite a bit of criticism going around though, you could add them to your blog article :)
Please also consider that, SQLite being a transactional database, it is usually not possible to expose a WAL-level error to the user. The correct way to address it is probably to come up with a list of cases where it is possible, and then send in a patch, or at least a proposal, for how to address it.
> Please provide citation on where I said that [SQLite is impossible to contribute].
On SQLite contribution, I did not say it's "impossible." I said it's not open to contribution. This is the exact phrase from the linked page.
You must be new to the site.
1) Insert new subscription for "foobar @ 123 Fake St."
2) Insert new subscription for "�#�#�xD�{.��t��3Axu:!"
3) Insert new subscription for "barbaz @ 742 Evergreen Terrace"
A human could probably grab two subscriptions out of that data loss incident. I think that's what they're saying. If you're very lucky and want to do a lot of manual work, you could maybe restore some of the data. Obviously both of the "obviously correct" records could just be random bitflips that happen to look right to humans. There's no way of knowing.
The database should absolutely not be performing guesswork about the meaning of its contents during recovery. If you want mongodb, go use mongodb.
I think SQLite assumes that a failing checksum occurs due to a crash during a write which never finished. A corrupt WAL frame before a valid frame can only occur if the underlying storage is corrupt, but it makes no sense for SQLite to start handling that during replay, as it has no way to recover. You could maybe argue that it should emit a warning.
If the OP consulted with Turso on this blogpost, then Turso probably believes the reported behavior is indeed a failure or a flaw, which they think a local db should be responsible for.
The confusion is that Limbo, their solution to this presumed problem, is not mentioned in the article which means that everyone has to figure out where this post is coming from.
That's really all there is to it.
SQLite has very deliberate and well-documented assumptions (see for example [1], [2]) about the lower layers it supports. One of them is that data corruption is handled by these lower layers, except if stated otherwise.
Not relying on this assumption would require introducing checksums (or redundancy/using an ECC, really) on both the WAL/rollback journal and on the main database file. This would make SQLite significantly more complex.
I believe TFA is mistaken about how SQLite uses checksums. They primarily serve as a way to avoid some extra write barriers/fsync operations, and maybe to catch incomplete out-of-order writes, but never to detect actual data corruption: https://news.ycombinator.com/item?id=44671373
1. If the WAL is incomplete, then "failing" silently is the correct thing to do here, and is the natural function of the WAL. The WAL had an incomplete write, nothing should have been communicated back the application and the application should assume the write never completed.
2. If the WAL is corrupt (due to the reasons he mentioned), then sqlite says that's your problem, not sqlite's. I think this is the default behavior for other databases as well. If a bit flips on disk, it's not guaranteed the database will catch it.
This article is framed almost like a CVE, but to me this is kind of like saying "PSA: If your hard drive dies you may lose data". If you care about data integrity (because your friend is sending you sqlite files) you should be handling that.
Doing what the author suggests would actually introduce data corruption errors when "restoring a WAL with a broken checksum".
> When the last connection to a database closes, that connection does one last checkpoint and then deletes the WAL and its associated shared-memory file, to clean up the disk.
Skipping a frame but processing later ones would corrupt the database.
> SQLite doesn’t throw any error on detection of corruption
I don’t think it’s actually a corruption detection feature though. I think it’s to prevent a physical failure while writing (like power loss) from corrupting the database. A corruption detection feature would work differently. E.g., it would cover the whole database, not just the WAL. Throwing an error here doesn’t make sense.
Honestly this sounds out of scope for normal usage of sqlite and not realistic. I had a hard time reading past this. If I read that correctly, they're saying sqlite doesn't work if one of the database files disappears from under it.
I guess if you had filesystem corruption it's possible that .db-shm disappears without notice and that's a problem. But that isn't sqlite's fault.
> If the last client using the database shuts down cleanly by calling sqlite3_close(), then a checkpoint is run automatically in order to transfer all information from the wal file over into the main database, and both the shm file and the wal file are unlinked.
This is just basically how a WAL works, if you have an inconsistent state the transaction is rolled back - at that point you need to redo your work.
Not all frames in the WAL are important. Sure, recovery may be impossible in some cases, but not all checksum failures are impossible to recover from.
Which failures are possible to recover from?
First, force a re-read of the corrupted page from disk. A significant fraction of data corruption occurs while it is being moved between storage and memory due to weak error detection in that part of the system. A clean read the second time would indicate this is what happened.
Second, do a brute-force search for single or double bit flips. This involves systematically flipping every bit in the corrupted page, recomputing the checksum, and seeing if corruption is detected.
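A sketch of that second idea (the helper name and the page/CRC plumbing are hypothetical; a real implementation would also try double flips and bound the search time):

```python
import zlib

def repair_single_bitflip(page: bytearray, expected_crc: int):
    """Try every single-bit flip; return the repaired page if its CRC
    matches the expected value, else None."""
    if zlib.crc32(page) == expected_crc:
        return bytes(page)                  # the clean re-read already matched
    for byte_i in range(len(page)):
        for bit in range(8):
            page[byte_i] ^= 1 << bit
            if zlib.crc32(page) == expected_crc:
                return bytes(page)          # plausible single-bit repair found
            page[byte_i] ^= 1 << bit        # undo and keep searching
    return None                             # not explainable as a single flip
```

For a 4 KiB page that is 32,768 candidate flips, each needing a full CRC pass, and double flips square that, which is why this is strictly a last-resort tool.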
Surely you mean on the memory bus specifically? SATA and PCIe both have some error correction methods for securing transfers between storage and host controller. I'm not sure about old parallel ATA. While I understand it can happen under conditions similar to non-ECC RAM being corrupted, I don't think I've ever heard or read about a case where a storage device randomly returned erroneous data, short of a legitimate hardware error.
The bare minimum you want these days is a 64-bit CRC. A strong 128-bit hash would be ideal. Even if you just apply these at the I/O boundaries then you'll catch most corruption. The places it can realistically occur are shrinking but most software makes minimal effort to detect this corruption even though it is a fairly well-bounded problem.
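As a sketch of what checksumming at the I/O boundary can look like with a 128-bit hash from the standard library (the page layout and sizes here are made up for illustration):

```python
import hashlib

PAGE_SIZE = 4096
TAG_SIZE = 16   # 128-bit BLAKE2b tag stored at the end of each page

def write_page(f, page_no: int, payload: bytes) -> None:
    assert len(payload) == PAGE_SIZE - TAG_SIZE
    tag = hashlib.blake2b(payload, digest_size=TAG_SIZE).digest()
    f.seek(page_no * PAGE_SIZE)
    f.write(payload + tag)

def read_page(f, page_no: int) -> bytes:
    f.seek(page_no * PAGE_SIZE)
    buf = f.read(PAGE_SIZE)
    payload, tag = buf[:-TAG_SIZE], buf[-TAG_SIZE:]
    if hashlib.blake2b(payload, digest_size=TAG_SIZE).digest() != tag:
        raise IOError(f"checksum mismatch on page {page_no}")
    return payload
```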
I can't imagine picking the latter unless you were treating sqlite like a filesystem of completely unrelated blobs.
If I run three transactions where:
1. John gives $100 to Sue.
2. Sue gives $100 to Mark.
3. Mark gives $100 to Paul.
If sqlite just erases transaction (2), then Mark materializes $100 from nowhere. The rest of your database is potentially completely corrupted. At that point your database is no longer consistent - I can't see how you would "almost always" prefer this.
If (2) is corrupt, then the restore stops at (1), and you are guaranteed consistency.
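A toy version of that, using the names above (plain Python, nothing SQLite-specific):

```python
balances = {"John": 100, "Sue": 0, "Mark": 0, "Paul": 0}
transfers = [("John", "Sue", 100),   # txn 1
             ("Sue", "Mark", 100),   # txn 2 -- the "corrupt" frame
             ("Mark", "Paul", 100)]  # txn 3

def replay(txns):
    b = dict(balances)
    for src, dst, amount in txns:
        b[src] -= amount
        b[dst] += amount
    return b

# Stop at the first bad frame: only txn 1 is replayed, a state that really existed.
print(replay(transfers[:1]))                 # {'John': 0, 'Sue': 100, 'Mark': 0, 'Paul': 0}

# Skip txn 2 but keep txn 3: Mark is overdrawn by money he never received,
# a state that no sequence of committed transactions could have produced.
print(replay([transfers[0], transfers[2]]))  # Mark ends up at -100
```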
CRCs as used in SQLite are not intended to detect data corruption due to bit rot, and are certainly not ECCs.
Sure, the benefits to the incomplete write use case are limited, but there's basically no reason to ever use a fletcher these days.
It's also worth mentioning that the VFS checksums are explicitly documented as guarding against storage device bitrot and use the same fletcher algorithm.
There's no harm to having redundant checksums and it's not truly redundant for small messages. It's pretty common for systems not to have lower level checksumming either. Lots of people are still running NTFS/EXT4 on hardware that doesn't do granular checksums or protect data in transit.
Of course this is all a moot point because sqlite does WAL checksums, it just does them with an obsolete algorithm.
There sure is: Redundant checksums need extra storage and extra processing. SQLite often runs on embedded systems, where both can come at a premium.
> Of course this is all a moot point because sqlite does WAL checksums, it just does them with an obsolete algorithm.
That's not nearly the only thing missing for SQLite to provide full resistance to lower-level data corruption. At a very minimum, you'd also need checksums of the actual database file.
At the database level (i.e. not just the WAL)? Are you sure?
> What I'm saying is that a fletcher is strictly worse than a CRC here.
I can't speak to the performance differences, but the only thing SQLite really needs the checksum to do is to expose partial writes, both due to reordered sector writes and partial intra-sector writes. (The former could also be solved by just using an epoch counter, but the latter would require some tricky write formats, and a checksum nicely addresses both).
In both cases, there's really nothing to recover: CRC won't catch an entire missing sector, and almost no partially written sectors (i.e. unless the failure somehow happens in the very last bytes of it, so that the ratio of "flipped" bits is low enough).
For instance, say you have a node A which has a child B:
* Transaction 1 wants to add a value to B, but it's already full, so B is split into new nodes C and D. Correspondingly, the pointer in A that points to B is removed, and replaced with pointers to C and D.
* Transaction 2 makes an unrelated change to A.
If you skip the updates from transaction 1, and apply the updates from transaction 2, then suddenly A's data is overwritten with a new version that points to nodes C and D, but those nodes haven't been written. The pointers just point to uninitialized garbage.
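A toy version of that failure mode (pages modeled as a dict; this is not SQLite's page format):

```python
pages = {"A": {"children": ["B"]},
         "B": {"values": list(range(100))}}     # B is "full"

# Transaction 1: split B into C and D and repoint A at them.
txn1 = {"A": {"children": ["C", "D"]},
        "C": {"values": list(range(50))},
        "D": {"values": list(range(50, 100))}}

# Transaction 2: an unrelated change, written against txn1's version of A.
txn2 = {"A": {"children": ["C", "D"], "flag": True}}

# Replay txn2 while skipping txn1: A now points at pages that were never written.
pages.update(txn2)
dangling = [child for child in pages["A"]["children"] if child not in pages]
print(dangling)   # ['C', 'D'] -- pointers into uninitialized garbage
```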