> A production-grade WAL isn't just code, it's a contract.
I hate that I'm now suspicious of this formulation.
The rhetorical technique of negative parallel construction ("isn't just X, it's Y") is a classic signal of AI writing.
This, along with RAID-1, is probably sufficient to catch the majority of errors. But realize that these are just probabilities: if the failure can happen on the first drive, it can also happen on the second. A Merkle tree is commonly used to protect against these scenarios as well.
Note that using something like RAID-5 can result in data corruption migrating throughout the stripe under certain write algorithms:
https://www.usenix.org/legacy/event/fast08/tech/full_papers/...
The most vexing storage failure is phantom writes. A disk read returns a "valid" page, just not the last written/fsync-ed version of that page. Reliably detecting this case is very expensive, particularly on large storage volumes, so it is rarely done for storage where performance is paramount.
It would suddenly become blank. You have an OS and some data today, and tomorrow you wake up and everything claims it is empty. It would still work, though. You could still install a new OS and keep going, and it would work until next time.
What a friendly surprise on exam week.
Sold it to a friend for really cheap with a warning about what had been happening. It surprise wiped itself for him too.
I have been passing my anxieties about hard drives to junior engineers for a decade now.
I don't know. Now async I/O is all the rage and that is the same idea.
sync; sync; halt
https://www.usenix.org/legacy/event/fast08/tech/full_papers/...
https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...
The OS page cache is not a "problem"; it's a basic feature with well-documented properties that you need to learn if you want to persist data. The writing style seems off in general (e.g. "you're lying to yourself").
AFAIK fsync is the best practice, not O_DIRECT + O_DSYNC. The article mentions O_DSYNC in some places and fsync in others, which is confusing. You don't need both.
Personally I would prefer to use the filesystem (RAID, or ZFS-style ditto blocks) to handle latent sector errors (LSEs) rather than duplicating files at the app level. A case could be made for dual WALs if you don't know or control what filesystem will be used.
Due to the page cache, attempting to verify writes by reading the data back won't verify anything: the read is served from memory, not the disk. Maaaybe this will work when using O_DIRECT.
> Link fsync to that write (IOSQE_IO_LINK)
> The fsync's completion queue entry only arrives after the write completes
> Repeat for secondary file
Wait, so the OS can re-order the fsync() to happen before the write request it is supposed to be syncing? Is there a citation or link to some code for that? It seems too ridiculous to be real.
> O_DSYNC: Synchronous writes. Don't return from write() until the data is actually stable on the disk.
If you call fsync(), this isn't needed, correct? And if you use this, then fsync() isn't needed, right?
This is an io_uring-specific thing. It doesn't guarantee any ordering between operations submitted at the same time, unless you explicitly ask it to with the `IOSQE_IO_LINK` they mentioned.
Otherwise it's as if you called write() from one thread and fsync() from another, before waiting for the write() call to return. That obviously defeats the point of using fsync() so you wouldn't do that.
> If you call fsync(), [O_DSYNC] isn't needed correct? And if you use [O_DSYNC], then fsync() isn't needed right?
I believe you're right.
Related: I would think that grouping your writes and then fsyncing once would be more efficient than fsyncing after every write, but a previous commenter did some testing and found that isn't always the case: https://news.ycombinator.com/item?id=15535814