
Discuss – Do AI agents deserve all the hype they are getting?

2•MicroWagie•46m ago•0 comments

Ask HN: Anyone Using a Mac Studio for Local AI/LLM?

47•UmYeahNo•1d ago•29 comments

LLMs are powerful, but enterprises are deterministic by nature

3•prateekdalal•4h ago•3 comments

Ask HN: Non AI-obsessed tech forums

26•nanocat•16h ago•21 comments

Ask HN: Ideas for small ways to make the world a better place

15•jlmcgraw•18h ago•19 comments

Ask HN: 10 months since the Llama-4 release: what happened to Meta AI?

44•Invictus0•1d ago•11 comments

Ask HN: Who wants to be hired? (February 2026)

139•whoishiring•4d ago•516 comments

Ask HN: Who is hiring? (February 2026)

313•whoishiring•4d ago•512 comments

Ask HN: Non-profit, volunteers run org needs CRM. Is Odoo Community a good sol.?

2•netfortius•13h ago•1 comments

AI Regex Scientist: A self-improving regex solver

7•PranoyP•20h ago•1 comments

Tell HN: Another round of Zendesk email spam

104•Philpax•2d ago•54 comments

Ask HN: Is Connecting via SSH Risky?

19•atrevbot•2d ago•37 comments

Ask HN: Has your whole engineering team gone big into AI coding? How's it going?

18•jchung•2d ago•13 comments

Ask HN: Why LLM providers sell access instead of consulting services?

5•pera•1d ago•13 comments

Ask HN: What is the most complicated Algorithm you came up with yourself?

3•meffmadd•1d ago•7 comments

Ask HN: How does ChatGPT decide which websites to recommend?

5•nworley•1d ago•11 comments

Ask HN: Is it just me or are most businesses insane?

8•justenough•1d ago•7 comments

Ask HN: Mem0 stores memories, but doesn't learn user patterns

9•fliellerjulian•2d ago•6 comments

Ask HN: Is there anyone here who still uses slide rules?

123•blenderob•4d ago•122 comments

Kernighan on Programming

170•chrisjj•4d ago•61 comments

Ask HN: Any International Job Boards for International Workers?

2•15charslong•15h ago•2 comments

Ask HN: Anyone Seeing YT ads related to chats on ChatGPT?

2•guhsnamih•1d ago•4 comments

Ask HN: Does global decoupling from the USA signal comeback of the desktop app?

5•wewewedxfgdf•1d ago•3 comments

We built a serverless GPU inference platform with predictable latency

5•QubridAI•2d ago•1 comments

Ask HN: Does a good "read it later" app exist?

8•buchanae•3d ago•18 comments

Ask HN: How Did You Validate?

4•haute_cuisine•1d ago•6 comments

Ask HN: Have you been fired because of AI?

17•s-stude•4d ago•15 comments

Ask HN: Cheap laptop for Linux without GUI (for writing)

15•locusofself•3d ago•16 comments

Ask HN: Anyone have a "sovereign" solution for phone calls?

12•kldg•4d ago•1 comments

Ask HN: OpenClaw users, what is your token spend?

14•8cvor6j844qw_d6•4d ago•6 comments

A reverse-delta backup strategy – obvious idea or bad idea?

12•datastack•7mo ago
I recently came up with a backup strategy that seems so simple I assume it must already exist — but I haven’t seen it in any mainstream tools.

The idea is:

The latest backup (timestamped) always contains a full copy of the current source state.

Any previous backups are stored as deltas: files that were deleted or modified compared to the next (newer) version.

There are no version numbers — just timestamps. New versions can be inserted naturally.

Each time you back up:

1. Compare the current source with the latest backup.

2. For files that changed or were deleted: move them into a new delta folder (timestamped).

3. For new/changed files: copy them into the latest snapshot folder (only as needed).

4. Optionally rotate old deltas to keep history manageable.
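
For concreteness, here is a minimal sketch of these four steps in Python. SOURCE and BACKUP_ROOT are placeholder paths, and the size+mtime change check is a simplification (a real tool might hash contents):

  import shutil
  from datetime import datetime, timezone
  from pathlib import Path

  SOURCE = Path("source")        # placeholder source directory
  BACKUP_ROOT = Path("backups")  # placeholder backup destination

  def relative_files(root: Path) -> set[Path]:
      """All regular files under root, as paths relative to root."""
      return {p.relative_to(root) for p in root.rglob("*") if p.is_file()}

  def differs(a: Path, b: Path) -> bool:
      """Cheap change check: size or mtime mismatch."""
      sa, sb = a.stat(), b.stat()
      return sa.st_size != sb.st_size or int(sa.st_mtime) != int(sb.st_mtime)

  def backup() -> None:
      latest = BACKUP_ROOT / "latest"
      latest.mkdir(parents=True, exist_ok=True)
      stamp = datetime.now(timezone.utc).strftime("backup-%Y-%m-%dT%H:%M:%S")
      delta = BACKUP_ROOT / stamp

      src_files = relative_files(SOURCE)

      # Step 2: changed or deleted files move into the new delta folder.
      for rel in relative_files(latest):
          if rel not in src_files or differs(SOURCE / rel, latest / rel):
              (delta / rel).parent.mkdir(parents=True, exist_ok=True)
              shutil.move(latest / rel, delta / rel)

      # Step 3: new or changed files are copied into latest/ (only as
      # needed; unchanged files were left in place above).
      for rel in src_files:
          if not (latest / rel).exists():
              (latest / rel).parent.mkdir(parents=True, exist_ok=True)
              shutil.copy2(SOURCE / rel, latest / rel)

  backup()  # step 4 (rotating old deltas) is omitted for brevity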

This means:

The latest backup is always a usable full snapshot (fast restore).

Previous versions can be reconstructed by applying reverse deltas.

If the source is intact, the system self-heals: corrupted backups are replaced on the next run.

Only one full copy is needed, like a versioned rsync mirror.

As time goes by, losing old versions is low-impact.

It's user-friendly, since the latest backup can be browsed with a regular file explorer.

Example:

Initial backup:

latest/
  a.txt   # "Hello"
  b.txt   # "World"

Next day, a.txt is changed and b.txt is deleted:

latest/
  a.txt   # "Hi"
backup-2024-06-27T14:00:00/
  a.txt   # "Hello"
  b.txt   # "World"

The newest version is always in latest/, and previous versions can be reconstructed by applying the deltas in reverse.
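
Here is a matching restore sketch under the same layout: start from latest/ and overlay the delta folders newest-first, down to and including the one you want. One caveat: as described, a delta records changed and deleted files but not added ones, so files created after the target time survive the overlay; a real tool would also keep a per-delta manifest or deletion markers.

  import shutil
  from pathlib import Path

  BACKUP_ROOT = Path("backups")  # placeholder backup destination

  def restore(target_stamp: str, out: Path) -> None:
      """Rebuild the tree as it was just before the run named target_stamp."""
      shutil.copytree(BACKUP_ROOT / "latest", out)
      deltas = sorted(
          (d for d in BACKUP_ROOT.iterdir()
           if d.is_dir() and d.name.startswith("backup-")),
          key=lambda d: d.name,  # ISO timestamps sort chronologically
          reverse=True,          # newest first
      )
      for delta in deltas:
          for f in delta.rglob("*"):  # overlay the old file versions
              if f.is_file():
                  dest = out / f.relative_to(delta)
                  dest.parent.mkdir(parents=True, exist_ok=True)
                  shutil.copy2(f, dest)
          if delta.name == target_stamp:
              break  # walked back far enough

  restore("backup-2024-06-27T14:00:00", Path("restored"))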

I'm curious: has this been done before under another name? Are there edge cases I’m overlooking that make it impractical in real-world tools?

Would love your thoughts.

Comments

compressedgas•7mo ago
It works. Already implemented: https://rdiff-backup.net/ https://github.com/rdiff-backup/rdiff-backup

There are also other tools that implement reverse incremental backup, or backup with reverse deduplication, which store the most recent backup in contiguous form and fragment the older backups.

datastack•7mo ago
Thank you for bringing this to my attention. Knowing that there is a working product using this approach gives me confidence. I'm working on a simple backup app for my personal/family use, so it's good to know I'm not heading in the wrong direction.
trod1234•7mo ago
These types of projects can easily get sidetracked without an overarching goal. Are you looking to do something specific?

An app (that requires remote infrastructure) seems a bit overkill, and if you're going through the hassle of doing that you might as well set up the equivalent of what MS used to call the Modern Desktop Experience, which is how many enterprise-level customers have their systems configured now.

The core parts are a cloud-based IDp, storage, and a slipstreamed deployment image which, given network connectivity, will pull down the config and set the desired state, replicating the workspace down as needed (with OneDrive).

Backup data layout/strategy/BCDR plan can then be automated from the workspace/IDp/cloud-storage backend with no user interaction/learning curve.

If hardware fails, you use the deployment image to enroll new hardware, log in, and replicate the user-related state down, etc. Automation for recurring tasks can be matched up to the device lifecycle phases (Provision, Enrollment, Recovery, Migration, Retirement). This is basically how it's done in a professional setup with EntraID/Autopilot MDM and MSO365 plans. You can easily set up equivalents, but you have to write your own glue.

Most of that structure was taken from Linux grey beards ages ago, MS just made a lot of glue and put it in a nice package.

wmf•7mo ago
It seems like ZFS/Btrfs snapshots would do this.
HumanOstrich•7mo ago
No, they work the opposite way using copy-on-write.
wmf•7mo ago
"For files that changed or were deleted: move them into a new delta folder. For new/changed files: copy them into the latest snapshot folder." is just redneck copy-on-write. It's the same result but less efficient under the hood.
datastack•7mo ago
Nice to realize that this boils down to copy-on-write. Makes it easier to explain.
sandreas•7mo ago
Is there a reason NOT to use ZFS or BTRFS?

I mean, the idea sounds cool, but what are you missing? ZFS even works on Windows these days, and with tools like zrepl you can configure time-based snapshotting, auto-sync, and auto-cleanup.

codingdave•7mo ago
The low-likelihood / high-impact edge case this does not handle is: "Oops, our data center blew up." An extreme scenario, but a real one. It turns your most recent backup into a single point of failure, because you cannot restore the older deltas without it.
datastack•7mo ago
This sounds more like a downside of single-site backups.
codingdave•7mo ago
Totally. Which is exactly what your post outlines. You said it yourself: "Only one full copy is needed." You would need to update your logic to have a 2nd copy pushed offsite at some point if you wanted to resolve this edge case.
ahazred8ta•7mo ago
For reference: a comprehensive backup + security plan for individuals https://nau.github.io/triplesec/
datastack•7mo ago
Great resource in general; I'll look into whether it describes how to implement this backup scheme.
dr_kiszonka•7mo ago
It sounds like this method is I/O intensive as you are writing the complete image at every backup time. Theoretically, it could be problematic when dealing with large backups in terms of speed, hardware longevity, and write errors, and I am not sure how you would recover from such errors without also storing the first image. (Or I might be misunderstanding your idea. It is not my area.)
datastack•7mo ago
You can see in steps 2 and 3 that no full copy is written every time. It's only move operations to create the delta, plus copies of new or changed files, so it's quite minimal on I/O.
rawgabbit•7mo ago
What happens if in the process of all this read write rewrite, data is corrupted?
datastack•7mo ago
In this algo nothing is rewritten. A diff between source and latest is made, the changed or deleted files are archived to a folder, and the latest folder is updated from the source, like rsync. No more I/O than any other backup tool. Versions other than the latest one are never touched again.
jiggawatts•7mo ago
The more common approach now is incrementals forever with occasional synthetic full backups computed at the storage end. This minimises backup time and data movement.
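
For illustration, a toy sketch of folding forward incrementals into a synthetic full at the storage end (hypothetical file-level representation; real tools work on blocks):

  # Each incremental is assumed to be a dict with "changed" (path -> content)
  # and "deleted" (set of paths). Folding them, oldest first, onto the last
  # full backup yields a fresh full with no data pulled from the client.
  def synthesize_full(base: dict, increments: list) -> dict:
      full = dict(base)
      for inc in increments:                # oldest first
          full.update(inc["changed"])       # apply adds and modifications
          for path in inc["deleted"]:
              full.pop(path, None)          # apply deletions
      return full

  full0 = {"a.txt": "Hello", "b.txt": "World"}
  incs = [{"changed": {"a.txt": "Hi"}, "deleted": {"b.txt"}}]
  assert synthesize_full(full0, incs) == {"a.txt": "Hi"}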
datastack•7mo ago
I agree it seems more common. However, backup time and data movement should be equivalent if you follow the algo steps.

According to ChatGPT, the forward-delta approach is common because it can be implemented purely append-only, whereas reverse deltas require the last snapshot to be mutable. This doesn't work well for backup tapes.

Do you also think that the forward-delta approach is a mere historical artifact?

Although perhaps backup tapes are still widely used; I have no idea, I am not in this field. If so, the reverse-delta approach would not work in industrial settings.

jiggawatts•7mo ago
Nobody[1] backs up directly to tape any more. It’s typically SSD to cheap disk with a copy to tape hours later.

This is more-or-less how most cloud backups work. You copy your “premium” SSD to something like a shingled spinning rust (SMR) that behaves almost like tape for writes but like a disk for reads. Then monthly this is compacted and/or archived to tape.

[1] For some values of nobody.

vrighter•7mo ago
I used to work on backup software. Our first version did exactly that. It was a selling point. We later switched approach to a deduplication based one.
datastack•7mo ago
Exciting!

Yes, the deduplicated approach is superior, if you can accept requiring dedicated software to read the data or can rely on a file system that supports it (like Unix with hard links).

I'm looking for a cross-platform solution that is simple and can restore files without any app (in case I don't maintain my app for the next twenty years).

I'm curious whether the software you were working on used a proprietary format, relied on Linux, or used some other method of deduplication.

vrighter•7mo ago
The deduplication in the product I worked on was implemented by me and a colleague of mine, in a custom format. The point of it was to do inline deduplication on a best-effort basis, i.e. handling the case where the system does NOT have enough memory to store hashes for every single block. This might have resulted in some duplicated data if you didn't have enough memory, instead of slowing to a crawl by hitting the disk (spinning rust, at the time) for each block we wanted to deduplicate.
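
For illustration, a toy sketch of best-effort inline dedup along these lines (not the actual product format; the names and memory budget are hypothetical):

  # Block hashes live in a bounded in-memory table. When the budget is
  # exceeded, the coldest hash is evicted instead of spilling to disk, so a
  # repeated block may occasionally be stored twice.
  import hashlib
  from collections import OrderedDict

  BLOCK_SIZE = 4096
  MAX_HASHES = 1_000_000  # hypothetical memory budget

  class BestEffortDedup:
      def __init__(self) -> None:
          self.seen: OrderedDict[bytes, int] = OrderedDict()  # hash -> id
          self.blocks: list[bytes] = []                       # block store

      def add(self, block: bytes) -> int:
          h = hashlib.sha256(block).digest()
          if h in self.seen:
              self.seen.move_to_end(h)  # keep hot hashes resident
              return self.seen[h]
          block_id = len(self.blocks)
          self.blocks.append(block)
          self.seen[h] = block_id
          if len(self.seen) > MAX_HASHES:
              self.seen.popitem(last=False)  # evict coldest hash; dedup for
                                             # that block is now best-effort
          return block_id

      def ingest(self, path: str) -> list[int]:
          """Chunk a file into fixed-size blocks, returning their ids."""
          ids = []
          with open(path, "rb") as fh:
              while block := fh.read(BLOCK_SIZE):
                  ids.append(self.add(block))
          return ids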
tacostakohashi•7mo ago
Sounds a bit like the netapp .snapshot directory thing (which is no bad thing).
brudgers•7mo ago
In principle, deleting archived data is the opposite of backing up.

It is not clear what problem with existing backup strategies this solves.

I mean you can use a traditional delta backup tool and make one full copy of the current data separately with less chance for errors.

It seems too clever by half. Good luck.

SoberPingu•7mo ago
That's how Plesk does its site backups.