I was already working toward this policy when I worked at a place where an entire batch of computers came with defective hard drives that died between 24 and 30 months after first power-on. We had 6 people rebuilding their dev environments from scratch over about a four-month period. By the time mine died, more than half of the setup time was just initializing whole-disk encryption. Everything else was in version control or the wiki, with turn-by-turn instructions that had been tested four times already.
Even secure systems like Tails have an option for persistence for that very reason.
The lack of session management in the OSes is indeed annoying; the X11 session-management protocol is generally unsupported anyway.
True persistence, though, comes from storing the scripts and the more advanced bits in a properly labelled backup archive. Sadly, there is no good site for sharing these to cut down on the duplicated effort.
A distributed archive, for that matter.
Similarly with Git, I rarely use stashes. If I have to switch contexts, anything I care about gets committed to a branch (and ideally pushed to a remote) or I blow it away.
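Roughly what that context switch looks like as a sketch (the branch name here is just a placeholder):

    git switch -c wip/whatever-i-was-doing   # park the work on its own branch
    git add -A                               # stage everything, including untracked files
    git commit -m "WIP before context switch"
    git push -u origin wip/whatever-i-was-doing   # optional, but it survives a dead disk
    git switch main                          # now go deal with the other thing

Unlike a stash, the branch is visible in every clone once pushed, so it doesn't die with the laptop.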
How does this differ from the deliberate saving mentioned in the article? I can't reliably tell which piece of data will turn out to be important; maybe a couple percent of the whole collection has ever been called upon, but those few percent are very, very valuable.
How long should one maintain the copies, then? Well, the oldest record that still saved a bit over $10K in costs is well over 30 years old, while archiving it has cost only a few dozen bucks in aggregate. So I'd say just don't get rid of it.