#1 2014-01-23 00:30:46

ParanoidAndroid
Member
Registered: 2012-10-14
Posts: 114

from-scratch minimalist backup implementation?

I have done backups since I first used a Linux distro, and I've been using a self-designed Python script that leverages rsync and LUKS to perform serial encrypted backups to several external drives. I derived the Python code from a BASH script I wrote several years earlier, and it seemed to work well after initial testing. However, upon recent inspection, it appears it isn't doing its job properly; also, looking back at the code, I've realized that it's sloppy, inefficient, and fails to meet my standards of minimalism.

That said, my problem is this: I need to design an effective backup solution within the following parameters:

- it must be encrypted
- it must support multiple disks
- it must be rsync-based
- it should preferably use file compression, both for transfer (which I already do using rsync's --compress option) and on the actual storage medium; if so:
    - the compression algorithm should preferably favor space efficiency over CPU efficiency

- it must be minimalist, ideally using separate, simple, standard-issue and widely available Linux utilities united by a simple script (ideally BASH)

I have some idea as to where to begin, but I'd like the input of the smart masses on this. I'm by no means a newbie to arch linux or linux in general, but I'd rather not overlook and/or bungle anything.

Also, if this is the wrong category for this thread, I apologize and request that someone more educated in forum topography move it to a more appropriate location. It seemed a good fit, but the various fine distinctions between some forum categories continue to elude me.

UPDATE:

This is what I have so far, though it does seem inefficient:

compression algorithm:
- tar.xz (are there any methods of compressing a tar archive with a better compression ratio? This was the best my research turned up.)

rsync to transfer files

GPG to encrypt compressed archives

----

My question now is how to make these work together. I thought of compressing the folders to be backed up using tar + xz, then rsyncing them to a backup drive connected over USB.

I have four external drives I plan to use: one 1TB (to host a full system mirror), one 500GB (also for a full system mirror), one 64GB (home folder mirror), and one 2GB (documents mirror). This is the general disk scheme my old setup used.

The main problem I foresee is that my complete system mirror and my home folder mirror would both be very, very large (~13GB and ~10GB, respectively), and I don't see how rsync would be a practical way to copy such large archives to an external disk. Correct me if I'm wrong, but doesn't rsync copy whole files that have changed, rather than the delta within each file? Wouldn't that mean that rsync would just copy the whole archive over again if it changed even in the slightest? Even if that weren't the case, where would I fit the archived files before they are transferred? I have an SSD as my main drive, so space is limited and so are R/W cycles, so holding the archives temporarily on disk before they are copied makes no sense.

If my interpretation of rsync's functioning is correct, is there any way to make the archive once, transfer it to the external disk, and then simply sync changes from my main drive into the archive on the external ones? Or is there a way to tar-xz-encrypt the folders being copied (/, /home, and /home/me/docs) during the transfer, without storing the archives on disk and while maintaining rsync's ability to copy only altered files?
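One way to avoid staging the archive on the SSD at all would be to stream everything: tar writes to a pipe, the compressor and GPG transform the stream, and only the final ciphertext touches the external drive. A sketch (the mountpoint is a placeholder, and note this rewrites the whole archive every run, so rsync's delta advantage is lost):

```shell
# Stream / straight to the external drive; nothing is staged on the SSD.
# /mnt/backup is a placeholder mountpoint.
tar -cf - --one-file-system / \
  | xz -9 \
  | gpg --symmetric --cipher-algo AES256 \
  > "/mnt/backup/root-$(date +%F).tar.xz.gpg"
```

Restoring reverses the pipe: `gpg -d backup.tar.xz.gpg | xz -dc | tar -xf -`.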

If that scheme seems inefficient (as it does to me), does anyone have a better idea? My old system rsync'd the files on my main drive to encrypted partitions on my external drives; while this solved the problem I mention above, it also forwent the portability and space savings of compressed tar archives.

Last edited by ParanoidAndroid (2014-01-23 01:58:55)

#2 2014-01-23 02:44:06

jasonwryan
Anarchist
From: .nz
Registered: 2009-05-09
Posts: 30,424
Website

Re: from-scratch minimalist backup implementation?

I'm not sure what it is exactly you are looking for: a conceptual schema or a working script.

In any event, I'll respond piecemeal:

* Use LUKS for the external drives

* man rsync:

Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination.

* the single biggest flaw in this scheme, irrespective of your focus on security, is that you are not making backups. Backups must be offsite, otherwise they are more properly described as "a crushing sense of despair filed for future delivery".


Arch + dwm   •   Mercurial repos  •   Surfraw

Registered Linux User #482438

#3 2014-01-23 04:37:17

ParanoidAndroid
Member
Registered: 2012-10-14
Posts: 114

Re: from-scratch minimalist backup implementation?

You're quite right that it's not a true backup. However, because of a number of constraints, it is difficult if not impossible for me to send that much data over a local connection without some serious compression. And by serious I mean squeezing a gigabyte into the space of a few megabytes. While that might (might) be possible with a gigabyte of text or program binary files, the vast majority of my data is ogg music and an mp3 audiobook collection. Most of my work for both home and school uses Unix plain-text files, but I'd like to copy more than just my documents.

I could, I suppose, copy my media files to an external medium like a high-density DVD or flash card (I have enough space), but I'd still have 6 GB as the total size of my system. Speaking as one who has downloaded some very large files in his time, even over BitTorrent this is no quick feat, unless I'm greatly mistaken.

I like the idea of LUKS volumes because they are by definition highly secure with the right key. However, I like the idea of being able to wrap the entire backup in one neat tarball and save some space with compression. Would rsync's delta algorithm be able to sync two compressed tar archives by transferring just the altered bytes? I assume from the man page snippet that this is so, but explicit confirmation is always reassuring. That still leaves the problem of how to create the new compressed tar, in a time- and disk-efficient manner, for syncing. Perhaps one could generate a diff of all the changes made to the files on one's drive, and then simply use rsync to merge it with the original backup? Or maybe there's some kind of wizardry one could perform with the tar -u command to do something similar?

If this line of inquiry proves fruitless or ineffective, I'll simply regress to a simplified version of my older model of backups. Even then, I still haven't worked out a simple way of efficiently performing backups to multiple external disks.

Last edited by ParanoidAndroid (2014-01-23 04:42:29)

#4 2014-01-23 04:47:24

jasonwryan
Anarchist
From: .nz
Registered: 2009-05-09
Posts: 30,424
Website

Re: from-scratch minimalist backup implementation?

For offsite encrypted backups, look at tarsnap: it is brilliant (although costly if you are regularly moving a tonne of data). I'd use that for all the really personal/private stuff, and for files that are less so (music, books) go with your original plan.

Given the sizes you are talking about, rsync seems (to me) the best option, but it is not likely to be that efficient--just more efficient than other methods.



#5 2014-01-23 05:43:04

ParanoidAndroid
Member
Registered: 2012-10-14
Posts: 114

Re: from-scratch minimalist backup implementation?

tarsnap looks like a dream come true. The only problem is... college. I'm effectively broke in terms of having money to spend on anything other than room, board, and textbooks.

At this point, my only truly available option is to store backups on local drives separate from my computer. Given that setup, what would be the most effective method? As I say, I like the idea of xzipped tarballs (although I'm still not sure if it's a good algorithm for backups), and I've found a few nifty things about the tar format that make it very nice indeed, such as the --diff and -u options. The question now becomes how to leverage them along with the power of rsync. My currently sleepy brain is mumbling something about pipes and standard input from one program to another to minimize disk usage... I have a surfeit of RAM (8GB), so compression even for massive 7GB backups shouldn't prove too taxing... I think. Maybe something like

tar / | xz - | rsync - /mnt/somedisk?

and then just make and apply diffs for all future backups? But I don't suppose diff-ing would work on a compressed tar archive...

I won't get a coherent idea from it until morn. I'll post an update when my brain isn't being clocked down to 768MHz by the lateness of the hour (if the light of tomorrow reveals anything useful, that is).

#6 2014-01-23 07:17:37

jasonwryan
Anarchist
From: .nz
Registered: 2009-05-09
Posts: 30,424
Website

Re: from-scratch minimalist backup implementation?

Using tar with rsync screws the delta option. Use rsync's native compression, live with a huge first transfer, and reap the benefits of the delta transfers thereafter.

Just my view.


As for tarsnap, it's worth considering for your text files: less than USD 10/year is a bargain to have guaranteed encrypted backup of your critical system and personal files (assuming that you keep all that info in plain text).



#7 2014-01-23 09:27:25

Runiq
Member
From: Germany
Registered: 2008-10-29
Posts: 1,053

Re: from-scratch minimalist backup implementation?

ParanoidAndroid wrote:

At this point, my only truly available option is to store backups on local drives separate from my computer. Given that setup, what would be the most effective method? As I say, I like the idea of xzipped tarballs (although I'm still not sure if it's a good algorithm for backups), and I've found a few nifty things about the tar format that make it very nice indeed, such as the --diff and -u options. The question now becomes how to leverage them along with the power of rsync. My currently sleepy brain is mumbling something about pipes and standard input from one program to another to minimize disk usage... I have a surfeit of RAM (8GB), so compression even for massive 7GB backups shouldn't prove too taxing... I think. Maybe something like

tar / | xz - | rsync - /mnt/somedisk?

and then just make and apply diffs for all future backups? But I don't suppose diff-ing would work on a compressed tar archive...

Goddammit, I just wrote up an elaborate answer and the BBS crapped out on me. Excellent.

Here's the gist of what I wrote:

You mentioned that space isn't a problem. From that I glean that compression is optional to you, not a requirement.

If you are willing to omit the compression requirement, you could simply use rsync with the --link-dest option. Your backup workflow (for backing up /home, for example) would look something like this:

  1. Create the first snapshot with rsync in the usual way, say in /mnt/backup/snapshot1:

        rsync --archive --hard-links --verbose --compress /home /mnt/backup/snapshot1

  2. When it's time to create the second snapshot (say, in /mnt/backup/snapshot2), you use the following tricks:

    • Compare /home's current state with the state in /mnt/backup/snapshot1

    • Sync only files which have changed between snapshot1 and the current state

    • Hardlink the unchanged files from /mnt/backup/snapshot1 to /mnt/backup/snapshot2

    This would look something like this:

        rsync --archive --hard-links --verbose --compress --link-dest=/mnt/backup/snapshot1 /home /mnt/backup/snapshot2

  3. Repeat step 2 ad nauseam.

This gives you the advantage of being able to treat each snapshot like a normal folder (which it is, just with a whole bunch of hardlinks). You can use the --link-dest option multiple times for when you have multiple snapshots.

Edit: Added rsync invocation with --link-dest.

Last edited by Runiq (2014-01-23 09:32:24)

#8 2014-01-23 09:36:31

Slithery
Administrator
From: Norfolk, UK
Registered: 2013-12-01
Posts: 5,776

Re: from-scratch minimalist backup implementation?

@Runiq

Isn't this exactly what rsnapshot does?


No, it didn't "fix" anything. It just shifted the brokeness one space to the right. - jasonwryan
Closing -- for deletion; Banning -- for muppetry. - jasonwryan

aur - dotfiles

#9 2014-01-23 09:52:04

graysky
Wiki Maintainer
From: :wq
Registered: 2008-12-01
Posts: 10,591
Website

Re: from-scratch minimalist backup implementation?

...or backintime in the AUR.


CPU-optimized Linux-ck packages @ Repo-ck  • AUR packages • Zsh and other configs

#10 2014-01-23 09:56:35

Runiq
Member
From: Germany
Registered: 2008-10-29
Posts: 1,053

Re: from-scratch minimalist backup implementation?

slithery wrote:

Isn't this exactly what rsnapshot does?

True, and with more bells and whistles. Thanks. Sorry, it's early in the morning here.

#11 2014-01-23 15:47:43

drcouzelis
Member
From: Connecticut, USA
Registered: 2009-11-09
Posts: 4,092
Website

Re: from-scratch minimalist backup implementation?

ParanoidAndroid wrote:

I have four external drives I plan to use: one 1TB (to host full system mirror), one 500GB (also for full system mirror)...

jasonwryan wrote:

the single biggest flaw in this scheme, irrespective of your focus on security, is that you are not making backups. Backups must be offsite, otherwise they are more properly described as "a crushing sense of despair filed for future delivery".

ParanoidAndroid wrote:

The only problem is... college. I'm effectively broke in terms of having money to spend on anything other than room, board, and textbooks.

How about this: Do what you're already doing (but modify it to make it more "minimalist" or whatever), and keep one of those full system mirrors at your parents' house. Then just try to swap it with the other full system mirror every month or two.

#12 2014-01-23 16:21:31

firecat53
Member
From: Lake Stevens, WA, USA
Registered: 2007-05-14
Posts: 1,542
Website

Re: from-scratch minimalist backup implementation?

For offsite encrypted backups, duplicity wraps GPG encryption around rdiff-backup to give you the best of GPG encryption and incremental backups, all wrapped up in one. I'm not as sure about the multiple-disk support, but you can (and I have...) take several external drives, make them into one LVM volume, and use that as your backup destination if one drive by itself isn't big enough. As long as you keep them together, of course.

Duplicity takes a lot of the scripting and hackery out of backup solutions. Most of the script I used was just making sure SSH and GPG keys were loaded correctly! Here it is if you're interested.
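For reference, a typical duplicity invocation against a locally mounted drive looks something like this (the paths are examples; by default duplicity prompts for a GPG passphrase):

```shell
# Full backup, then incrementals against the same target URL.
duplicity full /home file:///mnt/backup/home
duplicity incremental /home file:///mnt/backup/home
# Restore the latest state to another directory.
duplicity restore file:///mnt/backup/home /tmp/restored-home
```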

Scott

Edit: the script backs up to a local directory (then I used rsync to copy the encrypted backup to a remote location...for other reasons), but this can easily be a remote directory via SSH or SFTP, S3, etc.

Last edited by firecat53 (2014-01-23 16:23:33)

#13 2014-01-23 16:23:05

ParanoidAndroid
Member
Registered: 2012-10-14
Posts: 114

Re: from-scratch minimalist backup implementation?

I like the hardlinked snapshot idea. It combines the transfer efficiency of rsync with the back-in-time nature of diff-style backups.

I realize that space isn't an issue (I have a total combined capacity of about 1.566TB, while my main drive holds about 230GB), but I'd like to use compression if possible, just because saving space seems like a useful concept to me. Also, fitting the entirety of the backup into a tarball would be nice for the sake of being able to move everything in one lump. Would it be possible to mount a compressed tar archive like a filesystem and then copy data to it?

I take your point though. No matter how nice the idea seems, it's not practical as far as I can tell.

Even using the rsynced LUKS container setup, the question remains as to how to get it to work with multiple disks at once. I could simply have it loop through a list of disk UUIDs, mount each one, sync an assigned folder to it, unmount, close, and then move on to the next, but I'd like to know if there's a more elegant solution.
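Something like this sketch is what I mean, with placeholder UUIDs and directory assignments:

```shell
#!/bin/bash
# Serially back up assigned directories to several LUKS drives.
# The UUIDs and paths below are placeholders.
set -euo pipefail
declare -A targets=(
  ["aaaaaaaa-0000-0000-0000-000000000001"]="/"
  ["aaaaaaaa-0000-0000-0000-000000000002"]="/home"
)
for uuid in "${!targets[@]}"; do
  dev="/dev/disk/by-uuid/$uuid"
  [[ -e $dev ]] || continue            # skip drives that aren't plugged in
  cryptsetup open "$dev" backup
  mount /dev/mapper/backup /mnt/backup
  rsync -aAXH --delete "${targets[$uuid]}" /mnt/backup/
  umount /mnt/backup
  cryptsetup close backup
done
```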

#14 2014-01-23 18:39:35

ball
Member
From: Germany
Registered: 2011-12-23
Posts: 164

Re: from-scratch minimalist backup implementation?

ParanoidAndroid wrote:

I realize that space isn't an issue [..] but I'd like to use compression if possible just because saving space seems like a useful concept to me.

I think you contradict yourself. If space isn't an issue, why bother thinking about it?

Either have the pleasant setup with rsync and hardlinks via the --link-dest option and pay the (nowadays small) price of extra disk consumption, or use old-style incremental tar archives, doing full backups every week or so.

Maybe new filesystem technologies like btrfs or ZFS offer more sophisticated ways to deal with your requirements (I have not yet checked them out).

As for offsite backups: I'd go with drcouzelis' suggestion and have another disk at someone else's place or at work and swap the disks regularly, or just bring your laptop to the offsite backup and add a new snapshot.

EDIT: Ok, full snapshots using tar and compression while transferring only the delta are indeed possible and already implemented, as tarsnap proves. (I did not know that...) But accessing uncompressed, timestamped directories like

[...]
2014-01-22-000101
2014-01-23-000101
2014-01-23-000101

and finding all the snapshotted data therein is something I find very convenient.

Last edited by ball (2014-01-23 20:03:50)

#15 2014-01-23 20:37:27

Runiq
Member
From: Germany
Registered: 2008-10-29
Posts: 1,053

Re: from-scratch minimalist backup implementation?

ball wrote:

Either have the pleasant setup with rsync and hardlinks via the --link-dest option and pay the (nowadays small) price of extra disk consumption, or use old-style incremental tar archives, doing full backups every week or so.

Maybe new filesystem technologies like btrfs or ZFS offer more sophisticated ways to deal with your requirements (I have not yet checked them out).

I didn't go full sermon about the virtues of ZFS/btrfs snapshotting because a) you never go full sermon, and b) OP specifically requested rsync. However, OP, if you're willing and able to use ZFS/btrfs on both your laptop and your backup drives, you should definitely look into their COW snapshotting facilities.

#16 2014-01-23 21:10:24

ParanoidAndroid
Member
Registered: 2012-10-14
Posts: 114

Re: from-scratch minimalist backup implementation?

I use JFS on my computer because of its efficient use of processor time. I looked at and even used BTRFS for a time and liked it, but in the interest of resource conservation I forwent it in favor of JFS. My only computer is a Samsung Series 9 ultrabook, and the system I designed for it uses the bspwm window manager and urxvtd as its graphical interface. All my applications are based on raw text interfaces, such as s-nail and the basic ls, mv, etc. for file manipulation. The only graphical programs I run are my terminal emulator, uzbl-browser, and a few ncurses apps like vim and less. I mention that because I'm dedicated to the idea of minimalism. While I have more than enough processing power and memory to run a fully graphical, eye-candy-filled, bells-and-whistles M$-Windows-type system, I choose not to in favor of wicked fast command-line apps and an under-clocked CPU running at 800MHz. I get crazy long battery life that way, as a bonus.

I understand the non-issue of storage space, and it's a good point to make. That said, I'd like to see just how far I can push resource conservation; hence, my concern about compression (although, compression takes a toll on CPU resources and RAM... so there's that).

My other reason for this whole tarball-compression-rsync merry-go-round is that I would like to be able to make an entirely self-contained backup. That is, a single file containing my entire system or various subdirectories thereof which can be encrypted in and of itself, rather than storing it on an encrypted partition as I used to. If the method I've been talking about (or something whose net result is the sought-after goal I've outlined) can be achieved, then a specially-set up drive would not be necessary. I could simply tell my script to use any old drive with the requisite storage capacity. I like LUKS and all that, but I don't enjoy setting up each drive by hand. Also, the idea I've outlined, if possible, would be more portable in the sense that I could grab a flash drive out of my drawer and pop it in without worrying whether it's encrypted or not. Being able to just encrypt the backup itself on a non-encrypted medium would be great.

Last edited by ParanoidAndroid (2014-01-23 21:12:25)

#17 2014-01-23 22:47:51

firecat53
Member
From: Lake Stevens, WA, USA
Registered: 2007-05-14
Posts: 1,542
Website

Re: from-scratch minimalist backup implementation?

FWIW, using the duplicity script I posted above, I did a successful test backup and restore to/from a VFAT formatted flash drive.

Scott

#18 2014-01-23 23:14:40

ParanoidAndroid
Member
Registered: 2012-10-14
Posts: 114

Re: from-scratch minimalist backup implementation?

@firecat53:
Woah! I didn't see that post for some reason. I haven't checked out the script yet, but I've heard of duplicity and even used it previously (in the form of deja-dup). However, I remember having issues with it and it never worked quite the way I wanted.

I've been thinking about this, and as you all have suggested I have concluded that with the paradigm I proposed rsync-style snapshots aren't a good fit. So, here is my modified proposal:

An rdiff-type backup to a compressed tar archive. For a whole-system backup, for example:

= initial backup =
/ is tarred, compressed, then encrypted, and streamed to the backup disk all in one piece with something like

tar -cf - / | lrzip | gpg -c > /mnt/my_disk/orig.bak

(compressing before encrypting, since well-encrypted data no longer compresses)

= future backups =
A diff file is generated against /, which is then appended to the original tar archive. To restore a point in time, one would simply reverse the first step and apply the diff for that point.

The problem with this setup is that I don't know if you can append data to an encrypted tar archive and have that data encrypted in the process. If not, I suppose one could encrypt the diffs with the same key as the original tar and append them that way. Also, I don't know if compression works that way, either. That is, I'm not sure if one can append to a compressed tar archive and have that data compressed. If not, I suppose one could individually compress each diff; or one could leave the original tarball uncompressed and only compress the diffs. Encryption still poses a problem as postulated above.
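GNU tar's --listed-incremental mode might sidestep the append problem entirely: instead of growing one archive, each run produces a separate archive of just the changes, which can be compressed and encrypted individually. A sketch (the paths are placeholders):

```shell
# Level-0 (full) backup; tar records file metadata in the snapshot file.
tar --listed-incremental=/var/local/root.snar -cf - --one-file-system / \
  | xz | gpg -c > /mnt/backup/root-full.tar.xz.gpg
# A later run against the same snapshot file archives only what changed.
tar --listed-incremental=/var/local/root.snar -cf - --one-file-system / \
  | xz | gpg -c > "/mnt/backup/root-incr-$(date +%F).tar.xz.gpg"
```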

EDIT:

Looking back, this is something like duplicity, the only difference being that this setup would create a single archive instead of a folder with regular files and a bunch of diffs. Also, my idea would store the file's future as diffs instead of its past (which, as I understand it, is what duplicity does).

Since I don't really need true diff-style backup behavior (the "back in time" functionality), would it be possible to overwrite the last diff appended to the original archive file, so that only the most recent diff would be added?

EDIT2:

After a little more research on the thrice-blessed ArchWiki, I found System Tar & Restore, btar, and rdup. The latter two look great, but ideally I'd like to implement this myself. These programs still don't do quite what I'm asking for.

The biggest problem I can foresee (even if the abovementioned issues are solved) is how to do this on a live system. As I understand it, tarring a whole system while it runs is a recipe for data corruption. Instead of using diffs, I notice that rsync can create delta files, and rsync can run safely on a live system. With both diffs and rsync delta files, though, the question remains: if they work by calculating the changes between two data sets, how does one calculate the difference between a system's / and a compressed tar of that same /?

Last edited by ParanoidAndroid (2014-01-23 23:31:31)

#19 2014-01-24 01:00:44

frostschutz
Member
Registered: 2013-11-15
Posts: 1,383

Re: from-scratch minimalist backup implementation?

I like DAR (Disk ARchive) as an alternative to tar for incremental backups. It compresses files individually (and selectively, so files that can't be compressed don't have to pass through the compressor), slices archives so you can distribute them among multiple disks, and offers direct file access with a catalogue function.
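The basic dar workflow looks something like this (the basenames are placeholders; flags per dar(1) as I understand them):

```shell
# Full archive of /, gzip-compressed, sliced into 4 GiB pieces.
dar -c /mnt/backup/full -R / -zgzip -s 4G
# Differential archive containing only what changed since the full one.
dar -c /mnt/backup/diff1 -R / -zgzip -s 4G -A /mnt/backup/full
```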

#20 2014-01-24 01:30:07

ParanoidAndroid
Member
Registered: 2012-10-14
Posts: 114

Re: from-scratch minimalist backup implementation?

DAR looks really useful. Two problems, though:
1. it uses its own format. For portability and compatibility reasons, using standard utils like tar, bzip2, etc. strikes me as a safer option. Also, for practical reasons I'm unable to guarantee access to DAR in the event of a crash or erasure.
2. I'd prefer to create my own implementation, both as an essay in the craft and just because DIY is eminently satisfying.

So I've figured out that I can use tar's -I option with a string to couple it with lrzip, so compression is done in one step. Encryption looks like it can be handled by lrzip too, or by piping through openssl. My question now is how to get the diff-type behavior working. How do I get tar to compare my archive to my root directory and either generate a diff file that can be appended to the original archive, or check / and update the archive to reflect deletions, additions, or alterations of any files/folders in /? Perhaps this could be done with rsync's efficient batch delta feature, but I can't see how it could work without unpacking and decompressing the archive.
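The -I coupling itself is straightforward (lrzip here; any stream compressor with the same calling convention should work, and the paths are placeholders):

```shell
# Create the archive, piping it through lrzip in one step.
tar -I lrzip -cf /mnt/backup/root.tar.lrz --one-file-system /
# --compare (-d) reads the archive back and reports files that differ
# from the live filesystem.
tar -I lrzip -df /mnt/backup/root.tar.lrz
```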

EDIT:

I found out that, apparently, one cannot append data to a compressed tar archive. I could ditch compression completely, or I could compress only the delta information (as mentioned above).

I would assume that encrypted archives cannot be appended to as well without decryption and re-encryption. Am I correct?

Updated idea:

Compress each file under / and pipe them into a tar archive. Encrypt that, and then use the delta concept to see which files under / have changed since the last archive; recompress those files and append them. This has problems of its own. Essentially, I'd like to replicate the functionality of dar with common Unix tools.

Last edited by ParanoidAndroid (2014-01-24 03:10:08)

#21 2014-01-24 03:26:05

esko997
Member
From: Connecticut, USA
Registered: 2014-01-13
Posts: 22

Re: from-scratch minimalist backup implementation?

As others have already suggested, I think ZFS is your best bet. I think it will give you all of the functionality you are looking for (snapshots, intense compression/decompression, drive mirroring/raidz, encryption, and much more). I work with it every day, so I'm probably more than a little biased, but definitely consider looking at it.

I know a friend of mine also just finished a new AUR package for it called zfs-dkms.

Last edited by esko997 (2014-01-24 03:26:48)

#22 2014-01-24 03:42:13

ParanoidAndroid
Member
Registered: 2012-10-14
Posts: 114

Re: from-scratch minimalist backup implementation?

Again, ZFS looks great... but because of the limitations I have and my desire to depend only on core Unix utilities like rsync, diff, tar, bzip2, etc., I can't use it.

Even if I were to forgo compression, I'd still need encryption. How can I achieve a single encrypted tarball as the end result, with the delta behavior of rsync?

If this is impossible, btar looks like my next best option.

EDIT:

btar isn't it, either. Duplicity looks like exactly what I want, except I'm fairly certain it creates a whole bunch of tarballs and diffs on a drive, instead of just one. That's my only major barrier to using it. Would it be possible to duplicate duplicity's functionality with just tar, rsync, diff, GPG and bzip2 held together with BASH?

----

Well, I'm resigned to duplicity. While I'm fairly certain, after reading about the methodology behind Duplicity, that I could replicate its function using a script and the basic utilities, there'd be little point. My only complaint with Duplicity is that it can't store all of its backup data in a single tar archive.

I suppose I could write a script that would create a compressed, encrypted backup of my system plus some sort of record of its operations, store it in a tarball, use tar to search for those records, and then use the retrieved records to do an rsync delta: compress the delta, encrypt it, and append it to the main archive. Except I don't know how to implement the full-backup part. Based on experimentation, it seems that neither bzip2 nor tar can handle archiving and compressing very large data sets: in my tests, they both crash after exactly 4.1 GB with an I/O error. I assume this is because I'm writing the compressed tar directly to a file on a backup drive to spare my SSD excessive writes, but I'm unsure.

I also don't know how one would perform a delta with rsync on a tarball vs. a regular FS; I can't puzzle out how duplicity does it. If that could be fixed, I might still give this script a try. Having a single tar containing both the full backup and incremental deltas would save me a lot of trouble if it comes to a manual restore.

Will Duplicity write heavily to my SSD while backing up? That's one of my greatest concerns.

Last edited by ParanoidAndroid (2014-01-24 05:18:43)

#23 2014-01-24 08:18:01

frostschutz
Member
Registered: 2013-11-15
Posts: 1,383

Re: from-scratch minimalist backup implementation?

Is the backup drive FAT32? A 4GB limit sounds very much like a filesystem limitation.

#24 2014-01-24 14:47:32

ParanoidAndroid
Member
Registered: 2012-10-14
Posts: 114

Re: from-scratch minimalist backup implementation?

By sheerest coincidence, it WAS a filesystem limitation: the disk was FAT-formatted. That's one quandary solved.

I also ran a full backup using duplicity, and I hate it. So far as I can tell, there is no way to recover my information easily or quickly with standard tools. Would my scheme as mentioned in the last post work? And if so, how would I go about solving the issues posed?

EDIT:

I've taken a closer look at rdiffdir, which uses the kind of delta thing I've been looking for. Here's the revised plan:

1. create a signature file of my entire system; compress and encrypt.
2. store that in a tar archive.
3. for each subsequent backup, use tar to examine the signature file without extracting it; then use rdiffdir to generate a delta file against the original signature. Compress, encrypt, and append to the tar archive which stores my original signature file.
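A sketch of that plan using rdiffdir's subcommands (signature, delta, patch, per its man page; the paths are placeholders):

```shell
# Step 1: signature of the whole tree, then compress and encrypt it.
rdiffdir signature / /tmp/root.sig
xz < /tmp/root.sig | gpg -c > /mnt/backup/root.sig.xz.gpg
# Step 3, later: delta of the current / against the stored signature,
# compressed and encrypted for appending to the archive.
rdiffdir delta /tmp/root.sig / /tmp/root.delta
xz < /tmp/root.delta | gpg -c > "/mnt/backup/root-$(date +%F).delta.xz.gpg"
```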

Last edited by ParanoidAndroid (2014-01-24 16:40:46)

#25 2014-01-24 18:32:09

ANOKNUSA
Member
Registered: 2010-10-22
Posts: 2,141

Re: from-scratch minimalist backup implementation?

I use rsync to make vital-work and full-system backups, which I can browse, to my home server's encrypted drive, and tarsnap for remote backups. Unless you foresee needing to grab a backup file and edit it on a Windows computer, tarsnap is IMHO the way to go. Look into it; you might find you can back up your entire system for six months for the cost of a pint of beer.

For things I need to carry with me on a flash drive, though, I use this. As long as you have access to a Linux, BSD, or Mac system, it should work fine (as far as I can tell, anyway).

EDIT: Anyone using the linked-to script should make sure to read the first comment at the bottom of the article, as the script was written using BSD's version of "date."

Last edited by ANOKNUSA (2014-01-24 18:34:03)
