Random folder combining for simple incremental backup - good or bad?
At the moment my backup "script" looks like this:
sudo mount /media/BACKUP
sudo rsync -avb --delete --backup --force --ignore-errors \
    /var /home /etc /boot /root /media/BACKUP/recent \
    --backup-dir=/media/BACKUP/$(date +%F) \
    --log-file=/media/BACKUP/logs/$(date +%F_%T).log \
    --exclude-from '/home/user/scripts/backup.exclude'
sudo umount /media/BACKUP
Now I'm thinking of combining a random folder with the next increment - with something like a 2-in-3 chance - every time a new backup directory gets created. Over time this should make it less likely that older folders contain many revisions of the same files, so older backups should take up less space for files that change often. The total number of backups and increments would still grow, but only at about one third of the rate, so there should be a theoretical point where the total size stays almost constant with almost no manual trimming needed: with time there will be more big folders, and a bigger chance that many file revisions get collapsed at once... or something? Not exactly, but almost...?
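Not part of the backup script, but a throwaway simulation can sanity-check the growth-rate claim: if every backup adds one increment, and with a 2-in-3 chance (as in random(1..3) > 1) a merge collapses two increments into one, the increment count should grow at roughly one third of the backup rate:

```shell
#!/bin/bash
# Toy simulation only: n counts backup increments. Each backup adds one;
# with a 2-in-3 chance a merge removes one. Net growth ~1/3 per backup.
n=0
for step in $(seq 1 3000); do
    n=$(( n + 1 ))                      # new backup increment
    if (( n >= 2 && RANDOM % 3 < 2 )); then
        n=$(( n - 1 ))                  # merge collapses two into one
    fi
done
echo "$n increments after 3000 backups" # roughly 1000
```

So the number of increments keeps growing without bound; only the growth rate goes down.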
Something like...
if [number of directories after backup] > [number of directories before backup]
    if [random(1..3) > 1]    # i.e. a 2-in-3 chance
        mv [random directory] [next younger directory]
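In case it helps, a rough bash sketch of that pseudocode, assuming the increments are the date-named directories that --backup-dir creates under /media/BACKUP. merge_increment is a made-up helper name, and the conflict rule (keep the younger increment's copy) is just one possible choice, not something from the script above:

```shell
#!/bin/bash
# Rough sketch, untested against a real backup set.

# Merge older increment $1 into younger increment $2.
merge_increment() {
    # -n: don't clobber files already present in the younger increment.
    # (The exit status of cp -n varies between versions when files are
    # skipped, so a real script should verify the copy before deleting.)
    cp -Rn "$1"/. "$2"/ 2>/dev/null
    rm -rf "$1"
}

BACKUP_ROOT="${BACKUP_ROOT:-/media/BACKUP}"
dirs=( "$BACKUP_ROOT"/????-??-?? )          # date names sort oldest first

# With a 2-in-3 chance, merge a random increment into the next younger one.
if (( ${#dirs[@]} >= 2 )) && (( RANDOM % 3 < 2 )); then
    idx=$(( RANDOM % (${#dirs[@]} - 1) ))   # anything except the newest
    merge_increment "${dirs[idx]}" "${dirs[idx + 1]}"
fi
```

Note that a plain `mv [random directory] [next younger directory]` would nest one directory inside the other rather than merge their contents, which is why the sketch copies the contents instead.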
I'm still really slow at writing small bash scripts, so before I waste a lot of time:
Is this a bad idea? Did I miss some important aspect?
Does someone use something like this?
Thanks!
Last edited by whoops (2010-12-21 09:56:59)
What.
Use GIT.
Use rsnapshot and a filesystem that supports hardlinking.
Have you tried rdiff-backup?