#1 2006-06-02 20:36:39

Komodo
Member
From: Oxford, UK
Registered: 2005-11-03
Posts: 674

Hard link redundancy system

This is my random thought of the day.

What if, after a fresh install, you created a /mnt/backup dir and hard linked _every_ file on the system in there, mirroring the same hierarchy? Then you take a snapshot of the system by, I don't know, backing up the 'locate' db, and set a cron job that every hour takes a new snapshot, compares this with the old snapshot, adds/removes hard links to the /mnt/backup sub-hierarchy accordingly, and then stores the new snapshot in the old one's place for comparison the next time the job runs. Or better still, have a daemon that continually monitors for filesystem changes and modifies the backup as needed.
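
Roughly the sort of thing I imagine the hourly job doing (just a sketch, only the 'add new links' half, with made-up paths; /mnt/backup would have to sit on the same filesystem as the files, since hard links can't cross filesystems):

#!/bin/sh
# hypothetical hourly job: add hard links for any new files under /home
# into the mirror at /mnt/backup/home
SRC=/home
DST=/mnt/backup/home

find "$SRC" -type f | while read -r file; do
    target="$DST${file#$SRC}"
    if [ ! -e "$target" ]; then
        mkdir -p "$(dirname "$target")"
        ln "$file" "$target"    # another name for the same inode, no data copied
    fi
done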

It's incredibly clunky, I know, but can anyone give me any reason why this system would be ineffective?

A couple of things I'd be interested to know:

a.    Is traversal of the directory tree affected only by the distance travelled through it, or also by its total size?
b.    Does it seem likely that there's _any_ way this could be made to work on ext3, for example by backing up the current inode table along with each snapshot? Am I remembering correctly that ext3 zeroes out the block pointers in the inodes of deleted files, while the data blocks themselves are left untouched?


.oO Komodo Dave Oo.

Offline

#2 2006-06-03 20:15:07

bogomipz
Member
From: Oslo, Norway
Registered: 2003-11-23
Posts: 169

Re: Hard link redundancy system

What exactly is the idea here? Is it to have an extra set of links to the file contents, so that if you accidentally delete a file you can restore it? If that's the case, wouldn't the system shoot itself in the foot by immediately deleting the backup link when the original is unlinked? (Well, with filesystem monitoring it would be immediate; with cron there would be a delay.)


All of your mips are belong to us!!

Offline

#3 2006-06-03 20:47:35

codemac
Member
From: Cliche Tech Place
Registered: 2005-05-13
Posts: 794
Website

Re: Hard link redundancy system

Yea, I'm not quite sure if I see the purpose in this backup system.  It seems like if you were to run this too often, it would undo everything it had saved.

Offline

#4 2006-06-04 06:08:21

retsaw
Member
From: London, UK
Registered: 2005-03-22
Posts: 132

Re: Hard link redundancy system

I'm not sure exactly what result you intend to get from this, but if the "backup" is purely a set of hard links to the original files, be aware that when you modify a file you also change the "backup". In effect, this method only protects against accidental deletion.

There is a backup method that uses hard links and rsync to create multiple snapshots of a filesystem while keeping the actual space used to a minimum. Because of the hard links, each snapshot looks like a full backup, but you don't waste space on duplicated files. Perhaps this is more like what you intended?
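
The usual recipe looks something like this (directory names are just examples; --link-dest tells rsync to hard-link unchanged files against the previous snapshot instead of copying them):

#!/bin/sh
# rotating snapshots with rsync and hard links (rough sketch)
SRC=/home/
SNAPDIR=/mnt/backup
DATE=$(date +%Y-%m-%d-%H%M)

# on the very first run 'latest' won't exist yet; rsync just warns and copies everything
rsync -a --delete --link-dest="$SNAPDIR/latest" "$SRC" "$SNAPDIR/$DATE"

# point 'latest' at the snapshot we just made
rm -f "$SNAPDIR/latest"
ln -s "$DATE" "$SNAPDIR/latest"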

Offline

#5 2006-06-04 06:28:48

Komodo
Member
From: Oxford, UK
Registered: 2005-11-03
Posts: 674

Re: Hard link redundancy system

codemac wrote:

Yea, I'm not quite sure if I see the purpose in this backup system.  It seems like if you were to run this too often, it would undo everything it had saved.

How come?

The idea behind this is that if you accidentally delete file X, then because you have a hard link to X somewhere in the file hierarchy under /mnt/backup, you can 'get it back', so to speak, by copying that hard link.
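
For example (made-up file name; the first line is what the backup job would already have done):

~$ ln /home/dave/thesis.tex /mnt/backup/home/dave/thesis.tex   (done earlier by the backup job)
~$ rm /home/dave/thesis.tex                                    (oops)
~$ cp /mnt/backup/home/dave/thesis.tex /home/dave/thesis.tex   (nothing was ever freed; the backup link still points to the same inode)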


.oO Komodo Dave Oo.

Offline

#6 2006-06-04 12:16:33

phildg
Member
Registered: 2006-03-10
Posts: 146

Re: Hard link redundancy system

It's only available until the next backup is made, though; if you delete a file and only realise you need it three backup periods later, the system isn't much use.

Offline

#7 2006-06-04 14:09:26

bogomipz
Member
From: Oslo, Norway
Registered: 2003-11-23
Posts: 169

Re: Hard link redundancy system

Komodo wrote:

set a cron job that every hour takes a new snapshot, compares this with the old snapshot, adds/removes hard links to the /mnt/backup sub-hierarchy accordingly

If you skip the remove part, this system would do what you want. Just remember that overwriting the file would still affect the backup. To actually free up the space, you would have to manually delete the file in /mnt/backup, unless you manage to add some sophisticated feature that removes the backup link, say, a week after the original link was removed. You could also have a simple purge script for deleting both links, but that would probably defeat the purpose as you would find yourself using purge instead of rm most of the time.
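
For what it's worth, the purge script really would be trivial, something like this (assuming the /mnt/backup layout from your first post, and absolute paths as arguments):

#!/bin/sh
# hypothetical 'purge': remove a file and its backup link together
# usage: purge /absolute/path/to/file ...
for f in "$@"; do
    rm -f "$f" "/mnt/backup$f"
done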


All of your mips are belong to us!!

Offline

#8 2006-06-05 11:50:28

Komodo
Member
From: Oxford, UK
Registered: 2005-11-03
Posts: 674

Re: Hard link redundancy system

bogomipz wrote:
Komodo wrote:

set a cron job that every hour takes a new snapshot, compares this with the old snapshot, adds/removes hard links to the /mnt/backup sub-hierarchy accordingly

If you skip the remove part, this system would do what you want.

Oops, yes, I wasn't thinking very clearly when I put 'removes' in there :?


.oO Komodo Dave Oo.

Offline

#9 2006-06-05 17:50:57

codemac
Member
From: Cliche Tech Place
Registered: 2005-05-13
Posts: 794
Website

Re: Hard link redundancy system

There we go, Komodo.

Now all you would have to do is set up a "clean" command of some sort to tidy up the backup once you know you don't want to recover any more files and would like to have it smaller/cleaner.
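
One way to read "clean": any file under /mnt/backup whose link count has dropped to 1 is a backup whose original is already gone, so deleting those is what actually frees the space. Just a sketch, assuming the hard-link layout discussed above:

# delete backup entries that are now the last remaining link to their data
find /mnt/backup -type f -links 1 -exec rm -f {} +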

Oh, and you would need some way to resolve file conflicts.  Example:

 % touch bobby.txt
 % backup
 % rm bobby.txt
 % touch bobby.txt
 % backup

Now what if I want the old bobby.txt back? hmm.

Offline

#10 2006-06-05 19:12:47

Komodo
Member
From: Oxford, UK
Registered: 2005-11-03
Posts: 674

Re: Hard link redundancy system

codemac wrote:

Oh, and you would need some way to resolve file conflicts.

Very true codemac. I suppose the simplest way to do this would be to compare file sizes.


.oO Komodo Dave Oo.

Offline

#11 2006-06-05 20:05:59

bogomipz
Member
From: Oslo, Norway
Registered: 2003-11-23
Posts: 169

Re: Hard link redundancy system

Maybe you want to add .1 to the end of the file name, and if that hard link already exists, add .2 instead? Each time, before incrementing the number, compare the existing link's content to the actual file; if it's identical, you already have a backup and don't need to do anything.

~$ uname > foo.txt        (hard link foo.txt.1)
~$ rm foo.txt             (nothing happens)
~$ pacman -Q > foo.txt    (hard link foo.txt.2)
~$ rm foo.txt             (nothing happens)
~$ pacman -Q > foo.txt    (nothing happens)
~$ rm foo.txt             (nothing happens)
~$ ps ax > foo.txt        (hard link foo.txt.3)
~$ rm foo.txt             (nothing happens)
~$ touch foo.txt          (hard link foo.txt.4)
~$ rm foo.txt             (nothing happens)
~$ uname > foo.txt        (nothing happens)
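
A rough sketch of how the backup job could do that for a single file (the helper name and paths are made up):

#!/bin/sh
# hypothetical helper: back up one file (absolute path) under /mnt/backup,
# numbering name conflicts .1, .2, ... and skipping content already backed up
backup_one() {
    src=$1
    dest="/mnt/backup$src"
    n=1
    while [ -e "$dest.$n" ]; do
        cmp -s "$src" "$dest.$n" && return 0   # identical copy already exists
        n=$((n + 1))
    done
    mkdir -p "$(dirname "$dest")"
    ln "$src" "$dest.$n"                       # new numbered hard link
}

backup_one /home/dave/foo.txt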

All of your mips are belong to us!!

Offline

#12 2006-06-05 20:38:05

Mr Green
Forum Fellow
From: U.K.
Registered: 2003-12-21
Posts: 5,899
Website

Re: Hard link redundancy system

Komodo wrote:

The idea behind this is that if you accidentally delete file X, then because you have a hard link to X somewhere in the file hierarchy under /mnt/backup, you can 'get it back', so to speak, by copying that hard link.

I do not know, rm is a very dangerous command, unlike mv ....

so if rm is aliased to mv, your files (even if 'deleted') would reappear in, say, /backup or /trash etc...
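
something like this in ~/.bashrc, roughly (the trash dir is just an example):

# hypothetical 'safe rm': move files to a trash dir instead of deleting them
trash() {
    mkdir -p ~/.trash
    mv -- "$@" ~/.trash/
}
alias rm=trash    # only affects interactive shells; scripts still get the real rm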

that would kinda solve that one

As for hardlinking everything, I'm not so sure .... it's a difficult one ... I back up /home & /etc and nothing else, so at the very least my personal data is safe


Mr Green

Offline
