#1 2010-12-02 02:48:35

Shapeshifter
Member
Registered: 2008-03-11
Posts: 230

I need a secure and transparent backup method!

Okay, so it's really quite simple:

- A server
- A client

And I need:

- Encrypted storage
- The server should not at any point need to mount the storage or know about the encryption key
- Encrypted transfer
- The backup should be browsable
- Incremental, snapshot backups

And I found a way of doing all this, but it's immensely slow and prone to huge lags, or to things just hanging when a transfer is cancelled:

1.) On the server I made a 17 gigabyte file using dd
2.) I mounted the folder containing that file on my client using sshfs
3.) I mounted that file using losetup (loop)
4.) I used LUKS to make a new encrypted storage container in that file
5.) I mounted the container and made a new filesystem in it
6.) I mounted that filesystem
7.) Now I can use something like rdiff-backup to make backups to that folder
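For concreteness, steps 1 and 3–5 might look roughly like this (the exact paths, loop device, and filesystem type are assumptions on my part; everything but the dd runs as root on the client with the server directory already sshfs-mounted):

```shell
# On the server: create the 17 GB container file
# (sparse here; use bs=1M count=17408 instead for a fully written file)
dd if=/dev/zero of=/mnt/space/encstorage bs=1 count=0 seek=17G

# On the client, with the server directory sshfs-mounted at /mnt/enc:
losetup /dev/loop0 /mnt/enc/encstorage           # attach container as loop device
cryptsetup luksFormat /dev/loop0 keyfile         # initialize LUKS, keyed by keyfile
cryptsetup -d keyfile luksOpen /dev/loop0 enc    # open the new container
mkfs.ext4 /dev/mapper/enc                        # make a filesystem (ext4 as an example)
```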

So I have a folder mounted locally which is actually a LUKS-encrypted filesystem inside a container accessed through sshfs. The server doesn't know the encryption key, and all communication goes over ssh.

But it's so slooooow!

Unbearable.

Any other suggestions on how to do this? I don't want to use NFS because I won't bother with Kerberos and plain authentication is totally insecure. And I don't want to mount the encrypted filesystem on the server.

Duplicity would be a rather straightforward way of having encrypted backups, but they're not really browsable. They're a pain in the ass, really.

Then again, maybe it's just slow because my internet sucks. I've got an additional requirement: QoS would be brilliant. And the possibility to pause backups.

How would you do this?

Thanks.

Last edited by Shapeshifter (2010-12-02 02:54:50)

#2 2010-12-02 02:58:23

thestinger
Package Maintainer (PM)
From: Toronto, Canada
Registered: 2010-01-23
Posts: 478

Re: I need a secure and transparent backup method!

duplicity doesn't let you browse the backups, but you can list the files currently stored in the backup and restore specific ones

there's also dropbox-like stuff like sparkleshare that might work for you
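by way of example, with an sftp backend (URL and file paths are hypothetical), listing and selective restore look like:

```shell
# list all files in the most recent backup set
duplicity list-current-files sftp://user@host/backups

# restore a single file from the backup to a local path
duplicity restore --file-to-restore docs/report.txt \
    sftp://user@host/backups /tmp/report.txt
```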

Last edited by thestinger (2010-12-02 03:00:56)

#3 2010-12-03 01:12:56

rwd
Member
Registered: 2009-02-08
Posts: 664

Re: I need a secure and transparent backup method!

you could use an intermediate step where you rdiff-backup to an encrypted file/container locally, and then rdiff-backup that container to the remote server. That makes encrypted transfer unnecessary because the contents are already encrypted.
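a sketch of what I mean, assuming a local LUKS container file that was created beforehand (paths, loop device, and mount points are placeholders):

```shell
# 1) attach and open the local encrypted container
losetup /dev/loop1 /srv/containers/backup.img
cryptsetup -d keyfile luksOpen /dev/loop1 local_enc
mount /dev/mapper/local_enc /mnt/local_enc

# 2) incremental backup into the encrypted container
rdiff-backup /home/user /mnt/local_enc/backup

# 3) close everything so the container file is in a consistent state
umount /mnt/local_enc
cryptsetup luksClose local_enc
losetup -d /dev/loop1

# 4) send the already-encrypted container to the server over ssh
rdiff-backup /srv/containers user@host::/mnt/space/backups
```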

Last edited by rwd (2010-12-03 01:15:02)

#4 2010-12-03 04:50:52

Shapeshifter
Member
Registered: 2008-03-11
Posts: 230

Re: I need a secure and transparent backup method!

rwd, indeed that's an option, but not in my case, as I plan to use this for very large volumes. Even if rdiff-backup were able to diff such large files, it would cause a lot of overhead and/or temporary storage use.

I went with my initial plan, but refined it a bit and it seems to work quite well:

After having created the volume on the server using dd and having made a filesystem on it, it's a matter of:

sshfs -p12345 user@domain:/mnt/space /mnt/enc # mount remote share containing encrypted container
losetup /dev/loop0 /mnt/enc/encstorage # mount container as loopback device
cryptsetup -d keyfile luksOpen /dev/loop0 enc # open loopback device using LUKS
mount /dev/mapper/enc /mnt/enc_container/ # mount opened LUKS device, now browsable

And to unmount, in reverse order:

umount /mnt/enc_container/
cryptsetup luksClose enc
losetup -d /dev/loop0
umount /mnt/enc
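The two sequences could be wrapped in a small helper script (the script name is made up; same paths and port as above) so that a failed step aborts instead of leaving things half-mounted:

```shell
#!/bin/sh
# backup-mount.sh -- bring the encrypted remote volume up or down
set -e  # abort on the first failed command

case "$1" in
  mount)
    sshfs -p12345 user@domain:/mnt/space /mnt/enc
    losetup /dev/loop0 /mnt/enc/encstorage
    cryptsetup -d keyfile luksOpen /dev/loop0 enc
    mount /dev/mapper/enc /mnt/enc_container/
    ;;
  umount)
    umount /mnt/enc_container/
    cryptsetup luksClose enc
    losetup -d /dev/loop0
    umount /mnt/enc
    ;;
  *)
    echo "usage: $0 mount|umount" >&2
    exit 1
    ;;
esac
```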

It's not as bad as I initially thought; I just needed some QoS so that my network connection doesn't get saturated. I took this from here and adjusted it for my server IP and desired speeds:

DEV=eth0
DEV_MAX=100Mbit
SSH_UPLOAD=750kbit
SERVER_IP=192.168.10.50/32
SERVER_PORT=12345
tc qdisc del dev $DEV root
tc qdisc add dev $DEV root handle 1: htb default 10
tc class add dev $DEV parent 1: classid 1:1 htb rate $DEV_MAX
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 500kbit ceil $DEV_MAX
tc class add dev $DEV parent 1:1 classid 1:11 htb rate 750kbit ceil $SSH_UPLOAD
tc qdisc add dev $DEV parent 1:10 handle 10: sfq
tc qdisc add dev $DEV parent 1:11 handle 11: sfq
tc filter add dev $DEV parent 1: protocol ip prio 5 \
u32 match ip dst $SERVER_IP \
match ip dport $SERVER_PORT 0xffff classid 1:11

And it works quite well. Thunar just sees a folder, but in reality it's an encrypted container residing on a remote server.

I've made a couple of interesting observations when modifying files on the volume:

- When copying something to it, the copy would complete instantly even for large files, although my upstream isn't nearly that quick. The actual transfer only kicked in about 20 seconds later (observed through iftop).
- If I deleted the file right after copying it, it would be gone immediately and no network traffic would occur.
- If I copied something, waited until the transfer had started, and then removed the file from the mounted folder mid-transfer, it would appear to be gone immediately and I could copy a new file with the same name right away, although the upload traffic continued in the background.

So it seems like the whole process uses some sort of transaction queuing, which actually works quite well. Coupled with the QoS, there's no real impact on any sort of workflow, even when doing complicated file operations. I'll test how things work out for larger amounts of data.
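As for the "pause backups" requirement from my first post: since the backup is just a regular process (rdiff-backup in my case), it can be paused and resumed with standard job-control signals; nothing backup-specific needed. A demo using sleep as a stand-in for the backup process:

```shell
# Demo of pausing/resuming a process with job-control signals.
# A long sleep stands in for a running rdiff-backup.
sleep 30 &
pid=$!

kill -STOP "$pid"      # pause: SIGSTOP cannot be caught or ignored
sleep 1                # give the kernel a moment to update the state
state=$(ps -o stat= -p "$pid" | tr -d ' ' | cut -c1)
echo "paused: $state"  # stopped processes show state T

kill -CONT "$pid"      # resume where it left off
kill "$pid"            # clean up the demo process
```

On a real run it would be `pkill -STOP rdiff-backup` to pause and `pkill -CONT rdiff-backup` to resume.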

Btw, I tried the same thing but with nfs tunnelled through ssh instead of sshfs. Same results, really, and there's not much point in using nfs, because file locking is of no use on a single large volume.

Last edited by Shapeshifter (2010-12-03 04:52:58)

#5 2010-12-03 11:47:14

rwd
Member
Registered: 2009-02-08
Posts: 664

Re: I need a secure and transparent backup method!

Interesting solution! I'm bookmarking this.
