Heyo,
tl;dr at bottom
I currently have a home server which has been running well; however, my storage requirements keep going up and have reached a point where a change is needed.
My system has a non-RAID HBA with connectivity for up to 16 drives.
I knew my storage requirements would grow, but I had not anticipated the speed at which they have.
At the moment my system boots from an SSD and runs Samba, SSH, SQL, a VPN and multiple simple nginx sites.
The storage setup at the moment is dm-raid with 7 drives in RAID 5, with a drive soon to be added to convert it to RAID 6.
I have ext4 on LUKS; however, I have come to the point where I cannot expand the filesystem past 16TB, which I thought was no longer a problem with ext4.
Google searches, however, showed me that the filesystem must be created larger than 16TB in the first place and that I cannot resize an existing one past 16TB.
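For reference, whether an ext4 filesystem can ever grow past 16TB depends on its 64bit feature flag, which can be checked with something like the following (the device path is just an example):

    tune2fs -l /dev/mapper/storage | grep -i features
    # if the "Filesystem features" line lacks 64bit, resize2fs will not grow it past 16TB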
I chose ext4 as I expected to convert to Btrfs once it became viable.
My questions now are:
Where should I go from here? I'm currently stuck with a 15TB filesystem that is quickly running out of space.
It's important for me to have a single volume of data with data protection (HD failure resolution + encryption), and data integrity is becoming a concern in the long term.
With the choice to convert over to Btrfs I have another question regarding RAID/encryption: as I understand it, Btrfs has RAID 1/0 support and potentially RAID 5+ in the future.
I currently like having the filesystem on top of LUKS on top of dm-raid; it makes sense to me.
If I change to Btrfs, does this layering choice still make sense?
Can someone recommend a more appropriate choice?
In the not too distant future my storage requirements may easily go up to around 30TB+.
As I see it I can do the following:
1. Keep dm-raid+LUKS, replace ext4 with Btrfs (via an in-place filesystem conversion) and grow the volume - (OK for the meantime, but an array of many disks like this doesn't sound ideal).
2. Change to a different filesystem that supports large multi-device volumes with the features requested.
3. Split up the storage into multiple smaller volumes (least preferred).
Advice would be very much appreciated.
tl;dr HALP! ext4 on a many-disk RAID array won't grow above 16TB and I need a solution for the future.
Offline
It's important for me to have a single volume of data with data protection (HD failure resolution + encryption), and data integrity is becoming a concern in the long term.
If you want a single volume and data integrity, on Linux you can only have Btrfs or ZFS. Neither includes encryption (yet), but you can use additional software for that.
I currently like having the filesystem on top of LUKS on top of dm-raid; it makes sense to me. If I change to Btrfs, does this layering choice still make sense?
Both Btrfs and ZFS combine the file system and the RAID manager, so you only need to "layer" the encryption. For the data integrity magic to work, however, Btrfs/ZFS must be the lowest layer, directly accessing your disks. You would add encryption on top of it, and my first choice for that would be eCryptfs. EncFS or dm-crypt/LUKS seem unfeasible for your scenario. Note that eCryptfs has a limit on path length, but I have never come across it myself.
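A rough sketch of what that layering could look like, assuming ZFS on the raw disks and eCryptfs above it (pool, dataset and device names are only examples):

    zpool create tank raidz /dev/sda /dev/sdb /dev/sdc   # ZFS directly on the disks
    zfs create tank/data                                 # dataset mounted at /tank/data
    mount -t ecryptfs /tank/data /tank/data              # eCryptfs layered on top; prompts for passphrase and cipher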
If you ask me, go for:
2. Change to a different filesystem that supports large multi-device volumes with the features requested.
Personally, I consider ZFS the better option: more stable, more featureful, better designed and easier to work with. You can set up a RAID-10-style pool (i.e. mirrored disks concatenated) and migrate your data step by step. It is easily extendable.
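A minimal sketch of such a pool (pool and device names are just examples):

    # two mirrored pairs striped together, roughly RAID 10
    zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
    # extend it later by adding another mirrored pair
    zpool add tank mirror /dev/sde /dev/sdf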
Offline
Somewhat biased info, but:
http://rudd-o.com/linux-and-free-softwa … than-btrfs
CPU-optimized Linux-ck packages @ Repo-ck • AUR packages • Zsh and other configs
Offline
You could create a second RAID array, then use LVM to create a single volume out of those two arrays, then layer your LUKS, filesystem, etc. on top of that.
Of course, that would require a fair bit of data moving to 'insert' LVM into your existing array.
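A rough sketch of that layering, assuming the existing array is /dev/md0 and has been emptied first (all device and volume names are examples):

    mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sde /dev/sdf /dev/sdg /dev/sdh
    pvcreate /dev/md0 /dev/md1                  # both arrays become LVM physical volumes
    vgcreate storage /dev/md0 /dev/md1
    lvcreate -l 100%FREE -n data storage        # one logical volume spanning both arrays
    cryptsetup luksFormat /dev/storage/data
    cryptsetup luksOpen /dev/storage/data data_crypt
    mkfs.ext4 -O 64bit /dev/mapper/data_crypt   # 64bit so it can grow past 16TB later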
Are you familiar with our Forum Rules, and How To Ask Questions The Smart Way?
BlueHackers // fscanary // resticctl
Offline
I have a server running with 4 LUKS-encrypted disks and ZFS as RAID-Z1 (with the kernel driver, not FUSE) on top, using the LUKS devices. This setup works quite well and performance is quite good (at least the ZFS volume is faster than my gigabit Ethernet, so it's not the bottleneck).
The setup can be expanded easily by adding disks, exchanging the disks for bigger ones, etc.
The setup has been stable so far (it has been running for several months now without a problem) and maintenance of the ZFS pool is nice. I also quite like the advanced features of ZFS like subvolumes, data compression, etc.
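The basic shape of the setup is roughly this (device names, mapping names and the pool name are just examples):

    # unlock each disk, then build the raidz from the dm-crypt mappings
    cryptsetup luksOpen /dev/sda crypt0
    cryptsetup luksOpen /dev/sdb crypt1
    cryptsetup luksOpen /dev/sdc crypt2
    cryptsetup luksOpen /dev/sdd crypt3
    zpool create tank raidz1 /dev/mapper/crypt0 /dev/mapper/crypt1 /dev/mapper/crypt2 /dev/mapper/crypt3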
My System: Dell XPS 13 | i7-7560U | 16GB RAM | 512GB SSD | FHD Screen | Arch Linux
My Workstation/Server: Supermicro X11SSZ-F | Xeon E3-1245 v6 | 64GB RAM | 1TB SSD Raid 1 + 6TB HDD ZFS Raid Z1 | Proxmox VE
My Stuff at Github: github
My Homepage: Seiichiros HP
Offline
Thanks everyone, ZFS does sound most appropriate.
However, I have some concerns.
It's my understanding that I can't just add a single disk to increase the size of a raidz volume. I would in fact have to replace each disk one by one with larger disks?
I really would like to have the ability to expand the volume by adding individual disks.
mdadm works really well for me in this regard, and allows me to react to my growing storage needs.
Am I wrong in my assumption that raidz doesn't allow me to do this?
Having to drop a large amount of cash on some 4TB drives seems like a waste for my uses when my existing drives are much more cost-effective.
I guess it's all about compromises.
seiichiro0185, about your LUKS+ZFS setup: am I understanding correctly that you have 4 individual LUKS volumes that are then pooled together with ZFS? Isn't this an inefficient method? It sounds a bit hacky, but if there is no alternative then I guess it's the best of a limited situation.
This may seem stupid, but what about mdadm+LUKS+ZFS or even mdadm+LUKS+Btrfs?
Ignoring the fact that ZFS is meant to be used as a volume manager as well.
I obviously would lose some features of these filesystems, but it would allow my storage to expand and give me some cool new features that ext4 just doesn't have. And I guess if Btrfs matures, adds some form of RAID (other than 0, 1 or 10) and becomes stable,
I could always blow that array away and switch over (+ eCryptfs).
mdadm in the long run isn't a solution, but expansion of the array is pretty important, especially since I already run 3TB drives.
Last edited by smelly (2013-08-03 03:12:15)
Offline
You're right that you can't add a single disk to a raidz pool in ZFS like you can with mdadm. You can, however, add a new raidz vdev to the pool (e.g. if you have a raidz with 3 drives, you can add another raidz with 3 drives to the pool; an example is shown here: http://www.zfsbuild.com/2010/06/03/howt … ted-zpool/). For me this wasn't a real limitation, since I use a NAS-like case for the server that can't take more than 4 drives anyway.
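Something along these lines (pool and device names are examples):

    # the pool currently has one raidz vdev; stripe a second raidz vdev of new disks onto it
    zpool add tank raidz /dev/sde /dev/sdf /dev/sdg
    zpool status tank    # now lists two raidz1 vdevs in the same pool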
For my setup you're right, I have 4 individual LUKS volumes which are then pooled together with ZFS. I actually thought about the efficiency too, but my tests showed that this setup is actually faster than my previous mdadm RAID 5 with LUKS on top. I get about 120MB/s writing and 180MB/s reading from the pool on an Intel Pentium G2120. It's also faster than other approaches like ZFS with filesystem-level encryption such as eCryptfs/EncFS on top (at least in my limited tests).
Last edited by seiichiro0185 (2013-08-03 05:53:39)
My System: Dell XPS 13 | i7-7560U | 16GB RAM | 512GB SSD | FHD Screen | Arch Linux
My Workstation/Server: Supermicro X11SSZ-F | Xeon E3-1245 v6 | 64GB RAM | 1TB SSD Raid 1 + 6TB HDD ZFS Raid Z1 | Proxmox VE
My Stuff at Github: github
My Homepage: Seiichiros HP
Offline
I think that neither of the setups you two describe is really hacky. Layering several filesystems and whatnot on top of each other works fine and people do it all the time. I am, however, a bit critical of both approaches. (Particularly as a fan of ZFS, so consider me biased.)
1. If seiichiro0185 uses a limited number of 4 disks as raid-z inside a NAS, then this is a perfectly fine solution. But with that setup he will never run into the performance hits that large raid-z arrays with the wrong number of disks (for more info see here: http://www.solarisinternals.com/wiki/in … ces_Guide) or several raid-z arrays grouped together (even more info here: http://constantin.glez.de/blog/2010/06/ … rformance) can cause.
2. If smelly has 7 disks he wants to keep (and is planning to add more over time, if I understood correctly), I would recommend what I wrote above: creating a striped ZFS pool from sets of mirrors, similar to RAID 10, by using zpool add and zpool attach. This will give you less space than RAID-Z (granted) and maybe slower r/w speeds (I don't know enough about that), but your CPU will have to do less work and you will be more flexible in adding and removing disks. Of course, you need an even number of disks for that, so 6 for now: create mirrors of same-size (or similar-size) disks and glue them together into one pool. You can add more mirrors to extend the pool over time, or, if one disk in a mirror fails, you can replace it with a bigger one; as soon as you replace the other disk, ZFS will automatically grow the pool. A rough sketch of the commands is below.
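(Pool and device names are examples.)

    # 6 disks -> 3 mirrored pairs striped into one pool
    zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
    # extend the pool later with another mirrored pair
    zpool add tank mirror /dev/sdg /dev/sdh
    # zpool attach adds a disk to an existing mirror (here making sdc/sdd a three-way mirror)
    zpool attach tank /dev/sdc /dev/sdi
    # grow an existing mirror by swapping in bigger disks one at a time
    zpool set autoexpand=on tank
    zpool replace tank /dev/sda /dev/sdj
    zpool replace tank /dev/sdb /dev/sdk   # the pool grows once both halves are replaced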
And keep in mind that setting up ZFS on top of anything else but physical disks (and most definitely placing ZFS anywhere above an mdadm layer) will practically disable ZFS's error correction magic!
I could go on and on about this but I'll stop here. Hope you don't mind the pamphlet.
Offline
I cannot expand the filesystem past 16TB
And you say this is a "home server"? You need to actually watch some of that porn (or Dexter episodes). You do know humans have a limited lifespan, right?
quick calc: 1GB = 1 hour, so 1 TB = 3 hours/day for a year, so 16 GB = 16 years!
Last edited by vacant (2013-08-03 14:26:00)
Offline
And keep in mind that setting up ZFS on top of anything else but physical disks (and most definitely placing ZFS anywhere above an mdadm layer) will practically disable ZFS's error correction magic!
Could you elaborate on that (or provide a link with an explanation)? I don't quite get why having a layer of encryption between the individual physical disks and ZFS should disable the error correction (I do understand why it could be problematic on a multi-disk spanning device like mdadm/LVM). To my understanding, since everything is checksummed in ZFS, it shouldn't make a difference whether the physical drive goes bad or the encryption layer (or both, for that matter). In both cases ZFS should be able to detect and correct the problem (granted some kind of redundancy is in place). And since each LUKS volume is essentially a single physical drive, it should behave exactly as if the encryption wasn't there. Maybe I overlooked something here, but that's why I'm asking.
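For what it's worth, this is also easy to check in practice (the pool name is an example):

    zpool scrub tank       # reads everything back and verifies the checksums
    zpool status -v tank   # shows per-device read/write/checksum error counters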
My System: Dell XPS 13 | i7-7560U | 16GB RAM | 512GB SSD | FHD Screen | Arch Linux
My Workstation/Server: Supermicro X11SSZ-F | Xeon E3-1245 v6 | 64GB RAM | 1TB SSD Raid 1 + 6TB HDD ZFS Raid Z1 | Proxmox VE
My Stuff at Github: github
My Homepage: Seiichiros HP
Offline
quick calc: 1GB = 1 hour, so 1 TB = 3 hours/day for a year, so 16 GB = 16 years!
You mean 16 TB = 16 years. Anyway, maybe he has _a lot_ of p0rn
CPU-optimized Linux-ck packages @ Repo-ck • AUR packages • Zsh and other configs
Offline