I currently have 3x 3TB drives in a RaidZ-1 setup. I recently inherited a Seagate Exos 10TB drive. I am wondering what is the best way to configure these 4 drives. I see two options:
A mirror setup of (a Raid0 group of the 3x 3TB drives) and (the 10TB drive). This reduces the 10TB drive to being effectively a 9TB drive.
A RaidZ-1 setup with all four drives. This would reduce the 10TB drive to being effectively another 3TB drive.
I believe each of these setups would give 9TB of effective space, and both can tolerate one drive loss. How do I decide which setup to go with? Which one is better?
Last edited by zzggbb (2024-07-26 05:23:16)
option one is not possible:
a zpool is an implicit stripe (raid0) over its vdevs - and there are only these two layers - you cannot mirror one vdev with another vdev
also a note on capacity: a raidz1 (raid5) keeps one drive's worth of capacity for parity and n-1 drives' worth for data - so your current 3x 3TB raidz1 gives you only 6TB usable space
I don't see a useful setup here other than using the bigger drive as a backup target for the pool
however - you can use regular md-raid or lvm to create a stripe (raid0) of the 3TB drives to get a 9TB logical device and mirror that with a 9TB partition on the 10TB drive - but there's no real benefit other than maybe slightly improved read speeds
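a rough sketch of that md-raid variant - the device names are just placeholders, adjust them to your system:

    # stripe (raid0) the three 3TB drives into one ~9TB device
    mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
    # create a ~9TB partition on the 10TB drive first (here /dev/sdd1),
    # then mirror (raid1) the stripe with that partition
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/md0 /dev/sdd1
    # persist the layout so it assembles on boot
    mdadm --detail --scan >> /etc/mdadm.conf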
as with all "raid" setups: it's best to build arrays from same-size drives - mixing sizes always wastes capacity
Last edited by cryptearth (2024-07-27 03:54:16)
ZFS uses slightly different terminology than traditional RAID.
With zfs, hard drives are grouped into vdevs. Groups of vdevs make up the pool. If *ANY* vdev in a pool fails, you lose the entire pool and all your data.
Keep your vdevs healthy and your pool will be healthy.
1. You could set this up as 4 separate vdevs (one disk per vdev). But this setup would have no redundancy and a failure of any disk would destroy all your data.
2. You could set this up as 2 vdevs, each with 2 disks. Within each vdev, you could mirror the two disks. The 10 TB drive would be limited to 3TB capacity. You could lose 1 disk from each vdev and still have a viable pool, but losing both disks in either vdev would cause you to lose all your data. Total storage capacity would be around 5.79TB.
3. You could set this up as 1 vdev with 4 disks in a Raid-Z1 structure. The 10 TB drive would be limited to 3TB capacity. You could lose 1 disk and have a viable pool, but losing 2 disks would destroy all your data. Total storage capacity would be around 8.42 TB.
4. You could set this up as 1 vdev with 4 disks in a Raid-Z2 structure. The 10 TB drive would be limited to 3TB capacity. You could lose 2 disks and have a viable pool, but losing 3 disks would destroy all your data. Total storage capacity would be around 5.62 TB.
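For reference, a minimal sketch of how options 2-4 would be created, assuming a pool named "tank" and placeholder /dev/sdX device names (in practice, the stable /dev/disk/by-id paths are preferred):

    # Option 2: two mirrored vdevs (2 disks each)
    zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
    # Option 3: one 4-disk Raid-Z1 vdev
    zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    # Option 4: one 4-disk Raid-Z2 vdev
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd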
Some resources:
ZFS Storage Design and other FreeNAS Information. (The link to the PowerPoint presentation is labeled "FreeNAS Guide 9.1.pdf" and has many useful examples.)
Keep in mind that rebuilding a RAID array places a lot of stress on the remaining drives, which increases the risk of a second drive failure before you can rebuild the array. I believe this is why RAID-Z1 (RAID5) is not really recommended anymore (they both tolerate only a single drive failure).
[Option #4]
Once a vdev is created, you cannot add more disks to increase its storage capacity. You can, however, swap existing disks for larger ones. The capacity will remain at the old value until all disks have been upgraded. So, using #4 as an example, you could gradually replace the 3 TB drives (as money allows) with 10 TB drives. When they have all been replaced, the total storage capacity would grow from 5.62 TB to 18.7 TB. This is what I recommend.
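A rough sketch of that gradual upgrade, again assuming a pool named "tank" and placeholder device names (replace one disk at a time and let each resilver finish before the next swap):

    # let the pool grow automatically once every disk in the vdev is larger
    zpool set autoexpand=on tank
    # swap one 3 TB disk (sda) for a 10 TB disk (sde), then wait for the resilver
    zpool replace tank /dev/sda /dev/sde
    zpool status tank   # watch the resilver progress before the next swap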
[Option #2]
I have also used #2. The advantage here is that, with 2 vdevs, write speed to the pool is much faster... at the expense of storage capacity. Also, rebuilding an array which uses a straight mirror is much faster and stresses the remaining drives much less than rebuilding from parity (or so I've read).
Cheers,
"Before Enlightenment chop wood, carry water. After Enlightenment chop wood, carry water." -- Zen proverb
although dakota gave a lot of useful info I'd like to add an important note about a very old topic "raid10 vs raid6" - or in zfs terms: "2 mirror vdevs vs 1 raidz2" - or more generally: "striped mirror vs dual-parity":
as noted a zfs pool is always an implicit stripe across its vdevs - so in old raid-level notation it's always a raidN0 configuration
hence when one vdev (a sub-group of drives) fails the entire pool is gone for good
the main difference isn't how many drives can fail but rather which drives can fail
a striped mirror looks like this:
pool
  mirror-1
    disk1
    disk2
  mirror-2
    disk3
    disk4
in terms of fault tolerance/resiliency this setup is a double-edged sword: it CAN withstand the failure of two disks - as long as only one disk per mirror fails (disk1 and disk3) - but it will fail entirely when both disks of the same mirror fail (disk1 and disk2)
a striped mirror has performance benefits in both read and write speeds but is limited in capacity (50% max) and fault tolerance
a raidz2 looks like this:
pool
  raidz2
    disk1
    disk2
    disk3
    disk4
this pool also has 50% usable space - but it can withstand a double failure no matter which two drives fail
as a penalty it comes with decreased write performance and no increase in reads (in fact even reads can be slightly worse)
its main purpose is when you require high availability for a workload with more reads than writes and preferably big bulk transfers - like a streaming server
the major difference comes into play when you scale a pool up beyond 4 drives:
pool
  mirror-1
    disk1
    disk2
  mirror-2
    disk3
    disk4
  mirror-3
    disk5
    disk6
  mirror-4
    disk7
    disk8

pool
  raidz2-1
    disk1
    disk2
    disk3
    disk4
  raidz2-2
    disk5
    disk6
    disk7
    disk8

pool
  raidz2
    disk1
    disk2
    disk3
    disk4
    disk5
    disk6
    disk7
    disk8
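for reference - rough sketches of how these three layouts would be created (pool and disk names are just placeholders):

    # 4x 2-way mirror (striped mirror)
    zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh
    # 2x 4-wide raidz2
    zpool create tank raidz2 sda sdb sdc sdd raidz2 sde sdf sdg sdh
    # 1x 8-wide raidz2
    zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh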
with the mirror it's the same: 50% space, one disk per mirror can fail
with the double raidz2 it's also the same: 50% space, and two drives per vdev can fail
if a second drive in the same mirror or a third drive in the same raidz2 fails the entire pool is gone
the interesting setup is the 8-wide raidz2:
the space efficiency increases to 6/8 = 75% but the fault tolerance stays at two disks - if a third fails the vdev fails and the pool fails
if you need more than double fault tolerance there're several options (rough sketches after the list):
3-wide mirrors: instead of having just two disks per mirror you use three disks per mirror - this way each mirror can withstand a double failure on its own - but usable space drops to just 33%
raidz3: there's no classic raid-level equivalent - raidz3 uses triple parity so a vdev can withstand a triple failure without losing data
draid: a newer, more dynamic approach - I haven't played around with it yet but it provides a finer way to describe the striping, mirroring and parity of a vdev - although I'm not sure if there's a benefit to combining multiple draid vdevs
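rough sketches for the first two (names are placeholders again - no draid example since I haven't used it, see the zpoolconcepts man page for its vdev spec):

    # 3-wide mirrors: each mirror survives a double failure, 33% usable space
    zpool create tank mirror sda sdb sdc mirror sdd sde sdf
    # raidz3: triple parity, any three disks of the vdev may fail
    zpool create tank raidz3 sda sdb sdc sdd sde sdf sdg sdh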
in the end it comes down to your requirements:
I personally would recommend a striped mirror only if it's ok for the pool to go down and be rebuilt from backups
if you want a bit more safety and are ok with the performance penalty go raidz2 (the internet recommends 8-wide to 12-wide raidz2 vdevs)
and of course - as always - raid is no ... ah, screw that - yes, of course raid isn't a backup - in fact it's often an excuse for no backup at all - but it's a first step towards not losing data to random drive failure - and even when following the 3-2-1 rule - no matter what my primary storage setup is - at least the backup targets should always be arrays in some form - even if it's just a simple mirror by writing two tapes instead of just one