Hey,
I want to create a proper RAID5 array with five 750 GB SATA2 drives and no spare drive. I will use this array to complement my backup systems.
The devices in question are:
/dev/sda, /dev/sdb, /dev/sdd, /dev/sde, /dev/sdf
They have served in an LVM2 setup before, but for data safety reasons I now want a RAID5.
So far, I have used fdisk to create a single, disk-filling partition of type 'fd' (Linux raid autodetect) on each of the drives listed above; it is the only partition on each drive.
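For reference, the same partitioning could be scripted with sfdisk instead of stepping through fdisk interactively; this is just a sketch, so double-check the device names before running anything like it:

# roughly equivalent to the interactive fdisk steps described above
for disk in /dev/sda /dev/sdb /dev/sdd /dev/sde /dev/sdf; do
    echo ',,fd' | sfdisk "$disk"   # one whole-disk partition, MBR type fd (Linux raid autodetect)
done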
I proceeded to create a raid5 array using mdadm:
mdadm --create /dev/md0 --level=5 --raid-devices=5 --spare-devices=0 /dev/sda1 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1
At first, it looked like it all went well because:
mdadm --detail /dev/md0
gave this:
/dev/md0:
Version : 0.90
Creation Time : Thu Jun 11 14:07:09 2009
Raid Level : raid5
Array Size : 2930287616 (2794.54 GiB 3000.61 GB)
Used Dev Size : 732571904 (698.64 GiB 750.15 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Jun 11 14:54:33 2009
State : clean, degraded, recovering
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 30% complete
UUID : 0a1c4ad5:826b1955:fc774189:849a68dd
Events : 0.28
Number   Major   Minor   RaidDevice   State
   0       8       1         0        active sync        /dev/sda1
   1       8      17         1        active sync        /dev/sdb1
   2       8      49         2        active sync        /dev/sdd1
   3       8      65         3        active sync        /dev/sde1
   5       8      81         4        spare rebuilding   /dev/sdf1
But wait! Spare device? Spare rebuilding? I didn't want any spares or I'd be doing raid6 anyway! Why is it degraded? I can understand it is rebuilding because it is new, but why the spare? Is it damaged?
SMART gives a green light on all the drives, which isn't saying much, but severe damage should easily be detected by SMART.
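(For the curious, checking that boils down to something like the following, with smartctl from smartmontools:)

for disk in /dev/sd[abdef]; do
    smartctl -H "$disk"   # prints the drive's overall SMART health self-assessment
done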
Somebody please give me some insight on this.
Offline
Looks like I was too stupid to read the man pages after all!
From the mdadm man page:
"...When creating a RAID5 array, mdadm will automatically create a
degraded array with an extra spare drive. This is because building the
spare into a degraded array is in general faster than resyncing the
parity on a non-degraded, but not clean, array. This feature can be
over-ridden with the --force option."
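So, if I read that right, the spare-less creation would have looked something like this (untested on my end, and note that --force also overrides some other sanity checks, so read the man page before copying it):

mdadm --create /dev/md0 --level=5 --force --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1

With --force, mdadm starts with all 5 devices active and does a full parity resync instead of the degraded-plus-spare build.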
Offline
Sorry for digging up an oldie, but I hope that isn't a problem since I'm having the same "issue" as described here. What did you do, Svenstaro: did you use the --force option, or was the spare marked as active after the rebuild?
Offline
Read!
mdadm creates a degraded array with an extra spare drive and then proceeds to build that spare into the array, as this is generally faster than making a clean array and resyncing the parity across all of it.
So when creating a 5-drive RAID5 array, it creates a degraded array which uses 4 drives plus a spare. Then it starts to "recover" the array, building in the spare. This can take a while, but you can already use the array at this point. Note, however, that during the rebuild the array cannot yet survive a drive failure, and performance will be suboptimal.
When rebuilding is done, you will have a clean RAID5 array, consisting of 5 active drives.
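Quick sanity check on the numbers from the --detail output above: RAID5 gives you (n-1) drives' worth of usable space, so 4 x 698.64 GiB (the Used Dev Size) = 2794.56 GiB, which matches the reported Array Size of 2794.54 GiB apart from rounding.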
There is no need to use the --force parameter.
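If you want to keep an eye on the recovery, something like this is enough, no extra tools needed:

cat /proc/mdstat                 # shows the recovery progress and an estimated finish time
watch -n 60 cat /proc/mdstat     # the same, refreshed every minute
mdadm --detail /dev/md0          # State goes from "clean, degraded, recovering" to "clean" once it is done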
Last edited by Ultraman (2010-03-27 11:27:04)
Offline