Hi, I recently did a complete update, and now I really wish I could go back as I can't get the new kernel to boot properly, but anyway. After rebooting I was greeted with "mdadm /dev/md0 started with 1 drive (of 2)"
This particular volume is just one partition across 2 drives and it operates as `/` with RAID 1. I have GRUB entries to boot from either drive (in case one fails). Both of these work, so I'm not really seeing any true failure. However, I definitely do not want them to get out of sync or anything.
    [earlz@EarlzZeta ~]$ sudo mdadm --detail /dev/md0
    /dev/md0:
            Version : 0.90
      Creation Time : Thu Jun 23 09:17:58 2011
         Raid Level : raid1
         Array Size : 96256 (94.02 MiB 98.57 MB)
      Used Dev Size : 96256 (94.02 MiB 98.57 MB)
       Raid Devices : 2
      Total Devices : 1
    Preferred Minor : 0
        Persistence : Superblock is persistent

        Update Time : Wed Oct  3 17:06:19 2012
              State : clean, degraded
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 0
      Spare Devices : 0

               UUID : 66683e43:f27e50e8:52419904:51489ef3
             Events : 0.70

        Number   Major   Minor   RaidDevice State
           0       0        0        0      removed
           1       8       17        1      active sync   /dev/sdb1
What's the safest way to go about getting this volume back up with both drives?
You need to do one of two things (I've been meaning to post about this since I figured it out myself):
1] Fail and remove the missing member, then add it back. For example:
    # /sbin/mdadm /dev/md0 --fail /dev/sda5 --remove /dev/sda5
    # /sbin/mdadm /dev/md0 --add /dev/sda5
2] If that doesn't work (in my case mdadm complained that the device wasn't part of an array), just add it back directly:
    # mdadm --manage /dev/md0 --add /dev/sda5
Of course, replace /dev/md0 and /dev/sda5 with the correct devices for your setup; in your case that's probably /dev/sda1, since the remaining active member is /dev/sdb1.
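Once the device is added back, md will resync it against the surviving member. A quick way to watch the rebuild (assuming your array is still /dev/md0):

    # cat /proc/mdstat
    # mdadm --detail /dev/md0

When the resync finishes, the State line should go back to plain "clean" and both members should show "active sync".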
You might also use smartmontools to check that the drive isn't actually failing before you re-add it.
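For example, something like this (assuming the missing member lives on /dev/sda; adjust for your drive):

    # smartctl -H /dev/sda    # overall health self-assessment
    # smartctl -a /dev/sda    # full SMART attributes and error log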
I may have to CONSOLE you about your usage of ridiculously easy graphical interfaces...
Look ma, no mouse.