Hi,
I configured RAID1 on my Arch Linux system. Sometime during the past few days (I don't know exactly when) I think the SATA cable of one of the disks (sda) came loose. I got a hint from some error messages in the dmesg output.
I only realized this today. To minimize the damage I shut the system down, made sure that all the connectors were seated properly, and booted it again. Now the system is up and running, but I can see that only one disk is currently being used:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc6[1]
915811008 blocks super 1.2 [2/1] [_U]
bitmap: 6/7 pages [24KB], 65536KB chunk
I want to enable the other disk as well. The other disk has sda6 as the partition that is used in the RAID1 array.
Can somebody tell me how to go about this? I don't want any inconsistency to creep in because I did something wrong.
One of your drives has failed: "[_U]"
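The "[_U]" in /proc/mdstat means the first slot of the mirror is missing and only the second member is still up. If it helps, something like the following (standard mdadm commands, using the device names from your output) shows which member dropped out and how far it has fallen behind:

# per-member state of the array; the removed slot shows up here
mdadm --detail /dev/md0

# compare both members; the one with the lower event count and older
# update time is the stale one
mdadm --examine /dev/sda6 /dev/sdc6 | grep -E 'Events|Update Time|Array State'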
Since you are using a bitmap you could try to re-add sda6 and it _should_ synchronize whatever was modified after sda6 dropped out of the array. There might be some problems with this approach that I'm not aware of, since I don't have much experience with RAID. What I can tell you is that I've tried it before and things seemed to work fine.
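Something along these lines should do it (untested here, so double-check the device names against your own array first):

# put the stale member back into the array; with a write-intent bitmap
# only the blocks that changed while it was out get resynced
mdadm --manage /dev/md0 --re-add /dev/sda6

# then keep an eye on the recovery
cat /proc/mdstat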
R00KIE
I am seeing the following. I boot from this system, and yet it says no md superblock was detected on /dev/md0, while there doesn't seem to be any problem with the two partitions that are part of this RAID1 array.
This is getting confusing. Can somebody kindly comment?
# mdadm --examine /dev/md0
mdadm: No md superblock detected on /dev/md0.
# mdadm --examine /dev/sda6
/dev/sda6:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 00a62e18:a8c29e17:77d4627c:91a94994
Name : ncdl-desk:0 (local to host ncdl-desk)
Creation Time : Thu Apr 2 03:00:39 2015
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1831622064 (873.39 GiB 937.79 GB)
Array Size : 915811008 (873.39 GiB 937.79 GB)
Used Dev Size : 1831622016 (873.39 GiB 937.79 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=48 sectors
State : clean
Device UUID : 88809386:57f1b788:8f3a0e82:72b0ac87
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Apr 28 00:36:18 2015
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 672755a9 - correct
Events : 10520
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
# mdadm --examine /dev/sdc6
/dev/sdc6:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 00a62e18:a8c29e17:77d4627c:91a94994
Name : ncdl-desk:0 (local to host ncdl-desk)
Creation Time : Thu Apr 2 03:00:39 2015
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1831622064 (873.39 GiB 937.79 GB)
Array Size : 915811008 (873.39 GiB 937.79 GB)
Used Dev Size : 1831622016 (873.39 GiB 937.79 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=48 sectors
State : clean
Device UUID : 9d297b70:3ce0ec72:49be866b:52e5b83b
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Apr 29 00:07:35 2015
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : d8cb5560 - correct
Events : 36218
Device Role : Active device 1
Array State : .A ('A' == active, '.' == missing, 'R' == replacing)
Naturally there would be no md superblock on /dev/md0 itself; a superblock there would mean RAID on top of RAID, which is nonsense in most cases. The superblocks live on the member partitions, which is why --examine works on sda6 and sdc6.
Your /dev/sda6 stopped being part of the RAID on Tue Apr 28 00:36:18 2015 (see the Update Time), whereas the other device was still active at Wed Apr 29 00:07:35 2015, so that's almost 24 hours of divergence.
For the failure reason you'd have to check your logs (assuming your system was up, running, and logging at the time of the failure).
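For example, something like this should turn up the ATA/SATA errors from around the time the drive dropped out (exact messages depend on your kernel and logging setup):

# kernel messages from the current boot, filtered for the suspect disk
dmesg | grep -i sda

# or, if journald keeps persistent logs, look at the previous boot too
journalctl -k -b -1 | grep -iE 'ata|sda'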
In either case you should try re-adding the partition to the RAID. Also check the SMART data and run a long self-test, just to be sure the disks themselves are OK.
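A rough sketch of the SMART checks I mean (smartctl comes from the smartmontools package; the device name is just an example):

# start a long (extended) self-test on the suspect disk
smartctl -t long /dev/sda

# when it has finished, check the result and the overall attributes
smartctl -a /dev/sda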
Thanks frostschutz, jasonwryan, and R00KIE for your help. It is working now.
I simply re-added the partition using the following command, and it worked:
mdadm --manage /dev/md0 --re-add /dev/sda6
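In case it is useful to anyone else, the resync after a re-add can be followed with something like:

# watch the rebuild progress, refreshing every couple of seconds
watch -n2 cat /proc/mdstat

# once it is done, both members should be listed and the state should read [UU]
mdadm --detail /dev/md0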