Hello.
A week ago I assembled a RAID 1 mirror from two 3 TB external USB drives, following the Arch wiki's RAID article. It was working very well: the sda and sdb disks were assembled into a single md0 device, and every time I copied a file to it, the read/write lights on both disks would blink.
Just yesterday I found out that after some time (I have no idea how long) only one drive was left in the array. I can still copy/read files to/from the md0 partition, but apparently only one disk (sda) is attached and running.
lsblk -f gives me this:
NAME FSTYPE LABEL UUID MOUNTPOINT
sda linux_raid_member alarmpi:0 98fbd334-fe36-bc7a-15b6-e4d83aa95ef6
└─md0 ext4 b9c383c0-53fe-4ce7-9afa-a9b8f19a0410 /media/SyncFolder
sdb linux_raid_member alarmpi:0 98fbd334-fe36-bc7a-15b6-e4d83aa95ef6
and the fstab entry for the RAID is as follows:
UUID=b9c383c0-53fe-4ce7-9afa-a9b8f19a0410 /media/SyncFolder ext4 defaults 0 0
The output of cat /proc/mdstat is:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md0 : active raid1 sda[0]
2930102272 blocks super 1.2 [2/1] [U_]
bitmap: 16/22 pages [64KB], 65536KB chunk
unused devices: <none>
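For reference, the `[2/1] [U_]` field above is the key indicator: 2 devices expected, 1 active, and the `_` marks the missing member. A degraded array can be spotted by looking for `_` inside that status bracket; a minimal sketch (using a sample line copied from the output above — point it at the real /proc/mdstat on the actual machine):

```shell
#!/bin/sh
# Detect a degraded md array by looking for '_' in the [UU]-style
# status field of /proc/mdstat. Sample line taken from the post above.
sample='2930102272 blocks super 1.2 [2/1] [U_]'

# Extract the [U_]-style bracket (zero or more U's followed by zero or more _'s).
status=$(printf '%s\n' "$sample" | grep -o '\[U*_*\]' | tail -n 1)
case "$status" in
  *_*) echo "degraded: $status" ;;
  *)   echo "healthy: $status" ;;
esac
```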
The expected behaviour is for both the sda and sdb drives to be active in the md0 array (mounted at /media/SyncFolder).
Does any of you know what I could have done wrong?
Is there a way to bring the sdb drive back and sync it with sda?
Best regards, Jose.
Last edited by boina (2017-03-29 08:51:51)
One of the members of the array has failed. You need to either re-add it to the array so that it can resync, or replace the drive if it is actually faulty.
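In the loose-cable case (the member itself is fine, it just dropped out), re-adding it is usually enough. A sketch of the usual sequence, assuming the array is /dev/md0 and the dropped member is /dev/sdb as in the output above (run as root; these commands act on real hardware, so verify device names first):

```shell
# Inspect the array and the dropped member before touching anything.
mdadm --detail /dev/md0
mdadm --examine /dev/sdb

# Try to re-add the member; with a write-intent bitmap this only
# resyncs the blocks written while it was missing.
mdadm /dev/md0 --re-add /dev/sdb

# If --re-add is refused (e.g. the member's metadata is too far out
# of date), remove the stale member and add it back for a full resync.
mdadm /dev/md0 --remove /dev/sdb
mdadm /dev/md0 --add /dev/sdb

# Watch the rebuild progress.
cat /proc/mdstat
```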
Thanks for the reply. I guess I accidentally touched a cable or something and disconnected it temporarily.
Now it's mounted and up again.
The new output of cat /proc/mdstat:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md0 : active raid1 sdb[1] sda[0]
2930102272 blocks super 1.2 [2/1] [U_]
[>....................] recovery = 0.6% (18182080/2930102272) finish=2585.7min speed=18768K/sec
bitmap: 16/22 pages [64KB], 65536KB chunk
unused devices: <none>
I guess this means the disks are being synchronized back.
Just to try to fully understand what happened: once there's a failure, the disk is marked as failed and the RAID will not try to use it again? Not even after a reboot?
Thanks a lot for your help.
I love this community!!
Jose.
I don't know if this is good advice, and more experienced people here might be able to chime in on it, but I would look into adding a write-intent bitmap.
I have a setup similar to yours, with two USB disks in RAID 1, and I had a similar problem: it was a pain having to resync everything every time a disk dropped from the array (due to me hitting the cable, or random dropouts due to EMI).
I have since added the write-intent bitmap, and resyncs are much faster if/when a disk drops out of the array for reasons other than actual disk problems. Mind you, using a write-intent bitmap can reduce write performance.
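For completeness, checking whether an array already has a bitmap, and adding an internal one if not, looks roughly like this (a sketch, assuming the array is /dev/md0 as above; run as root):

```shell
# A "bitmap:" line in /proc/mdstat, or an "Intent Bitmap" field in
# mdadm --detail, means a write-intent bitmap is already in place.
grep bitmap /proc/mdstat
mdadm --detail /dev/md0

# Add an internal write-intent bitmap to a running array
# (it can be removed again with --bitmap=none).
mdadm --grow --bitmap=internal /dev/md0
```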
Edit:
I shouldn't post while I'm still half asleep; you are already using a write-intent bitmap, so nothing to do there.
Last edited by R00KIE (2017-03-29 09:51:31)
R00KIE