
#1 2018-02-04 12:17:01

nicocot
Member
Registered: 2018-02-04
Posts: 29

Issue with software RAID array using mdadm

Hello,

I'm trying to set up a RAID 1 array with two WD Red 4 TB HDDs ("/dev/sdb" and "/dev/sdc"), using mdadm.

I started by creating a GPT partition table on each HDD with GParted, then partitioned them so as to leave 100 MB of unallocated space at the end of each drive (as suggested in the Wiki: https://wiki.archlinux.org/index.php/RAID), which left me with two partitions, "/dev/sdb1" and "/dev/sdc1", as seen below.

I then followed the Wiki's "Build the Array", "Update configuration file" and "Assemble the Array" steps without any issue.
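
For reference, those steps boiled down to roughly the following (just a sketch reconstructed from the Wiki defaults for a two-disk RAID 1, not a verbatim copy of my terminal):

# mdadm --create --verbose --level=1 --metadata=1.2 --raid-devices=2 /dev/md0 /dev/sdb1 /dev/sdc1
# mdadm --detail --scan >> /etc/mdadm.conf
# mdadm --assemble --scan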

However, now that I'm trying to "Format the RAID Filesystem", I'm not able to calculate the stride and stripe width, since I get no output from:

# mdadm --detail /dev/mdX | grep 'Chunk Size'

I also get an error message from:

#  tune2fs -l /dev/md0
tune2fs 1.43.8 (1-Jan-2018)
tune2fs: Numéro magique invalide dans le super-bloc lors de la tentative d'ouverture de /dev/md0

(Bad magic number in super-block while trying to open /dev/md0)

And also this error:

# mdadm --examine /dev/md0
mdadm: No md superblock detected on /dev/md0.

What should I do next? It seems I cannot even restart from the beginning with:

# mdadm --misc --zero-superblock /dev/<drive>
mdadm: Couldn't open /dev/sdb1 for write - not zeroing
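
(I suspect that last error simply means the partition is still in use by the running array, so the superblock could presumably only be zeroed after stopping the array, something like the following, which I have not tried yet:)

# mdadm --stop /dev/md0
# mdadm --misc --zero-superblock /dev/sdb1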

Here is the output from the parts that did work:


# cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdc1[1] sdb1[0]
      3906784256 blocks super 1.2 [2/2] [UU]
      bitmap: 0/30 pages [0KB], 65536KB chunk
# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Feb  3 15:27:28 2018
     Raid Level : raid1
     Array Size : 3906784256 (3725.80 GiB 4000.55 GB)
  Used Dev Size : 3906784256 (3725.80 GiB 4000.55 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Feb  3 23:29:51 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : computerng:0  (local to host computerng)
           UUID : 02f98ea1:6bfbe6ee:ba5ef6d7:7dfaf2d6
         Events : 5871

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

Any advice will be much appreciated.

Many thanks for your help!

Last edited by nicocot (2018-02-04 14:51:55)


#2 2018-02-18 16:16:55

paulkerry
Member
From: Sheffield, UK
Registered: 2014-10-02
Posts: 611

Re: Issue with software RAID array using mdadm

I know this is a late answer to your post and you might be sorted by now, but did you actually make the filesystem after creating the RAID?
Since one of your commands is tune2fs, mkfs.ext4, for instance, should have been used somewhere in there.
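
Something along these lines (just a sketch; the label is a placeholder):

# mkfs.ext4 -v -L myarray /dev/md0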


#3 2018-02-18 18:31:13

R00KIE
Forum Fellow
From: Between a computer and a chair
Registered: 2008-09-14
Posts: 4,734

Re: Issue with software RAID array using mdadm

I might be wrong, but RAID 1 doesn't have a chunk size, which means there is no stripe or stride configuration in the mkfs command; that only applies to RAID levels that split data between several disks.

As for why 'tune2fs -l /dev/md0' returns an error, maybe you forgot to create the filesystem.


R00KIE
Tm90aGluZyB0byBzZWUgaGVyZSwgbW92ZSBhbG9uZy4K


#4 2018-02-18 19:15:41

nicocot
Member
Registered: 2018-02-04
Posts: 29

Re: Issue with software RAID array using mdadm

Hi paulkerry & R00KIE,

Thank you very much for your help.

I finally formatted my md0 RAID array as ext4 with 'gnome-disk-utility', since I was unable to 'calculate the stride and stripe width' as per the Wiki guidelines.

It is not clear to me whether calculating the stride/stripe width is useful or needed when creating a RAID 1 array. The French version of the Wiki does include a RAID 1 example in which the stride/stripe width are configured (https://wiki.archlinux.fr/RAID#Exemple_1_:_RAID_1), but the English version does not.

I've been able to mount the RAID 1 partition and it seems to work, but I'm not sure I did everything properly.

For instance, I manually edited the /etc/fstab file (as shown in this video https://www.youtube.com/watch?v=7u4ml7P3iX4 at 09:00), which some users on other forums advise against.
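
The entry I added looks roughly like this (a sketch; the UUID and the /mnt/raid mount point are placeholders, the real UUID comes from blkid):

# /etc/fstab
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/raid  ext4  defaults  0  2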

And I still get this error:

# mdadm --examine /dev/md0
mdadm: No md superblock detected on /dev/md0

which might be an indication that it's in fact not working properly.
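
(Though from what I've read since, 'mdadm --examine' reads the md superblock of a member partition rather than of the assembled array, so running it against /dev/md0 is apparently expected to fail; the checks would rather be something like:)

# mdadm --examine /dev/sdb1
# mdadm --examine /dev/sdc1
# mdadm --detail /dev/md0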

I also added mdadm_udev to mkinitcpio.conf as per the Wiki and get this error at every boot:

PCIe Bus Error: severity=Corrected, type=Physical Layer, id=006e6(Receiver ID)
device [8086:a296] error status/mask=00000001/00002000

This does not prevent me from booting, though.
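
For reference, the mkinitcpio.conf change was only adding the hook to the HOOKS array and regenerating the initramfs, roughly like this (the surrounding hooks are just the stock example):

HOOKS=(base udev autodetect modconf block mdadm_udev filesystems keyboard fsck)

# mkinitcpio -p linux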

I will try to put my RAID 1 setup to the test with these guidelines as soon as I find some time:
https://www.techrepublic.com/blog/data- … -prepared/
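
From what I understand, the test basically amounts to failing one member, removing it and adding it back, roughly like this (untested on my side so far):

# mdadm /dev/md0 --fail /dev/sdb1
# mdadm --detail /dev/md0
# mdadm /dev/md0 --remove /dev/sdb1
# mdadm /dev/md0 --add /dev/sdb1
# cat /proc/mdstat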

Last edited by nicocot (2018-02-18 19:16:43)


#5 2018-02-18 19:58:50

loqs
Member
Registered: 2014-03-06
Posts: 17,369

Re: Issue with software RAID array using mdadm

# mdadm --detail /dev/mdX | grep 'Chunk Size'

Was /dev/mdX a transcription error?  What is the output of

# mdadm --detail /dev/md0


#6 2018-02-19 19:04:22

nicocot
Member
Registered: 2018-02-04
Posts: 29

Re: Issue with software RAID array using mdadm

loqs wrote:
# mdadm --detail /dev/mdX | grep 'Chunk Size'

Was /dev/mdX a transcription error?  What is the output of

# mdadm --detail /dev/md0

Hi loqs,

Thank you for your message.

The output is:

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Feb  9 19:40:40 2018
     Raid Level : raid1
     Array Size : 3906784256 (3725.80 GiB 4000.55 GB)
  Used Dev Size : 3906784256 (3725.80 GiB 4000.55 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Feb 19 19:42:46 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : computerng:0  (local to host computerng)
           UUID : 77687d44:1d02986e:8fbb31ba:8f95d1f9
         Events : 5866

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1


#7 2018-02-19 19:19:24

frostschutz
Member
Registered: 2013-11-15
Posts: 1,418

Re: Issue with software RAID array using mdadm

RAID 1 does not have a chunk size, stride, etc.; everything is on every disk anyway, so there is nothing to optimize about what ends up on which disk, unlike RAID 5 / RAID 6.
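
For a striped level it would matter; e.g. for a hypothetical 3-disk RAID 5 with a 512 KiB chunk and 4 KiB ext4 blocks, stride = 512 / 4 = 128 and stripe width = 128 * 2 data disks = 256, passed to mkfs roughly like:

# mkfs.ext4 -b 4096 -E stride=128,stripe-width=256 /dev/md0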


#8 2018-02-19 19:25:51

nicocot
Member
Registered: 2018-02-04
Posts: 29

Re: Issue with software RAID array using mdadm

frostschutz wrote:

RAID 1 does not have a chunk size, stride, etc.; everything is on every disk anyway, so there is nothing to optimize about what ends up on which disk, unlike RAID 5 / RAID 6.

That makes sense. Thanks frostschutz!

