Since the last mdadm (3.1.1-1) upgrade, the mdadm array-assembly portion of rc.sysinit reports that it has failed. In fact that's not true, or at least only partially true.
I have the mdadm hook in mkinitcpio.conf, the initial assembly is still perfect, all my /dev/md* devices spin up, and the kernel parameter root=/dev/md0 takes effect.
The culprit part of rc.sysinit:
# If necessary, find md devices and manually assemble RAID arrays
if [ -f /etc/mdadm.conf -a "$(/bin/grep ^ARRAY /etc/mdadm.conf 2>/dev/null)" ]; then
status "Activating RAID arrays" /sbin/mdadm --assemble --scan
fi
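For context: the [FAIL] comes entirely from the exit code that 'status' sees. Roughly (a simplified sketch, not the actual /etc/rc.d/functions implementation), the helper behaves like this:
status() {
    local msg=$1; shift
    printf ':: %s ' "$msg"
    if "$@" >/dev/null 2>&1; then
        echo '[DONE]'   # command exited 0
    else
        echo '[FAIL]'   # any non-zero exit, including "already assembled"
    fi
}
So whatever mdadm returns when the arrays are already running decides which message is printed.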
My mkinitcpio.conf HOOKS:
HOOKS="base udev autodetect pata scsi sata mdadm usbinput keymap filesystems"
My arrays' details, stats, and mountpoints:
ARRAY /dev/md0 metadata=0.90 UUID=374f603c:367e764c:dfa9f51a:614217c1
ARRAY /dev/md1 metadata=0.90 UUID=f6c55da7:ca1bb339:0bba7969:973cd962
ARRAY /dev/md2 metadata=0.90 UUID=4ade1b14:7a32c6bc:9a6cd223:ef0c314e
Personalities : [raid6] [raid5] [raid4] [raid1]
md2 : active raid1 sda1[0] sdc1[2] sdb1[1]
313152 blocks [3/3] [UUU]
md1 : active raid5 sda3[0] sdc3[2] sdb3[1]
589617408 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md0 : active raid5 sda2[0] sdc2[2] sdb2[1]
32804480 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
/dev/md0 on / type ext4 (rw,relatime)
/dev/md1 on /home type ext4 (rw,relatime)
/dev/md2 on /boot type ext2 (rw,relatime)
With the new mdadm version...
$ sudo mdadm --assemble --scan; echo $?
2
... as you can see, if the arrays are already up (I think), assembly returns exit code 2, forcing the sysinit 'status' helper to print [FAIL].
Honestly, I never checked the exit code of the previous mdadm version, but since it wasn't failing I assume it was 0 (zero).
Is this happening only to me?
Offline
Same for me: mdadm fails at startup.
Running /etc/rc.d/mdadm start works.
Offline
I upgraded this weekend, and now I'm getting a failure with raid. ./mdadm start fails as well. All worked fine prior to the upgrade.
Offline
Have a look in /etc/mdadm.conf; there has to be something like this:
ARRAY /dev/md/0 metadata=0.90 UUID=b0cde0aa:69091344:3809f4f9:f7aa96a8
If there is nothing like this, activate all RAIDs and use the following command:
mdadm --detail --scan > /mnt/etc/mdadm.conf
This command will generate the needed entries.
Source: http://wiki.archlinux.org/index.php/Ins … AID_or_LVM
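Note: the /mnt prefix in that command comes from the installer environment described on that wiki page; when running from the already-installed system, I would assume the target is simply /etc/mdadm.conf, e.g.:
# path is an assumption, adjust to your setup
mdadm --detail --scan >> /etc/mdadm.conf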
Website: andrwe.org
Offline
Have a look in /etc/mdadm.conf; there has to be something like this:
ARRAY /dev/md/0 metadata=0.90 UUID=b0cde0aa:69091344:3809f4f9:f7aa96a8
If there is nothing like this, activate all RAIDs and use the following command:
mdadm --detail --scan > /mnt/etc/mdadm.conf
This command will generate the needed entries.
Source: http://wiki.archlinux.org/index.php/Ins … AID_or_LVM
I fell into that trap once, so I added that to the Wiki. I'm happy to see it's helping out!
(And indeed it can be done in just one command, I probably wasn't aware of that at the time of writing...)
I've just installed another machine with the current mdadm (3.1.1) and LVM2. It also says it fails when assembling, but it assembles just fine in fact. This "issue" seems to be related to the script indeed.
I now feel safe to upgrade my main rig to mdadm 3.1.1 as well.
Last edited by Ultraman (2010-02-09 12:43:52)
Offline
I can also confirm that this is happening. When the RAID is assembled in /etc/rc.sysinit, it reports failure. There is nothing different in my kernel log; it has always shown that the array was set to active. It must be that mdadm now returns a non-zero status when trying to bring up an array that is already active. I've done more testing, and the RAID is definitely already assembled before the mdadm command runs. I don't believe it is proper behavior for mdadm to report failure on an already-working array.
Feb 10 15:15:22 mycotoxin kernel: md: md0 stopped.
Feb 10 15:15:22 mycotoxin kernel: md: bind<sdc1>
Feb 10 15:15:22 mycotoxin kernel: md: bind<sdb1>
Feb 10 15:15:22 mycotoxin kernel: md: raid1 personality registered for level 1
Feb 10 15:15:22 mycotoxin kernel: raid1: raid set md0 active with 2 out of 2 mirrors
Feb 10 15:15:22 mycotoxin kernel: md0: detected capacity change from 0 to 200046936064
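One quick way to double-check that (just a debugging sketch, not part of the stock script) is to dump /proc/mdstat right before the assemble call in rc.sysinit:
# temporary debugging line added just above the "Activating RAID arrays" block
cat /proc/mdstat > /dev/console
If md0 is already listed as active at that point, the later mdadm --assemble --scan has nothing left to do.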
I guess if you really want to fix the error message, you can prevent the kernel from assembling the RAIDs at boot time:
When md is compiled into the kernel (not as module), partitions of type 0xfd are scanned and automatically assembled into RAID arrays. This autodetection may be suppressed with the kernel parameter "raid=noautodetect". As of kernel 2.6.9, only drives with a type 0 superblock can be autodetected and run at boot time.
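If you go that route, the parameter just gets appended to the kernel line in the bootloader. A hypothetical GRUB (legacy) menu.lst entry, with the root= value taken from this thread and the other paths as examples only:
title  Arch Linux (no RAID autodetect)
root   (hd0,0)
kernel /vmlinuz26 root=/dev/md0 ro raid=noautodetect
initrd /kernel26.img
The initramfs mdadm hook then stays responsible for assembling the root array.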
Offline
If the RAID is already detected by the kernel, why should mdadm assemble it again?
Or am I missing something?
Last edited by Andrwe (2010-02-11 09:11:51)
Website: andrwe.org
Offline
If the RAID is already detected by the kernel, why should mdadm assemble it again?
Or am I missing something?
In fact, it shouldn't.
The mdadm hook uses mdadm.conf to assemble, in the initrd, the arrays that are needed to continue the boot process, as in my case where md0 is the root partition.
The sysinit part is there to assemble all the other RAID arrays that are not needed as 'system mountpoints', like 'non-system backup drives'.
I don't believe it is proper behavior for mdadm to report failure on an already-working array.
I agree.
Last edited by max.bra (2010-02-11 13:58:57)
Offline
I was just wondering if anyone has reported a bug for this? I think it could be fixed in the mdadm package.
Offline
Honestly, I don't know where to file a bug (or even just a suggestion) with the mdadm project.
Offline
The "problem" is in /etc/rc.sysinit. Arrays are assembled by the boot image but rc.sysinit defines a section to assemble them as well. You're free to comment out lines 121 through 123 if this really bothers you that much.
There's plenty of other threads on this forum about this same topic.
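Assuming those lines are the block quoted in the first post, the commented-out version would simply be:
# If necessary, find md devices and manually assemble RAID arrays
#if [ -f /etc/mdadm.conf -a "$(/bin/grep ^ARRAY /etc/mdadm.conf 2>/dev/null)" ]; then
#    status "Activating RAID arrays" /sbin/mdadm --assemble --scan
#fi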
Offline
The "problem" is in /etc/rc.sysinit. Arrays are assembled by the boot image but rc.sysinit defines a section to assemble them as well. You're free to comment out lines 121 through 123 if this really bothers you that much.
There's plenty of other threads on this forum about this same topic.
No, no. I searched thoroughly for similar topics before posting; at least there weren't any as of the date of the first post.
The plenty of messages you mention are about arrays that actually FAIL to assemble. Here the md* creation and assembly are not in question; everything is functional!
I don't think you've read the first post, or some of the relevant answers, at all.
The "PROBLEM" is that rc.sysinit NOW says [FAIL] after the last mdadm upgrade. Two weeks ago everything was perfect as usual.
udevd also gave a false boot error message some days ago. There too, it was easy to resolve by commenting out a few lines, but, coincidentally, with the last mkinitcpio upgrade the problem was resolved upstream by taking the new udevd behavior into account.
It's easy to 'solve' a problem by treating the symptoms rather than the disease itself.
Last edited by max.bra (2010-02-15 10:40:01)
Offline
This is a known issue.
EDIT: http://mailman.archlinux.org/pipermail/ … 10987.html
Last edited by pyther (2010-02-15 22:42:11)
Offline
So has a bug been reported for this? Nobody has posted one, and while this may seem harmless to many, since the RAID is in fact fine and the system isn't impacted... I use my Arch systems in a secure DoD facility, and anything that FAILS is deemed broken, regardless of whether the FAIL is a false positive. Does anyone know if someone has filed a bug, or whether this will be fixed in a future update of mdadm?
Offline
This unofficial ABS file might help someone; it helped me. It adds an /etc/mdadm-init.conf file that gets incorporated into the RAM disk to kick-start your root device, and it will not interfere with the sysinit script.
http://ceruleanmicrosystems.com/archlin … src.tar.gz
So you just add your system-critical device (i.e. the root RAID device) to /etc/mdadm-init.conf, and all other RAID devices go in /etc/mdadm.conf along with any monitoring configuration, etc.
Then just rebuild the RAM disk with:
mkinitcpio -p kernel26
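A hypothetical split, following that description (the ARRAY lines below just reuse the ones posted earlier in this thread, and MAILADDR is only an example of a monitoring option):
# /etc/mdadm-init.conf -- only the array needed to reach the root filesystem
ARRAY /dev/md0 metadata=0.90 UUID=374f603c:367e764c:dfa9f51a:614217c1
# /etc/mdadm.conf -- every other array, plus monitoring configuration
ARRAY /dev/md1 metadata=0.90 UUID=f6c55da7:ca1bb339:0bba7969:973cd962
ARRAY /dev/md2 metadata=0.90 UUID=4ade1b14:7a32c6bc:9a6cd223:ef0c314e
MAILADDR root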
Offline
15 years later, this is still a problem! I submitted an mdadm bug report: https://github.com/md-raid-utilities/mdadm/issues/144
Offline
There's also a similar bug report in Gentoo, FYI: https://bugs.gentoo.org/521280
Offline