Sometimes when I boot the system, my /dev/md0 array fails to assemble and I get dropped into the initramfs rescue shell. About 80% of the time everything is fine, though. Is anyone else seeing this? It started happening to me after the latest kernel update.
I get this on every reboot now, not just some of the time. Anyone have a clue why this might be happening?
I now get a 10-second "waiting for device" delay on my md0 (which is swap) and a resume-from-disk failure rate of about 10-20%. There's some new md0-related error somewhere above the "waiting for md0 10 seconds" message; I still haven't managed to read what it says. I'm using arch64 with software RAID on 3 partitions across the same two HDs: swap/ext4/ext4. Only md0 seems to have problems.
I am running my / on RAID 1 and my storage on RAID 5, and I have no problems (though I built my own 2.6.30 kernel).
Got Leenucks? :: Arch: Power in simplicity :: Get Counted! Registered Linux User #392717 :: Blog thingy
I switched to ext2 from ext4 and the problem seems to have gone away. Need more testing to be sure...
Try using the 'mdadm' hook (instead of the 'raid' hook) in mkinitcpio.conf, run `mkinitcpio -p kernel26`, and see if that helps.
The new mdadm hook has solved all my RAID issues.
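For anyone following along, the change is roughly this (a sketch assuming a default Arch install; your actual HOOKS line may list different hooks):

```shell
# /etc/mkinitcpio.conf -- swap the old 'raid' hook for 'mdadm', e.g.:
#   HOOKS="base udev autodetect pata scsi sata raid filesystems"
# becomes:
#   HOOKS="base udev autodetect pata scsi sata mdadm filesystems"

# then rebuild the initramfs using the stock kernel26 preset
mkinitcpio -p kernel26
```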
Are you familiar with our Forum Rules, and How To Ask Questions The Smart Way?
BlueHackers // fscanary // resticctl
I have my RAID setups defined in both mkinitcpio.conf and menu.lst, maybe that prevents the problem from happening here... Still using the raid hook.
I have my RAID setups defined in both mkinitcpio.conf and menu.lst, maybe that prevents the problem from happening here... Still using the raid hook.
I've had much success with the 'mdadm' hook, and without any md config in menu.lst:
HOOKS="base udev autodetect pata scsi sata mdadm lvm2 filesystems"
kernel /vmlinuz26 root=/dev/mapper/vgSys-root ro 5
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=237ecc6a:b1fb38e2:9797892e:e6c74188
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=3b1e2d02:b0360beb:f5837caf:44e8551c
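In case it helps anyone: those ARRAY lines don't have to be written by hand. Assuming the arrays are already assembled, mdadm can generate them for you:

```shell
# scan the currently assembled arrays and append their
# definitions (level, devices, UUID) to mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
```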
Thanks fukawi. Looks like ext4 had nothing to do with it; I still had the same issue until I added mdadm as a hook, and now it's working 100% again.