I'm trying to get my new install up and running with root on LVM on my six-disk RAID10 array. I've done this before with fewer disks, using /boot on a separate RAID1 array, and got the system working fine. Now I'm having trouble: the kernel doesn't see the proper /dev/mapper devices and fails to mount the root filesystem. I think I fixed this before by putting md= parameters on the kernel command line for each MD array. That was with only three drives, so while it made the kernel command line somewhat long, it wasn't too bad. Now I've got two arrays of six disks each, which makes for a very long kernel command line.
So, my questions are: (1) how do I properly make the kernel see the root fs on an LVM setup, and (2) is there a better way to tell the kernel about software RAID arrays than listing every disk on the command line?
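For illustration, with GRUB legacy this is roughly the kind of kernel line I mean; the device names, array numbers, and volume group name (vg0) are made-up examples, not my actual layout:

# menu.lst sketch -- each md= entry lists every member of one array,
# and root= points at the LVM logical volume (all names here are examples).
# Note: menu.lst wants this on a single line; it's wrapped here for readability.
kernel /vmlinuz26 root=/dev/mapper/vg0-root ro
    md=0,/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1,/dev/sdf1
    md=1,/dev/sda2,/dev/sdb2,/dev/sdc2,/dev/sdd2,/dev/sde2,/dev/sdf2

You can see why this gets unwieldy with twelve partitions across two arrays.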
Edit: I solved the issue by rebuilding the initcpio image.
Last edited by iBertus (2008-11-08 06:15:22)
Okay, so I tried putting the md arrays on the command line, but that didn't work. I get dropped to the ramfs prompt when the filesystem hook runs. I did notice that the LVM and RAID hooks are executed first. The crappy thing is that USB devices don't work at this point, so I have to hit the reset switch.
Could it be that a needed module isn't included in the initcpio image?
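If that's the case, the relevant bits of /etc/mkinitcpio.conf would look something like the sketch below. I'm guessing at the exact hook and module names here (an older install may use 'raid' instead of 'mdadm'), so treat it as an assumption, not my actual config:

# /etc/mkinitcpio.conf (sketch; hook and module names are assumptions)
# The RAID and LVM hooks have to run before 'filesystems', and the
# raid1/raid10/device-mapper modules have to end up inside the image.
MODULES="raid1 raid10 dm_mod"
HOOKS="base udev autodetect pata scsi sata mdadm lvm2 filesystems"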
I've solved the issue by rebuilding the initcpio image from the live CD. It seems something went wrong during the install and the image was missing modules. A simple 'mkinitcpio -g /boot/kernel26.img' fixed the problem, and now the system boots.
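For anyone who finds this later, the rough sequence from the live CD is something like the following; the device names (vg0-root, md0) are placeholders for whatever your root LV and /boot array actually are:

# Sketch of the live-CD rescue steps; substitute your own devices.
mdadm --assemble --scan                  # assemble the RAID arrays
vgchange -ay                             # activate the LVM volume groups
mount /dev/mapper/vg0-root /mnt          # root LV
mount /dev/md0 /mnt/boot                 # separate /boot array
mount -o bind /dev /mnt/dev
mount -t proc none /mnt/proc
mount -t sysfs none /mnt/sys
chroot /mnt /bin/bash
# then, inside the chroot:
mkinitcpio -g /boot/kernel26.img         # regenerate the initcpio image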