Following installation from the latest 0.7.1 disc (i.e. using the 2.6.14 kernel), I'm having problems getting my LVM volumes (/usr, /opt, /var and /home) on the LVM partition /dev/hdb4 recognised at boot.
The USELVM="yes" option is set in /etc/rc.conf, and /etc/fstab contains the following:
/dev/vg1/homelv /home reiserfs defaults 0 0
/dev/vg1/optlv /opt reiserfs defaults 0 0
/dev/vg1/usrlv /usr reiserfs defaults 0 0
/dev/vg1/varlv /var reiserfs defaults 0 0
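For completeness, the LVM-related lines in /etc/rc.conf are, as far as I can tell, just the stock ones; the snippet below is a sketch from memory rather than a verbatim copy:
MODULES=()        # no modules are preloaded here
USELVM="yes"      # tells rc.sysinit to activate LVM2 volume groups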
The error message on boot, following the "Activating LVM2 Groups" message, is:
"/proc/misc: No entry for device-mapper found. Is device-mapper missing from the kernel?".
This problem previously prevented me from upgrading to the 2.6.13 kernel, so I assumed a fresh installation would sort it out.
Ironically, logging on to the crippled system then allows the partitions to be recognised and mounted by manually running:
modprobe dm-mod
vgchange -a y
mount -a
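As a stopgap it should presumably be possible to force the module to load by listing it in the MODULES array in /etc/rc.conf; the line below is only a sketch of what I mean, assuming rc.sysinit loads MODULES before it tries to activate the volume groups:
MODULES=(dm-mod)    # preload device-mapper before the LVM2 groups are activated
That still wouldn't explain why the other machine manages without it, though.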
I don't understand why it works at this stage but not at boot. I have another PC that is set up identically and has no such problem; the only difference is that the failure occurs on the PC with / on hdb3 (as opposed to hda3).
The relevant grub menu.lst contains:
root (hd1,2)
kernel /boot/vmlinuz26 root=/dev/hda3 vga=773 ro
initrd /boot/initrd26.img
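For what it's worth, one way to check whether dm-mod is actually included in the initrd image, assuming initrd26.img is a gzipped cpio archive (it may well be a compressed filesystem image instead, in which case this won't work), would be something like:
zcat /boot/initrd26.img | cpio -it | grep dm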
Is there now some limitation on which disk can be used for LVM? What am I missing?
Any help would be appreciated, as this has taken up a lot of time and the system is obviously still not properly functional.
This entry in the blog might help: