So I've been running Arch quite happily on an encrypted LVM for some time now, but all of a sudden things started to go awry when I upgraded to linux-3. I tried rebooting and apparently my init images are gone. So I generated some new ones with mkinitcpio. Unfortunately, now whenever I try to boot with these images I get this:
Waiting 10 seconds for device /dev/mapper/vgroup-root ...
ERROR: Unable to determine major/minor number of root device '/dev/mapper/vgroup-root'.
You are being dropped to a recovery shell
At which point I get a recovery shell that I can't type in, since my laptop has a USB keyboard.
Any suggestions on how to fix this? I've tried removing autodetect, and then made sure that rc.conf had USELVM='yes' when I generated the image from the rescue CD. At this point I can't paste any output, as all I'm able to do is chroot into my machine without internet access of any sort.
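For reference, the kind of HOOKS line I mean (a rough sketch, not my actual file, since I can't paste anything from the chroot; hook names may differ a bit between mkinitcpio versions) would be something like:
HOOKS="base udev autodetect pata scsi sata usbinput keymap encrypt lvm2 filesystems"
with encrypt before lvm2 so the container is unlocked before the volume group is activated, and usbinput so a USB keyboard actually works in the recovery shell.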
Offline
Can you post a copy of your /boot/grub/menu.lst and /etc/mkinitcpio.conf?
Just to be clear, you have an encrypted drive with LVM on top of it, correct?
Offline
Same problem here. It looks like the hooks you enter during setup are ignored.
Offline
The same here.
I have updated my laptop OK, but when I updated my desktop with RAID 1 on it, I got into the same problem.
I have no encryption on my LVM, though.
Last edited by dif (2011-08-09 01:02:07)
Offline
I have the same problem with RAID 1. I upgraded today before checking out the forums. I guess that's why I'm posting in the Newbie Corner. I'm going to boot up a live CD and do some digging.
Offline
Anything to do with this maybe? https://bbs.archlinux.org/viewtopic.php?pid=971853
Offline
Neither regular boot nor fallback works for me.
This is what I see after boot
(I have also tried adding rootfstype=reiserfs to my grub menu.lst kernel line)
:: Starting udev...
done.
:: Running Hook [udev]
:: Triggering uevents...done.
:: Running Hook [mdadm]
:: Running Hook [lvm2]
Activating logical volumes...
No volume groups found
:: Running Hook [keymap]
:: Loading keymap...done.
Waiting 10 seconds for device /dev/array/root ...
ERROR: Unable to determine major/minor number of root device '/dev/array/root'.
You are being dropped to a recovery shell
Type 'exit' to try and continue booting
sh: can't access tty: job control turned off
"mkinitcpio.conf" is nowhere to find. I guess it is natural in this case.
Most commands do not work, like "find" for example.
Of course typying "exit" does not help, either.
I am too inexperienced to try different things on my own. I'd rather not start from my live/Arch installation CD, because I'd hate to spoil things even more.
Offline
ls /dev shows (among others) sda, sda1, sda2, sda3, sdb, sdb1, sdb2, sdb3, md, md1, md2, md3... and md127 (?)
I tried mounting md1, unsuccessfully:
mount -t reiserfs /dev/md1 /mnt
(I created /mnt before)
(failed: invalid argument)
(I tried it with and without "-t reiserfs")
Same thing with sda1
mount /dev/sda1 /mnt
(failed: Device or resource busy)
Last edited by dif (2011-08-09 12:30:34)
Offline
ls /dev shows (among others) sda, sda1, sda2, sda3, sdb, sdb1, sdb2, sdb3, md, md1, md2, md3... and md127 (?)
I tried mounting md1, unsuccessfully: mount -t reiserfs /dev/md1 /mnt
(I created /mnt before)
(failed: invalid argument)
(I tried it with and without "-t reiserfs")
Seriously? ls would tell you that /mnt doesn't exist. This isn't your root FS. It's a tiny little environment existing in RAM whose sole purpose is to mount your rootfs before destroying itself. You can mount it on /new_root.
Same thing with sda1
mount /dev/sda1 /mnt
(failed: Device or resource busy)
Right... it's part of the assembled md device md127. If you were expecting your md devices to have persistent names, then you needed to include an /etc/mdadm.conf file that has these names.
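If the file is missing or stale, regenerating it and rebuilding the image is usually all it takes. A sketch (run from the installed system or a chroot; 'linux' is the default preset name, adjust if yours differs):
mdadm --detail --scan > /etc/mdadm.conf
mkinitcpio -p linux
The mdadm hook copies /etc/mdadm.conf into the image when it's built, which is what gives the arrays their persistent names at boot.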
Offline
cat /etc/mdadm.conf
ARRAY /dev/md1 metadata=0.90 UUID=.........
ARRAY /dev/md2 metadata=1.20 name=archiso:2 UUID=......
ARRAY /dev/md3 metadata=1.20 name=archiso:3 UUID=......
md1 is "boot" assembled sda1 and sdb1
md2 is "swap" assembled sda2 and sdb2
md3 is everything else includeing /home; assembled sda3 and sdb3
filesystem is "reiserfs"
I laso have an extra space on sdb4 for some data (fs = ext4).
Is there any chance that I can get my RAID back and working? What can I do, please?
Offline
By the way, is it a bug?
I think it is when a number of people experience this kind of major failure.
Offline
Feel free to peruse forum history. This "major failure" occurs every time there's a kernel version change. It's been bloated by the number of people unable to read the news and comprehend the changes required for their bootloader.
Your /etc/mdadm.conf clearly isn't on the initramfs. The array surely works; you just lack the understanding of your own setup to figure out where it broke.
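If you want to check rather than guess, and your mkinitcpio ships the lsinitcpio tool, something like this will show whether the file made it into the image (the image path here is the stock one; use whatever your bootloader actually loads):
lsinitcpio /boot/initramfs-linux.img | grep mdadm.conf
No output means the config wasn't there when the image was generated.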
Offline
You needn't have bothered to reply, Sir.
No reply would have been just as helpful.
I am only sorry that you have wasted your precious time answering my and my fellow members' questions.
P.S.
If I had the understanding, I wouldn't have asked in the first place. This is the "Newbie Corner", is it not? Or do I not understand the "newbie corner" just as I do not understand my setup? But it was worth posting a question. I have quickly learnt the legendary friendliness of the community. Do not bother to reply. I'm not going to check this forum topic again.
P.S.
My /etc/mdadm.conf is clearly on the ramfs. That is exactly where I copied it from for you.
Before I tried to mount md1, I had mkdir'ed /mnt.
No, it was not possible to mount anything on /new_root
Last edited by dif (2011-08-09 15:16:12)
Offline
/me gives @dif a hug. :-)
Offline
So hoping that I could roll things back to the way they were, I downloaded an Arch rescue ISO, installed Arch on a flash drive, and copied the boot files over to my own computer with all of the appropriate hooks for initcpio. This time I managed to get as far as it asking for my password, and when entered correctly it begins booting, but then throws a "filesystem check failed" saying that it couldn't find the file /dev/mapper/vgroup-root. I'm assuming that this is now an LVM problem, but I have no idea how to fix it, because modprobe dm_mod doesn't work and thus I can't look at the logical volumes, or make them available. fsck doesn't work because it can't find /dev/mapper/vgroup-root, and a simple ls of the directory /dev/mapper/ reveals nothing.
Does anybody have any idea what to do? It frustrates me that everything is there and that I could boot into my system, save for one little error.
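From the rescue ISO, I suppose the thing to try next is roughly this (the LUKS partition /dev/sda2 and the mapping name cryptroot are guesses on my part; the volume group is vgroup as above):
modprobe dm-mod
cryptsetup luksOpen /dev/sda2 cryptroot
vgscan
vgchange -ay
ls /dev/mapper
If vgroup-root shows up there, the volumes themselves are fine and it's the generated image that's broken.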
Offline
I'm back from holiday and back with a solution, or rather a procedure to go through. It helped me, but my system was not encrypted.
(1) Start from the Arch Linux live CD. Use the latest ISO.
(2) Load modules
modprobe raid1 (or whatever raid you have)
modprobe dm-mod
(3) Assemble your RAID automatically
mdadm --assemble --scan
(4) Activate lvm
vgscan
vgchange -ay
(5) After that the below listed commands should show what you expect
pvdisplay
vgdisplay
lvdisplay
(6) Mount your root file system:
mount /dev/array/root /mnt
where "root" is the name of the logical volume carrying the system.
Use your own logical volume name if different.
And
mount /dev/md1 /mnt/boot
where md1 is my partition/device with the "boot" directory
(7) Re-create mdadm.conf
There are two options here
mdadm --examine --scan > /mnt/etc/mdadm.conf
or
mdadm --detail --scan > /mnt/etc/mdadm.conf
I chose the "detail" option.
(8) Check the contents of
/mnt/etc/mkinitcpio.conf
MODULES should contain dm-mod and, in my case, reiserfs and raid1.
HOOKS should contain mdadm and lvm2 (see the example snippet after step (11) below).
/mnt/etc/rc.conf
Set USELVM="yes"
/mnt/boot/grub/menu.lst
Check the kernel line.
In my case it is
kernel /vmlinuz-linux root=/dev/array/root ro
/mnt/etc/fstab
Perhaps not necessary, but just make sure it is OK.
(9) Change root
# mount -o bind /dev /mnt/dev
# mount -t proc none /mnt/proc
# mount -t sysfs none /mnt/sys
# chroot /mnt /bin/bash
and regenerate initramfs files
mkinitcpio -p linux
(10) GRUB does not need to be reinstalled.
(11) End the procedure with Ctrl-D to exit chroot, and run "reboot".
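For completeness, here is roughly what the relevant lines from step (8) might look like for a setup like mine. The mdadm and lvm2 hooks and the dm-mod, raid1 and reiserfs modules are the essential bits; the rest of the HOOKS line is just the stock default, so treat the whole thing as a sketch and adapt it to your own hardware and filesystem:
MODULES="dm-mod raid1 reiserfs"
HOOKS="base udev autodetect pata scsi sata mdadm lvm2 keymap filesystems"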
If you end up in "ramfs" again, I suspect your RAID was not assembled or was assembled improperly.
Try
ls /dev/md*
and
mdadm --examine --scan
to see what md devices have been assembled.
If you see any unexpected mdXXX numbers you can stop those /dev/mdXXX with
mdadm --stop /dev/mdXXX
and remove them with
mdadm --remove /dev/mdXXX
You may have to delete/zero the superblock on individual drives to get rid of such an mdXXX device:
For example
mdadm --zero-superblock /dev/sda
but BE CAREFUL. I'm a newbie like you, so consult other forums too.
If by any chance a developer sees this, please be so kind as to say, "Yes, go with it." Or "No, don't do it." For Christ's sake, refrain from the "Google!" advice. Although we are humble, meek, and stupid ones, we were intelligent enough to install Archlinux, and we have googled already.
Last edited by dif (2011-08-28 20:31:22)
Offline
One more thing.
In point (3) above, I suggest automatic assembling.
Perhaps more advisable would be manual assembling, because then you can be almost certain to assemble the right RAID array, mount it, and retrieve your data.
Automatic assembling, on the other hand, will show you what RAID array(s) the system sees. They may be different from what you expect; if so, refer to the ending part of my previous post.
So to be on the safe side, assemble your RAID manually first and retrieve your data, and try automatic assembling later. A manual assembly looks like the sketch below.
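For example, with the partitions from my earlier post, manual assembly would look like this (substitute your own member devices):
mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1
mdadm --assemble /dev/md3 /dev/sda3 /dev/sdb3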
Hey, developers, does it make sense?
Offline
Woah, thx dif. I'm gonna try that when I get home.
At least you solved one annoyance for me:
I can use mdadm --stop and --remove from my live CD, because the automatic RAID assembly doesn't always work properly, and until now I always just rebooted instead of using those commands.
I'll have to throw in some encryption stuff, but I can figure that out. The md* locations are the main issue here and an updated mdadm.conf might just do the trick.
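In case anyone else needs the encryption bit too, my rough plan (assuming LUKS sitting on the md device with LVM inside it; /dev/md3 and cryptvol are just placeholders) is to keep the encrypt hook between mdadm and lvm2:
HOOKS="base udev autodetect pata scsi sata mdadm encrypt lvm2 filesystems"
and, from the live CD, to unlock and activate by hand with something like:
cryptsetup luksOpen /dev/md3 cryptvol
vgchange -ay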
Offline