I'll have to throw in some encryption stuff, but I can figure that out. The md* locations are the main issue here and an updated mdadm.conf might just do the trick.
So to be on the safe side, assemble your RAID manually first and retrieve your data. Then try automatic assembly later.
Hey, developers, does it make sense?
(1) Start from the Arch Linux live CD. Use the latest ISO.
(2) Load modules
modprobe raid1 (or whatever raid you have)
modprobe dm-mod
(3) Assemble your RAID automatically
mdadm --assemble --scan
(4) Activate lvm
vgscan
vgchange -ay
(5) After that, the commands listed below should show what you expect
pvdisplay
vgdisplay
lvdisplay
(6) Mount your server's file system with
mount /dev/array/root /mnt
where "root" is the name of one of my logical volumes, carrying the system.
Use your own logical volume name if different.
And:
mount /dev/md1 /mnt/boot
where md1 is my partition/device holding the "boot" directory.
(7) Re-create mdadm.conf
There are two options here
mdadm --examine --scan > /mnt/etc/mdadm.conf
or
mdadm --detail --scan > /mnt/etc/mdadm.conf
I chose the "detail" option.
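Either way, the resulting file holds one ARRAY line per md device. As an illustration only (the UUIDs are long hex strings unique to each array on your machine, elided here), it might look like:

```
# Illustrative /etc/mdadm.conf contents after --detail --scan;
# real UUIDs are machine-specific
ARRAY /dev/md1 metadata=0.90 UUID=...
ARRAY /dev/md2 metadata=1.2 name=archiso:2 UUID=...
ARRAY /dev/md3 metadata=1.2 name=archiso:3 UUID=...
```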
(8) Check the contents of
/mnt/etc/mkinitcpio.conf
MODULES should contain dm-mod, and in my case reiserfs and raid1
HOOKS should contain mdadm and lvm2
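As a sketch of the two lines above, assuming the modules named in this post (your HOOKS line will contain other entries too; only add mdadm and lvm2, keeping mdadm before lvm2 and both before filesystems):

```
# Sketch of the relevant mkinitcpio.conf lines for this setup;
# the "..." stand for whatever hooks your file already has
MODULES="dm_mod raid1 reiserfs"
HOOKS="... mdadm lvm2 filesystems ..."
```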
/mnt/etc/rc.conf
Set USELVM="yes"
/mnt/boot/grub/menu.lst
Check the kernel line.
In my case it is
kernel /vmlinuz-linux root=/dev/array/root ro
/mnt/etc/fstab
Perhaps not necessary, but just make sure it is OK.
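For reference, a hedged sketch of what an fstab for the layout in this thread might contain (the /boot filesystem type and the options are assumptions; check against your own file rather than copying):

```
# Hypothetical fstab for root-on-LVM with /boot and swap on md devices
/dev/array/root  /      reiserfs  defaults  0  1
/dev/md1         /boot  ext2      defaults  0  2
/dev/md2         swap   swap      defaults  0  0
```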
(9) Change root
# mount -o bind /dev /mnt/dev
# mount -t proc none /mnt/proc
# mount -t sysfs none /mnt/sys
# chroot /mnt /bin/bash
and regenerate initramfs files
mkinitcpio -p linux
(10) GRUB does not need to be reinstalled.
(11) End the procedure with Ctrl-D to exit chroot, and run "reboot".
If you end up in "ramfs" again, I suspect your RAID was not assembled or was assembled improperly.
Try
ls /dev/md*
and
mdadm --examine --scan
to see what md devices have been assembled.
If you see any unexpected mdXXX numbers, you can stop those /dev/mdXXX devices with
mdadm --stop /dev/mdXXX
and remove them with
mdadm --remove /dev/mdXXX
You may have to delete/zero the superblock on the individual drives to get rid of such an mdXXX device.
For example
mdadm --zero-superblock /dev/sda
but BE CAREFUL. I'm a newbie just like you, so consult other forums too.
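Before stopping or zeroing anything, /proc/mdstat is the quickest way to see what the kernel actually assembled and under which names (stray auto-assembled arrays typically show up as md127). A sketch that parses a made-up sample; on a real system, read /proc/mdstat directly instead of the sample variable:

```shell
# Made-up /proc/mdstat sample -- real contents come from the kernel
mdstat='Personalities : [raid1]
md127 : active raid1 sda3[0] sdb3[1]
      970470016 blocks [2/2] [UU]
md1 : active raid1 sda1[0] sdb1[1]
      102336 blocks [2/2] [UU]'
# Print just the active array names (the first word of each md line)
printf '%s\n' "$mdstat" | awk '/^md/ {print $1}'
```

Each name printed this way can then be inspected with mdadm --detail before you decide whether to stop it.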
If by any chance a developer sees this, please be so kind as to say, "Yes, go with it." Or "No, don't do it." For Christ's sake, refrain from the "Google!" advice. Although we are humble, meek, and stupid ones, we were intelligent enough to install Archlinux, and we have googled already.
Does anybody have any idea what to do? It frustrates me that everything is there and that I could boot into my system, save for one little error.
P.S.
If I had the understanding, I wouldn't have asked in the first place. It is a "Newbie Corner", is it not? Or do I not understand the "newbie corner" just as I do not understand my setup? But it was worth posting a question. I have quickly learnt the legendary friendliness of the community. Do not bother to reply. I'm not going to check this forum topic again.
P.S.
My /etc/mdadm.conf is clearly on the ramfs. That is the exact place I copied it from for you.
Before I tried to mount md1, I had mkdir'ed /mnt.
No, it was not possible to mount anything on /new_root.
Your /etc/mdadm.conf clearly isn't on the initramfs. The array surely works, you just lack the understanding of your own setup to figure out where it broke.
ARRAY /dev/md1 metadata=0.90 UUID=.........
ARRAY /dev/md2 metadata=1.2 name=archiso:2 UUID=......
ARRAY /dev/md3 metadata=1.2 name=archiso:3 UUID=......
md1 is "boot", assembled from sda1 and sdb1
md2 is "swap", assembled from sda2 and sdb2
md3 is everything else, including /home; assembled from sda3 and sdb3
The filesystem is reiserfs.
I also have some extra space on sdb4 for data (fs = ext4).
Is there any chance that I can get my RAID back and working? What can I do, please?
ls /dev shows (among others) sda, sda1, sda2, sda3, sdb, sdb1, sdb2, sdb3, md, md1, md2, md3... and md127 (?)
I tried mounting md1, unsuccessfully:
mount -t reiserfs /dev/md1 /mnt
(I created /mnt before)
(failed: invalid argument)
(I tried it with and without "-t reiserfs")
Seriously? ls would tell you that /mnt doesn't exist. This isn't your root FS. It's a tiny little environment existing in RAM whose sole purpose is to mount your rootfs before destroying itself. You can mount it on /new_root.
Same thing with sda1
mount /dev/sda1 /mnt
(failed: Device or resource busy)
Right... it's part of the assembled md device md127. If you were expecting your md devices to have persistent names, then you needed to include an /etc/mdadm.conf file that sets these names.
This is what I see after boot
(I have also tried adding rootfstype=reiserfs to my grub menu.lst kernel line)
:: Starting udev...
done.
:: Running Hook [udev]
:: Triggering uevents...done.
:: Running Hook [mdadm]
:: Running Hook [lvm2]
Activating logical volumes...
No volume groups found
:: Running Hook [keymap]
:: Loading keymap...done.
Waiting 10 seconds for device /dev/array/root ...
ERROR: Unable to determine major/minor number of root device '/dev/array/-root'.
You are being dropped to a recovery shell
Type 'exit' to try and continue booting
sh: can't access tty: job control turned off
"mkinitcpio.conf" is nowhere to be found. I guess that is natural in this case.
Most commands do not work, like "find" for example.
Of course, typing "exit" does not help either.
I am too inexperienced to try different things on my own. I'd rather not start from my live/Arch installation CD, because I'd hate to spoil things even more.