Hello, apologies if this is in the wrong forum, wasn't sure where to post...
I recently replaced an old non-UEFI motherboard on my system. This system had 6 HDDs attached, 2 of which were in a RAID mirror, and all using LVM.
At first I couldn't boot due to mount errors. After removing everything but / and /home from my fstab, I got the system to boot.
However things are not as they were:
All my /dev/mapper devices are there, and lvdisplay displays them correctly. However, when I try to mount any LVM partition (apart from / and /home, both of which are on the same LV and PV), I get the following error:
sudo mount /dev/mapper/hdd1-pacman--cache /var/cache/pacman
mount: /var/cache/pacman: wrong fs type, bad option, bad superblock on /dev/mapper/hdd1-pacman--cache, missing codepage or helper program, or other error.
I get the same FS error for all my partitions. Should I just run fsck on these partitions? I'm a little wary as I don't have the disk space to back up all these partitions with dd before I do this.
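(For anyone in the same spot: fsck can first be run in a read-only, check-only mode, so you can gauge the damage before committing to a repair without a backup. A sketch, using the device name from the error above:)

```shell
# Check-only pass: -n answers "no" to every prompt, so nothing is written to disk
sudo fsck.ext4 -n /dev/mapper/hdd1-pacman--cache

# If the report looks recoverable, run the real repair on the *unmounted* filesystem:
# sudo fsck.ext4 -f /dev/mapper/hdd1-pacman--cache
```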
I also seem to be missing my /dev/raid LVM mapper. I get the following output from mdadm --assemble --scan:
ARRAY /dev/md/0 metadata=1.2 UUID=457a18fc:c7b54785:eb9946ed:fee900ca name=mushin:0
however /dev/md0 doesn't exist. I do have a /dev/md127. I used LVM on top of my RAID device, but its PVs and VG are not detected at all by pvdisplay or vgdisplay.
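(Side note: /dev/md127 is the kernel's fallback name used when an assembled array doesn't match an entry in /etc/mdadm.conf; the array itself is usually intact. A sketch for inspecting it and re-scanning for the LVM PV sitting on top:)

```shell
# Inspect the array that came up under the fallback name
sudo mdadm --detail /dev/md127
cat /proc/mdstat

# Ask LVM to re-scan block devices for physical volumes / volume groups
sudo pvscan
sudo vgscan
```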
Any suggestions for how to progress at this point would be much appreciated.
Thanks
Last edited by clanger (2017-12-14 10:28:34)
output of vgdisplay ?
does running vgscan -ay make a difference ?
What was providing the raid mirror: lvm, or your previous motherboard's raid support?
Last edited by Lone_Wolf (2017-12-14 10:36:07)
Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.
clean chroot building not flexible enough ?
Try clean chroot manager by graysky
VGdisplay output:
-----------
--- Volume group ---
VG Name backup
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.36 TiB
PE Size 4.00 MiB
Total PE 357699
Alloc PE / Size 262144 / 1.00 TiB
Free PE / Size 95555 / 373.26 GiB
VG UUID HQW5Jp-FRYo-LWTj-RVhY-tSK8-y8HQ-IpYX9V
--- Volume group ---
VG Name hdd1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 11
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 5
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <372.61 GiB
PE Size 4.00 MiB
Total PE 95388
Alloc PE / Size 95388 / <372.61 GiB
Free PE / Size 0 / 0
VG UUID AwlBXm-AdP6-7wZY-pjJ1-63Iw-790R-NreiKe
--- Volume group ---
VG Name ssd
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 22
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <37.24 GiB
PE Size 4.00 MiB
Total PE 9533
Alloc PE / Size 9533 / <37.24 GiB
Free PE / Size 0 / 0
VG UUID XEnU5W-eB11-qQsN-AGFz-eKkc-hf0s-wXme5u
------------------
Notably this is missing my raid VG.
Running vgscan -ay just errors out ("vgscan: invalid option -- 'a'. Error during parsing of command line.").
RAID was done using mdadm in software. LVM was used on top of the Linux software RAID.
Thanks
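(The error above is expected: vgscan has no -a option. Activating volume groups is vgchange's job, which is probably what was meant:)

```shell
# vgscan only scans for volume groups; activation is done with vgchange
sudo vgscan
sudo vgchange -ay    # activate every logical volume in every VG
```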
The RAID is now working again (I had to generate the config with mdadm --examine --scan and put it in /etc/mdadm.conf).
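(For the record, that fix boils down to regenerating the config from the on-disk metadata and rebuilding the initramfs so it is used at boot; the mkinitcpio step is an Arch-specific assumption:)

```shell
# Write the array definitions found on disk into mdadm's config
sudo mdadm --examine --scan | sudo tee -a /etc/mdadm.conf

# Re-assemble under the names recorded in the config
sudo mdadm --assemble --scan

# Rebuild the initramfs so the config takes effect at boot (Arch)
sudo mkinitcpio -P
```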
The LVM LVs on the RAID VG seem to be mounting OK.
The only remaining problem is the unmountable partitions on the hdd1 VG. I can mount one (empty) partition on that VG, but all the others give me the error I quoted in my first post.
I ran fsck.ext4 on the smallest and most disposable of the unmountable LVM LVs. It had a lot of errors but now mounts correctly, and everything seems to be in order. It looks like the physical disk may be close to failure. Unsure why the RAID problem manifested itself at the same time.
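(If disk failure is suspected, the drive's SMART counters are worth checking before trusting it further. A sketch using smartmontools; the device name is a placeholder:)

```shell
# Overall health verdict (replace /dev/sdX with the actual disk)
sudo smartctl -H /dev/sdX

# Reallocated or pending sectors are the classic signs of a dying drive
sudo smartctl -A /dev/sdX | grep -Ei 'reallocat|pending|uncorrect'
```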
Try mounting them under /mnt to see whether the error is because they replace existing folders.
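(Concretely, that test looks like this; a clean mountpoint rules out problems with the target directory itself:)

```shell
# Mount on an empty, known-good mountpoint
sudo mount /dev/mapper/hdd1-pacman--cache /mnt
ls /mnt            # if contents are visible, the mount itself is fine
sudo umount /mnt
```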