The PVs which are missing are set up on raid0 arrays, but are not found by pvscan/vgscan etc.
Therefore I cannot set up any further LVs using lvcreate (or anything else).
The rootfs is on vg1/arch; vg1 is on a PV on /dev/md2 - booting itself works fine.
Further LVs are on a PV on /dev/md1 - mounted successfully during boot.
# pvscan
No device found for PV tk0791-5LDp-s8Rh-wf6O-dBzQ-vVG3-KJ1jFd.
No device found for PV t5FXSu-4AJr-6dHX-icyn-zf7V-4awo-HOGpKF.
No device found for PV tk0791-5LDp-s8Rh-wf6O-dBzQ-vVG3-KJ1jFd.
No device found for PV t5FXSu-4AJr-6dHX-icyn-zf7V-4awo-HOGpKF.
PV /dev/sdc3 VG vg3 lvm2 [1000.00 GiB / 50.00 GiB free]
PV /dev/sdc4 VG vg4 lvm2 [1.65 TiB / 94.52 GiB free]
# vgscan
Reading all physical volumes. This may take a while...
No device found for PV tk0791-5LDp-s8Rh-wf6O-dBzQ-vVG3-KJ1jFd.
No device found for PV t5FXSu-4AJr-6dHX-icyn-zf7V-4awo-HOGpKF.
Found volume group "vg3" using metadata type lvm2
Found volume group "vg4" using metadata type lvm2
# cat /proc/mdstat
Personalities : [raid10] [raid1] [raid6] [raid5] [raid4] [raid0]
md1 : active raid0 sda11[0] sdb11[1]
1743807744 blocks super 1.0 64k chunks
md2 : active raid0 sda10[0] sdb10[1]
167772032 blocks super 1.0 64k chunks
md0 : active raid1 sda1[0] sdb1[1]
255988 blocks super 1.0 [2/2] [UU]
unused devices: <none>
# grep "use_lvmetad = " /etc/lvm/lvm.conf
use_lvmetad = 1
# grep md_component /etc/lvm/lvm.conf
md_component_detection = 1
After setting use_lvmetad = 0 and stopping lvmetad, pvscan gives the correct results:
# grep "use_lvmetad = " /etc/lvm/lvm.conf
use_lvmetad = 0
# systemctl stop lvmetad
# ps -ef | grep lvm
root 7130 1071 0 08:31 pts/0 00:00:00 grep --color=auto lvm
# pvscan
PV /dev/sdc4 VG vg4 lvm2 [1.65 TiB / 94.52 GiB free]
PV /dev/sdc3 VG vg3 lvm2 [1000.00 GiB / 50.00 GiB free]
PV /dev/md2 VG vg1 lvm2 [160.00 GiB / 80.00 GiB free]
PV /dev/md1 VG vg2 lvm2 [1.62 TiB / 905.02 GiB free]
Total: 4 [4.41 TiB] / in use: 4 [4.41 TiB] / in no VG: 0 [0 ]
# vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg4" using metadata type lvm2
Found volume group "vg3" using metadata type lvm2
Found volume group "vg1" using metadata type lvm2
Found volume group "vg2" using metadata type lvm2
But with this setup (after running mkinitcpio) booting is no longer possible;
the initramfs requires lvmetad (no hook for vgchange -ay is available by default).
Is there any possibility to add use_lvmetad = 0 as a supported config in Arch?
Is there any known issue with lvmetad?
Offline
Hi, welcome to the Arch forum.
Please post your mkinitcpio.conf
Offline
Hello everyone,
Here is my mkinitcpio.conf (I have posted everything except the comments):
--- /etc/mkinitcpio.conf ---
MODULES="dm_raid raid0 raid1 dm_mod nouveau"
BINARIES=""
FILES=""
HOOKS="base udev autodetect modconf mdadm_udev block lvm2 filesystems keyboard fsck"
# no compression or compression_options
Offline
@xunil64, welcome to the forums! You should probably take a moment to look at the formatting expectations here. You need to apply code tags to the output posted above. To see how to do this, you can follow the BBCode link, where you will find that information and all kinds of other neat tricks.
Offline
Hello everyone,
Here is my mkinitcpio.conf (I have posted everything except the comments):
--- /etc/mkinitcpio.conf ---
MODULES="dm_raid raid0 raid1 dm_mod nouveau"
BINARIES=""
FILES=""
HOOKS="base udev autodetect modconf mdadm_udev block lvm2 filesystems keyboard fsck"
# no compression or compression_options
You can leave out dm_raid in MODULES - does it even exist?
Could you post the commands you used to create the RAID and LVM?
Yes, please follow WonderWoofy's advice on using [ code] tags; it makes things readable.
Offline
Could you post the commands you used to create the RAID and LVM?
I created the RAID using:
mdadm --create --level=raid0 --raid-devices=2 --chunk=64 -e 1.00 /dev/md2 /dev/sda10 /dev/sdb10
Creating the LV:
pvcreate /dev/md2
vgcreate vg1 /dev/md2
lvcreate -L 32G --name arch vg1
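For reference, after steps like these the result can be checked with the LVM reporting commands - a minimal sketch, using the vg1 name from above (output omitted):
# pvs
# vgs
# lvs vg1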
Yes, please follow WonderWoofy's advice on using [ code] tags; it makes things readable.
Yes, sorry, will do that ...
Offline
mdadm --create --level=raid0 --raid-devices=2 --chunk=64 -e 1.00 /dev/md2 /dev/sda10 /dev/sdb10
I think this should be
mdadm --create --level=0 --raid-devices=2 --chunk=64 -e 1.00 /dev/md2 /dev/sda10 /dev/sdb10
That lvmetad warning you get - I had one a while ago; it had to do with forgetting one of the commands, though I don't remember which one.
Did you run mkinitcpio -p linux after you changed mkinitcpio.conf? Otherwise chroot into your new installation and do that.
Be sure to address the right root device in your bootloader config; if you're not sure, show it!
Offline
I think this should be
mdadm --create --level=0 --raid-devices=2 --chunk=64 -e 1.00 /dev/md2 /dev/sda10 /dev/sdb10
As mentioned, the RAID array itself is running well
(I have another Linux installation - Gentoo - where everything runs fine).
# cat /proc/mdstat
Personalities : [raid0] [raid1]
md2 : active raid0 sda10[0] sdb10[1]
167772032 blocks super 1.0 64k chunks
md0 : active raid1 sda1[0] sdb1[1]
255988 blocks super 1.0 [2/2] [UU]
md1 : active raid0 sda11[0] sdb11[1]
1743807744 blocks super 1.0 64k chunks
That lvmetad warning you get - I had one a while ago; it had to do with forgetting one of the commands, though I don't remember which one.
Did you run mkinitcpio -p linux after you changed mkinitcpio.conf? Otherwise chroot into your new installation and do that.
Same when not using lvmetad - then everything runs well too.
Be sure to address the right root device in your bootloader config; if you're not sure, show it!
Booting from the newly created LV works without any issues (and of course I created a new initramfs image).
The problem only appears once the boot process is done (logged in as root); then pvscan / vgscan / lvscan fail.
When switching off lvmetad everything is fine again (same as on my Gentoo installation, but Gentoo isn't using lvmetad by default).
Offline
Have you used vgchange -ay after vgscan?
I can tell you what I always do:
Boot into livecd
modprobe raid0
modprobe raid1
modprobe dm_mod
chroot to install
gdisk /dev/sdc
create a partition of type fd00 (Linux RAID)
gdisk /dev/sdd
create a partition of type fd00 (Linux RAID)
mdadm --create --level=0 --raid-devices=2 --chunk=64 -e 1.00 /dev/md2 /dev/sdc1 /dev/sdd1
watch -n1 cat /proc/mdstat
mdadm --misc --detail /dev/md2
mdadm --detail --scan >> /etc/mdadm.conf
pvcreate /dev/md2
vgcreate extragroup /dev/md2
lvcreate -L 15G -n lvextra extragroup
vgscan
vgchange -ay
mkfs.ext4 /dev/mapper/extragroup-lvextra
mdadm --examine --scan > /etc/mdadm.conf
create a line in fstab for automount (a sketch of such an entry follows below)
Unmount, reboot, and if all went okay I have the extra volume mounted to /mnt.
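As a rough sketch of that fstab entry, assuming the extragroup/lvextra names from above and a /mnt mount point (a UUID-based entry works just as well):
/dev/mapper/extragroup-lvextra   /mnt   ext4   defaults   0 2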
I hope this helps cause my idea box is getting empty.
Offline
Have you used vgchange -ay after vgscan?
Unmount, reboot, and if all went okay I have the extra volume mounted to /mnt.
I hope this helps cause my idea box is getting empty.
Initially there was no problem. I have been using Arch for a few months (after switching from Gentoo to Arch).
Everything was running well; at least I didn't notice any issues (booting wasn't and isn't an issue).
A few days ago I wanted to create some additional LVs.
At that point I noticed that pvscan / vgscan / lvscan give the warning/error messages mentioned in post #1.
I can create the LVs using the Gentoo box (no lvmetad; no problem there).
I also tried creating them from a chrooted Arch (no lvmetad).
Booting into the default Arch installation and creating the LVs is not possible, unless lvmetad is stopped (including use_lvmetad = 0).
Activating the newly created LVs is no problem; they are found during boot and mounted, no issues at all.
The only problem is running any LVM command while lvmetad is in use; then the mentioned errors occur.
I also tried to clean the LVM cache, and before recreating any md devices I cleaned the superblocks (--zero-superblock), etc. - roughly the steps sketched below.
No idea how to create LVs with lvmetad running ...
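For reference, that cleanup would look roughly like the following - a sketch only, the thread does not show the exact commands used (device and VG names taken from post #1; pvscan --cache asks a running lvmetad to rescan devices):
# vgchange -an vg2                         (deactivate the LVs on the array first)
# mdadm --stop /dev/md1
# mdadm --zero-superblock /dev/sda11 /dev/sdb11
# rm -f /etc/lvm/cache/.cache              (old LVM device cache file, if present)
# pvscan --cache                           (with lvmetad running: rescan all devices)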
Offline
It should be said that it doesn't matter which distro you use to build the LVM; all distros should be able to use it, and the same goes for software RAID.
I understand that it works if you don't use lvmetad, but I have no idea how you could enable the daemon and still have things working.
While looking for an answer, I came across this thread: Changes to LVM2 and udev break LVM2 on LUKS? I know it's not exactly about your problem, but it might give you some clues. Good luck.
Offline
That's what I get from trying to "fix" LVM.
Anyway, try
# udevadm info /dev/md1
# udevadm info /dev/md2
This should give some interesting output.
Offline
This should give some interesting output.
# udevadm info /dev/md2
P: /devices/virtual/block/md2
N: md2
L: 100
S: disk/by-id/md-name-arch:2
S: disk/by-id/md-uuid-a366b4a3:1ff95e59:5874231c:c1dd8909
E: DEVLINKS=/dev/disk/by-id/md-name-arch:2 /dev/disk/by-id/md-uuid-a366b4a3:1ff95e59:5874231c:c1dd8909
E: DEVNAME=/dev/md2
E: DEVPATH=/devices/virtual/block/md2
E: DEVTYPE=disk
E: ID_FS_TYPE=LVM2_member
E: ID_FS_USAGE=raid
E: ID_FS_UUID=g5W903-ehDk-d3QN-P0Sz-2qnI-di4Q-3Z9ARv
E: ID_FS_UUID_ENC=g5W903-ehDk-d3QN-P0Sz-2qnI-di4Q-3Z9ARv
E: ID_FS_VERSION=LVM2 001
E: MAJOR=9
E: MD_DEVICES=2
E: MD_DEVICE_sda10_DEV=/dev/sda10
E: MD_DEVICE_sda10_ROLE=0
E: MD_DEVICE_sdb10_DEV=/dev/sdb10
E: MD_DEVICE_sdb10_ROLE=1
E: MD_LEVEL=raid0
E: MD_METADATA=1.0
E: MD_NAME=arch:2
E: MD_UUID=a366b4a3:1ff95e59:5874231c:c1dd8909
E: MINOR=2
E: MPATH_SBIN_PATH=/bin
E: SUBSYSTEM=block
E: TAGS=:systemd:
E: UDISKS_MD_DEVICES=2
E: UDISKS_MD_DEVICE_sda10_DEV=/dev/sda10
E: UDISKS_MD_DEVICE_sda10_ROLE=0
E: UDISKS_MD_DEVICE_sdb10_DEV=/dev/sdb10
E: UDISKS_MD_DEVICE_sdb10_ROLE=1
E: UDISKS_MD_LEVEL=raid0
E: UDISKS_MD_METADATA=1.0
E: UDISKS_MD_NAME=arch:2
E: UDISKS_MD_UUID=a366b4a3:1ff95e59:5874231c:c1dd8909
E: UDISKS_PRESENTATION_NOPOLICY=1
E: USEC_INITIALIZED=52759
# udevadm info /dev/md1
P: /devices/virtual/block/md1
N: md1
L: 100
S: disk/by-id/md-name-nastassja:1
S: disk/by-id/md-uuid-55cb6f37:27f2eb20:817363be:c5ee6478
E: DEVLINKS=/dev/disk/by-id/md-name-nastassja:1 /dev/disk/by-id/md-uuid-55cb6f37:27f2eb20:817363be:c5ee6478
E: DEVNAME=/dev/md1
E: DEVPATH=/devices/virtual/block/md1
E: DEVTYPE=disk
E: ID_FS_TYPE=LVM2_member
E: ID_FS_USAGE=raid
E: ID_FS_UUID=tk0791-5LDp-s8Rh-wf6O-dBzQ-vVG3-KJ1jFd
E: ID_FS_UUID_ENC=tk0791-5LDp-s8Rh-wf6O-dBzQ-vVG3-KJ1jFd
E: ID_FS_VERSION=LVM2 001
E: MAJOR=9
E: MD_DEVICES=2
E: MD_DEVICE_sda11_DEV=/dev/sda11
E: MD_DEVICE_sda11_ROLE=0
E: MD_DEVICE_sdb11_DEV=/dev/sdb11
E: MD_DEVICE_sdb11_ROLE=1
E: MD_LEVEL=raid0
E: MD_METADATA=1.0
E: MD_NAME=nastassja:1
E: MD_UUID=55cb6f37:27f2eb20:817363be:c5ee6478
E: MINOR=1
E: MPATH_SBIN_PATH=/bin
E: SUBSYSTEM=block
E: TAGS=:systemd:
E: UDISKS_MD_DEVICES=2
E: UDISKS_MD_DEVICE_sda11_DEV=/dev/sda11
E: UDISKS_MD_DEVICE_sda11_ROLE=0
E: UDISKS_MD_DEVICE_sdb11_DEV=/dev/sdb11
E: UDISKS_MD_DEVICE_sdb11_ROLE=1
E: UDISKS_MD_LEVEL=raid0
E: UDISKS_MD_METADATA=1.0
E: UDISKS_MD_NAME=nastassja:1
E: UDISKS_MD_UUID=55cb6f37:27f2eb20:817363be:c5ee6478
E: UDISKS_PRESENTATION_NOPOLICY=1
E: USEC_INITIALIZED=51459
Offline
# pvck -v /dev/md2
Scanning /dev/md2
Found label on /dev/md2, sector 1, type=LVM2 001
Found text metadata area: offset=4096, size=520192
Found LVM2 metadata record at offset=5632, size=1024, offset2=0 size2=0
# pvdisplay /dev/md2
No device found for PV g5W903-ehDk-d3QN-P0Sz-2qnI-di4Q-3Z9ARv.
No device found for PV g5W903-ehDk-d3QN-P0Sz-2qnI-di4Q-3Z9ARv.
--- Physical volume ---
PV Name /dev/md2
VG Name vg1
PV Size 160.00 GiB / not usable 1023.00 MiB
Allocatable yes
PE Size 1.00 GiB
Total PE 159
Free PE 79
Allocated PE 80
PV UUID g5W903-ehDk-d3QN-P0Sz-2qnI-di4Q-3Z9ARv
Hint: the PE size was just a test regarding performance (mdadm chunk size, PV data alignment and PE size) - a sketch of such a setup is below.
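For illustration, such an alignment/PE-size experiment might look roughly like this - a sketch, not necessarily the exact commands used here (--dataalignment sets the PV data alignment, -s the extent size):
# pvcreate --dataalignment 64k /dev/md2    (align the data area to the 64k mdadm chunk)
# vgcreate -s 1G vg1 /dev/md2              (1 GiB physical extents, as shown by pvdisplay above)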
Last edited by xunil64 (2013-11-27 19:21:49)
Offline
Some progress ...
I recreated the RAID array and then the LVs again, but this time using the mdadm metadata format 1.2 (the default).
# mdadm --create --level=0 --raid-devices=2 --chunk=64 /dev/md2 /dev/sda10 /dev/sdb10
# mdadm -E --scan > /etc/mdadm.conf
... then edited mdadm.conf to use /dev/md2 instead of /dev/md/2 (see the sketch after these commands)
... ran mkinitcpio etc.
... rebooted
# pvcreate /dev/md2
# vgcreate vg1 /dev/md2
# lvcreate ...
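For illustration, the mdadm.conf edit mentioned above just changes the device path in the generated ARRAY line - roughly like this, with placeholder UUID and name values:
ARRAY /dev/md/2 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx name=arch:2
becomes
ARRAY /dev/md2 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx name=arch:2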
# pvscan
No device found for PV tk0791-5LDp-s8Rh-wf6O-dBzQ-vVG3-KJ1jFd.
No device found for PV tk0791-5LDp-s8Rh-wf6O-dBzQ-vVG3-KJ1jFd.
PV /dev/sdc3 VG vg3 lvm2 [1000.00 GiB / 50.00 GiB free]
PV /dev/sdc4 VG vg4 lvm2 [1.65 TiB / 94.52 GiB free]
PV /dev/md2 VG vg1 lvm2 [159.87 GiB / 79.87 GiB free]
Total: 3 [2.79 TiB] / in use: 3 [2.79 TiB] / in no VG: 0 [0 ]
This time the /dev/md2 PV is found!
(with lvmetad running)
Offline
After recreating the other md device with metadata format 1.2, the PVs / LVs on it are found too!
It seems to be an issue with the metadata format 1.0
(which puts the metadata block at the end of the partition).
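A quick way to see which superblock version a member device carries, and where the superblock sits, is something like the following (a sketch; the exact field names can vary between mdadm versions):
# mdadm --examine /dev/sda10 | grep -E 'Version|Super Offset'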
Offline
I was under the impression that metadata 1.0 also put it at the beginning, but I may be wrong. Anyway, having the metadata at the end leads to problems - I have never had issues with the 1.2 metadata.
Offline
@xunil64. I can confirm building the array with your conditions leads to the same point, "no device found for ..."
However, I'm also still able to use it.
@brain0, just for completeness, here is the piece of the wiki about where the metadata is stored for versions 0.90 through 1.2 (summarized below): https://raid.wiki.kernel.org
Do we call this a bug in mdadm? Building with 1.2 is painless indeed.
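For quick reference, the superblock locations per metadata version, as that wiki page describes them, are roughly:
0.90 - at the end of the device
1.0  - at the end of the device
1.1  - at the start of the device
1.2  - 4 KiB after the start of the device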
Offline
Ah, so 1.0 is at the end - didn't know that. I never even built 1.0 except when installing syslinux on the device afterwards.
It would be interesting to find out why the 1.0 raid device is being ignored; this problem seems fixable now that we know what it is.
Offline
@xunil64. I can confirm building the array with your conditions leads to the same point, "no device found for ..."
However, I'm also still able to use it.
Do we call this a bug in mdadm? Building with 1.2 is painless indeed.
I think it is a bug in lvmetad.
mdadm isn't affected; there everything is fine.
Even LVM without lvmetad can handle it without any issues.
Offline
qinohe wrote: @xunil64, I can confirm building the array with your conditions leads to the same point, "no device found for ...".
However, I'm also still able to use it.
Do we call this a bug in mdadm? Building with 1.2 is painless indeed.
I think it is a bug in lvmetad.
mdadm isn't affected; there everything is fine.
Even LVM without lvmetad can handle it without any issues.
The thing is, I tried to reproduce the situation, but it was a one-time event for me.
I tried a lot of combinations, starting with fdisk, creating a situation similar to yours.
The setup I used:
md0 = rootvol - housing / and swap
md1 = /boot, ext4
md2 = extravol, ext4
These are the mdadm commands I used (shorthand; one expanded example follows below):
mdadm --create --level=0 --raid-devices=2 --chunk=64 --metadata 0.90 /dev/md[012] /dev/sd[ab]123
mdadm --create --level=0 --raid-devices=2 --chunk=64 --metadata 1.0 /dev/md[012] /dev/sd[ab]123
mdadm --create --level=0 --raid-devices=2 --chunk=64 --metadata 1.1 /dev/md[012] /dev/sd[ab]123
mdadm --create --level=0 --raid-devices=2 --chunk=64 --metadata 1.2 /dev/md[012] /dev/sd[ab]123
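Expanded, one of those shorthand lines reads, for example (assuming md2 was built from the third partitions, as in the layout above):
mdadm --create --level=0 --raid-devices=2 --chunk=64 --metadata 1.0 /dev/md2 /dev/sda3 /dev/sdb3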
I did the same with gdisk, also without the problem.
I also tried grub and syslinux for all situations, but the problem never returned.
Offline
I did the same with gdisk, also without the problem.
I also tried grub and syslinux for all situations, but the problem never returned.
I will try to create the situation again using new GPT partitions (there is some space left to create them).
My config was:
/dev/md0 raid1 - /boot
/dev/md1 raid0 - will be the PV /dev/md1
/dev/md2 raid0 - will be the PV /dev/md2
md1 and md2 had been using the metadata 1.0 format, but are now switched to 1.2.
Offline
Just one thing, though.. I always build / on md0, always, I don't know if that matters, but it felt like common sense to do it like that to me.
Offline
Just one thing, though.. I always build / on md0, always, I don't know if that matters, but it felt like common sense to do it like that to me.
Maybe, but I have been using a separate boot partition for several OSes for a long time.
It's the first partition on both disks (raid1), therefore I have used /dev/md0 for this since "forever".
My root filesystems are all on LVM LVs, so / is never directly on /dev/md0.
Offline
qinohe wrote:Just one thing, though.. I always build / on md0, always, I don't know if that matters, but it felt like common sense to do it like that to me.
Maybe, but I have been using a separate boot partition for several OSes for a long time.
It's the first partition on both disks (raid1), therefore I have used /dev/md0 for this since "forever".
My root filesystems are all on LVM LVs, so / is never directly on /dev/md0.
Ah, that probably doesn't matter then.
My root partition is also on an LV. I don't use raid0, only raid1 - the reason is, well, clear... I guess.
The only raw-formatted device is md1; I always use this for /boot.
Offline