Nevertheless (why use raid0, not raid1?)
I created another raid array (raid0; metadata format 1.2)
There are no issues with it either
I will recreate the PVs / LVs using my Gentoo box (without any lvmetad there); maybe this is the problem
Offline
I will recreate the PVs / LVs using my Gentoo box (without any lvmetad there); maybe this is the problem
When creating them on the Gentoo box (without lvmetad), the issues appear again
Creating in arch:
# mdadm --create --level=0 --raid-devices=2 --chunk=64 -e 1.0 /dev/md4 /dev/sda9 /dev/sdb9
mdadm: array /dev/md4 started.
# pvcreate /dev/md4
Physical volume "/dev/md4" successfully created
# vgcreate vg5 /dev/md4
Volume group "vg5" successfully created
After rebooting everything was fine
Then switched to gentoo:
# vgremove vg5
# pvremove /dev/md127 -- wasn't in /etc/mdadm.conf / initramfs, therefore named md127
# pvcreate /dev/md127
# vgcreate vg5 /dev/md127
# lvcreate -L 10G --name test vg5
Rebooting to arch:
# pvscan
No device found for PV LP33Nk-7EaC-JPht-v33x-duOh-2uTQ-DfP0zz.
No device found for PV LP33Nk-7EaC-JPht-v33x-duOh-2uTQ-DfP0zz.
PV /dev/sdc3 VG vg3 lvm2 [1000.00 GiB / 50.00 GiB free]
PV /dev/md3 VG vg2 lvm2 [1.62 TiB / 904.77 GiB free]
PV /dev/sdc4 VG vg4 lvm2 [1.65 TiB / 94.52 GiB free]
PV /dev/md2 VG vg1 lvm2 [159.87 GiB / 79.87 GiB free]
Total: 4 [4.41 TiB] / in use: 4 [4.41 TiB] / in no VG: 0 [0 ]
After removing /dev/md4 (the newly created md array) from /etc/mdadm.conf and rebooting Arch (so the new md will be named md127 too), the issue is still there
Offline
Nevertheless (why use raid0, not raid1?)
I said I never use raid0, only raid1, but I guess you mean why not use raid0 and raid1 only, or I misunderstand you here
I created another raid array (raid0; metadata format 1.2)
There are no issues with it either
Very nice!
I will recreate the PVs / LVs using my gentoo box (there without any lvmetad), maybe this is a problem
Yeah, you might be spot on here; if it works for me, it should most certainly work for you too.
Btw., why don't you use Arch for that, an OCD thing? I used to create my volumes/partitions for all other OSes with FreeBSD... though I've been using Arch for it for quite some time now
Offline
xunil64 wrote: I will recreate the PVs / LVs using my Gentoo box (without any lvmetad there); maybe this is the problem
When creating them on the Gentoo box (without lvmetad), the issues appear again
After rebooting everything was fine
That's very nice.
Then switched to gentoo:
After removing /dev/md4 (the newly created md array) from /etc/mdadm.conf and rebooting Arch (so the new md will be named md127 too), the issue is still there
Why on earth would you do that when it was working? Or are you trying to pinpoint it?
Offline
Why on earth would you do that when it was working? Or are you trying to pinpoint it?
Sorry, just trying to identify the main issue...
0) On the Gentoo box the md array was named /dev/md127; I created the PV based on that
a) When booting into Arch without any definition in the initramfs / /etc/mdadm.conf (therefore named md127 too), the PV will be found
# pvdisplay /dev/md127
--- Physical volume ---
PV Name /dev/md127
VG Name vg5
PV Size 19.53 GiB / not usable 3.88 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 4999
Free PE 2439
Allocated PE 2560
PV UUID LP33Nk-7EaC-JPht-v33x-duOh-2uTQ-DfP0zz
b) When booting into Arch with a definition in the initramfs / /etc/mdadm.conf (named md4), the PV will have issues
# pvscan
No device found for PV LP33Nk-7EaC-JPht-v33x-duOh-2uTQ-DfP0zz.
No device found for PV LP33Nk-7EaC-JPht-v33x-duOh-2uTQ-DfP0zz.
PV /dev/sdc3 VG vg3 lvm2 [1000.00 GiB / 50.00 GiB free]
PV /dev/md3 VG vg2 lvm2 [1.62 TiB / 904.77 GiB free]
PV /dev/sdc4 VG vg4 lvm2 [1.65 TiB / 94.52 GiB free]
PV /dev/md2 VG vg1 lvm2 [159.87 GiB / 79.87 GiB free]
Total: 4 [4.41 TiB] / in use: 4 [4.41 TiB] / in no VG: 0 [0 ]
# pvdisplay /dev/md4
No device found for PV LP33Nk-7EaC-JPht-v33x-duOh-2uTQ-DfP0zz.
No device found for PV LP33Nk-7EaC-JPht-v33x-duOh-2uTQ-DfP0zz.
--- Physical volume ---
PV Name /dev/md4
VG Name vg5
PV Size 19.53 GiB / not usable 3.88 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 4999
Free PE 2439
Allocated PE 2560
PV UUID LP33Nk-7EaC-JPht-v33x-duOh-2uTQ-DfP0zz
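Side note: the pvdisplay numbers above are internally consistent (Total PE times PE Size reproduces the PV Size), so the LVM metadata itself looks intact. The arithmetic, with awk used only for the unit conversion:

```shell
# 4999 extents of 4 MiB each, converted from MiB to GiB
awk 'BEGIN { pe = 4999; mib = 4; printf "%.2f GiB\n", pe * mib / 1024 }'
# -> 19.53 GiB, matching the "PV Size 19.53 GiB" line
```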
Offline
Could you show your mdadm.conf for clues.
Offline
Could you show your mdadm.conf for clues.
# cat /etc/mdadm.conf
MAILADDR root@arch.local
#PROGRAM /usr/bin/handle-mdadm-events
DEVICE /dev/sd*
ARRAY /dev/md/0 metadata=1.0 UUID=65a58608:9c6e3767:1ed32281:ddbb141f name=nastassja:0
ARRAY /dev/md/2 metadata=1.2 UUID=56a0fd3e:32e58bc6:350e9149:4624c998 name=arch:2
ARRAY /dev/md/3 metadata=1.2 UUID=ca1db8a2:4934466a:95f1e9cb:b3f942b8 name=arch:3
ARRAY /dev/md/4 metadata=1.0 UUID=591f9826:64527a8d:df6748e8:a0ce4f4b name=arch:4
If /dev/md4 is defined (in the initramfs), pvscan will fail
If I comment out the /dev/md4 line, pvscan has no issues
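Given the lvmetad suspicion mentioned earlier in the thread, one thing worth checking (a guess, not a confirmed cause) is whether LVM is trusting the daemon's cached device list instead of rescanning after the name change. The relevant option, in lvm2 of that era, lives in /etc/lvm/lvm.conf:

```shell
# /etc/lvm/lvm.conf (fragment) -- with use_lvmetad = 1, LVM commands ask the
# daemon for its cached view of devices; 0 forces a full scan on every command
global {
    use_lvmetad = 0
}
```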
Offline
I have read somewhere on the net something like: if mdadm discovers a problem, it will create new device names on its own; if I find it, I'll place the link.
The solution he used was removing 'unnecessary' items from mdadm.conf, something like this:
ARRAY /dev/md/0 UUID=65a58608:9c6e3767:1ed32281:ddbb141f
ARRAY /dev/md/2 UUID=56a0fd3e:32e58bc6:350e9149:4624c998
ARRAY /dev/md/3 UUID=ca1db8a2:4934466a:95f1e9cb:b3f942b8
ARRAY /dev/md/4 UUID=591f9826:64527a8d:df6748e8:a0ce4f4b
This should only affect 'naming' as far as I know.
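If you'd rather generate the trimmed lines than edit them by hand, something like this should work (the ARRAY line is copied from the conf posted above; the sed pattern is my own sketch, so verify it against your real file before overwriting anything):

```shell
# Drop the metadata= and name= fields from an ARRAY line, keeping device + UUID
line='ARRAY /dev/md/4 metadata=1.0 UUID=591f9826:64527a8d:df6748e8:a0ce4f4b name=arch:4'
echo "$line" | sed -E 's/ (metadata|name)=[^ ]+//g'
# -> ARRAY /dev/md/4 UUID=591f9826:64527a8d:df6748e8:a0ce4f4b
```

To apply it in place you could run the same sed with -i.bak over /etc/mdadm.conf, keeping the .bak copy in case one of the dropped fields turns out to matter.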
edit: here is the link on Ubuntu forums
Last edited by qinohe (2013-11-30 11:15:02)
Offline
As mentioned, currently I don't have any problem; I just wanted to figure out the described issues
Now I can reproduce the error:
# mdadm --create --level=0 -e 1.0 --chunk=64 --raid-devices=2 /dev/md5 /dev/sda5 /dev/sdb5
# -- reboot / not updating mdadm.conf in initramfs ---
# -- after reboot
# pvcreate /dev/md127 -- the new raid array gets a default name, md127
# vgcreate vg5 /dev/md127
# lvcreate -L 10G --name test vg5
# -- now updating mdadm.conf and initramfs
# mdadm -Es > /etc/mdadm.conf
# mkinitcpio -p linux -- new initramfs; I was even just using pacman -S linux (easier ;))
# -- reboot / md127 will be md5
# -- after reboot
# pvscan
No device found for PV rKOLhP-bajK-EkUS-ipRD-9Ex1-81fR-QezXp3.
No device found for PV rKOLhP-bajK-EkUS-ipRD-9Ex1-81fR-QezXp3.
PV /dev/sdc3 VG vg3 lvm2 [1000.00 GiB / 50.00 GiB free]
PV /dev/md3 VG vg2 lvm2 [1.62 TiB / 904.77 GiB free]
PV /dev/sdc4 VG vg4 lvm2 [1.65 TiB / 94.52 GiB free]
PV /dev/md2 VG vg1 lvm2 [159.87 GiB / 79.87 GiB free]
Total: 4 [4.41 TiB] / in use: 4 [4.41 TiB] / in no VG: 0 [0 ]
# pvdisplay /dev/md5
No device found for PV rKOLhP-bajK-EkUS-ipRD-9Ex1-81fR-QezXp3.
No device found for PV rKOLhP-bajK-EkUS-ipRD-9Ex1-81fR-QezXp3.
--- Physical volume ---
PV Name /dev/md5
VG Name vg5
PV Size 19.53 GiB / not usable 3.88 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 4999
Free PE 2439
Allocated PE 2560
PV UUID rKOLhP-bajK-EkUS-ipRD-9Ex1-81fR-QezXp3
Offline
I understand you have no problem, but let's look into the md5 --> md127 change.
Maybe that's what triggers it: your reboot in between.
The creation from mdX to LV is done in one go here.
I will try and do it like you, with a reboot in between, I'll report back.
Offline
but let's look into the md5 --> md127 change.
Maybe that's what triggers it: your reboot in between.
The creation from mdX to LV is done in one go here.
I will try and do it like you, with a reboot in between, I'll report back.
It fails even for:
# mdadm --create --level=0 -e 1.0 --raid-devices=2 /dev/md5 /dev/sda5 /dev/sdb5
# pvcreate /dev/md5
# vgcreate vg5 /dev/md5
# lvcreate -L 10G --name test vg5
# mdadm -Es > /etc/mdadm.conf
# -- edit /etc/mdadm.conf, change /dev/md/5 to /dev/md/4
# -- mkinitcpio
# -- reboot
# -- pvscan will fail
Offline
You don't mention using vgscan & vgchange -ay; I do use them, and the wiki says this too.
Well, I tried it the way you do, with a reboot in between, also without the issue.
edit; Why run mkinitcpio?
Last edited by qinohe (2013-11-30 12:24:15)
Offline
edit; Why run mkinitcpio?
Without updating the initramfs you will have a "default"-named md device (isn't it?)
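For anyone following along: the initramfs carries its own copy of /etc/mdadm.conf, so name changes only take effect after the image is regenerated. A sketch of the usual Arch setup (hook list is illustrative; check your own mkinitcpio.conf):

```shell
# /etc/mkinitcpio.conf (fragment) -- the mdadm_udev hook copies /etc/mdadm.conf
# into the image at build time; arrays not matched there are auto-assembled
# under the next free name counting down from md127
HOOKS="base udev autodetect block mdadm_udev lvm2 filesystems"
```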
Offline
qinohe wrote:edit; Why run mkinitcpio?
Without updating the initramfs you will have a "default"-named md device (isn't it?)
When simply adding an array I don't do this, I must say...
edit; Forgot to say, rebooting is also not necessary; you can create it and mount it.
Last edited by qinohe (2013-11-30 13:32:00)
Offline