My system has been running fine since 2012 (and still does), based on LVM on top of an encrypted LUKS partition that I use for both root and home.
I keep some free space in my LVM volume group to take snapshots of home and root before any Arch upgrade (just in case), which I remove after rebooting successfully.
Since the latest update I did a couple of days ago, I keep getting the following message when creating the snapshot. A friend of mine with the exact same setup also gets it. We haven't yet been adventurous enough to answer "Yes" to the question regarding the wipe operation. What do you think?
# sudo lvcreate -L 4g -s -n root-snapshot /dev/vgroup/root
WARNING: DM_snapshot_cow signature detected on /dev/vgroup/root-snapshot at offset 0. Wipe it? [y/n] [n]
1 existing signature left on the device.
Logical volume "root-snapshot" created
Looks like snapshot creation went fine. Later I can remove it without any issue:
# sudo lvremove /dev/vgroup/root-snapshot
Do you really want to remove active logical volume root-snapshot? [y/n]: y
Logical volume "root-snapshot" successfully removed
Here is my LVM setup:
# sudo lvdisplay
--- Logical volume ---
LV Path /dev/vgroup/root
LV Name root
VG Name vgroup
LV UUID 9atWzf-Z9ii-ssiW-P51P-4M7o-uGI6-tgS93q
LV Write Access read/write
LV Creation host, time archiso, 2012-11-18 00:29:14 +0100
LV Status available
# open 1
LV Size 25,00 GiB
Current LE 6400
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:1
--- Logical volume ---
LV Path /dev/vgroup/home
LV Name home
VG Name vgroup
LV UUID tx4ZgI-a2pg-ksVU-vllz-SnWU-VL3p-z0EMSm
LV Write Access read/write
LV Creation host, time archiso, 2012-11-18 00:30:45 +0100
LV Status available
# open 1
LV Size 415,00 GiB
Current LE 106240
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:2
# sudo vgdisplay
--- Volume group ---
VG Name vgroup
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 914
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 445,12 GiB
PE Size 4,00 MiB
Total PE 113951
Alloc PE / Size 112640 / 440,00 GiB
Free PE / Size 1311 / 5,12 GiB
VG UUID Ragce3-tU7G-jcbz-JIdN-grad-yWFH-VyI4uk
# sudo pvdisplay
--- Physical volume ---
PV Name /dev/mapper/vgroup
VG Name vgroup
PV Size 445,13 GiB / not usable 5,82 MiB
Allocatable yes
PE Size 4,00 MiB
Total PE 113951
Free PE 1311
Allocated PE 112640
PV UUID 5cB7tp-yjNe-eskX-gVct-t3YJ-pd1r-JdaL2p
Same problem on all my machines with LVM.
Here is the setup from one of my Arch systems:
# sudo lvdisplay
--- Logical volume ---
LV Path /dev/system/arch
LV Name arch
VG Name system
LV UUID XXXXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXXXX
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 30,00 GiB
Current LE 7680
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0
--- Logical volume ---
LV Path /dev/system/home
LV Name home
VG Name system
LV UUID XXXXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXXXX
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 24,88 GiB
Current LE 6370
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:1
# sudo vgdisplay
--- Volume group ---
VG Name system
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3618
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 59,12 GiB
PE Size 4,00 MiB
Total PE 15135
Alloc PE / Size 14050 / 54,88 GiB
Free PE / Size 1085 / 4,24 GiB
VG UUID XXXXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXXXX
# sudo pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name system
PV Size 59,13 GiB / not usable 4,32 MiB
Allocatable yes
PE Size 4,00 MiB
Total PE 15135
Free PE 1085
Allocated PE 14050
PV UUID XXXXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXXXX
Thanks for the feedback. So this seems to be related to LVM alone, since you are apparently not using LUKS encryption. Now were you brave enough to answer Yes to the wipe question?
Now were you brave enough to answer Yes to the wipe question?
I did this already. Nothing happened. The question appears again on the next creation of a snapshot. (My daily backup reminds me of this.)
I've also started getting this in my backup scripts recently (they use LVM snapshots). Same as alphazo: LVs on a LUKS-encrypted partition. I haven't seen a bug report yet.
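For context, a stripped-down sketch of the kind of script that now trips over the prompt (all LV and mount names hypothetical, not my actual script):
#!/bin/bash
# Nightly backup via a throwaway LVM snapshot (hypothetical names).
lvcreate -L 4g -s -n home-snap /dev/vgroup/home  # <- this now stops to ask about the old signature
mount -o ro /dev/vgroup/home-snap /mnt/snap
rsync -a /mnt/snap/ /backup/home/
umount /mnt/snap
lvremove -f /dev/vgroup/home-snap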
I have a different (maybe related) bug, and found a solution (or workaround) for yours. The problem is the blkid wiping code: the message goes away if you compile the package with --disable-blkid_wiping.
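For those who would rather not rebuild the package, lvm.conf seems to expose the same knob at runtime; a sketch, assuming your lvm2 build understands the allocation/use_blkid_wiping setting:
# /etc/lvm/lvm.conf
allocation {
    # 0 turns off blkid-based signature detection, mirroring a build
    # configured with --disable-blkid_wiping.
    use_blkid_wiping = 0
}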
I have a different (maybe related) bug, and found a solution (or workaround) for yours. The problem is the blkid wiping code: the message goes away if you compile the package with --disable-blkid_wiping.
It works! Thank you.
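Roughly what I did, in case it helps others (paths assume the stock ABS layout, adjust to taste):
# Fetch the lvm2 build files and rebuild with the extra configure flag.
abs core/lvm2
cp -r /var/abs/core/lvm2 ~/build/ && cd ~/build/lvm2
# Edit the PKGBUILD: append --disable-blkid_wiping to the ./configure options.
makepkg -si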