After a recent upgrade to 3.0.0-1, I'm unable to run any virtual machine with storage on an LVM storage pool. The pool is defined and the LVM volumes are created and recognized, but starting the domain with virsh gives:
virsh # start centos7.0
error: Failed to start domain centos7.0
An error occurred, but the cause is unknown
Do you have any suggestions what could be wrong?
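When virsh only reports an unknown cause like this, the per-domain qemu log usually has the real error. A sketch of where to look (domain name taken from the post above; paths are the libvirt defaults):

```shell
# Tail the domain's qemu log (default libvirt log location).
sudo tail -n 50 /var/log/libvirt/qemu/centos7.0.log

# The libvirtd journal often has the matching daemon-side message.
sudo journalctl -u libvirtd -b --no-pager | tail -n 50
```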
Last edited by mmarzantowicz (2017-01-27 09:24:03)
Offline
+1 on this. None of the previously working machines on LVM work for me after the update. Any machine created with file-backed storage works fine.
Offline
Same here, I think. I just downgraded:
libvirt 3.0.0-1 2.4.0-2 -1,37 MiB
libvirt-glib 1.0.0-1 0.2.3-1 0,11 MiB
libvirt-python 2.5.0-1 2.2.0-2 -0,02 MiB
Now it works like a charm again.
Before I had:
aes: Failed to open file '/var/lib/libvirt/qemu/domain-1-mymachine/master-key.aes': No such file or directory
And when I created an empty dummy file for master-key.aes:
2017-01-25T21:29:35.691145Z qemu-system-x86_64: -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=25: TUNGETIFF ioctl() failed: Bad file descriptor
TUNSETOFFLOAD ioctl() failed: Bad file descriptor
2017-01-25T21:29:35.691240Z qemu-system-x86_64: -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=25: vhost_set_owner failed: Bad file descriptor (9)
2017-01-25T21:29:35.691249Z qemu-system-x86_64: -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=25: vhost-net requested but could not be initialized
P.S.: I know that doesn't point to LVM, but my machine (Windows …) also uses LVs, and this sounds like a bigger issue with libvirt 3.0.0.
Last edited by blacky (2017-01-25 22:18:54)
Offline
+1 on this as well.
Offline
+1 for me too. VMs using a file-based storage pool are fine. Not so for LVM-based.
Offline
I think this problem is related to this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1413773
The only fix right now seems to be to downgrade libvirt to the previous version.
Offline
Yeah, libvirt 3.0.0 breaks LVM-backed virtual machines. Fix it by downgrading to the previous version, 2.4.0:
https://archive.archlinux.org/packages/ … pkg.tar.xz
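A minimal sketch of the downgrade, assuming the old package is still in the local pacman cache (the exact filename may differ; otherwise download it from the archive link above):

```shell
# Install the cached 2.4.0-2 package over the broken 3.0.0-1
# (filename is an assumption; check /var/cache/pacman/pkg/ first).
sudo pacman -U /var/cache/pacman/pkg/libvirt-2.4.0-2-x86_64.pkg.tar.xz

# Restart the daemon so it runs the downgraded version.
sudo systemctl restart libvirtd
```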
Offline
This looks like a major breakage. Should this be noted somewhere, in the Arch Wiki or on the package page?
Offline
+1 for me too. Downgrading to the previous version fixed it.
Offline
+1, has anyone found a workaround that does not involve downgrading?
Offline
No idea if this is related, but I ran into a very similar issue with ZFS pools when updating to libvirt 3.0.0.
The reason seems to be that libvirt now spawns qemu in a separate mount namespace, presumably making the device nodes in /dev unavailable.
I could work around that by setting the namespaces option in qemu.conf to namespaces = [] (it can be found at the bottom of the file; per-VM mount namespaces are a new security feature added in 3.0.0).
I'm not entirely sure why libvirt fails to make those devices available to qemu automatically, though, since by default it blocks access to everything in /dev anyway and only exposes what the storage drives need...
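For reference, a sketch of that workaround (the namespaces option sits at the bottom of /etc/libvirt/qemu.conf; an empty list disables the new per-VM mount namespace, so it trades away the 3.0.0 isolation feature):

```shell
# Disable qemu mount namespaces in /etc/libvirt/qemu.conf.
# If the file already has a commented-out "namespaces" line,
# edit that in place instead of appending.
echo 'namespaces = []' | sudo tee -a /etc/libvirt/qemu.conf

# Restart libvirtd so the setting takes effect.
sudo systemctl restart libvirtd
```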
Offline
Setting namespaces to [] worked for me on LVM volumes as well.
Thx GitOut!
Last edited by sulaweyo (2017-02-05 11:27:34)
Offline
Thx GitOut!
Offline
After libvirt-python was upgraded to 3.0.0, virt-manager is now broken for anyone who performed the libvirt downgrade.
Here's the previous version of libvirt-python for downgrading: https://archive.archlinux.org/packages/ … pkg.tar.xz
Set the following in /etc/pacman.conf to skip upgrades of those packages until the issue is resolved:
IgnorePkg = libvirt libvirt-python
Last edited by abefar (2017-02-12 02:18:23)
Offline
Thanks GitOut,
namespaces = []
works; I can start my VM again.
Offline
Yesterday I installed the new 3.1.0 release and it seems to work fine now without any modifications:
community/libvirt 2.4.0-2 3.1.0-1 1,43 MiB 6,65 MiB
community/libvirt-glib 0.2.3-1 1.0.0-1 -0,11 MiB 0,27 MiB
community/libvirt-python 2.2.0-2 3.1.0-1 0,01 MiB 0,14 MiB
Offline