
#1 2016-04-10 17:05:30

zipeldiablo
Member
From: Paris
Registered: 2015-08-15
Posts: 22

Libvirt fail to load vm after setting cset

Hi, so I have a Windows 10 virtual machine which I start with virsh. For the VM config I use a libvirt XML file plus a couple of scripts to improve performance:

#!/bin/bash
# Remember the VM name if it is not already running
if [[ -z $(virsh list | grep win10) ]]; then
  vm=win10
fi

# Hand the second screen over to the guest
xrandr --output HDMI2 --off
xrandr --output HDMI1 --primary

# Confine all processes and kernel threads to cores 0 and 4,
# leaving 1-3 and 5-7 free for the guest
cset set -c 0,4 -s system
cset proc -m -f root -t system
cset proc -k -f root -t system

# Lock the isolated cores to their maximum frequency
cpupower -c 1-3 frequency-set -g performance
cpupower -c 5-7 frequency-set -g performance

# Share keyboard/mouse with the synergy server on the other machine
synergyc 192.168.1.38:24800
And the script that starts the VM itself:

#!/bin/bash
if [[ $EUID -ne 0 ]]; then
  echo "This script must be run as root" 1>&2
  exit 1
fi

set -x

virsh start win10

# Report failure if the VM did not come up
if [[ -z $(virsh list | grep win10) ]]; then
  exit 1
fi

set +x

exit 0
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit win10
or other application using the libvirt API.
-->
<!-- mycomment   -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>win10</name>
  <uuid>9ac4b4e2-1960-4d4b-9918-83becd632ff2</uuid>
  <!-- customize memory   -->
  <memory unit='GiB'>8</memory>
  <currentMemory unit='GiB'>8</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <!-- mycomment   -->
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='7'/>
  </cputune>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='3' threads='2'/>
  </cpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.4'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram template='/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd'/>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
    <kvm>
      <!-- hide kvm from the os to run nvidia drivers   -->
      <hidden state='on'/>
    </kvm>
    <!-- mycomment   -->
    <vmport state='off'/>
  </features>
  <!-- mycomment   -->
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <!-- mycomment   -->
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <!-- System disk in a qcow2 container   -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/zipeldiablo/Windows/win10.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <!-- mycomment   -->
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <!-- use network bridge as network interface   -->
    <interface type='bridge'>
      <mac address='52:54:00:e9:91:48'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <!-- controller pci   -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </hostdev>
    <!-- controller pci bis   -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </hostdev>
    <!-- controler usb card -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-object'/>
    <qemu:arg value='input-linux,id=keyb,evdev=/dev/input/by-id/usb-Heng_Yu_Technology_K82H-event-kbd,grab_all=on,repeat=on'/>
    <qemu:arg value='-object'/>
    <qemu:arg value='input-linux,id=keyboard,evdev=/dev/input/by-id/usb-Heng_Yu_Technology_K82H-event-if01'/>
    <qemu:arg value='-object'/>
    <qemu:arg value='input-linux,id=mouse,evdev=/dev/input/by-path/pci-0000:00:14.0-usb-0:4:1.0-event-mouse'/>
  </qemu:commandline>
</domain>

I assume the error happens when libvirt tries to vcpupin my vCPUs to the physical cores, possibly because cset somehow takes control of those cores.
What I don't get is why it happens at all: I run everything as root and these are my own files, and nobody on the source I used to write my scripts seems to run into this issue.
Googling got me some posts on the Red Hat mailing list but no fix. I assume someone here might have encountered this very issue; if so, any lead on how to solve it?


#2 2016-07-08 18:35:22

tholin
Member
Registered: 2015-03-17
Posts: 7

Re: Libvirt fail to load vm after setting cset

I had the same problem but I managed to find a solution after several hours.

By running

cset set -c 0,4 -s system
cset proc -m -f root -t system
cset proc -k -f root -t system

or just: cset shield --kthread on --cpu 1-3,5-7

cset creates a new cpuset named "system" with allowed cpus=0,4 and then moves all processes on the system into it. It also marks that cpuset exclusive, meaning the CPUs in the set cannot be assigned to any other cpuset. This is a problem because the first thing libvirt does when it starts a domain is create a new cpuset named "machine" containing all CPUs, which it then subdivides into smaller pieces for the various vCPUs. If the "system" set is exclusive, creation of the "machine" set fails.
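The conflict is easy to see if you expand the two CPU lists by hand. Here is a small helper (hypothetical, just to illustrate; `expand_cpulist` is not part of cset or libvirt) that expands cpuset-style lists like "0,4" or "0-7" and shows that the exclusive "system" set and libvirt's all-CPU "machine" set overlap:

```shell
#!/bin/bash
# Hypothetical helper: expand a cpuset list such as "0,4" or "1-3,5-7"
# into individual CPU numbers, one per line.
expand_cpulist() {
  local part
  local -a parts
  IFS=',' read -ra parts <<< "$1"
  for part in "${parts[@]}"; do
    if [[ $part == *-* ]]; then
      seq "${part%-*}" "${part#*-}"
    else
      echo "$part"
    fi
  done
}

# CPUs held exclusively by the "system" set cset created:
system=$(expand_cpulist "0,4")
# CPUs libvirt wants for its "machine" set (all of them on an 8-thread host):
machine=$(expand_cpulist "0-7")

# Non-empty intersection: the exclusive "system" set blocks
# creation of the "machine" set.
comm -12 <(sort <<< "$system") <(sort <<< "$machine")   # prints 0 and 4
```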

The solution is to clear the exclusive flag: echo 0 > /sys/fs/cgroup/cpuset/system/cpuset.cpu_exclusive
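In the original preparation script that would go right after the cset commands. A sketch of the amended fragment (assuming cgroup v1 with the cpuset controller mounted at the usual /sys/fs/cgroup/cpuset; this is a root-only script fragment, not a standalone program):

```shell
cset set -c 0,4 -s system
cset proc -m -f root -t system
cset proc -k -f root -t system

# cset marks the "system" set exclusive; clear that flag so libvirt
# can still create its "machine" cpuset covering all CPUs.
echo 0 > /sys/fs/cgroup/cpuset/system/cpuset.cpu_exclusive
```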

