Hi. I'm trying to do this, but I keep getting a code 43 error.
I have a laptop with an Intel i7-4900MQ CPU and an Nvidia GTX 770M GPU. I'm assigning the Nvidia GPU to the VM (running Windows 8.1) and using the iGPU (Intel HD 4600) on the host. Here's my VM's XML file:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>goeosvm</name>
  <uuid>fb039381-3dbf-4bb6-b8cd-83a602b141db</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>8</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
    <kvm>
      <hidden state='on'/>
    </kvm>
  </features>
  <cpu mode='host-model'>
    <model fallback='allow'/>
    <topology sockets='1' cores='4' threads='2'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/sbin/qemu-system-x86_64</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/sdb4'/>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='block' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/goeosvm.img'/>
      <target dev='hdc' bus='ide'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:3a:8b:fc'/>
      <source network='default'/>
      <model type='rtl8139'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes' listen='127.0.0.1' keymap='tr'>
      <listen type='address' address='127.0.0.1'/>
      <image compression='off'/>
    </graphics>
    <sound model='ich6'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom bar='on' file='/home/goeo_/Documents/MSI.GTX770M.3072.130324.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <redirdev bus='usb' type='spicevmc'>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
    </redirdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-drive'/>
    <qemu:arg value='if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd'/>
  </qemu:commandline>
</domain>
What am I doing wrong?
noctlos wrote:
aw wrote: In order to assign a PCI device to a VM, all of the guest memory needs to be pinned (i.e. locked) into host memory and mapped through the IOMMU. A normal user only has the ability to lock 64KB of memory (see ulimit). Your VM is probably bigger than that, therefore you need to increase the locked memory limit for the user running the VM.
Thanks for the tip! How do I increase the locked memory limit?
/etc/security/limits.d/ (see limits.conf in the parent directory)
I created a file at /etc/security/limits.d/50-qemu.conf
@domain hard as 10000000
I also did the following:
# chown noctlos:noctlos /dev/vfio/15
# chmod 666 /dev/vfio/vfio
# chown -R noctlos:noctlos /dev/hugepages
Unfortunately, when I go to run qemu, it again gives me:
qemu-system-x86_64: -device vfio-pci,host=03:00.0: vfio_dma_map(0x7f69d1c20890, 0x0, 0x80000000, 0x7f6718000000) = -12 (Cannot allocate memory)
qemu-system-x86_64: -device vfio-pci,host=03:00.0: vfio_dma_map(0x7f69d1c20890, 0x100000000, 0x200000000, 0x7f6798000000) = -12 (Cannot allocate memory)
qemu-system-x86_64: -device vfio-pci,host=03:00.0: vfio: memory listener initialization failed for container
qemu-system-x86_64: -device vfio-pci,host=03:00.0: vfio: failed to setup container for group 15
qemu-system-x86_64: -device vfio-pci,host=03:00.0: vfio: failed to get group 15
qemu-system-x86_64: -device vfio-pci,host=03:00.0: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=03:00.0: Device 'vfio-pci' could not be initialized
And for reference:
% ulimit -a :(
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) unlimited
-m: resident set size (kbytes) unlimited
-u: processes 63576
-n: file descriptors 4096
-l: locked-in-memory size (kbytes) 64
-v: address space (kbytes) unlimited
-x: file locks unlimited
-i: pending signals 63576
-q: bytes in POSIX msg queues 819200
-e: max nice 20
-r: max rt priority 0
-N 15: unlimited
Last edited by noctlos (2015-06-03 14:10:37)
@noctlos
You're not setting the locked memory limit
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
@goeo_
You're expecting discrete graphics in a laptop to behave the same as a discrete card in a desktop. You might have a whole new set of issues to tackle for Optimus.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
@goeo_
You're expecting discrete graphics in a laptop to behave the same as a discrete card in a desktop. You might have a whole new set of issues to tackle for Optimus.
I thought Optimus was software only? What kind of different behaviors should I expect? Has this never been done before?
Well, at least it somehow arbitrates the display; it usually copies the contents of the video buffers back and forth. I'd kinda fear what it will do if there are different OSes using them ... though you can try )
@noctlos
You're not setting the locked memory limit
Yup. Instead of:
@group hard as <limit>
I needed to use something like
@group hard memlock <limit>
@group soft memlock <limit>
Along with chown'ing the appropriate files mentioned previously, I can start my VM as a user. Woohoo!
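For reference, a complete limits.d entry along those lines might look like the sketch below; the group name and the 12 GiB value are assumptions, the unit is KiB, and the limit needs to cover the guest RAM plus some VFIO overhead:
# /etc/security/limits.d/50-qemu.conf -- example values, adjust group and size to your setup
# memlock is in KiB; 12582912 KiB = 12 GiB, enough for an 8 GiB guest plus overhead
@kvm    soft    memlock    12582912
@kvm    hard    memlock    12582912
The new limit only applies to fresh login sessions, so log out and back in and check with ulimit -l before starting the VM.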
Now, if only I could figure out a way to get the sound to work!
I'm using the qemu option
-soundhw hda
with QEMU_AUDIO_DRV=pa. I don't get any errors when I run it as a user, as some have, but the sound quality is as bad as ever. Did anyone solve this?
EDIT: Nevermind, the sound works okay when I increase QEMU_PA_SAMPLES to the default of 4096.
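For anyone following along, a minimal sketch of that invocation as a wrapper script (the -m size and the pass-through of remaining options are placeholders, not the exact command from this thread):
#!/bin/bash
# force the PulseAudio backend with a larger sample buffer, then hand off to qemu
export QEMU_AUDIO_DRV=pa
export QEMU_PA_SAMPLES=4096
exec qemu-system-x86_64 -enable-kvm -m 8192 -soundhw hda "$@"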
Last edited by noctlos (2015-06-03 15:15:30)
has this never been done before?
Exactly
BTW, I've observed very bad behaviour of a 770M in some other (I think it was a Lenovo) notebook, breaking lspci with unknown headers.
So I'd totally not expect it to work. Ask ghormoon, he has an old laptop with a more "true" dedicated card, and that's even a Radeon, and... there's still a huge fail.
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Hi, I tried to use GPU passthrough with libvirt and qemu-kvm on Ubuntu, but there was a problem where PCIe in the guest OS was downgraded to version 1.1 instead of the 2.0 it shows when running Windows on bare metal. Does this problem appear with the latest software versions on Arch?
Hi, I tried to use GPU passthrough with libvirt and qemu-kvm on Ubuntu, but there was a problem where PCIe in the guest OS was downgraded to version 1.1 instead of the 2.0 it shows when running Windows on bare metal. Does this problem appear with the latest software versions on Arch?
Devices will often reduce the link rate to save power when not under load; this is why tools like GPU-Z provide a render load so you can see the correct link rate. You also don't want to fully trust what the guest thinks of the link rate; look at the host device with lspci.
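A minimal way to do that check from the host, assuming the GPU sits at 01:00.0 (substitute the address from your own lspci output):
# show the host-side PCIe link capability and current link status of the assigned GPU
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
Running this while the guest has a 3D load active should show the link training back up to its full rate.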
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
I can pass a second GPU through just fine with a bare qemu command line, exactly like in the OP. But when I try to put this into a libvirt XML, it won't work, even with XMLs others posted as working.
The log says that the monitoring socket could not be created and that vfio-pci could not be initialised because access to /dev/vfio/16 is denied.
Well, I have made everything root for testing in /etc/libvirt/libvirtd.conf and /etc/libvirt/qemu.conf.
I chmodded /dev/vfio/* to 666.
I set selinux to disabled.
I have allowed the device cgroup in the config files.
I even tried brute force on a testing kernel without support for cgroups, SELinux or capabilities, and with every security feature I could find ripped out or set to allow. Still access denied.
Before the failure there are warnings that the virtual instance is "tainted" with high privileges (o rly?) and custom argv. Yeah, sure, that's right. Does that taint prevent it from starting?
How can I give qemu the x-vga=on parameter without having the extra <qemu:commandline> segment in the XML? That would make life a lot easier.
I was browsing the libvirt source code for some time, but I wasn't entirely sure where the allowed extra arguments are passed to qemu.
I will paste the actual configs, logs and versions later, because the target computer is offline and I am typing from another guy's internet.
@PrinzipDesSees
See part 5 of the series in my blog (sig) for how to use wrapper scripts to apply x-vga=on. Using qemu:args is likely the problem; there are very few cases where those are necessary or advisable. You should not need to touch libvirtd.conf or qemu.conf.
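The idea, roughly, is to point libvirt's <emulator> at a small wrapper that rewrites the vfio-pci argument before launching the real binary. A minimal sketch, assuming the GPU is at 01:00.0 and qemu lives at /usr/bin/qemu-system-x86_64 (adapt both, and see the blog post for the full treatment):
#!/bin/bash
# hypothetical /usr/local/bin/qemu-wrapper: append x-vga=on to the passed-through GPU,
# then exec the real emulator with the rewritten argument list
args=()
for arg in "$@"; do
    args+=("${arg/vfio-pci,host=01:00.0/vfio-pci,host=01:00.0,x-vga=on}")
done
exec /usr/bin/qemu-system-x86_64 "${args[@]}"
With that in place, <emulator> in the domain XML points at the wrapper and the <qemu:commandline> block can go away.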
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Update to the laptop experiment:
I've fiddled a bit more with drivers, ended up on the latest beta ones anyway ...
It works, except that it turns the display off in Windows for some strange reason.
In Linux everything is OK, extending to the external display too.
In Windows the laptop screen goes black, though the OS actually sees it, detects resolutions and such. The external display works fine. It's somewhat usable if I disable the internal one so it won't extend the desktop to it. Windows rated it 5.3 and 5.9, which is approximately what it had on bare metal (I don't remember exactly). I'll do some game tests someday.
What I need to fix now is that the VMs don't shut down properly. I'm thinking of trying another distro (more stable kvm and qemu); did anyone succeed with VFIO passthrough on, e.g., Debian?
Also, the next thing to do is to modify qemu for PS/2 passthrough and write some management scripts, and I'm good to go!
I can actually somehow live with the internal screen not working in Windows, though it's not the preferred variant (can't play games except at home, but I don't have that much time for gaming anyway )
Say I wanted to use this primarily for virtual reality applications. Would the latency inherent in virtualization be a deal breaker? Has anyone tried?
Hi there! Long time no see! My system is working really well. However, I am still on Linux 3.9.
Is it safe to upgrade to 4.0.4? I am using the regular Linux kernel, NOT a patched one.
Hi there! Long time no see! My system is working really well. However, I am still on Linux 3.9.
Is it safe to upgrade to 4.0.4? I am using the regular Linux kernel, NOT a patched one.
You're really the only one that can answer this, but I sure hope the answer is yes. That's certainly the goal, to improve things without breaking existing users.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Say I wanted to use this primarily for virtual reality applications. Would the latency inherent in virtualization be a deal breaker? Has anyone tried?
Direct device assignment is generally a way to avoid the latency inherent in virtualization. For the most part, the VM has direct access to the hardware. The exceptions to that are interrupts, which are bounced through the host on current hardware; PCI config space, which is filtered by the hypervisor for virtualization; and any regions that cannot map directly to the guest, such as quirks for mmio mirrors of config space, io-port regions, or regions smaller than the processor page size.
There are also configuration options important to reducing latency, such as MSI tuning, vCPU pinning and proper balancing of CPU resources between host and guest, and potentially even IRQ pinning in the host. If you have a processor that supports Intel APICv, the latter may allow interrupts to be injected to the guest without a vmexit. We'll have to wait for software and hardware to support Intel Posted Interrupts to avoid the host in the interrupt path altogether.
People here are showing results within a few percentage points of bare metal in the best cases, so I would naively assume that the same is possible for VR applications.
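As an illustration of the vCPU pinning mentioned above, the libvirt side of it might look like this sketch for a 4-vCPU guest (the host CPU numbers are placeholders; match them to real cores/threads from your own topology):
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
  <emulatorpin cpuset='0-1'/>
</cputune>
Pinning the emulator threads away from the vCPUs keeps qemu's own work from stealing time from the guest's cores.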
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Awesome, thank you very much. I tried it and nothing broke! So I'm up to date.
I have another question that's been bugging me for months.
Whenever I turn on my PC, Arch Linux starts automatically first, and after that Windows 8.1 starts through virt-manager.
How about turning it off? Is there a single command I can use to first safely shut down my virtualized Windows and then the host?
Because if I simply use "poweroff" in the terminal, I don't think it's done properly, right?
Awesome, thank you very much. I tried it and nothing broke! So I'm up to date.
I have another question that's been bugging me for months.
Whenever I turn on my PC, Arch Linux starts automatically first, and after that Windows 8.1 starts through virt-manager.
How about turning it off? Is there a single command I can use to first safely shut down my virtualized Windows and then the host?
Because if I simply use "poweroff" in the terminal, I don't think it's done properly, right?
On Fedora there's an /etc/sysconfig/libvirt-guests file that includes such things as:
# action taken on host boot
# - start   all guests which were running on shutdown are started on boot
#           regardless on their autostart settings
# - ignore  libvirt-guests init script won't start any guest on boot, however,
#           guests marked as autostart will still be automatically started by
#           libvirtd
#ON_BOOT=start

# Number of seconds to wait between each guest start. Set to 0 to allow
# parallel startup.
#START_DELAY=0

# action taken on host shutdown
# - suspend   all running guests are suspended using virsh managedsave
# - shutdown  all running guests are asked to shutdown. Please be careful with
#             this settings since there is no way to distinguish between a
#             guest which is stuck or ignores shutdown requests and a guest
#             which just needs a long time to shutdown. When setting
#             ON_SHUTDOWN=shutdown, you must also set SHUTDOWN_TIMEOUT to a
#             value suitable for your guests.
#ON_SHUTDOWN=suspend

# If set to non-zero, shutdown will suspend guests concurrently. Number of
# guests on shutdown at any time will not exceed number set in this variable.
#PARALLEL_SHUTDOWN=0

# Number of seconds we're willing to wait for a guest to shut down. If parallel
# shutdown is enabled, this timeout applies as a timeout for shutting down all
# guests on a single URI defined in the variable URIS. If this is 0, then there
# is no time out (use with caution, as guests might not respond to a shutdown
# request). The default value is 300 seconds (5 minutes).
#SHUTDOWN_TIMEOUT=300
I assume this works in conjunction with the systemd libvirt-guests.service. For a VM with an assigned device I'd generally suggest shutdown over suspend, unless you know suspend works with your devices. I also notice that Windows sometimes only wakes up the first time a shutdown command is sent, so some configuration may be necessary in the guest to make it shut down on the first attempt. There are also libvirt XML options to configure whether suspend is available to the guest.
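For an Arch host, a sketch of the equivalent setup, assuming the settings file lives at /etc/conf.d/libvirt-guests (path and values are assumptions; check your distribution's packaging):
# /etc/conf.d/libvirt-guests -- hypothetical example
ON_BOOT=ignore          # let libvirtd's per-guest autostart flags decide
ON_SHUTDOWN=shutdown    # ask guests to shut down cleanly rather than suspend
SHUTDOWN_TIMEOUT=120    # seconds to wait before the host continues powering off
Then enable the matching unit so host shutdown waits for the guests:
systemctl enable libvirt-guests.service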
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
That's great. I will try this.
Thank you very much, Alex! Always a help. Awesome!
Is there an updated site/post where I can find the newest _working_ nvidia driver version and a to-do list to get it working?
Is there an updated site/post where I can find the newest _working_ nvidia driver version and a to-do list to get it working?
There are no non-working versions. Follow the guide series in my blog below for what I think is the current best setup.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Having issues with this; see my thread here: https://bbs.archlinux.org/viewtopic.php … 6#p1534516. Some sort of driver conflict, I think.
Having issues with this; see my thread here: https://bbs.archlinux.org/viewtopic.php … 6#p1534516. Some sort of driver conflict, I think.
There's a perfectly good guide in the link in my sig; I'd strongly recommend it over any of this vfio-bind script nonsense. Your picture is way too blurry to figure out anything meaningful from it.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Should I start from Part 3 and proceed from there? It will be a tad difficult since I cannot boot into a graphical environment. I think I just need to remove pci-stub from the initramfs and boot without loading the systemd service and kernel module (pci_stub).
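For the initramfs part, a rough sketch of what that usually involves on Arch, assuming pci-stub was added via the MODULES array (the file contents here are illustrative, not taken from the linked thread):
# in /etc/mkinitcpio.conf, drop pci-stub from the MODULES array, e.g.
#   before: MODULES="pci-stub"
#   after:  MODULES=""
# then rebuild the initramfs for the stock kernel:
mkinitcpio -p linux
# and keep the binding service from starting on the next boot (unit name is a placeholder):
systemctl disable vfio-bind.service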
Last edited by garnerlogan65 (2015-06-05 20:00:04)