I could now make it all work with only stock software (from the repos, no recompiling):
Kernel: 3.15 (no patches)
Qemu 2.0.0
libvirt 1.2.2
virt-manager 1.0.1
CPU AMD FX(tm)-8350 Eight-Core Processor
GPU radeon R9 270X
Motherboard SABERTOOTH 990FX R2.0
This is on Linux Mint 17 (based on Ubuntu 14.04 LTS). For libvirt I used the default pc chipset (no manual XML editing!). I'm using the emulated GPU for booting; the OS switches to the physical GPU later during boot. Good to finally see this work without much hackery. :)
Last edited by novist (2014-06-09 11:47:43)
Hey, novist. I am using the same software as you, and there is still one problem for me.
Kernel: 3.15 (no patches)
Qemu: 2.0.0
libvirt: 1.2.2
CPU: Intel(R) Core(TM) i7-4770
GPU: radeon 7870
Motherboard: Asus Z87K
My OS is Linux Mint 17 too. I used the kvm + vfio_pci method. I configured everything according to this thread except the kernel/qemu versions and patches.
I have succeeded in booting my guest OS, so I think the VGA card is at least partially passed through.
When I start my Win7 guest, its boot screen shows through the Radeon 7870 on my secondary monitor.
The problem is that I just can't find the "Radeon 7870" device in Win7's device manager. There is only one graphics card, called "Normal VGA Monitor".
Indeed, I have tried installing the ATI driver and running Warcraft III, and everything is OK. I am just a little confused about that.
Sounds like it works alright. If you get the expected performance, don't worry about it. Could be some KVM hiccup or something, who knows. You did not add an emulated VGA, right? I can suggest trying to set up the VM on the pc chipset with the physical VGA as secondary; maybe that would change something.
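(For reference, a minimal sketch of what "physical VGA as secondary" can look like on the qemu command line; the PCI addresses are only examples and would have to match your card:)
$ qemu-system-x86_64 -enable-kvm -m 4096 -cpu host \
    -vga std \
    -device vfio-pci,host=01:00.0 \
    -device vfio-pci,host=01:00.1
Here -vga std stays as the primary emulated adapter, and the assigned card, without x-vga=on, shows up as a secondary display adapter in the guest.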
Got it working, however as expected the audio sucks. What sort of latency would I be looking at if I connected a USB sound card to my line in?
I keep getting this error:
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error no iommu_group for device
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized
Specs:
3570K
AMD Radeon 6970 (Card I'm trying to pass through)
Mobo: ASUS P8Z77-V LK
This is the third time I've tried this after several formats (not related to this).
Any idea on how to solve it? I've tried looking around, but I can't find anything.
iwasaperson wrote: I keep getting this error:
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error no iommu_group for device
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized
Specs:
3570K
AMD Radeon 6970 (Card I'm trying to pass through)
Mobo: ASUS P8Z77-V LK
This is the third time I've tried this after several formats (not related to this).
Any idea on how to solve it? I've tried looking around, but I can't find anything.
Get a CPU that supports VT-d, yours does not:
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
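(Editorial note, a quick sanity check one can run: both of these should produce output on a system where VT-d/AMD-Vi is present, enabled in the BIOS, and intel_iommu=on or amd_iommu=on is set.)
$ dmesg | grep -i -e DMAR -e IOMMU
$ find /sys/kernel/iommu_groups/ -type l
If the second command prints nothing, no IOMMU groups exist and vfio-pci has nothing to attach to, which is exactly the "no iommu_group for device" error above.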
iwasaperson wrote: I keep getting this error:
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error no iommu_group for device
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized
Specs:
3570K
AMD Radeon 6970 (Card I'm trying to pass through)
Mobo: ASUS P8Z77-V LK
This is the third time I've tried this after several formats (not related to this).
Any idea on how to solve it? I've tried looking around, but I can't find anything.
Get a CPU that supports VT-d, yours does not:
I was not aware of that. Thanks for letting me know. Does the 4790K support it?
EDIT: Just looked it up. It does.
Last edited by iwasaperson (2014-06-09 20:57:08)
Hi,
First time poster here, I hope you'll forgive my inexperience. I should also point out that I am trying this with Debian testing / "jessie", hopefully that isn't a problem. This seems to be the best resource for KVM VGA passthrough right now.
I'm trying to get primary VGA passthrough working but running into an issue when trying to enable x-vga=on. Secondary vga passthrough is working fine with "-vga std" set but I get the following when using x-vga and vga none:
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: Device does not support requested feature x-vga
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: failed to get device 0000:02:00.0
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized
I should probably note here that I am using a git compiled kvm and qemu, but I am using a packaged kernel from Debian's testing repo: 3.14-1-amd64. This might be the source of my problems – it wasn't clear to me if many of the patches from the OP were still needed on newer releases of the kernel. I've read through a fair bit of the thread, but I'll confess that I haven't read all 81 pages yet. I also haven't tried a compiled kernel, but if this looks to be the source of the problem I'm happy to try it.
Otherwise, I'm not sure what could be causing this issue. Both cards are working in secondary passthrough and are ATI Radeon cards – not Nvidia (though I am hoping to test a GTX card soon as well).
Thanks for your time, any suggestions would be greatly appreciated!
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: Device does not support requested feature x-vga
This means that either a) your kernel does not support CONFIG_VFIO_PCI_VGA or b) the device is not a VGA device. To test a):
$ grep CONFIG_VFIO_PCI_VGA /boot/config-`uname -r`
To test b):
$ lspci -s 2:00.0 | grep VGA
If you have Intel host graphics, you still need the i915 patch for your kernel. If you have Radeon host graphics, you need the other VGA arbiter patch. Both of these have been referenced in the last few pages, IIRC.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Passthrough of an AMD 6450 works well with kernel 3.14 with the i915 VGA arbiter patch applied. I also need to pass through the onboard SATA controller, and that goes less smoothly. Said controller is found in IOMMU group 10:
### Group 10 ###
00:1f.0 ISA bridge: Intel Corporation H77 Express Chipset LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation 7 Series/C210 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller (rev 04)
Starting a guest in libvirt with the SATA controller marked for passthrough results in:
Error starting domain: internal error: process exited while connecting to monitor: qemu-system-x86_64: -device vfio-pci,host=00:1f.2,bus=root.1: vfio: error, group 10 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
qemu-system-x86_64: -device vfio-pci,host=00:1f.2,bus=root.1: vfio: failed to get group 10
qemu-system-x86_64: -device vfio-pci,host=00:1f.2,bus=root.1: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=00:1f.2,bus=root.1: Device 'vfio-pci' could not be initialized
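(A hedged aside on what that message means in practice: roughly speaking, every non-bridge device sharing group 10 that is still bound to a regular host driver keeps the group non-viable; only vfio-pci, pci-stub, or driverless devices are acceptable. The group members and their current drivers can be inspected with the addresses listed above:)
$ ls /sys/kernel/iommu_groups/10/devices/
$ lspci -nnk -s 00:1f.0
$ lspci -nnk -s 00:1f.3
The "Kernel driver in use:" line of each output shows which driver would have to be unbound before the group can be assigned.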
The acs override patch was applied against the 3.14 sources from which the running kernel was built (doublechecked that by applying it to the same source tree again).
kvmhost-2:/home/user/Desktop # uname -r
3.14.4-vfio-acs-1.gbebeb6f-desktop
kvmhost-2:/usr/src/linux-3.14.4-vfio-acs-1.gbebeb6f # patch -Np1 -i override_for_missing_acs_capabilities.patch
patching file Documentation/kernel-parameters.txt
Reversed (or previously applied) patch detected! Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file Documentation/kernel-parameters.txt.rej
patching file drivers/pci/quirks.c
Reversed (or previously applied) patch detected! Skipping patch.
3 out of 3 hunks ignored -- saving rejects to file drivers/pci/quirks.c.rej
The boot loader has a flag set as per an earlier post:
intel_iommu=on pcie_acs_override=downstream
I'm completely out of ideas. Any and all feedback is much appreciated. Thanks.
Error starting domain: internal error: process exited while connecting to monitor: qemu-system-x86_64: -device vfio-pci,host=00:1f.2,bus=root.1: vfio: error, group 10 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
I have run into this error several times. I applied the ACS patch on 3.14.5, and I think it just won't work, because that patch may not be applicable to the kernel version you used.
For now, kernel 3.15 is OK. It seems that ACS commit got applied.
Last edited by apporc (2014-06-10 01:18:28)
### Group 10 ###
00:1f.0 ISA bridge: Intel Corporation H77 Express Chipset LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation 7 Series/C210 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller (rev 04)
Starting a guest in libvirt with the SATA controller marked for passthrough results in:
qemu-system-x86_64: -device vfio-pci,host=00:1f.2,bus=root.1: vfio: error, group 10 is not viable...
The acs override patch was applied against the 3.14 sources from which the running kernel was built (doublechecked that by applying it to the same source tree again).
...
The boot loader has a flag set as per an earlier post: intel_iommu=on pcie_acs_override=downstream
I'm completely out of ideas. Any and all feedback are much appreciated. Thanks.
I won't advise it, but there are more options to pcie_acs_override than just downstream. See https://lkml.org/lkml/2013/5/30/513. IMHO, unless you have a specific need, assigning a SATA controller is more trouble than it's worth, especially when you can't guarantee device isolation. virtio-blk provides plenty of performance when backed by a sufficiently fast disk or SSD.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
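(For anyone following along, a minimal virtio-blk disk on the qemu command line looks roughly like the following; /dev/sdX is a placeholder for a dedicated disk or SSD:)
-drive file=/dev/sdX,if=virtio,cache=none,format=raw,aio=native
aio=native requires cache=none (or directsync), which is also what the libvirt examples later in this thread use.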
I won't advise it, but there are more options to pcie_acs_override than just downstream. See https://lkml.org/lkml/2013/5/30/513. IMHO, unless you have a specific need, assigning a SATA controller is more trouble than it's worth, especially when you can't guarantee device isolation. virtio-blk provides plenty of performance when backed by a sufficiently fast disk or SSD.
Thanks Alex for your fast reply. I'll have a look at those options. I agree that virtio-blk offers sufficient performance, however in this particular case I need to pass through the SATA controller to a guest running UnRAID (a networked storage server product). I have this running on another box, on kernel 3.11 and QEMU 1.6.2, using PCI assign.
I have run into this error several times. I applied the ACS patch on 3.14.5, and I think it just won't work, because that patch may not be applicable to the kernel version you used.
For now, kernel 3.15 is OK. It seems that ACS commit got applied.
Thanks! That would be good. I'll give kernel 3.15 a try tomorrow and report back.
I just found out something unpleasant while sniffing around my Windows Nvidia drivers: they say I'm using "PCI-Express x4". It turns out the secondary PCI-Express x16 slot on most motherboards is actually just x4 (read the small print, right?). The performance implication, as far as I understand, is that x16 has more bandwidth than x4, so this probably really borks the performance of high-end graphics cards, as even old studies would suggest. I also hear that using the extra PCI-E slot will reduce the bandwidth of the primary slot (i.e. drop it from x16 to x12), but I haven't confirmed any of this yet. Chances are you have this same problem too unless you have a really expensive high-end motherboard. Mine is more like mid-range.
This whole thing is kinda dubious and I'm not sure what to think about it, so I asked. The only workaround for this problem I can think of is passing through the primary card, which Xen should be capable of, but that is a lot harder to get working than this. Another workaround might be to use an integrated GPU on the host (unless the primary GPU can actually be passed through instead of the secondary one).
Last edited by rabcor (2014-06-10 08:27:35)
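(Editorial note: the negotiated link width can also be checked from Linux; 01:00.0 is just an example address for the card in the second slot:)
$ sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
LnkCap shows what the card itself supports, LnkSta shows the width the slot actually negotiated.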
rabcor, I think it is possible to pass through the primary card. Do you need your host to use any GPU at all, or are you happy to let it run headless (and perhaps present its GUI over the network using tools such as xvnc or nomachine)?
Lol of course I need my host to use a GPU at all, otherwise what would be the point in all of this? I'd just boot Windows and skip linux, but I want to be doing everything except gaming on Linux.
Last edited by rabcor (2014-06-10 08:46:34)
rabcor, maybe it is possible to use the second GPU for the host. After all, if Windows can switch to the second GPU on boot, then Linux should be able to do this even more so. A side effect would be boot messages showing on the first GPU early in boot, including GRUB menus. If I were you I would try passing through the first GPU the same way you do with the second and see what happens. It might work.
Lol of course I need my host to use a GPU at all, otherwise what would be the point in all of this? I'd just boot Windows and skip linux, but I want to be doing everything except gaming on Linux.
Dunno, I am so used to the Linux command line over ssh that I do not care to see its GUI at all. I guess I am not the only one.
Still, Linux should work with the second GPU; I think the challenge is to:
1. get vgacon to give away primary GPU before you start the guest
2. get X to ignore primary GPU
Both should be doable; I think the i915 patch might help with 1.
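(A small hedged sketch for point 1: the consoles bound to the GPU are visible under sysfs, and the one holding the card can be released before the guest starts; the vtcon index may differ from system to system.)
$ cat /sys/class/vtconsole/vtcon*/name
# echo 0 > /sys/class/vtconsole/vtcon1/bind    (as root; unbinds that console so it lets go of the GPU)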
I have run into this error several times. I applied the ACS patch on 3.14.5, and I think it just won't work, because that patch may not be applicable to the kernel version you used.
For now, kernel 3.15 is OK. It seems that ACS commit got applied.
I built a 3.15 kernel with only the i915 VGA arbiter patch applied and got the same result:
vfio: error, group 10 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
There is a 3.15 version of the ACS override patch in linux-mainline.tar.gz linked in the first post, and I'll try that next. Thanks.
Is libvirt/virsh viable for running a VM with VGA passthrough, or does it lack some of the configuration settings? I ask because it'd be nice to have my VM run as a daemon, and being able to hotplug, say, USB devices without having to keep qemu open would be nice.
bpye wrote: Is libvirt/virsh viable for running a VM with VGA passthrough, or does it lack some of the configuration settings? I ask because it'd be nice to have my VM run as a daemon, and being able to hotplug, say, USB devices without having to keep qemu open would be nice.
Yes it is. You can convert a QEMU command line invocation to libvirt xml using
virsh domxml-from-native qemu-argv <qemu command line invocation>
It does not always work straight out of the box though; I've had difficulties with defining disks, for example. More information can be found at http://libvirt.org/drvqemu.html#imex. I believe there are some examples of libvirt XML in this thread that you can use to define a guest. Configuration settings that are not directly supported by libvirt can be passed to QEMU as in the following example:
<qemu:commandline>
<qemu:arg value='-device'/>
<qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
<qemu:arg value='-device'/>
<qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
<qemu:arg value='-device'/>
<qemu:arg value='vfio-pci,host=01:00.1,bus=root.1,addr=00.1'/>
</qemu:commandline>
edit: added url
Last edited by siddharta (2014-06-10 12:30:12)
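(One detail worth adding as an editorial note: for <qemu:commandline> options to be accepted at all, the domain element has to declare the qemu XML namespace, along these lines:)
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
Without that declaration, libvirt will reject or discard the qemu:commandline block when the XML is defined.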
bpye wrote: Is libvirt/virsh viable for running a VM with VGA passthrough, or does it lack some of the configuration settings? I ask because it'd be nice to have my VM run as a daemon, and being able to hotplug, say, USB devices without having to keep qemu open would be nice.
Yes it is. You can convert a QEMU command line invocation to libvirt xml using
virsh domxml-from-native qemu-argv <qemu command line invocation>
It does not always work straight out of the box though; I've had difficulties with defining disks, for example. More information can be found at http://libvirt.org/drvqemu.html#imex. I believe there are some examples of libvirt XML in this thread that you can use to define a guest. Configuration settings that are not directly supported by libvirt can be passed to QEMU as in the following example:
<qemu:commandline>
<qemu:arg value='-device'/>
<qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
<qemu:arg value='-device'/>
<qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
<qemu:arg value='-device'/>
<qemu:arg value='vfio-pci,host=01:00.1,bus=root.1,addr=00.1'/>
</qemu:commandline>
edit: added url
or like this:
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
virsh edit gaming
or other application using the libvirt API.
-->
<domain type='kvm'>
<name>gaming</name>
<uuid>4541f648-2ada-4268-9882-0f3d4f5100b1</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<memoryBacking>
<hugepages/>
</memoryBacking>
<vcpu placement='static'>8</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-2.0'>hvm</type>
<loader>/home/novist/vm/bios.bin-1.7.2</loader>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='custom' match='exact'>
<model fallback='allow'>Opteron_G5</model>
<vendor>AMD</vendor>
<topology sockets='1' cores='4' threads='2'/>
<feature policy='require' name='perfctr_core'/>
<feature policy='require' name='skinit'/>
<feature policy='require' name='perfctr_nb'/>
<feature policy='require' name='mmxext'/>
<feature policy='require' name='osxsave'/>
<feature policy='require' name='vme'/>
<feature policy='require' name='topoext'/>
<feature policy='require' name='fxsr_opt'/>
<feature policy='require' name='bmi1'/>
<feature policy='require' name='ht'/>
<feature policy='require' name='cr8legacy'/>
<feature policy='require' name='ibs'/>
<feature policy='require' name='wdt'/>
<feature policy='require' name='extapic'/>
<feature policy='require' name='osvw'/>
<feature policy='require' name='nodeid_msr'/>
<feature policy='require' name='tce'/>
<feature policy='require' name='cmp_legacy'/>
<feature policy='require' name='lwp'/>
<feature policy='require' name='monitor'/>
</cpu>
<clock offset='localtime'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source dev='/dev/sdc'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source dev='/dev/sda'/>
<target dev='vdb' bus='virtio'/>
<boot order='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0e' function='0x0'/>
</disk>
<disk type='block' device='cdrom'>
<driver name='qemu' type='raw' cache='none'/>
<target dev='hda' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<master startport='4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='pci' index='1' model='pci-bridge'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</controller>
<controller type='pci' index='2' model='pci-bridge'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
</controller>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='scsi' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</controller>
<controller type='scsi' index='1' model='virtio-scsi'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:29:ae:89'/>
<source bridge='br0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
<video>
<model type='vga' vram='9216' heads='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x00' slot='0x13' function='0x0'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x00' slot='0x13' function='0x2'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
</hostdev>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0f' function='0x0'/>
</rng>
</devices>
</domain>
Last edited by novist (2014-06-10 13:07:40)
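(A brief usage note on the XML above, with the file name assumed: it can be loaded into libvirt and managed with virsh.)
$ virsh define gaming.xml
$ virsh start gaming
$ virsh edit gaming
The last command is the editing route the auto-generated header comment itself recommends.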
or like this:
Exactly, that's a very complete XML. Thanks, I didn't have a full one handy.
siddharta wrote: I have run into this error several times. I applied the ACS patch on 3.14.5, and I think it just won't work, because that patch may not be applicable to the kernel version you used.
For now, kernel 3.15 is OK. It seems that ACS commit got applied.
I built a 3.15 kernel with only the i915 VGA arbiter patch applied and got the same result:
vfio: error, group 10 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
There is a 3.15 version of the ACS override patch in linux-mainline.tar.gz linked in the first post, and I'll try that next. Thanks.
I built two versions of the 3.15 kernel, one with and one without the override_acs patch from the archive linked in the first post. Both kernel sources have the i915_315 patch applied. I have two issues: broken VGA passthrough (which works on 3.14) and being unable to pass through the onboard SATA controller.
Using kernel 3.15 I have host display corruption and non-working VGA passthrough (host with igfx, AMD HD6450 for passthrough) despite having applied the i915_315 VGA arbiter patch. I re-ran the patch against the source to be certain:
kvmhost-2:/usr/src/linux-3.15.0-desktop # patch -Np1 -i i915_315.patch
patching file drivers/gpu/drm/i915/i915_dma.c
Reversed (or previously applied) patch detected! Skipping patch.
2 out of 2 hunks ignored -- saving rejects to file drivers/gpu/drm/i915/i915_dma.c.rej
patching file drivers/gpu/drm/i915/i915_drv.h
Reversed (or previously applied) patch detected! Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file drivers/gpu/drm/i915/i915_drv.h.rej
patching file drivers/gpu/drm/i915/i915_params.c
Reversed (or previously applied) patch detected! Skipping patch.
2 out of 2 hunks ignored -- saving rejects to file drivers/gpu/drm/i915/i915_params.c.rej
patching file drivers/gpu/drm/i915/intel_display.c
Reversed (or previously applied) patch detected! Skipping patch.
3 out of 3 hunks ignored -- saving rejects to file drivers/gpu/drm/i915/intel_display.c.rej
patching file drivers/gpu/drm/i915/intel_drv.h
Reversed (or previously applied) patch detected! Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file drivers/gpu/drm/i915/intel_drv.h.rej
patching file include/linux/vgaarb.h
Reversed (or previously applied) patch detected! Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file include/linux/vgaarb.h.rej
As for passing through the onboard SATA controller, I have no luck:
- with either 3.14 and 3.15 kernels
- with or without the respective override_acs patch
- with kernel boot parameter pcie_acs_override= set to downstream/multifunction/id:8086:1e02 or any combination thereof.
I've also attempted to unbind the SATA controller from vfio-pci and use PCI assign as I've done on QEMU 1.6.2 and kernel 3.11, defined with libvirt using Virtual Machine Manager. The device is defined as
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</hostdev>
Surprisingly (to me) this fails with
Error starting domain: internal error: process exited while connecting to monitor: qemu-system-x86_64: -device vfio-pci,host=00:1f.2,id=hostdev0,bus=pci.0,addr=0x7: vfio: error, group 8 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
qemu-system-x86_64: -device vfio-pci,host=00:1f.2,id=hostdev0,bus=pci.0,addr=0x7: vfio: failed to get group 8
qemu-system-x86_64: -device vfio-pci,host=00:1f.2,id=hostdev0,bus=pci.0,addr=0x7: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=00:1f.2,id=hostdev0,bus=pci.0,addr=0x7: Device 'vfio-pci' could not be initialized
i.e. the same error I get when attempting passthrough of this device using vfio. It would appear the PCI assign mechanism was abandoned in favor of vfio; is that correct? If so, is there a way to force the use of PCI assignment?
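(Editorial aside, hedged: libvirt normally prefers VFIO when it is available, but the hostdev definition accepts a driver child element that selects the backend explicitly; whether legacy KVM assignment still works depends on the kernel and libvirt build.)
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='kvm'/>
<source>
<address domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</source>
</hostdev>
Swapping name='kvm' for name='vfio' forces the VFIO backend instead.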
I really need to pass through the SATA and USB controllers, as USB per-device passthrough doesn't work for me due to a non-libusb-enabled QEMU.
Please let me know if I can provide further information or take diagnostic steps. I'm not at all opposed to testing out new patches or using debug versions, consider me a guinea pig if so needed. Thanks all.
Last edited by siddharta (2014-06-10 14:40:30)