I am sure I have never seen this parameter before, yet it was working.
Nevertheless, I have added it, with no success.
[pheinrich@ARCH ~]$ dmesg | grep BOOT_IMAGE
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-linux-acs root=UUID=bd845dd5-0a31-4cd0-ae57-875c43f15c30 rw intel_iommu=on pcie_acs_override=downstream pci-stub.ids=10de:05e2
[ 0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-linux-acs root=UUID=bd845dd5-0a31-4cd0-ae57-875c43f15c30 rw intel_iommu=on pcie_acs_override=downstream pci-stub.ids=10de:05e2
Offline
Perhaps you should report what's in group 1 (ll /sys/kernel/iommu_groups/1/devices/) and what those devices are bound to (lspci -k) so we have a remote shot at helping.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
Yes, of course:
[pheinrich@ARCH bin]$ ls -l -a --color=auto /sys/kernel/iommu_groups/1/devices/
total 0
drwxr-xr-x 2 root root 0 Jun 30 21:47 .
drwxr-xr-x 3 root root 0 Jun 30 21:47 ..
lrwxrwxrwx 1 root root 0 Jun 30 21:47 0000:00:01.0 -> ../../../../devices/pci0000:00/0000:00:01.0
lrwxrwxrwx 1 root root 0 Jun 30 21:47 0000:00:01.1 -> ../../../../devices/pci0000:00/0000:00:01.1
lrwxrwxrwx 1 root root 0 Jun 30 21:47 0000:01:00.0 -> ../../../../devices/pci0000:00/0000:00:01.0/0000:01:00.0
lrwxrwxrwx 1 root root 0 Jun 30 21:47 0000:02:00.0 -> ../../../../devices/pci0000:00/0000:00:01.1/0000:02:00.0
[pheinrich@ARCH bin]$ lspci -k
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor DRAM Controller (rev 09)
Subsystem: ASUSTeK Computer Inc. Device 844d
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
Subsystem: ASUSTeK Computer Inc. P8 series motherboard
Kernel driver in use: mei_me
Kernel modules: mei_me
00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 05)
Subsystem: ASUSTeK Computer Inc. P8P67 Deluxe Motherboard
Kernel driver in use: e1000e
Kernel modules: e1000e
00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
Subsystem: ASUSTeK Computer Inc. P8 series motherboard
Kernel driver in use: ehci-pci
Kernel modules: ehci_pci
00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
Subsystem: ASUSTeK Computer Inc. Device 8469
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:1c.6 PCI bridge: Intel Corporation 82801 PCI Bridge (rev b5)
00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
Subsystem: ASUSTeK Computer Inc. P8 series motherboard
Kernel driver in use: ehci-pci
Kernel modules: ehci_pci
00:1f.0 ISA bridge: Intel Corporation P67 Express Chipset Family LPC Controller (rev 05)
Subsystem: ASUSTeK Computer Inc. P8P67 Deluxe Motherboard
Kernel driver in use: lpc_ich
Kernel modules: lpc_ich
00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family SATA AHCI Controller (rev 05)
Subsystem: ASUSTeK Computer Inc. P8 series motherboard
Kernel driver in use: ahci
Kernel modules: ahci
00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
Subsystem: ASUSTeK Computer Inc. P8 series motherboard
Kernel modules: i2c_i801
01:00.0 VGA compatible controller: NVIDIA Corporation NV43 [GeForce 6600 GT] (rev a2)
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia
02:00.0 VGA compatible controller: NVIDIA Corporation GT200 [GeForce GTX 260] (rev a1)
Subsystem: NVIDIA Corporation Device 0585
Kernel driver in use: vfio-pci
Kernel modules: nouveau, nvidia
04:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04)
Subsystem: ASUSTeK Computer Inc. P8P67 Deluxe Motherboard
Kernel driver in use: xhci_hcd
Kernel modules: xhci_hcd
05:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04)
Subsystem: ASUSTeK Computer Inc. P8P67 Deluxe Motherboard
Kernel driver in use: xhci_hcd
Kernel modules: xhci_hcd
06:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 01)
07:00.0 Multimedia controller: Twinhan Technology Co. Ltd Mantis DTV PCI Bridge Controller [Ver 1.0] (rev 01)
Subsystem: TERRATEC Electronic GmbH Device 1178
Kernel driver in use: Mantis
Kernel modules: mantis
07:02.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8110SC/8169SC Gigabit Ethernet (rev 10)
Subsystem: ASUSTeK Computer Inc. Device 820d
Kernel driver in use: r8169
Kernel modules: r8169
07:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0)
Subsystem: ASUSTeK Computer Inc. Motherboard
Kernel driver in use: firewire_ohci
Kernel modules: firewire_ohci
Offline
You certainly need the ACS patch applied and enabled in this configuration, or you could move one of the cards to the other x16 slot on the motherboard as I suggested previously. Since the ACS override doesn't seem to be working for you, I'd double check that it's actually applied and that you're booting the kernel you think you're booting.
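For instance, something along these lines should confirm both (the grep text is the warning the override patch prints at boot; adjust the kernel name to whatever you actually installed):
uname -r                             # should report the custom kernel, e.g. something ending in -acs
dmesg | grep 'PCIe ACS overrides'    # the override patch logs 'Warning: PCIe ACS overrides enabled' when it's active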
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
Hello, everyone, nice to see you again!
Now I have a lot of time to investigate because of summer vacation.
I updated the kernel to 3.16.0-rc2, but I found that I can't execute CUDA applications in the VM.
I set the CPU options with
-cpu host,hv-time,kvm=off
but this didn't help.
However, if I take kvm=off out and only edit kvm.c instead, it works, though I still get some problems when executing other CUDA applications.
Maybe I should downgrade the NVIDIA driver version on the guest.
I also guess I should use the VGA arbiter patch, because my host GPU is an AMD card.
I know how to apply the patch, but what do I need to add to the GRUB command line?
Many thanks,
AK.
Last edited by AKSN74 (2014-07-01 04:05:09)
Offline
OK, after doing some tests, I may have found a reason why the guest Ubuntu can't run CUDA very well.
A few weeks ago, after finding that NVIDIA limits GPU use under KVM, I was able to run CUDA with the KVM signature changed, but I still had some problems.
Running some CUDA samples produces this kernel message:
NVRM: os_pci_init_handle: invalid context!
Also, the same application only runs the first time after boot; after that, it won't run again.
When I ran the CUDA sample 'vectorAdd', I found that it fails when the application tries to copy data from GPU memory back to RAM.
So I checked dmesg on both host and guest, and I found that every time I execute the sample the guest OS gets the same IRQ number, but the host does not.
For example, when I execute the application the first time in the guest, the host dmesg shows:
vfio-pci 0000:03:00.0: irq 88 for MSI/MSI-X
But when I execute it a second time or run another application, the host dmesg shows:
vfio-pci 0000:03:00.0: irq 89 for MSI/MSI-X
My guess is that this change of MSI IRQ is why the GPU can't send data back to RAM.
But it's just a guess; I'm not sure.
Maybe CUDA runs fine when the guest OS is Windows or Ubuntu Desktop, because they have a GUI (X server), so they request the MSI IRQ right at boot and always keep the same IRQ.
Both my host and guest OS are Ubuntu Server.
I have to do more tests to figure out what the real problem is.
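(If anyone wants to reproduce the check, this is roughly what I'm looking at on the host; the device address is my card's:)
grep vfio /proc/interrupts              # MSI vectors currently registered for the passed-through card
dmesg | grep 'vfio-pci 0000:03:00.0'    # the messages quoted above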
Last edited by AKSN74 (2014-07-04 01:19:18)
Offline
Hello,
I have tried VGA-Passthrough on my system but it fails.
My specifications are:
Motherboard: Gigabyte GA 970 UD3
CPU: AMD FX 6300
VGA1: AMD HD 7790 on 01:00.0 and 01:00.1
VGA2: AMD HD 6450 on 06:00.0 and 06:00.1
The host OS uses the HD 6450; I am trying to pass the HD 7790 through to a guest OS.
QEMU version:
qemu-x86_64 version 2.0.0, Copyright (c) 2003-2008 Fabrice Bellard
Kernel version:
3.14.1-3-mainline #1 SMP PREEMPT (the one nbhs posted)
Added this to GRUB:
vfio_iommu_type1.allow_unsafe_interrupts=1 pci-stub.ids=1002:665c,1002:0002
Then I bound the devices and got the SeaBIOS output on the screen.
But after attaching something bootable to QEMU, like this:
-drive file=/home/nbhs/windows.iso,id=isocd -device ide-cd,bus=ide.1,drive=isocd
I get:
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: VFIO 0000:01:00.0 BAR 0 mmap unsupported. Performance may be slow
qemu-system-x86_64: vfio_bar_write(,0x0, 0x0, 4) failed: Device or resource busy
qemu-system-x86_64: vfio_bar_write(,0x4, 0x0, 4) failed: Device or resource busy
qemu-system-x86_64: vfio_bar_write(,0x8, 0x0, 4) failed: Device or resource busy
qemu-system-x86_64: vfio_bar_write(,0xc, 0x0, 4) failed: Device or resource busy
qemu-system-x86_64: vfio_bar_write(,0x10, 0x0, 4) failed: Device or resource busy
qemu-system-x86_64: vfio_bar_write(,0x14, 0x0, 4) failed: Device or resource busy
qemu-system-x86_64: vfio_bar_write(,0x18, 0x0, 4) failed: Device or resource busy
...and a load more vfio_bar_write failures.
Any idea how to solve this one?
I solved the problem.
It seems that the graphics card on 01:00.0 was the primary card at boot, meaning that before the kernel loads it is the main card.
Because of this the OS sees it as already in use and gives the 'resource busy' error.
I solved it by swapping the GPUs, so the other GPU is now on 01:00.0.
But now when I start the virtual machine it is very slow, like 1/100 of the speed it should be, or it freezes. QEMU shows no error on the screen.
Any idea how to solve this one?
Offline
I tried to apply the ACS override patch mentioned in the first post.
The first hunk (/Documentation/kernel-parameters.txt) applies successfully.
The second one (/drivers/pci/quirks.c) does not.
I copied the affected file to a tmp location and tried to fix the patched lines, but without success.
The hunk header "struct pci_dev *pci_get_dma_source(struct pci_dev *dev)" is no longer at line 3292; it has moved to line 3372.
Changing this ...
@@ -3292,11 +3292,113 @@ struct pci_dev *pci_get_dma_source(struct pci_dev *dev)
....
to this ...
@@ -3372,11 +3372,113 @@ struct pci_dev *pci_get_dma_source(struct pci_dev *dev)
....
had no effect.
#########
Edit 1
-F3 doesn't help either.
#########
Edit 2
It succeeded with -F3 after all ... (there was a typo on my side).
Hopefully it has done its job properly.
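For anyone hitting the same offset, this is roughly the invocation that worked in the end (run from the kernel source tree; the patch filename below is just an example for whatever you saved the ACS override patch as):
cd linux-3.15.2
patch -p1 -F3 < ../acs_override.patch    # example filename; -F3 allows enough fuzz for the moved hunk in drivers/pci/quirks.c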
Last edited by _pheinrich_ (2014-07-01 14:48:54)
Offline
AKSN74 wrote: So I checked dmesg on both host and guest, and I found that every time I execute the sample the guest OS gets the same IRQ number, but the host does not.
For example, when I execute the application the first time in the guest, the host dmesg shows:
vfio-pci 0000:03:00.0: irq 88 for MSI/MSI-X
But when I execute it a second time or run another application, the host dmesg shows:
vfio-pci 0000:03:00.1: irq 89 for MSI/MSI-X
These IRQs are on different functions (GPU vs. audio).
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
OK, I have compiled the new kernel with the ACS patch and enabled the override:
[ 0.000000] Linux version 3.15.2-1-acs (pheinrich@ARCH) (gcc version 4.9.0 20140604 (prerelease) (GCC) ) #1 SMP PREEMPT Tue Jul 1 17:58:41 CEST 2014
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-linux-acs root=UUID=bd845dd5-0a31-4cd0-ae57-875c43f15c30 rw intel_iommu=on pcie_acs_override=downstream pci-stub.ids=10de:05e2
[ 0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-linux-acs root=UUID=bd845dd5-0a31-4cd0-ae57-875c43f15c30 rw intel_iommu=on pcie_acs_override=downstream pci-stub.ids=10de:05e2
Binding my 0000:02:00.0 device now creates iommu_group 14, so I think the ACS patch was successful:
[pheinrich@ARCH ~]$ ls -l -a --color=auto /sys/kernel/iommu_groups/1/devices/
total 0
drwxr-xr-x 2 root root 0 Jul 1 18:26 .
drwxr-xr-x 3 root root 0 Jul 1 18:26 ..
lrwxrwxrwx 1 root root 0 Jul 1 18:26 0000:00:01.0 -> ../../../../devices/pci0000:00/0000:00:01.0
[pheinrich@ARCH ~]$ ls -l -a --color=auto /sys/kernel/iommu_groups/14/devices/
total 0
drwxr-xr-x 2 root root 0 Jul 1 18:26 .
drwxr-xr-x 3 root root 0 Jul 1 18:26 ..
lrwxrwxrwx 1 root root 0 Jul 1 18:26 0000:02:00.0 -> ../../../../devices/pci0000:00/0000:00:01.1/0000:02:00.0
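(For reference, the binding itself is just the usual sysfs unbind / new_id dance, essentially the vfio-bind approach from the first post, if I remember right, with my device address filled in:)
modprobe vfio-pci                                            # make sure the vfio-pci driver is loaded
dev=0000:02:00.0                                             # the GTX 260 I'm passing through
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind          # release it from whatever driver has it (pci-stub here)
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id  # let vfio-pci claim it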
But now when I start up the VM, the QEMU control window comes up, there is no output on 0000:02:00.0, and after about 5 seconds everything goes black, including the host.
qemu-system-x86_64 -enable-kvm -M q35 -m 4096 -cpu host,kvm=off \
-smp 4,sockets=1,cores=4,threads=1 \
-bios /usr/share/qemu/bios.bin -vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-drive file=/srv/media/Data/qemu/win8/win8.qcow2,id=windisk -device ide-hd,bus=ide.0,drive=windisk \
-drive file=/srv/media/Data/OperatingSystem/en_windows_8_1_x64_dvd_2707217.iso,id=winiso -device ide-cd,bus=ide.1,drive=winiso
###########
Edit 1
ACS seems to be fine
[pheinrich@ARCH ~]$ dmesg | grep ACS
[ 0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
[ 0.000000] ACPI: FACS 0x00000000CF3BEF80 000040
[ 0.300365] pci 0000:00:1c.0: Intel PCH root port ACS workaround enabled
[ 0.300515] pci 0000:00:1c.1: Intel PCH root port ACS workaround enabled
[ 0.300666] pci 0000:00:1c.4: Intel PCH root port ACS workaround enabled
I do not get any error. The whole system freezes, not just X, since SSH sessions die too.
Last edited by _pheinrich_ (2014-07-01 21:18:52)
Offline
Hi guys, following this guide I got Windows 8.1 installed and was able to pass through an NVIDIA 670 GPU, with an NVIDIA Titan on the host side. The drivers that ship with Windows 8 work; however, when I try to install the latest drivers from NVIDIA I either get a blank screen on reboot in the guest OS or non-functioning drivers in the guest OS. Has anyone else experienced this?
Last edited by kristopher004 (2014-07-01 22:16:02)
Offline
kameloc wrote: dwe11er wrote: This is not the right patch (it doesn't hurt, though).
Could you link me to the right patch? I tried both VGA patches from the OP with the same errors.
Along with the patches, you need to add 'i915.enable_hd_vgaarb=1' to the kernel parameters. Also, you probably need to acquire the VBIOS file for your GPU and pass it with the romfile argument when booting the VM.
The kernel parameter is what I was missing, thank you.
kameloc wrote: dwe11er wrote: This is not the right patch (it doesn't hurt, though).
Could you link me to the right patch? I tried both VGA patches from the OP with the same errors.
this is what works for me (VGA arbiter patch link included)
This post was also very helpful, thank you.
Another success story. To reiterate, my setup is:
CPU: Intel i7-4790
Motherboard: GA-Z97N-WIFI
Host GPU: Intel HD Graphics 4600
Passthrough GPU: EVGA NVIDIA GTX 550Ti
I tried installing Windows 8.1 but couldn't stop the sound from being extremely distorted. I moved to Windows 7 with the AC97 sound card, which runs really nicely except that audio is a little off when playing games or video (as if it's slowed down). I'm going to mess with the audio settings to try to clean it up a bit. Also, the video card only persists through reboots with the NVIDIA drivers installed on Windows 7 ('persists' meaning it doesn't freeze at the shutdown screen, even with the VM powered off).
Also, I'm a little confused by the networking section of their documentation. I was using -net user at first but wanted to set up Synergy, so I opted for a bridge. The bridge works fine, but I was thinking of adding two NICs so I could have one externally facing and the other inside QEMU's virtual networking (primarily for access to other VMs I'll create). I tried just adding two -net nic entries, but only one would appear, even with different VLANs set. Does anyone have experience adding two NICs? As for the documentation: here they say that -net is obsolete, but I can't find an alternative for bridged devices in the documentation. I see this page on bridges, but it also uses -net nic.
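What I'm planning to try next is the newer -netdev syntax, which seems to be the intended replacement for -net and should allow two independent NICs (untested on my side; the bridge name and NIC models are just examples):
-netdev bridge,id=net0,br=br0 -device virtio-net-pci,netdev=net0 \
-netdev user,id=net1 -device e1000,netdev=net1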
Offline
kristopher004 wrote: Hi guys, following this guide I got Windows 8.1 installed and was able to pass through an NVIDIA 670 GPU, with an NVIDIA Titan on the host side. The drivers that ship with Windows 8 work; however, when I try to install the latest drivers from NVIDIA I either get a blank screen on reboot in the guest OS or non-functioning drivers in the guest OS. Has anyone else experienced this?
You need the latest QEMU (for the kvm=off option) together with the 340 driver...
or you can patch it yourself, with the help of this topic
Last edited by slis (2014-07-02 03:38:20)
Offline
slis wrote: You need the latest QEMU (for the kvm=off option) together with the 340 driver...
or you can patch it yourself, with the help of this topic
Thanks slis, I have a few questions though. Do you mean the NVIDIA 340 driver for the host, for the guest, or for both? Also, when you say patch it, do you mean patching the 340 driver on the host side with the patch mentioned under the issues section of this guide?
Thanks!
Last edited by kristopher004 (2014-07-02 04:27:21)
Offline
You can either use qemu-git from the AUR or patch qemu yourself.
I would say using qemu-git is the easier option.
Offline
OK, I found the issue.
The nvidia-304xx module is the problem.
After removing all NVIDIA-specific modules/packages and not starting the X server, so that neither card is really in use (then I don't even need the ACS override), the VM starts.
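(Roughly what that amounted to on my side; the package names are from memory, adjust to what's actually installed:)
pacman -Rns nvidia-304xx nvidia-304xx-utils    # remove the host NVIDIA driver packages
systemctl set-default multi-user.target        # boot to the console instead of starting X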
Maybe I'll try nouveau for the host now.
Offline
AKSN74 wrote: So I checked dmesg on both host and guest, and I found that every time I execute the sample the guest OS gets the same IRQ number, but the host does not.
For example, when I execute the application the first time in the guest, the host dmesg shows:
vfio-pci 0000:03:00.0: irq 88 for MSI/MSI-X
But when I execute it a second time or run another application, the host dmesg shows:
vfio-pci 0000:03:00.1: irq 89 for MSI/MSI-X
These IRQs are on different functions (GPU vs. audio).
I'm sorry, I typed the wrong message.
The second message is not from 0000:03:00.1; it's the same device as in the first message, just with a different IRQ number.
Last edited by AKSN74 (2014-07-02 13:35:20)
Offline
How do I go about adding the kvm=off flag when using virt-manager? I presumed I could add it to the XML file for the guest, but I don't know what format it should take. I don't seem to be having much luck with this:
<cpu mode='custom' match='exact' kvm=off/>
Last edited by MCMjolnir (2014-07-03 00:17:55)
Offline
nouveau produces a warped window and is not usable. My NV43 chip should be supported, but it does not work.
Now I am using the vesa driver while playing with QEMU passthrough.
As I asked a few posts back, I would like to redirect the GPU output.
First I tried the VNC option with "-vnc :0", but the mouse pointer lags and has a big offset.
So I decided to use an RDP connection.
To connect, one option would be to set up a tun/tap device so I can access the guest from the host.
I have this working on my laptop, but I can't get it to work on my desktop PC.
However, "-redir tcp:5555::3389" should work too.
After I see SeaBIOS and Windows 8 starts to boot, everything freezes, including the host.
Has anyone experienced something like that before?
The third and last option would be to pass my second network card through.
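(For what it's worth, the newer hostfwd form of that -redir line would be something like this; untested here, NIC model is just an example:)
-netdev user,id=net0,hostfwd=tcp::5555-:3389 -device e1000,netdev=net0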
Offline
Using two NVIDIA cards, the latest QEMU, and NVIDIA 340.17 on the host, the NVIDIA driver on the guest says: "Windows has stopped this device because it has reported problems. (Code 43)" Does anyone else have this problem?
Offline
Are you using -cpu kvm=off in the launch arguments?
Last edited by MCMjolnir (2014-07-02 21:57:36)
Offline
No, I'll give that a try.
Last edited by kristopher004 (2014-07-02 23:11:09)
Offline
MCMjolnir wrote: How do I go about adding the kvm=off flag when using virt-manager? I presumed I could add it to the XML file for the guest, but I don't know what format it should take. I don't seem to be having much luck with this:
<cpu mode='custom' match='exact' kvm=off/>
I believe it's not implemented yet.
Offline
I installed a Windows 7 VM again and ran CUDA to see if the host got the same message as in post #2256.
After the test, there was no such message and CUDA still runs perfectly on Windows.
So maybe my theory is right: the GPU can't send data back to main memory because the IRQ changes when a CUDA application is executed on the Ubuntu VM.
But now I have to test the driver, because maybe the NVIDIA Linux driver still has a problem even with the KVM signature changed. (The Linux driver started restricting VMs to GRID cards earlier than the Windows driver did.)
Maybe I should try the last driver from before NVIDIA released GRID, or a different Linux distribution.
Last edited by AKSN74 (2014-07-04 14:01:46)
Offline
Hello everyone, thanks for this guide and other useful info in this thread!
I have been mostly successful with my passthrough setup, but I am stuck at a particular problem that I do not know how to troubleshoot.
I have managed to run QEMU with what appears to be a working passthrough of my NVIDIA 770, but it seems the drivers fail to communicate with the device itself.
If my guest system is Linux, I can run lspci -v there, and the dedicated graphics card is indeed exposed to the guest, as lspci lists it. The `lspci -vmm` entry in the guest is the same as on the host. On Windows, the device is also detected as a VGA display, with the detailed information also correct. So I assume that up to this point I did everything right and the card got passed through.
However, whenever I try to install the drivers, something gets messed up.
In the case of a Linux guest, the installation succeeds, but the X server fails to start; the logs show the nvidia Xorg module complaining that "The NVIDIA device on PCI:1:0:0 is not supported by this driver", which is wrong, because I have used the same driver with this very card before with absolutely no problems (and the driver manual says it does support that card).
In the case of a Windows guest, the driver installation succeeds, but the system fails to use the card afterwards: the first two reboots always end in a blue screen during boot, and any subsequent boot succeeds, but Windows never uses the card and nothing is displayed. If the emulated VGA is available, Windows will use it instead (this is where I saw those blue screens too).
This feels like some complication in the drivers communicating with the hardware, and I have no clue how to troubleshoot it.
I will be very thankful for any hints!
PS: I have also just noticed that I cannot even get SeaBIOS to display on the passed-through card's output, BUT lspci in the guest lists the device, so it is there! What am I missing?
Offline