
#3201 2014-11-10 22:42:18

Child_of_Sun
Member
Registered: 2014-07-16
Posts: 8

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

Child_of_Sun wrote:

Hi @all

I have managed to pass through my primary graphics card (and the secondary one for CrossFireX).

I use a custom initramfs with a custom init script, so I don't know whether it's important to set this up during boot.

My Computer:
CPU: Amd FX-8350
Mainboard: Asrock 970 Extreme4
Primary Graphics: PowerColor PCS+ HD7770 GHz Edition 1GB GDDR5      (vfio-pci, Windows)
Secondary Graphics: PowerColor HD7770 GHz Edition 1GB GDDR5 (V2?)  (vfio-pci, Windows)
Tertiary Graphics: PowerColor HD7750 1GB GDDR5                                     (Linux Host)
Memory: 16 GB Transcend 1333MHz Memory
Power Supply: Thermaltake Berlin 630W

I use Gentoo Linux ~amd64 with kernel 3.18.0-rc2 (because of OverlayFS :-), though it should work with 3.17.1 too) and QEMU 2.1.2 with SeaBIOS 1.7.5 (release).

My Kernel cmdline is:

root=/dev/mapper/root rootfstype=btrfs rw iommu=pt video=radeondrmfb:1280x1024-24@75,mtrr:3,pmipal,ywrap kvm.ignore_msrs=1 vfio_iommu_type1.allow_unsafe_interrupts=1 pci-stub.ids=1b21:1042,1002:4383,1002:4391,1002:4393,1002:439c vfio-bind=0000:02:00.0,0000:02:00.1,0000:01:00.0,0000:01:00.1,0000:00:14.2,0000:00:11.0,0000:04:00.0,0000:05:00.0 hugepagesz=1GB fbcon=map:1

The vfio-bind option is parsed by the custom init in my initramfs.
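For readers who don't have such a script, here is a minimal sketch of what a vfio-bind helper parsing that cmdline option could look like. This is my own reconstruction, not the poster's actual init: it assumes vfio-pci is already loaded, and CMDLINE/SYSFS are variables only so the parsing can be exercised safely outside a real initramfs.

```shell
#!/bin/sh
# Hypothetical vfio-bind helper: reads the comma-separated device list
# from the vfio-bind= kernel cmdline option and hands each device to
# the vfio-pci driver via sysfs.
CMDLINE="${CMDLINE:-/proc/cmdline}"
SYSFS="${SYSFS:-/sys}"

# Print each PCI address from "vfio-bind=a,b,c" on its own line.
vfio_bind_list() {
    tr ' ' '\n' < "$CMDLINE" | sed -n 's/^vfio-bind=//p' | tr ',' '\n'
}

# Unbind each device from its current driver and bind it to vfio-pci.
vfio_bind_all() {
    for dev in $(vfio_bind_list); do
        if [ -e "$SYSFS/bus/pci/devices/$dev/driver" ]; then
            echo "$dev" > "$SYSFS/bus/pci/devices/$dev/driver/unbind"
        fi
        # driver_override exists since kernel 3.16; older setups write
        # the vendor:device pair to vfio-pci's new_id instead.
        echo vfio-pci > "$SYSFS/bus/pci/devices/$dev/driver_override" 2>/dev/null
        echo "$dev" > "$SYSFS/bus/pci/drivers/vfio-pci/bind" 2>/dev/null
    done
}
```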

At boot I bind the devices (as mentioned here) to the pci-stub driver, and later to the vfio-pci driver. The essential lines from my init for this process are:

/sbin/vfio-bind "${vfiobond}"
echo 0000:03:00.1 > /sys/bus/pci/devices/0000:03:00.1/driver/unbind # The AMD audio device binds to pci-stub, since it has the same IDs as the 7750 and the 7770; this fixes that problem.
modprobe radeon # Load the radeon kernel module for the framebuffer
sleep 4 # Wait a little to make sure everything has enough time to switch
echo 1 > /sys/bus/pci/drivers/vfio-pci/0000:01:00.0/remove # Remove the primary adapter from the PCI bus
echo 1 > /sys/bus/pci/drivers/vfio-pci/0000:01:00.1/remove
echo 1 > /sys/bus/pci/rescan # Rescan the PCI bus for new devices; it finds the primary Radeon and binds it automatically to the vfio-pci driver

The last three lines are essential for this to work. With this config I can restart the VM, install newer graphics drivers, and play 3D games (Elder Scrolls Online at the moment :-) ).
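The remove/rescan trick can be wrapped up as a small function. This is a hedged sketch of the same steps, assuming (as reported above) that the card re-binds to vfio-pci automatically after the rescan; SYSFS is a variable only so the function can be dry-run against a fake sysfs tree.

```shell
#!/bin/sh
# Sketch of the remove/rescan reset used above.
SYSFS="${SYSFS:-/sys}"

# Drop a device from the PCI bus, then rescan so it is rediscovered
# (and, per the report above, re-bound to vfio-pci automatically).
reset_pci_device() {
    dev="$1"                                  # e.g. 0000:01:00.0
    echo 1 > "$SYSFS/bus/pci/devices/$dev/remove"
    sleep 1                                   # give the bus a moment
    echo 1 > "$SYSFS/bus/pci/rescan"
}
```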

I use the rombios=/path/to/bios/file option for the primary card; I don't know whether it is essential.

Finally, here is the QEMU startup line from my script:


until /usr/bin/sudo /usr/bin/nice -n 10 /usr/bin/qemu-system-x86_64 -M q35 -enable-kvm -monitor stdio -nographic -balloon none -mem-path /dev/hugepages -mem-prealloc \
-m 10240 -k de -cpu host -smp 8,sockets=1,cores=8,threads=1 -bios /usr/share/qemu/bios.bin-1.7.5 -realtime mlock=on \
-vga none -D /var/log/qemu-out.log -boot menu=on -usb -usbdevice host:046d:c517 -usbdevice host:093a:2510 \
-device vfio-pci,host=00:11.0,bus=pcie.0,addr=1c.0,multifunction=on,bootindex=0 \
-device vfio-pci,host=00:14.2,bus=pcie.0,addr=1c.1,multifunction=on \
-device ioh3420,bus=pcie.0,addr=1c.2,multifunction=on,port=1,chassis=2,id=root.0 \
-device ioh3420,bus=pcie.0,addr=1c.3,multifunction=on,port=2,chassis=3,id=root.1 \
-device nec-usb-xhci,multifunction=on,addr=1c.4,bus=pcie.0,id=usb3 \
-device vfio-pci,host=01:00.0,bus=root.0,addr=00.0,multifunction=on,romfile=/etc/qemu/vbios/Powercolor.HD7770.1024.120418.rom,x-vga=on \
-device vfio-pci,host=01:00.1,bus=pcie.0,multifunction=on \
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=02:00.1,bus=pcie.0,multifunction=on \
-device vfio-pci,host=04:00.0,bus=pcie.0,addr=1c.6,multifunction=on \
-device vfio-pci,host=05:00.0,bus=pcie.0,addr=1c.7,multifunction=on \
-drive id=bitlocker_keys,file=/dev/loop0,if=none -device usb-storage,drive=bitlocker_keys,bus=usb3.0 \
-netdev type=tap,id=guest0,vhost=on,ifname="${IFACE}" -device virtio-net-pci,netdev=guest0,mac="${macaddr}" ${options} ; do
 echo "Qemu crashed with exit code $?.  Respawning.." >&2
    sleep 5
done

I hope it helps somebody who tries the same :-)


I bought a new graphics card and tried primary passthrough with it, and it works perfectly :-)

It's a Sapphire Radeon R9 280X Tri-X OC; it replaces both Radeon HD7770s and brings slightly better graphics performance.

My startup command is now:

until /usr/bin/sudo /usr/bin/nice -n 0 /usr/bin/qemu-system-x86_64 -M q35 -enable-kvm -monitor stdio -nographic \
-balloon none -mem-path /dev/hugepages -mem-prealloc -m 10240 -k de -cpu host -smp 8,sockets=1,cores=8,threads=1 \
-bios /usr/share/qemu/bios.bin-1.7.5 -realtime mlock=on -vga none -D /var/log/qemu-out.log -boot menu=on -usb \
-usbdevice host:046d:c517 -usbdevice host:093a:2510 -usbdevice host:1a2c:0023 \
-device vfio-pci,host=00:11.0,bus=pcie.0,addr=1c.0,multifunction=on,bootindex=0 \
-device vfio-pci,host=00:14.2,bus=pcie.0,addr=1c.1,multifunction=on \
-device ioh3420,bus=pcie.0,addr=1c.2,multifunction=on,port=1,chassis=1,id=root.0 \
-device vfio-pci,host=01:00.0,bus=root.0,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=01:00.1,bus=pcie.0,multifunction=on \
-device vfio-pci,host=03:00.0,bus=pcie.0,addr=1c.6,multifunction=on \
-device vfio-pci,host=04:00.0,bus=pcie.0,addr=1c.7,multifunction=on \
-drive id=bitlocker_keys,file=/dev/loop0,if=none -device usb-storage,drive=bitlocker_keys \
-netdev type=tap,id=guest0,vhost=on,ifname="${IFACE}" -device virtio-net-pci,netdev=guest0,mac="${macaddr}" ${options} ; do
 echo "Qemu crashed with exit code $?.  Respawning.." >&2
    sleep 5
done

*EDIT* Since the card has a switchable BIOS (UEFI/legacy-compatible), I wanted to tell you all that both settings work.

Last edited by Child_of_Sun (2014-11-10 22:45:00)


#3202 2014-11-11 02:11:44

s00pcan
Member
Registered: 2014-11-11
Posts: 1

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

I've got this working, but I'm getting jerky mouse input. Video is smooth and I can play games fine. What's the usual thing to try for this?


#3203 2014-11-11 09:46:50

Flyser
Member
Registered: 2013-12-19
Posts: 29

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

aw wrote:
winie wrote:

Is there a way to fix the new nvidia drivers looking for hyper-v parameters? possibly in future releases of qemu or from inside windows?

Are you asking for a solution beyond simply not using hyper-v extensions?  You might want to revert to an old nvidia driver and test whether it would be worthwhile.  I don't know of anyone looking at further solutions.  If you need hyper-v extensions, nvidia professional series cards or AMD cards might be a better option.

I have been thinking about the whole nvidia situation and came up with two possible solutions. However, I don't know enough about the details of PCIe and emulation in general to tell whether they are feasible.

The first idea is to "softmod" a GeForce into a Quadro by tapping into the vfio driver or the PCIe subsystem of the Linux kernel. So instead of hardmodding the PCI ID with a soldering iron, you provide a fake PCI ID to the guest.

The second idea is mostly a hack. Since the nvidia driver checks the CPUID for hv_* features, would it be possible to patch QEMU to change the CPUID at runtime? Then during bootup, while the nvidia driver initializes, the hv_* features would be hidden, and only enabled after two minutes or via the QEMU console.

Maybe @aw or others could comment if one of these approaches could work and which component of the stack needs to be patched.


#3204 2014-11-11 10:13:29

Duelist
Member
Registered: 2014-09-22
Posts: 358

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

Flyser wrote:

The first idea is to "softmod" a GeForce into a Quadro by tapping into the vfio driver or the PCIe subsystem of the Linux kernel. So instead of hardmodding the PCI ID with a soldering iron, you provide a fake PCI ID to the guest.

http://www.eevblog.com/forum/chat/hacki … nterparts/
For SOME cards and vendors, you also need to provide the "correct" VBIOS. Interesting idea, actually.
It gets funnier: physically changing the PCI ID just makes the firmware blob behave differently, unlocking additional features. The chips are identical.


The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.


#3205 2014-11-11 10:14:19

TripleSpeeder
Member
Registered: 2011-05-02
Posts: 46

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

aw wrote:
winie wrote:

Is there a way to fix the new nvidia drivers looking for hyper-v parameters? possibly in future releases of qemu or from inside windows?

Are you asking for a solution beyond simply not using hyper-v extensions?  You might want to revert to an old nvidia driver and test whether it would be worthwhile.  I don't know of anyone looking at further solutions.  If you need hyper-v extensions, nvidia professional series cards or AMD cards might be a better option.

I see your point. But on the other hand, I hate the fact that Nvidia is crippling the attainable performance to boost sales of their professional series cards...

Flyser wrote:

I have been thinking about the whole nvidia situation and came up with two possible solutions. However, I don't know enough about the details of PCIe and emulation in general to tell whether they are feasible.

The first idea is to "softmod" a GeForce into a Quadro by tapping into the vfio driver or the PCIe subsystem of the Linux kernel. So instead of hardmodding the PCI ID with a soldering iron, you provide a fake PCI ID to the guest.

The second idea is mostly a hack. Since the nvidia driver checks the CPUID for hv_* features, would it be possible to patch QEMU to change the CPUID at runtime? Then during bootup, while the nvidia driver initializes, the hv_* features would be hidden, and only enabled after two minutes or via the QEMU console.

Maybe @aw or others could comment if one of these approaches could work and which component of the stack needs to be patched.

The first idea sounds doable; the second approach, I agree, seems like a flaky hack. But before trying to hack something together: are there any numbers on how much the hyper-v enlightenments improve performance in a typical gaming scenario? There are some stats included in http://www.linux-kvm.org/wiki/images/0/ … hyperv.pdf which show partly significant differences, but it is not clear to me whether these tests are relevant for a gaming rig.


#3206 2014-11-11 10:25:17

Flyser
Member
Registered: 2013-12-19
Posts: 29

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

TripleSpeeder wrote:

Are there any numbers on how much the hyper-v enlightenments improve performance in a typical gaming scenario? There are some stats included in http://www.linux-kvm.org/wiki/images/0/ … hyperv.pdf which show partly significant differences, but it is not clear to me whether these tests are relevant for a gaming rig.

https://bbs.archlinux.org/viewtopic.php … 0#p1383040


#3207 2014-11-11 10:56:05

Denso
Member
Registered: 2014-08-30
Posts: 179

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

Good afternoon everyone :)

@aw

Regarding the X99 quirks: I recompiled 3.18-rc4, and instead of adding the ACS patch I edited the file "drivers/pci/quirks.c" and added these two lines:

/* Wellsburg (X99) PCH */
        0x8d10, 0x8d16, 0x8d18, 0x8d1c,

I added them right after the X79 ones, and it broke up the large group just like the ACS patch did!

The VMs rebooted fine, and the system is stable (up till now).

lsgroup:

### Group 0 ###
    ff:0b.0 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [8086:2f81] (rev 02)
    ff:0b.1 Performance counters [1101]: Intel Corporation Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [8086:2f36] (rev 02)
    ff:0b.2 Performance counters [1101]: Intel Corporation Xeon E5 v3/Core i7 R3 QPI Link 0 & 1 Monitoring [8086:2f37] (rev 02)
### Group 1 ###
    ff:0c.0 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Unicast Registers [8086:2fe0] (rev 02)
    ff:0c.1 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Unicast Registers [8086:2fe1] (rev 02)
    ff:0c.2 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Unicast Registers [8086:2fe2] (rev 02)
    ff:0c.3 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Unicast Registers [8086:2fe3] (rev 02)
    ff:0c.4 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Unicast Registers [8086:2fe4] (rev 02)
    ff:0c.5 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Unicast Registers [8086:2fe5] (rev 02)
### Group 2 ###
    ff:0f.0 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Buffered Ring Agent [8086:2ff8] (rev 02)
    ff:0f.1 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Buffered Ring Agent [8086:2ff9] (rev 02)
    ff:0f.4 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [8086:2ffc] (rev 02)
    ff:0f.5 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [8086:2ffd] (rev 02)
    ff:0f.6 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 System Address Decoder & Broadcast Registers [8086:2ffe] (rev 02)
### Group 3 ###
    ff:10.0 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 PCIe Ring Interface [8086:2f1d] (rev 02)
    ff:10.1 Performance counters [1101]: Intel Corporation Xeon E5 v3/Core i7 PCIe Ring Interface [8086:2f34] (rev 02)
    ff:10.5 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [8086:2f1e] (rev 02)
    ff:10.6 Performance counters [1101]: Intel Corporation Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [8086:2f7d] (rev 02)
    ff:10.7 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Scratchpad & Semaphore Registers [8086:2f1f] (rev 02)
### Group 4 ###
    ff:12.0 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Home Agent 0 [8086:2fa0] (rev 02)
    ff:12.1 Performance counters [1101]: Intel Corporation Xeon E5 v3/Core i7 Home Agent 0 [8086:2f30] (rev 02)
### Group 5 ###
    ff:13.0 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Target Address, Thermal & RAS Registers [8086:2fa8] (rev 02)
    ff:13.1 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Target Address, Thermal & RAS Registers [8086:2f71] (rev 02)
    ff:13.2 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [8086:2faa] (rev 02)
    ff:13.3 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [8086:2fab] (rev 02)
    ff:13.4 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [8086:2fac] (rev 02)
    ff:13.5 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel Target Address Decoder [8086:2fad] (rev 02)
    ff:13.6 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 DDRIO Channel 0/1 Broadcast [8086:2fae] (rev 02)
    ff:13.7 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 DDRIO Global Broadcast [8086:2faf] (rev 02)
### Group 6 ###
    ff:14.0 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 0 Thermal Control [8086:2fb0] (rev 02)
    ff:14.1 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 1 Thermal Control [8086:2fb1] (rev 02)
    ff:14.2 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 0 ERROR Registers [8086:2fb2] (rev 02)
    ff:14.3 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 1 ERROR Registers [8086:2fb3] (rev 02)
    ff:14.6 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 DDRIO (VMSE) 0 & 1 [8086:2fbe] (rev 02)
    ff:14.7 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 DDRIO (VMSE) 0 & 1 [8086:2fbf] (rev 02)
### Group 7 ###
    ff:15.0 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 2 Thermal Control [8086:2fb4] (rev 02)
    ff:15.1 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 3 Thermal Control [8086:2fb5] (rev 02)
    ff:15.2 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 2 ERROR Registers [8086:2fb6] (rev 02)
    ff:15.3 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 0 Channel 3 ERROR Registers [8086:2fb7] (rev 02)
### Group 8 ###
    ff:16.0 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 1 Target Address, Thermal & RAS Registers [8086:2f68] (rev 02)
    ff:16.6 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 DDRIO Channel 2/3 Broadcast [8086:2f6e] (rev 02)
    ff:16.7 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 DDRIO Global Broadcast [8086:2f6f] (rev 02)
### Group 9 ###
    ff:17.0 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Integrated Memory Controller 1 Channel 0 Thermal Control [8086:2fd0] (rev 02)
    ff:17.4 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 [8086:2fb8] (rev 02)
    ff:17.5 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 [8086:2fb9] (rev 02)
    ff:17.6 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 [8086:2fba] (rev 02)
    ff:17.7 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 DDRIO (VMSE) 2 & 3 [8086:2fbb] (rev 02)
### Group 10 ###
    ff:1e.0 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Power Control Unit [8086:2f98] (rev 02)
    ff:1e.1 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Power Control Unit [8086:2f99] (rev 02)
    ff:1e.2 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Power Control Unit [8086:2f9a] (rev 02)
    ff:1e.3 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Power Control Unit [8086:2fc0] (rev 02)
    ff:1e.4 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Power Control Unit [8086:2f9c] (rev 02)
### Group 11 ###
    ff:1f.0 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 VCU [8086:2f88] (rev 02)
    ff:1f.2 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 VCU [8086:2f8a] (rev 02)
### Group 12 ###
    00:00.0 Host bridge [0600]: Intel Corporation Xeon E5 v3/Core i7 DMI2 [8086:2f00] (rev 02)
### Group 13 ###
    00:01.0 PCI bridge [0604]: Intel Corporation Xeon E5 v3/Core i7 PCI Express Root Port 1 [8086:2f02] (rev 02)
### Group 14 ###
    00:02.0 PCI bridge [0604]: Intel Corporation Xeon E5 v3/Core i7 PCI Express Root Port 2 [8086:2f04] (rev 02)
### Group 15 ###
    00:02.2 PCI bridge [0604]: Intel Corporation Xeon E5 v3/Core i7 PCI Express Root Port 2 [8086:2f06] (rev 02)
### Group 16 ###
    00:03.0 PCI bridge [0604]: Intel Corporation Xeon E5 v3/Core i7 PCI Express Root Port 3 [8086:2f08] (rev 02)
### Group 17 ###
    00:03.2 PCI bridge [0604]: Intel Corporation Xeon E5 v3/Core i7 PCI Express Root Port 3 [8086:2f0a] (rev 02)
### Group 18 ###
    00:05.0 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Address Map, VTd_Misc, System Management [8086:2f28] (rev 02)
    00:05.1 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 Hot Plug [8086:2f29] (rev 02)
    00:05.2 System peripheral [0880]: Intel Corporation Xeon E5 v3/Core i7 RAS, Control Status and Global Errors [8086:2f2a] (rev 02)
    00:05.4 PIC [0800]: Intel Corporation Xeon E5 v3/Core i7 I/O APIC [8086:2f2c] (rev 02)
### Group 19 ###
    00:11.0 Unassigned class [ff00]: Intel Corporation C610/X99 series chipset SPSR [8086:8d7c] (rev 05)
### Group 20 ###
    00:16.0 Communication controller [0780]: Intel Corporation C610/X99 series chipset MEI Controller #1 [8086:8d3a] (rev 05)
### Group 21 ###
    00:19.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I218-V [8086:15a1] (rev 05)
### Group 22 ###
    00:1a.0 USB controller [0c03]: Intel Corporation C610/X99 series chipset USB Enhanced Host Controller #2 [8086:8d2d] (rev 05)
### Group 23 ###
    00:1b.0 Audio device [0403]: Intel Corporation C610/X99 series chipset HD Audio Controller [8086:8d20] (rev 05)
### Group 24 ###
    00:1c.0 PCI bridge [0604]: Intel Corporation C610/X99 series chipset PCI Express Root Port #1 [8086:8d10] (rev d5)
### Group 25 ###
    00:1c.3 PCI bridge [0604]: Intel Corporation C610/X99 series chipset PCI Express Root Port #4 [8086:8d16] (rev d5)
### Group 26 ###
    00:1c.4 PCI bridge [0604]: Intel Corporation C610/X99 series chipset PCI Express Root Port #5 [8086:8d18] (rev d5)
### Group 27 ###
    00:1c.6 PCI bridge [0604]: Intel Corporation C610/X99 series chipset PCI Express Root Port #7 [8086:8d1c] (rev d5)
### Group 28 ###
    00:1d.0 USB controller [0c03]: Intel Corporation C610/X99 series chipset USB Enhanced Host Controller #1 [8086:8d26] (rev 05)
### Group 29 ###
    00:1f.0 ISA bridge [0601]: Intel Corporation C610/X99 series chipset LPC Controller [8086:8d47] (rev 05)
    00:1f.2 SATA controller [0106]: Intel Corporation C610/X99 series chipset 6-Port SATA Controller [AHCI mode] [8086:8d02] (rev 05)
    00:1f.3 SMBus [0c05]: Intel Corporation C610/X99 series chipset SMBus Controller [8086:8d22] (rev 05)
### Group 30 ###
    05:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF119 [GeForce GT 610] [10de:104a] (rev a1)
    05:00.1 Audio device [0403]: NVIDIA Corporation GF119 HDMI Audio Controller [10de:0e08] (rev a1)
### Group 31 ###
    04:00.0 Serial Attached SCSI controller [0107]: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [1000:0072] (rev 03)
### Group 32 ###
    02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 770] [10de:1184] (rev a1)
    02:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)
### Group 33 ###
    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GT218 [GeForce 210] [10de:0a65] (rev a2)
    01:00.1 Audio device [0403]: NVIDIA Corporation High Definition Audio Controller [10de:0be3] (rev a1)
### Group 34 ###
    07:00.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:118f]
### Group 35 ###
    08:01.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:118f]
    09:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
### Group 36 ###
    08:02.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:118f]
    0a:00.0 Network controller [0280]: Broadcom Corporation BCM4360 802.11ac Wireless Network Adapter [14e4:43a0] (rev 03)
### Group 37 ###
    08:03.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:118f]
### Group 38 ###
    08:04.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:118f]
### Group 39 ###
    0d:00.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8608] (rev ba)
### Group 40 ###
    0e:01.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8608] (rev ba)
### Group 41 ###
    0e:05.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8608] (rev ba)
### Group 42 ###
    0e:07.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8608] (rev ba)
### Group 43 ###
    0e:09.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8608] (rev ba)
### Group 44 ###
    0f:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller [1912:0015] (rev 02)
### Group 45 ###
    10:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller [1912:0015] (rev 02)
### Group 46 ###
    11:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller [1912:0015] (rev 02)
### Group 47 ###
    12:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller [1912:0015] (rev 02)
### Group 48 ###
    13:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller [1b21:1142]

So I think it is safe to add these to the quirks file upstream.

Thank you.

EDIT:

When rebooting the host I receive:

genirq: Flags mismatch irq 17. 00000000 (vfio-intx(0000:10:00.0)) vs. 00000000 (vfio-intx(0000:0f:00.0))

It is related to the USB PCIe card I installed two days ago. This error prevents my 2nd VM from launching (it shares another controller from the same card).

Booting with "options vfio_pci nointxmask=1" makes both VMs boot fine, but the sound becomes choppy and useless.

Last edited by Denso (2014-11-11 11:20:05)


#3208 2014-11-11 10:56:12

TripleSpeeder
Member
Registered: 2011-05-02
Posts: 46

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

Flyser wrote:
TripleSpeeder wrote:

Are there any numbers on how much the hyper-v enlightenments improve performance in a typical gaming scenario? There are some stats included in http://www.linux-kvm.org/wiki/images/0/ … hyperv.pdf which show partly significant differences, but it is not clear to me whether these tests are relevant for a gaming rig.

https://bbs.archlinux.org/viewtopic.php … 0#p1383040

Doh... 20%! That's definitely worth a closer look. Thanks for the pointer.


#3209 2014-11-11 10:56:56

slis
Member
Registered: 2014-06-02
Posts: 127

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

Yeah, in my experience it's also about 20% for CPU-bound applications (Dota 2).


#3210 2014-11-11 11:57:40

Duelist
Member
Registered: 2014-09-22
Posts: 358

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

Do I have to explicitly enable the hyper-v extensions (enlightenments, or whatever they're called), or are they added automatically if I use -cpu host? (Provided I have an AMD Trinity CPU.)

Last edited by Duelist (2014-11-11 11:58:29)




#3211 2014-11-11 12:18:53

slis
Member
Registered: 2014-06-02
Posts: 127

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

Yeah, you must enable it, with the hv_time option (-cpu host,hv_time), if I remember correctly.
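For what it's worth: in QEMU of this vintage the hyper-v enlightenments are spelled with underscores and are -cpu properties rather than CPUID feature flags, which is why they don't show up in -cpu help. A typical fragment (hedged; the exact set available depends on your QEMU version) looks like:

```shell
-cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff
```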


#3212 2014-11-11 12:29:24

Duelist
Member
Registered: 2014-09-22
Posts: 358

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

slis wrote:

Yeah, you must enable it, with the hv_time option (-cpu host,hv_time), if I remember correctly.

But I can't find it in -cpu help. Does AMD support these at all? There is a "hypervisor" CPUID flag in the qemu-system-x86_64 -cpu help list.




#3213 2014-11-11 12:36:02

4kGamer
Member
Registered: 2014-10-29
Posts: 88

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

slis wrote:

You need to make sure you are using a virtio disk in the XML, then add an IDE CD-ROM with the virtio drivers... like here:

<disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source file='/mnt/hdd2/win8.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/hdd/INSTALL/en_windows_8.1_professional_vl_with_update_x64_dvd_4065194.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/hdd/INSTALL/virtio-win-0.1-81.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>

At Windows setup, when it asks where to install, choose "Load driver" and then point it to the mounted CD-ROM with the virtio drivers.


Hi there! Thanks for posting part of your XML file. Unfortunately, it didn't work: not only did it not ask for the virtio drivers, it also doesn't find my storage, a physical device. This is the part of my XML file where I made some changes:


<domain type='kvm'>
  <name>win8.1</name>
  <uuid>502f5be4-0126-4a32-8d63-3f8d06a1c80a</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.2'>hvm</type>
    <loader type='pflash'>/var/lib/libvirt/images/win8.1-OVMF.fd</loader>
  </os>
....
   <devices>
    <emulator>/usr/sbin/qemu-system-x86_64</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/mapper/VG-kvmwin'/>
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/root/windows.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='3'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/root/virtio.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='1' target='0' unit='1'/>
    </disk>

Can you please have a look? Is there something missing or wrong? Thank you very much!


#3214 2014-11-11 16:37:30

aw
Member
Registered: 2013-10-04
Posts: 921
Website

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

Flyser wrote:
aw wrote:
winie wrote:

Is there a way to fix the new nvidia drivers looking for hyper-v parameters? possibly in future releases of qemu or from inside windows?

Are you asking for a solution beyond simply not using hyper-v extensions?  You might want to revert to an old nvidia driver and test whether it would be worthwhile.  I don't know of anyone looking at further solutions.  If you need hyper-v extensions, nvidia professional series cards or AMD cards might be a better option.

I have been thinking about the whole nvidia situation and came up with two possible solutions. However, I don't know enough about the details of PCIe and emulation in general to tell whether they are feasible.

The first idea is to "softmod" a GeForce into a Quadro by tapping into the vfio driver or the PCIe subsystem of the Linux kernel. So instead of hardmodding the PCI ID with a soldering iron, you provide a fake PCI ID to the guest.

The second idea is mostly a hack. Since the nvidia driver checks the CPUID for hv_* features, would it be possible to patch QEMU to change the CPUID at runtime? Then during bootup, while the nvidia driver initializes, the hv_* features would be hidden, and only enabled after two minutes or via the QEMU console.

Maybe @aw or others could comment if one of these approaches could work and which component of the stack needs to be patched.

I'm going to tread lightly here, but I will confirm that PCI config space of assigned devices is a combination of direct passthrough, emulation, and virtualization.  PCI SR-IOV devices, by the spec, don't actually have PCI vendor and device IDs, it's up to system software/VMM to provide them.  Likewise, it's a trivial matter to virtualize the PCI config space IDs - http://fpaste.org/149722/72280914/  In the vast majority of cases, modifying the vendor ID is a bad idea, but I include it to be complete.
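As a purely hypothetical illustration of what such virtualized IDs could look like on the QEMU command line: the x-pci-* property names below follow the experimental direction of that paste and may not exist in your QEMU build, and the ID value is just a placeholder.

```shell
# Sketch only: override the device ID the guest sees for an assigned GPU.
# x-pci-device-id is an experimental property and 0x1234 is a placeholder,
# not a real target ID.
qemu-system-x86_64 ... \
    -device vfio-pci,host=01:00.0,x-pci-device-id=0x1234
```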

On a completely separate note, I'll mention that PCI device IDs are also stored in the PCI ROM and the ROM includes a checksum.  In the unlikely event that you want to expose the ROM on a modified device, the device ID and checksum need to be updated.
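As a rough illustration of that checksum fix-up, here is an untested sketch assuming the conventional layout where the final byte of the image serves as the pad/checksum byte, chosen so that all bytes of the ROM sum to zero modulo 256 (the function name is made up):

```shell
# fix_rom_checksum FILE - recompute the trailing checksum byte of a
# PCI option ROM image so the byte sum of the whole image is 0 mod 256.
fix_rom_checksum() {
    rom="$1"
    size=$(wc -c < "$rom")
    # Sum every byte except the last one (the checksum byte itself)
    partial=$(head -c $((size - 1)) "$rom" \
        | od -An -v -tu1 \
        | awk '{ for (i = 1; i <= NF; i++) s += $i } END { print s % 256 }')
    csum=$(( (256 - partial) % 256 ))
    # Overwrite the final byte of the image with the computed checksum
    printf "$(printf '\\%03o' "$csum")" \
        | dd of="$rom" bs=1 seek=$((size - 1)) conv=notrunc 2>/dev/null
}
```

You would run this after patching the device ID bytes in the ROM image, before handing the file to qemu's romfile= option.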

Also note that supported Quadro devices operate in a secondary graphics mode, ie. in addition to emulated VGA.  I've also never had much luck getting them to work on Q35 machine types, but it's on my todo list to look further into the Code 12 they get there.  They work fine on a 440FX machine.

I don't think we have much opportunity to try to be creative in exposing hyper-v features, both the core guest OS and the graphics driver are going to be probing this at similar times and I doubt we have much visibility to which is calling for it.  I also tried to do some benchmarks with Unigine/3DMark so I could complain properly about the change, but I was unable to come up with a significant difference.  Perhaps this is because they're just benchmarks; maybe it's real.  We did note a while ago a fairly significant difference in Borderlands 2, but that was also when we were fighting the lazy debug register use.  That 20% number might be much, much smaller now that we've optimized that path in the hypervisor.  It should be easy for someone to back down to a pre-340 driver version and test with and without hyper-v extensions for various games.


http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses?  Try https://www.redhat.com/mailman/listinfo/vfio-users

Offline

#3215 2014-11-11 16:48:40

aw
Member
Registered: 2013-10-04
Posts: 921
Website

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

Denso wrote:

Good afternoon everyone :)

@aw

Regarding the X99 quirks, I recompiled 3.18-rc4, and instead of adding the ACS patch I edited the file "drivers/pci/quirks.c" and added these two lines:

/* Wellsburg (X99) PCH */
        0x8d10, 0x8d16, 0x8d18, 0x8d1c,

I added them right after the X79 ones, and it broke up the large group just like the ACS patch did!

The VMs rebooted fine, and the system is stable (up till now).

lsgroup :

<snip>

So I think it is safe to add these to the quirks file upstream.

The groups are going to be split by that patch regardless of whether it's safe to do so.  We still need to wait for confirmation from Intel that X99 PCH root ports use the same programming algorithm and most importantly, provide the required isolation.
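For anyone wanting to check a split like this on their own box, the groups can be listed straight from sysfs. A small sketch (the helper name is made up; the base directory is a parameter only so it can be pointed at a test tree):

```shell
# list_iommu_groups [BASE] - print every IOMMU group and its member devices.
# BASE defaults to the real sysfs location.
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for g in "$base"/*/; do
        [ -d "$g/devices" ] || continue
        printf 'group %s: ' "$(basename "$g")"
        # Each entry under devices/ is one PCI function in the group
        ls "$g/devices" | tr '\n' ' '
        echo
    done
}

list_iommu_groups
```

Before the quirk you would see the PCH root ports and everything behind them lumped into one group; after it, each port should land in its own group.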

EDIT:

When rebooting the host I receive:

genirq: Flags mismatch irq 17. 00000000 (vfio-intx(0000:10:00.0)) vs. 00000000 (vfio-intx(0000:0f:00.0))

And it is related to the USB PCI-E card I installed two days ago. This error prevents my 2nd VM from launching (it shares another controller on the same USB PCI-E card).

Tried booting with "options vfio_pci nointxmask=1"; both VMs boot fine, but the sound becomes choppy and useless.

When you rebooted the host you saw this error, or do you mean you ran into this problem after you rebooted with the new quirk?  The description doesn't make much sense to me.  The error indicates that both devices tried to register with no flags (00000000), which I think means exclusive IRQ, which means the INTx disable masking test failed.  By using nointxmask=1, you're telling vfio to use the same path that it should have been using based on those flags, so I'm not sure why it makes a difference.


http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses?  Try https://www.redhat.com/mailman/listinfo/vfio-users

Offline

#3216 2014-11-11 18:13:14

slis
Member
Registered: 2014-06-02
Posts: 127

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

I tested a couple of games on the 340 driver with hyper-v on. Most games and benchmarks use the GPU at 100% and the CPU not so much, so that might be the reason you don't see much difference. As I said before, Dota 2, which depends on single/dual-core performance, works 20% better with hyper-v on.

Last edited by slis (2014-11-11 18:17:44)

Offline

#3217 2014-11-11 20:14:57

devianceluka
Member
Registered: 2014-05-19
Posts: 44

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

By the way, how do you guys enable hyperv? I'm trying "-cpu host,hv-time" and it's always saying that it is running as a VM in Task Manager?

Offline

#3218 2014-11-11 21:50:59

Denso
Member
Registered: 2014-08-30
Posts: 179

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

aw wrote:

When you rebooted the host you saw this error, or do you mean you ran into this problem after you rebooted with the new quirk?  The description doesn't make much sense to me.  The error indicates that both devices tried to register with no flags (00000000), which I think means exclusive IRQ, which means the INTx disable masking test failed.  By using nointxmask=1, you're telling vfio to use the same path that it should have been using based on those flags, so I'm not sure why it makes a difference.

It happened before and after I applied the new quirk. Sorry for not mentioning that :(

It works with nointxmask=1, but with the sound being choppy and slow.

Also, this might be related:

Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 69 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 69 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 70 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 69 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 70 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 71 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 69 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 70 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 71 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 72 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 69 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 70 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 71 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 72 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 73 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 69 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 70 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 71 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 72 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 73 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 74 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 69 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 70 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 71 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 72 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 73 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 74 for MSI/MSI-X
Nov 10 13:31:27 srv1 kernel: vfio-pci 0000:0f:00.0: irq 75 for MSI/MSI-X


Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 67 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 67 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 67 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 69 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 67 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 69 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 70 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 67 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 69 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 70 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 71 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 67 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 69 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 70 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 71 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 72 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 67 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 69 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 70 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 71 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 72 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 73 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 67 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 68 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 69 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 70 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 71 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 72 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 73 for MSI/MSI-X
Nov 10 13:31:47 srv1 kernel: vfio-pci 0000:10:00.0: irq 74 for MSI/MSI-X

Sorry for my lack of knowledge, but I see that both devices use some identical IRQs. Maybe that's the issue here?

EDIT:

One side note: I use these module options in /etc/modprobe.d/kvm.conf:

options kvm ignore_msrs=1
options kvm_intel emulate_invalid_guest_state=0
options kvm_intel nested=1
options kvm_intel enable_shadow_vmcs=1
options kvm_intel enable_apicv=1
options kvm_intel ept=1
options vfio_pci nointxmask=1
options vfio_iommu_type1 disable_hugepages=1
options vfio_iommu_type1 allow_unsafe_interrupts=1

Last edited by Denso (2014-11-11 21:53:42)

Offline

#3219 2014-11-11 22:52:31

aw
Member
Registered: 2013-10-04
Posts: 921
Website

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

devianceluka wrote:

By the way, how do you guys enable hyperv? I'm trying "-cpu host,hv-time" and it's always saying that it is running as a VM in Task Manager?

hv-time enables one of the hyper-v extensions, iow the guest thinks that it's running on the hyper-v hypervisor and enables some paravirtual features for the hypervisor.  KVM also supports these features.  AFAIK, nobody here is attempting to run hyper-v inside a KVM guest.  That might be possible with nested virtualization support, but it's not really relevant to the topic in this thread.
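For reference, enabling a set of these enlightenments for a Windows guest typically looks like the following sketch (flag spelling varies by QEMU version: older builds use the hv_ underscore form, and the exact set shown here is just a common example, not a recommendation from this thread):

```shell
# Sketch: expose several hyper-v enlightenments to a Windows guest.
# Note that recent nvidia GeForce drivers refuse to initialize when
# these are exposed, as discussed earlier in this thread.
qemu-system-x86_64 \
    -enable-kvm \
    -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff \
    ... # remaining machine/disk/vfio-pci options
```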


http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses?  Try https://www.redhat.com/mailman/listinfo/vfio-users

Offline

#3220 2014-11-11 22:59:33

devianceluka
Member
Registered: 2014-05-19
Posts: 44

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

aw wrote:
devianceluka wrote:

By the way, how do you guys enable hyperv? I'm trying "-cpu host,hv-time" and it's always saying that it is running as a VM in Task Manager?

hv-time enables one of the hyper-v extensions, iow the guest thinks that it's running on the hyper-v hypervisor and enables some paravirtual features for the hypervisor.  KVM also supports these features.  AFAIK, nobody here is attempting to run hyper-v inside a KVM guest.  That might be possible with nested virtualization support, but it's not really relevant to the topic in this thread.

Yeah, I'm sorry that it's not relevant. What I want is nested virtualization. I thought it's easy-peasy for everyone here? It shouldn't be that hard... I added "kvm-intel.nested=1" to GRUB, and that's all someone can do, I believe?

Offline

#3221 2014-11-11 23:03:48

aw
Member
Registered: 2013-10-04
Posts: 921
Website

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

devianceluka wrote:
aw wrote:
devianceluka wrote:

By the way, how do you guys enable hyperv? I'm trying "-cpu host,hv-time" and it's always saying that it is running as a VM in Task Manager?

hv-time enables one of the hyper-v extensions, iow the guest thinks that it's running on the hyper-v hypervisor and enables some paravirtual features for the hypervisor.  KVM also supports these features.  AFAIK, nobody here is attempting to run hyper-v inside a KVM guest.  That might be possible with nested virtualization support, but it's not really relevant to the topic in this thread.

Yeah, I'm sorry that it's not relevant. What I want is nested virtualization. I thought it's easy-peasy for everyone here? It shouldn't be that hard... I added "kvm-intel.nested=1" to GRUB, and that's all someone can do, I believe?

You might need to add +vmx to your -cpu parameters for QEMU
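A quick way to double-check both sides of that (a sketch; paths assume the kvm_intel module on an Intel host):

```shell
# Verify that nesting is actually enabled on the host;
# this should print Y (or 1 on older kernels):
cat /sys/module/kvm_intel/parameters/nested

# Then make sure the guest CPU model advertises vmx:
qemu-system-x86_64 -enable-kvm -cpu host,+vmx ...
```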


http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses?  Try https://www.redhat.com/mailman/listinfo/vfio-users

Offline

#3222 2014-11-11 23:05:16

devianceluka
Member
Registered: 2014-05-19
Posts: 44

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

aw wrote:
devianceluka wrote:
aw wrote:

hv-time enables one of the hyper-v extensions, iow the guest thinks that it's running on the hyper-v hypervisor and enables some paravirtual features for the hypervisor.  KVM also supports these features.  AFAIK, nobody here is attempting to run hyper-v inside a KVM guest.  That might be possible with nested virtualization support, but it's not really relevant to the topic in this thread.

Yeah, I'm sorry that it's not relevant. What I want is nested virtualization. I thought it's easy-peasy for everyone here? It shouldn't be that hard... I added "kvm-intel.nested=1" to GRUB, and that's all someone can do, I believe?

You might need to add +vmx to your -cpu parameters for QEMU

Oh, forgot that, yes. Those two things. Even with both of them it doesn't work.

But +vmx is already enabled if it's one of the features supported by the CPU (it doesn't emulate it, but passes it through).

EDIT: I'm using an i7-4790S. In ESXi there's an option for this, and the guest then shows up normally in Task Manager as if it were running bare-metal, with L1/L2/L3 cache sizes... whereas here it shows it is running as a VM.

Last edited by devianceluka (2014-11-11 23:07:49)

Offline

#3223 2014-11-11 23:26:37

dakabali
Member
Registered: 2014-11-11
Posts: 7

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

Dear all,

I've succeeded in setting up PCI passthrough with a Gigabyte board, an i7 3770 (integrated iGPU) and a Gigabyte GTX Titan OC card. Now I'm trying to use the same board with a Xeon 1390v2 (no iGPU), the same Gigabyte Titan OC card, and a second video card (Gigabyte GT 730). So basically two Nvidia cards are installed. I would like to pass through the Titan, which sits at 0000:01:00.0. Unfortunately I get the following errors in dmesg while getting a black screen on the second monitor:

[  113.148867] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x900
[  113.150151] vfio-pci 0000:01:00.0: BAR 3: can't reserve [mem 0x98000000-0x99ffffff 64bit pref]
[  113.150980] vfio-pci 0000:01:00.1: enabling device (0000 -> 0002)
[  116.098894] vfio-pci 0000:01:00.0: Invalid ROM contents
[  116.099019] vfio-pci 0000:01:00.0: Invalid ROM contents

I'm using the linux-mainline package incl. ACS patch. Thx.

Last edited by dakabali (2014-11-11 23:27:59)

Offline

#3224 2014-11-11 23:35:10

Denso
Member
Registered: 2014-08-30
Posts: 179

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

dakabali wrote:

Dear all,

I've succeeded in setting up PCI passthrough with a Gigabyte board, an i7 3770 (integrated iGPU) and a Gigabyte GTX Titan OC card. Now I'm trying to use the same board with a Xeon 1390v2 (no iGPU), the same Gigabyte Titan OC card, and a second video card (Gigabyte GT 730). So basically two Nvidia cards are installed. I would like to pass through the Titan, which sits at 0000:01:00.0. Unfortunately I get the following errors in dmesg while getting a black screen on the second monitor:

[  113.148867] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x900
[  113.150151] vfio-pci 0000:01:00.0: BAR 3: can't reserve [mem 0x98000000-0x99ffffff 64bit pref]
[  113.150980] vfio-pci 0000:01:00.1: enabling device (0000 -> 0002)
[  116.098894] vfio-pci 0000:01:00.0: Invalid ROM contents
[  116.099019] vfio-pci 0000:01:00.0: Invalid ROM contents

I'm using the linux-mainline package incl. ACS patch. Thx.

Try passing a ROM file using:

romfile=/PATH/TO/ROM/FILE.rom

Like this:

-device vfio-pci,host=01:00.0,multifunction=on,romfile=/PATH/TO/ROM/FILE.rom

You can obtain ROM files from the TechPowerUp website.
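If you would rather dump the ROM from the installed card itself, sysfs can do it while the host still has access to the device (a sketch; the device address and output path are examples):

```shell
# Dump the option ROM of the GPU at 0000:01:00.0 via sysfs
cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom          # make the ROM readable
cat rom > /tmp/gpu.rom
echo 0 > rom          # disable reading again
```

This only works if the card's ROM is still readable from the host, which is not always the case once the card has been initialized as the boot VGA device.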

Offline

#3225 2014-11-12 01:08:02

dakabali
Member
Registered: 2014-11-11
Posts: 7

Re: KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9

Denso wrote:
dakabali wrote:

Dear all,

I've succeeded in setting up PCI passthrough with a Gigabyte board, an i7 3770 (integrated iGPU) and a Gigabyte GTX Titan OC card. Now I'm trying to use the same board with a Xeon 1390v2 (no iGPU), the same Gigabyte Titan OC card, and a second video card (Gigabyte GT 730). So basically two Nvidia cards are installed. I would like to pass through the Titan, which sits at 0000:01:00.0. Unfortunately I get the following errors in dmesg while getting a black screen on the second monitor:

[  113.148867] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x900
[  113.150151] vfio-pci 0000:01:00.0: BAR 3: can't reserve [mem 0x98000000-0x99ffffff 64bit pref]
[  113.150980] vfio-pci 0000:01:00.1: enabling device (0000 -> 0002)
[  116.098894] vfio-pci 0000:01:00.0: Invalid ROM contents
[  116.099019] vfio-pci 0000:01:00.0: Invalid ROM contents

I'm using the linux-mainline package incl. ACS patch. Thx.

Try passing a ROM file using:

romfile=/PATH/TO/ROM/FILE.rom

Like this:

-device vfio-pci,host=01:00.0,multifunction=on,romfile=/PATH/TO/ROM/FILE.rom

You can obtain ROM files from the TechPowerUp website.

Hello,

thanks for the quick answer. Getting the card's ROM helped eliminate the "Invalid ROM contents" errors. However, the situation didn't change: still no signal on the second monitor. Instead of enabling my Titan card (there should be something like "vfio-pci 0000:01:00.0: enabling device (0000 -> 0001)", as for the audio function 01:00.1), it drops a memory reservation error:

[  179.361851] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x900
[  179.363102] vfio-pci 0000:01:00.0: BAR 3: can't reserve [mem 0x98000000-0x99ffffff 64bit pref]
[  179.365305] vfio-pci 0000:01:00.1: enabling device (0000 -> 0002)

Furthermore, starting QEMU says: "VFIO 0000:01:00.0 BAR 3 mmap unsupported. Performance may be slow"

Any idea? Thanks.

Last edited by dakabali (2014-11-12 01:08:52)

Offline
