Has anyone tried using an Oculus Rift (DK1 or DK2) with any of the GPU passthrough setups? I'm not sure exactly how that device works, but if it only uses HDMI for communication and all the processing happens on the GPU, I guess it should be possible?
I just wanted to know if anyone has any experience or knowledge on this subject, as I'm really interested in all this VGA passthrough stuff and virtual reality (but I don't have the equipment to test it out yet).
Offline
Hey,
I've got VGA passthrough running with a windows 7 64 bit guest VM.
Apart from the PCI device, I'm passing through three drives using virtio:
-drive file=/home/janphilip/VMs/Win7.img,id=disk0,if=virtio,format=raw
-drive file=/dev/disk/by-label/Drive1,id=disk1,if=virtio,format=raw
-drive file=/dev/disk/by-label/Drive2_SSD,id=disk2,if=virtio,format=raw
Everything works; however, Windows messes up the device order.
The Win7.img contains the Windows system partition; this one is always named "C:".
Unfortunately, sometimes Drive1 is called "E:" and Drive2_SSD "F:", and after a reboot it's the other way around.
Windows offers a feature to assign drive letters in the control panel.
However, even these are not persistent; after a reboot, the guest acts as if it had never seen the disk before.
Did anyone else encounter this problem of non-persistent drive letters or has any suggestions?
I tried googling, but couldn't find anyone having the same problem.
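One idea I've found since, but haven't tested yet, so treat it as a guess: Windows may be re-enumerating the disks because the virtio drives expose no serial numbers. qemu's -drive accepts a serial= parameter, so something like this might pin them down (the serial strings here are made up; any stable, unique value per disk should do):

```shell
# Same -drive lines as above, plus a stable serial= per disk
# (serial values are examples; rest of the qemu command unchanged)
-drive file=/home/janphilip/VMs/Win7.img,id=disk0,if=virtio,format=raw,serial=win7sys
-drive file=/dev/disk/by-label/Drive1,id=disk1,if=virtio,format=raw,serial=drive1
-drive file=/dev/disk/by-label/Drive2_SSD,id=disk2,if=virtio,format=raw,serial=drive2ssd
```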
Offline
If anyone wants to try combining Windows 7 with OVMF again:
The latest qemu git seems to enable hv_time by default, so I at least had to disable it explicitly (and turn off all the other Hyper-V options). Apart from that, all I had to do was create a UEFI-bootable Win7 USB drive to install from.
Also, I can confirm that CPU pinning really does help in combination with the kernel boot options isolcpus=[cpus, e.g. 2-7] and nohz_full=[cpus].
For the first time I could listen to music (via a USB DAC connected to a passed-through USB controller) continuously in Windows 8, while both host and guest had quite a bit of CPU load (tested with prime95/mprime).
So thanks for that information!
Still, using http://www.resplendence.com/latencymon results in a frozen guest, every time. I thought Windows 7 was less sensitive in that matter, but the latest test proved otherwise. Gonna stay with 8 and 10 as guests.
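In case it saves someone a search: the pinning part can be done with <cputune> in the libvirt xml. A minimal sketch, assuming the vcpus should land on the isolated cores 2-7 from the isolcpus example above (adjust to your own topology):

```xml
<vcpu placement='static'>6</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
  <vcpupin vcpu='4' cpuset='6'/>
  <vcpupin vcpu='5' cpuset='7'/>
</cputune>
```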
Offline
If anyone wants to try combining Windows 7 with OVMF again: Latest qemu git seems to enable hv_time by default [...] Gonna stay with 8 and 10 as guests.
Wait, you got win7 booting in UEFI? And you have NVidia...
Something tells me that my GPUs have a broken GOP for win7...
Time to dig the vbios again...
Last edited by Duelist (2015-02-07 13:58:23)
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
Hi everyone,
I've been pushing to get GPU passthrough working for a while using the great info found in this forum and elsewhere, but have so far not been able to get past this:
qemu-system-x86_64 -enable-kvm -m 1024 -cpu host -smp 6,sockets=1,cores=6,threads=1 -vga none -device vfio-pci,host=06:00.0,x-vga=on -device vfio-pci,host=06:00.1
qemu-system-x86_64: -device vfio-pci,host=06:00.0,x-vga=on: vfio: error no iommu_group for device
qemu-system-x86_64: -device vfio-pci,host=06:00.0,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=06:00.0,x-vga=on: Device 'vfio-pci' could not be initialized
I first suspected it might be openSuSE, but I subsequently installed Arch and compiled the kernel provided by nbhs, following all the accompanying instructions, yielding the same result.
My system config:
Motherboard: HP ProLiant SE316M1
CPU: 2 x Xeon X5650
Memory: 24GB ECC DDR3 RAM
Primary GPU: Nvidia 8800GT
Secondary GPU: ATI R9 290X
Storage A: Crucial 240GB SSD
Storage B: Seagate 500GB HDD
A lot of digging led to the discovery that the 5520 chipset revision 0x13 has issues with interrupt remapping, which the kernel therefore disables.
http://www.intel.co.uk/content/dam/doc/ … update.pdf (Errata 47 & 53)
[ 0.025776] This system BIOS has enabled interrupt remapping
on a chipset that contains an erratum making that
feature unstable. To maintain system stability
interrupt remapping is being disabled. Please
contact your BIOS vendor for an update
Output of dmesg | grep -e DMAR -e IOMMU
[ 0.000000] ACPI: DMAR 0x00000000AF632E80 000136 (v01 HP ProLiant 00000001 \xffffffd2? 0000162E)
[ 0.000000] Intel-IOMMU: enabled
[ 0.025667] dmar: IOMMU 0: reg_base_addr b7ffe000 ver 1:0 cap c90780106f0462 ecap f0207e
For testing, I subsequently commented lines 212-213 in arch/x86/kernel/early-quirks.c to bypass disabling interrupt remapping.
New output of dmesg | grep -e DMAR -e IOMMU
[ 0.000000] ACPI: DMAR 0x00000000AF632E80 000136 (v01 HP ProLiant 00000001 \xffffffd2? 0000162E)
[ 0.000000] Intel-IOMMU: enabled
[ 0.025667] dmar: IOMMU 0: reg_base_addr b7ffe000 ver 1:0 cap c90780106f0462 ecap f0207e
[ 0.025772] IOAPIC id 8 under DRHD base 0xb7ffe000 IOMMU 0
[ 0.025774] IOAPIC id 0 under DRHD base 0xb7ffe000 IOMMU 0
However, my qemu output remains the same, and "find /sys/kernel/iommu_groups/ -type l" still returns nothing.
Full output of dmesg
Full output of lspci
Before I proceed further, I wanted to check whether anyone has gotten GPU passthrough to work with the 5520 chipset and said revision, or whether this has been ruled impossible/unusable.
From the snippets I gathered across forums, it seems that the interrupt remapping does work, but can lead to system lockups at times, which is why it is prudently disabled.
My servers will not be used for commercial purposes, so stability is not the greatest concern.
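For completeness, I also ran into mentions of a vfio module option that skips the interrupt remapping requirement entirely. I haven't tried it, and it gives up exactly the isolation the kernel is warning about (a guest could inject interrupts into the host), so this is only a sketch, not a recommendation:

```shell
# /etc/modprobe.d/vfio_iommu_type1.conf -- reduces isolation, use at own risk
options vfio_iommu_type1 allow_unsafe_interrupts=1
```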
Offline
Hi everyone,
I've been pushing to get GPU passthrough working for a while using the great info found in this forum and elsewhere, but have so far not been able to get past this:
qemu-system-x86_64 -enable-kvm -m 1024 -cpu host -smp 6,sockets=1,cores=6,threads=1 -vga none -device vfio-pci,host=06:00.0,x-vga=on -device vfio-pci,host=06:00.1
qemu-system-x86_64: -device vfio-pci,host=06:00.0,x-vga=on: vfio: error no iommu_group for device
qemu-system-x86_64: -device vfio-pci,host=06:00.0,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=06:00.0,x-vga=on: Device 'vfio-pci' could not be initialized
Interrupt remapping is not the problem, VT-d is broken on your system and not being enabled for DMA:
[ 0.960573] dmar: Device scope type does not match for 0000:00:14.0
[ 0.960576] dmar: Device scope type does not match for 0000:00:14.1
[ 0.960578] dmar: Device scope type does not match for 0000:00:14.2
[ 0.960621] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Check for a BIOS update for the system.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
Wait, you got win7 booting in UEFI? And you have NVidia...
Something tells me that my GPUs have a broken GOP for win7...
Time to dig the vbios again...
I'm sorry, but it seems all I've accomplished was to find another reason why win7 + OVMF won't harmonize. As I struggled getting it to boot at all, I hadn't tested whether the nvidia card worked. I've caught up on that, and nope, it doesn't work for me either.
To summarize the installation procedure:
- prepare Win7 UEFI medium (google for USB UEFI Win7, use diskpart and copy a bootx64.efi to EFI/boot/)
- generate xml for libvirt by virt-manager (Windows 7)
- add uefi-bits
==> won't install (when boot animation should appear, it freezes with 1 cpu @ 100%)
- a Red Hat bug report suggests that smp (>=2) + hv_time is responsible, but using only 1 core and disabling the hypervclock timer in the xml didn't help
- remove everything hyperv-related, actively disable hv_time via <qemu:arg value='-cpu'/><qemu:arg value='host,hv_time=off'/>
==> will install and afterwards boot sometimes (or it switches off instead of showing the boot animation, without noticeable dmesg errors - if you see the 4 dots appear, it should boot fine)
- add nvidia card, install drivers
- remove default qxl graphics
- if nvidia drivers installed: image freezes at the last part of the boot animation, system seems to act normal otherwise (sound etc.)
- without nvidia drivers: hangs during boot; in safe mode, at classpnp.sys
All in all, win7 + ovmf works in theory, but in my experience it's broken at the moment because of timing problems (hv_time-related), and the nvidia card doesn't work.
Contradicting my own memory, interrupt handling (USB audio, latency) is as bad as with newer Windows versions, so I have no reason to try any further. I still have student Win8 keys, and the Win10 experience after one day (same xml as Win8) is quite good!
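One gotcha for anyone copying the <qemu:arg> lines from above: AFAIK libvirt drops qemu:* elements on save unless the qemu namespace is declared on the domain element. The relevant skeleton (double-check against your own xml):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... rest of the domain definition ... -->
  <qemu:commandline>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,hv_time=off'/>
  </qemu:commandline>
</domain>
```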
Offline
Hi everyone,
I have a Debian machine with 3 NVIDIA cards: two GTX 580s (EVGA Hydro Copper 2 FTW) and a 210 GT. I'm trying to get VGA passthrough working with Windows 7 in QEMU.
The 580s are on 0000:01:00.0/0000:01:00.1 and 0000:03:00.0/0000:03:00.1, and the 210 is on 0000:04:00.0/0000:04:00.1. My ultimate goal is to passthrough both 580s while using the 210 for the host.
I can get VGA passthrough working fine with the 580 card on 03 or with the 210 on 04, but not the 580 on 01. When I attempt to do so, the QEMU machine never reaches SeaBIOS and it uses 100% CPU on one core (this is the same behavior as if the GPU ROM is invalid).
I think that this might have something to do with the host BIOS claiming the card in some way. With both 01 and 03 stubbed and 04 used by nouveau, there is interesting behavior on boot. The host BIOS appears on the monitor connected to the 01 580, and the text remains on the screen after the OS boots (with the initial dmesg output and a flashing cursor). Despite this, Debian shows that the card has been claimed by pci-stub and later by vfio. When I attempt to start QEMU with the card, the physical monitor connected to it goes blank and loses signal, never to return. The VM locks up as mentioned earlier. As I wrote before, if I do this exact same process with the 580 in port 03, all works well (it has no physical monitors attached). If I boot with no physical monitors attached to the 01 580, it still does not work.
My host BIOS has no way to change the primary adapter.
Does anyone have any thoughts about this issue? Sorry if this has been covered before -- this thread is getting a bit unwieldy.
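(Side note for anyone debugging the same thing: the kernel exposes which adapter the firmware initialized as primary via sysfs, so the suspicion can be confirmed without rebooting into the BIOS. Sketch, using my cards' addresses:)

```shell
# prints 1 for the boot VGA adapter, 0 for the others
cat /sys/bus/pci/devices/0000:01:00.0/boot_vga
cat /sys/bus/pci/devices/0000:04:00.0/boot_vga
```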
Offline
Does anyone have any thoughts about this issue?
The mainboard BIOS/UEFI really mustn't initialize any of the cards you want to pass through.
Try to swap the PCIe slots. Put the 210 in the first slot and the 580 in the lower ones. The downside might be that the 580 gets less PCIe bandwidth, but if the mainboard doesn't offer any software setting to choose its primary card, it might be the only way. Is there no onboard GPU that could become primary? Or change your mainboard as a last resort; the recommendations within this thread are Gigabyte boards, but I have no experience myself.
Offline
Try to swap the PCIe slots. Put the 210 in the first slot and the 580 in the lower ones.
Ok, I assumed that this would work but I wanted to know if there was a software fix before going this route. Moving the cards to other PCIe slots will require draining the system and then refilling, which is a process that I'd like to avoid if possible.
The motherboard is an ASUS Rampage IV Extreme, which does not come with any onboard GPU (the rationale being that this board is meant to appeal to gamers who will have dedicated GPUs anyway). As you suspect, only the first PCIe slot is x16 while the others are x8, but I'm pretty sure these cards aren't using x16 right now in Windows anyway. I've checked the BIOS several times and I am quite certain there is no option for changing the primary adapter.
Sounds like it is time to go get the bucket and hose...
Offline
Ok, I assumed that this would work but I wanted to know if there was a software fix before going this route. Moving the cards to other PCIe slots will require draining the system and then refilling, which is a process that I'd like to avoid if possible.
The only other way is if there is an option in the BIOS setup to change the primary GPU and set it to the slot where the 210 is.
Offline
Tim_J wrote: Does anyone have any thoughts about this issue?
The mainboard BIOS/UEFI really mustn't initialize any of the cards you want to pass through.
This is not true (in my case both the Intel and AMD cards are initialized; furthermore, the UEFI settings are accessible only via the AMD card, with no output on the Intel). You just can't let a video driver initialize on the specific devices you want to pass through.
Offline
The 580s are on 0000:01:00.0/0000:01:00.1 and 0000:03:00.0/0000:03:00.1, and the 210 is on 0000:04:00.0/0000:04:00.1. My ultimate goal is to passthrough both 580s while using the 210 for the host. [...]
Does anyone have any thoughts about this issue?
Well..
Let me tell you my story:
I have an ASUS F2A55 (so I can't really change my primary GPU) with an HD7750 in the first PCI-E x16 slot (01:00.[0-1]), an HD7750 in the second PCI-E x8 slot (02:00.[0-1]), AND a GT610 in the zeroth (topmost) PCI-E x1 slot (it gets mapped as 04:00.[0-1] when both AMDs are plugged in).
And I've got a signal out of every card connected that way:
01:00.0 - DVI 1st screen;
02:00.0 - D-SUB 1st screen;
04:00.0 - D-SUB 2nd screen.
So, what I get:
1. BIOS/UEFI setup menus and PCI option ROMs get shown via 01:00.0 on the first screen AND I can't change it. That is normal.
2. Linux VTs are shown on 01:00.0 AND I can't change it. That is normal, since it's using the legacy VGA ports.
3. 01:00.0 and 02:00.0 are bound to pci-stub via the kernel command line. And as the host boots, they get bound to vfio-pci by a simple script.
4. If another VT (like X) is activated, the first screen will remain powered on but empty. 02:00.0 is sleeping and not showing anything. It might not get cleared; AFAIR it depends on the distro. But if it isn't cleared, that's normal - VM boot-up will clear it by resetting the device.
5. My distro (Fedora) had plymouth installed; check /proc/iomem for something like vesafb appearing on 01:00.0. Uninstalling plymouth and disabling KMS (kernel mode setting) via the kernel command line helped, but I no longer have a fancy console or a splash screen. Otherwise it won't work when booting the VM - check my very first posts for details.
6. I'm using the proprietary NVIDIA driver, and it has a neat option available in xorg.conf (actually, it will crash without it on my system):
Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BoardName "GeForce GT 610"
BusID "PCI:04:00:00"
EndSection
BusID - that option clearly specifies which device the nvidia driver should poke.
And now about your problems:
1. Give us your dmesg as the VM boots. Maybe there's something useful there.
2. You want to use nouveau, that's fine, but I don't know if there's a BusID option available.
3. You're using nvidia host + nvidia guest. There was an ISSUES section in the op-post, but nbhs deleted it for some reason. I've tried to remember why the hell I patched the host nvidia drivers and how, but failed. I remember there was something related to vgaarb. nbhs: can you please bring that section back?
P.S.
Oh, and BTW, you might have some issues with PCI-E bandwidth; e.g. I have 02:00.[0-1] running at PCI-E 2.0 x4 (when 04:00.[0-1] is plugged in) while physically it's an x8 slot. That depends on the motherboard and PCI-E controller design (I have an Athlon X4 750K and it works in that weird way).
Since you're doing SLI (right?), there shouldn't be any performance drawbacks (the cards are connected via an external bridge), but for me this is an issue - my CrossFire runs via XDMA.
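P.P.S. Since I mentioned "a simple script" in point 3: mine is essentially the usual vfio-bind helper floating around this thread, roughly like this (addresses are examples, and vfio-pci must be loaded first):

```shell
#!/bin/bash
# Usage: vfio-bind 0000:01:00.0 0000:01:00.1 ...
# Unbinds each device from its current driver and registers it with vfio-pci.
modprobe vfio-pci
for dev in "$@"; do
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo "$vendor $device" > /sys/bus/pci/drivers/vfio-pci/new_id
done
```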
Offline
2. You want to use nouveau, that's fine, but I don't know if there's a BusID option available.
BusID is not nvidia-specific; it should work for all drivers AFAIK.
3. You're using nvidia host + nvidia guest. There was an ISSUES section in the op-post, but nbhs deleted it for some reason. I've tried to remember why the hell I patched the host nvidia drivers and how, but failed. I remember there was something related to vgaarb. nbhs: can you please bring that section back?
The issue was that using the proprietary nvidia driver on the host would take the vga arbiter lock and never release it. This would make the guest hang on its first VGA region access as it waits on a lock that will never be released. Maybe nvidia has fixed this in their driver now?
Last edited by aw (2015-02-08 20:34:32)
Offline
3. You're using nvidia host + nvidia guest. There was an ISSUES section in the op-post, but nbhs deleted it for some reason. I've tried to remember why the hell I patched the host nvidia drivers and how, but failed. I remember there was something related to vgaarb. nbhs: can you please bring that section back?
I'm using an nvidia GPU on my host, and I haven't had that problem in a long time now, but yes, why not.
Offline
BusID is not nvidia-specific; it should work for all drivers AFAIK.
If that's so - awesome. I recall seeing it in nvidia's docs somewhere.
The issue was that using the proprietary nvidia driver on the host would take the vga arbiter lock and never release it. This would make the guest hang on its first VGA region access as it waits on a lock that will never be released. Maybe nvidia has fixed this in their driver now?
Well... yeah, they've fixed it. I had to patch 343.22, but it looks like
[ 3.573430] vgaarb: device changed decodes: PCI:0000:03:00.0,olddecodes=io+mem,decodes=none:owns=none
on 346.35.
Whatever, that's not the problem - I've just shared my experience with a non-primary VGA card being used with VFIO. I hope he hasn't drained his system yet.
Offline
5. My distro (Fedora) had plymouth installed; check /proc/iomem for something like vesafb appearing on 01:00.0.
I don't see anything like that. Here's my /proc/iomem:
00000000-00000fff : reserved
00001000-0009e7ff : System RAM
0009e800-0009ffff : reserved
000a0000-000bffff : PCI Bus 0000:00
000c0000-000dffff : PCI Bus 0000:00
000c0000-000c7fff : Video ROM
000e0000-000fffff : reserved
000f0000-000fffff : System ROM
00100000-9beb8fff : System RAM
01000000-015124cf : Kernel code
015124d0-018eb8ff : Kernel data
01a1f000-01af0fff : Kernel bss
9beb9000-9c49ffff : reserved
9c4a0000-9c5adfff : ACPI Tables
9c5ae000-9c7d4fff : ACPI Non-volatile Storage
9c7d5000-9d74afff : reserved
9d74b000-9d74bfff : System RAM
9d74c000-9d7d1fff : ACPI Non-volatile Storage
9d7d2000-9dc11fff : System RAM
9dc12000-9dff3fff : reserved
9dff4000-9dffffff : System RAM
9e000000-9fffffff : RAM buffer
a0000000-ffffffff : PCI Bus 0000:00
a0000000-b1ffffff : PCI Bus 0000:04
a0000000-afffffff : 0000:04:00.0
b0000000-b1ffffff : 0000:04:00.0
b8000000-c1ffffff : PCI Bus 0000:03
b8000000-bfffffff : 0000:03:00.0
c0000000-c1ffffff : 0000:03:00.0
c8000000-d1ffffff : PCI Bus 0000:01
c8000000-cfffffff : 0000:01:00.0
d0000000-d1ffffff : 0000:01:00.0
e0000000-efffffff : PCI MMCONFIG 0000 [bus 00-ff]
e0000000-efffffff : reserved
f6000000-f70fffff : PCI Bus 0000:04
f6000000-f6ffffff : 0000:04:00.0
f7000000-f707ffff : 0000:04:00.0
f7080000-f7083fff : 0000:04:00.1
f7080000-f7083fff : ICH HD audio
f7100000-f711ffff : 0000:00:19.0
f7100000-f711ffff : e1000e
f7120000-f7123fff : 0000:00:1b.0
f7120000-f7123fff : ICH HD audio
f7124000-f71240ff : 0000:00:1f.3
f7125000-f71257ff : 0000:00:1f.2
f7125000-f71257ff : ahci
f7126000-f71263ff : 0000:00:1d.0
f7126000-f71263ff : ehci_hcd
f7127000-f71273ff : 0000:00:1a.0
f7127000-f71273ff : ehci_hcd
f7128000-f7128fff : 0000:00:19.0
f7128000-f7128fff : e1000e
f7129000-f712900f : 0000:00:16.0
f7129000-f712900f : mei_me
f712a000-f712afff : 0000:00:05.4
f8000000-f90fffff : PCI Bus 0000:03
f8000000-f8ffffff : 0000:03:00.0
f9000000-f907ffff : 0000:03:00.0
f9080000-f9083fff : 0000:03:00.1
fa000000-fb0fffff : PCI Bus 0000:01
fa000000-faffffff : 0000:01:00.0
fb000000-fb07ffff : 0000:01:00.0
fb080000-fb083fff : 0000:01:00.1
fb200000-fb2fffff : PCI Bus 0000:0c
fb200000-fb2001ff : 0000:0c:00.0
fb200000-fb2001ff : ahci
fb300000-fb3fffff : PCI Bus 0000:0b
fb300000-fb3001ff : 0000:0b:00.0
fb300000-fb3001ff : ahci
fb400000-fb4fffff : PCI Bus 0000:0a
fb400000-fb407fff : 0000:0a:00.0
fb400000-fb407fff : xhci_hcd
fb500000-fb5fffff : PCI Bus 0000:09
fb500000-fb507fff : 0000:09:00.0
fb500000-fb507fff : xhci_hcd
fb600000-fb6fffff : PCI Bus 0000:08
fb600000-fb607fff : 0000:08:00.0
fb600000-fb607fff : xhci_hcd
fb700000-fb7fffff : PCI Bus 0000:07
fb700000-fb707fff : 0000:07:00.0
fb700000-fb707fff : xhci_hcd
fbffc000-fbffcfff : dmar0
fc000000-fcffffff : pnp 00:00
fd000000-fdffffff : pnp 00:00
fe000000-feafffff : pnp 00:00
feb00000-febfffff : pnp 00:00
fec00000-fec003ff : IOAPIC 0
fec01000-fec013ff : IOAPIC 1
fed00000-fed003ff : HPET 0
fed00000-fed003ff : PNP0103:00
fed1c000-fed1ffff : reserved
fed1c000-fed1ffff : pnp 00:05
fed1f410-fed1f414 : iTCO_wdt
fed1f410-fed1f414 : iTCO_wdt
fed45000-fedfffff : pnp 00:00
fee00000-feefffff : pnp 00:00
fee00000-fee00fff : Local APIC
ff000000-ffffffff : reserved
ff000000-ffffffff : pnp 00:05
100000000-65fffffff : System RAM
1. Give us your dmesg as the VM boots. Maybe there's something useful there.
After booting, 01 and 03 are bound to pci-stub. I run a script to rebind them to vfio, and this is the only addition to dmesg:
VFIO - User Level meta-driver version: 0.3
When I attempt to start the VM, no new messages are written.
Here's an excerpt of my dmesg log, from boot to running the VM:
...
[ 0.000000] Linux version 3.16.0-4-amd64 (debian-kernel@lists.debian.org) (gcc version 4.8.3 (Debian 4.8.3-16) ) #1 SMP Debian 3.16.7-ckt2-1 (2014-12-08)
...
[ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-3.16.0-4-amd64 root=/dev/mapper/vg1-lv.root ro quiet intel_iommu=on
...
[ 0.000000] DMI: System manufacturer System Product Name/RAMPAGE IV EXTREME, BIOS 4901 05/14/2014
...
[ 0.000000] Intel-IOMMU: enabled
...
[ 0.000000] Memory: 24661772K/25102988K available (5192K kernel code, 942K rwdata, 1828K rodata, 1200K init, 840K bss, 441216K reserved)
...
[ 0.053422] Freeing SMP alternatives memory: 20K (ffffffff81a19000 - ffffffff81a1e000)
[ 0.054163] ftrace: allocating 21561 entries in 85 pages
[ 0.060881] dmar: Host address width 46
[ 0.060883] dmar: DRHD base: 0x000000fbffc000 flags: 0x1
[ 0.060890] dmar: IOMMU 0: reg_base_addr fbffc000 ver 1:0 cap d2078c106f0466 ecap f020de
[ 0.060891] dmar: RMRR base: 0x0000009c475000 end: 0x0000009c482fff
[ 0.060892] dmar: ATSR flags: 0x0
[ 0.060893] dmar: RHSA base: 0x000000fbffc000 proximity domain: 0x0
[ 0.060982] IOAPIC id 0 under DRHD base 0xfbffc000 IOMMU 0
[ 0.060982] IOAPIC id 2 under DRHD base 0xfbffc000 IOMMU 0
[ 0.060983] HPET id 0 under DRHD base 0xfbffc000
[ 0.060984] Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.061108] Enabled IRQ remapping in x2apic mode
[ 0.061109] Enabling x2apic
[ 0.061110] Enabled x2apic
[ 0.061113] Switched APIC routing to cluster x2apic.
[ 0.061610] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.101249] smpboot: CPU0: Intel(R) Core(TM) i7-4960X CPU @ 3.60GHz (fam: 06, model: 3e, stepping: 04)
[ 0.101254] TSC deadline timer enabled
[ 0.101270] Performance Events: PEBS fmt1+, 16-deep LBR, IvyBridge events, full-width counters, Intel PMU driver.
[ 0.101284] ... version: 3
[ 0.101285] ... bit width: 48
[ 0.101285] ... generic registers: 4
[ 0.101286] ... value mask: 0000ffffffffffff
[ 0.101286] ... max period: 0000ffffffffffff
[ 0.101287] ... fixed-purpose events: 3
[ 0.101287] ... event mask: 000000070000000f
[ 0.102318] x86: Booting SMP configuration:
[ 0.102319] .... node #0, CPUs: #1
[ 0.117314] NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
[ 0.117394] #2 #3 #4 #5 #6 #7 #8 #9 #10 #11
[ 0.255363] x86: Booted up 1 node, 12 CPUs
[ 0.255366] smpboot: Total of 12 processors activated (86439.09 BogoMIPS)
...
[ 0.270631] ACPI: bus type PCI registered
[ 0.270632] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.270701] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
[ 0.270702] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
[ 0.270903] PCI: Using configuration type 1 for base access
[ 0.282954] ACPI: Added _OSI(Module Device)
[ 0.282956] ACPI: Added _OSI(Processor Device)
[ 0.282957] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.282957] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.291313] ACPI: Executed 1 blocks of module-level executable AML code
[ 0.379221] ACPI: Interpreter enabled
[ 0.379226] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S1_] (20140424/hwxface-580)
[ 0.379228] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20140424/hwxface-580)
[ 0.379236] ACPI: (supports S0 S3 S4 S5)
[ 0.379237] ACPI: Using IOAPIC for interrupt routing
[ 0.379258] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 0.388311] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
[ 0.388315] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[ 0.388417] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME AER]
[ 0.388510] acpi PNP0A08:00: _OSC: OS now controls [PCIeCapability]
[ 0.388747] PCI host bridge to bus 0000:00
[ 0.388749] pci_bus 0000:00: root bus resource [bus 00-fe]
[ 0.388751] pci_bus 0000:00: root bus resource [io 0x0000-0x03af]
[ 0.388752] pci_bus 0000:00: root bus resource [io 0x03e0-0x0cf7]
[ 0.388753] pci_bus 0000:00: root bus resource [io 0x03b0-0x03df]
[ 0.388753] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff]
[ 0.388755] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[ 0.388755] pci_bus 0000:00: root bus resource [mem 0x000c0000-0x000dffff]
[ 0.388756] pci_bus 0000:00: root bus resource [mem 0xa0000000-0xffffffff]
[ 0.388765] pci 0000:00:00.0: [8086:0e00] type 00 class 0x060000
[ 0.388820] pci 0000:00:00.0: PME# supported from D0 D3hot D3cold
[ 0.388882] pci 0000:00:01.0: [8086:0e02] type 01 class 0x060400
[ 0.388947] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[ 0.388984] pci 0000:00:01.0: System wakeup disabled by ACPI
[ 0.389015] pci 0000:00:02.0: [8086:0e04] type 01 class 0x060400
[ 0.389080] pci 0000:00:02.0: PME# supported from D0 D3hot D3cold
[ 0.389116] pci 0000:00:02.0: System wakeup disabled by ACPI
[ 0.389147] pci 0000:00:03.0: [8086:0e08] type 01 class 0x060400
[ 0.389213] pci 0000:00:03.0: PME# supported from D0 D3hot D3cold
[ 0.389248] pci 0000:00:03.0: System wakeup disabled by ACPI
[ 0.389277] pci 0000:00:03.2: [8086:0e0a] type 01 class 0x060400
[ 0.389341] pci 0000:00:03.2: PME# supported from D0 D3hot D3cold
[ 0.389377] pci 0000:00:03.2: System wakeup disabled by ACPI
[ 0.389403] pci 0000:00:05.0: [8086:0e28] type 00 class 0x088000
[ 0.389492] pci 0000:00:05.2: [8086:0e2a] type 00 class 0x088000
[ 0.389579] pci 0000:00:05.4: [8086:0e2c] type 00 class 0x080020
[ 0.389588] pci 0000:00:05.4: reg 0x10: [mem 0xf712a000-0xf712afff]
[ 0.389691] pci 0000:00:11.0: [8086:1d3e] type 01 class 0x060400
[ 0.389776] pci 0000:00:11.0: PME# supported from D0 D3hot D3cold
[ 0.389847] pci 0000:00:16.0: [8086:1d3a] type 00 class 0x078000
[ 0.389867] pci 0000:00:16.0: reg 0x10: [mem 0xf7129000-0xf712900f 64bit]
[ 0.389934] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
[ 0.389995] pci 0000:00:19.0: [8086:1503] type 00 class 0x020000
[ 0.390009] pci 0000:00:19.0: reg 0x10: [mem 0xf7100000-0xf711ffff]
[ 0.390016] pci 0000:00:19.0: reg 0x14: [mem 0xf7128000-0xf7128fff]
[ 0.390023] pci 0000:00:19.0: reg 0x18: [io 0xf040-0xf05f]
[ 0.390076] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[ 0.390105] pci 0000:00:19.0: System wakeup disabled by ACPI
[ 0.390136] pci 0000:00:1a.0: [8086:1d2d] type 00 class 0x0c0320
[ 0.390154] pci 0000:00:1a.0: reg 0x10: [mem 0xf7127000-0xf71273ff]
[ 0.390233] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[ 0.390264] pci 0000:00:1a.0: System wakeup disabled by ACPI
[ 0.390293] pci 0000:00:1b.0: [8086:1d20] type 00 class 0x040300
[ 0.390307] pci 0000:00:1b.0: reg 0x10: [mem 0xf7120000-0xf7123fff 64bit]
[ 0.390371] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[ 0.390426] pci 0000:00:1c.0: [8086:1d10] type 01 class 0x060400
[ 0.390501] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[ 0.390519] pci 0000:00:1c.0: Disabling UPDCR peer decodes
[ 0.390523] pci 0000:00:1c.0: Enabling MPC IRBNCE
[ 0.390525] pci 0000:00:1c.0: Intel PCH root port ACS workaround enabled
[ 0.390548] pci 0000:00:1c.0: System wakeup disabled by ACPI
[ 0.390576] pci 0000:00:1c.1: [8086:1d12] type 01 class 0x060400
[ 0.390648] pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
[ 0.390667] pci 0000:00:1c.1: Enabling MPC IRBNCE
[ 0.390669] pci 0000:00:1c.1: Intel PCH root port ACS workaround enabled
[ 0.390691] pci 0000:00:1c.1: System wakeup disabled by ACPI
[ 0.390718] pci 0000:00:1c.2: [8086:1d14] type 01 class 0x060400
[ 0.390790] pci 0000:00:1c.2: PME# supported from D0 D3hot D3cold
[ 0.390808] pci 0000:00:1c.2: Enabling MPC IRBNCE
[ 0.390810] pci 0000:00:1c.2: Intel PCH root port ACS workaround enabled
[ 0.390831] pci 0000:00:1c.2: System wakeup disabled by ACPI
[ 0.390860] pci 0000:00:1c.3: [8086:1d16] type 01 class 0x060400
[ 0.390932] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[ 0.390950] pci 0000:00:1c.3: Enabling MPC IRBNCE
[ 0.390951] pci 0000:00:1c.3: Intel PCH root port ACS workaround enabled
[ 0.390973] pci 0000:00:1c.3: System wakeup disabled by ACPI
[ 0.391000] pci 0000:00:1c.4: [8086:1d18] type 01 class 0x060400
[ 0.391072] pci 0000:00:1c.4: PME# supported from D0 D3hot D3cold
[ 0.391090] pci 0000:00:1c.4: Enabling MPC IRBNCE
[ 0.391092] pci 0000:00:1c.4: Intel PCH root port ACS workaround enabled
[ 0.391113] pci 0000:00:1c.4: System wakeup disabled by ACPI
[ 0.391140] pci 0000:00:1c.5: [8086:1d1a] type 01 class 0x060400
[ 0.391212] pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
[ 0.391231] pci 0000:00:1c.5: Enabling MPC IRBNCE
[ 0.391233] pci 0000:00:1c.5: Intel PCH root port ACS workaround enabled
[ 0.391254] pci 0000:00:1c.5: System wakeup disabled by ACPI
[ 0.391283] pci 0000:00:1c.7: [8086:1d1e] type 01 class 0x060400
[ 0.391355] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold
[ 0.391373] pci 0000:00:1c.7: Enabling MPC IRBNCE
[ 0.391375] pci 0000:00:1c.7: Intel PCH root port ACS workaround enabled
[ 0.391396] pci 0000:00:1c.7: System wakeup disabled by ACPI
[ 0.391427] pci 0000:00:1d.0: [8086:1d26] type 00 class 0x0c0320
[ 0.391445] pci 0000:00:1d.0: reg 0x10: [mem 0xf7126000-0xf71263ff]
[ 0.391524] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[ 0.391555] pci 0000:00:1d.0: System wakeup disabled by ACPI
[ 0.391581] pci 0000:00:1e.0: [8086:244e] type 01 class 0x060401
[ 0.391645] pci 0000:00:1e.0: System wakeup disabled by ACPI
[ 0.391675] pci 0000:00:1f.0: [8086:1d41] type 00 class 0x060100
[ 0.391818] pci 0000:00:1f.2: [8086:1d02] type 00 class 0x010601
[ 0.391833] pci 0000:00:1f.2: reg 0x10: [io 0xf090-0xf097]
[ 0.391839] pci 0000:00:1f.2: reg 0x14: [io 0xf080-0xf083]
[ 0.391845] pci 0000:00:1f.2: reg 0x18: [io 0xf070-0xf077]
[ 0.391851] pci 0000:00:1f.2: reg 0x1c: [io 0xf060-0xf063]
[ 0.391857] pci 0000:00:1f.2: reg 0x20: [io 0xf020-0xf03f]
[ 0.391863] pci 0000:00:1f.2: reg 0x24: [mem 0xf7125000-0xf71257ff]
[ 0.391900] pci 0000:00:1f.2: PME# supported from D3hot
[ 0.391951] pci 0000:00:1f.3: [8086:1d22] type 00 class 0x0c0500
[ 0.391963] pci 0000:00:1f.3: reg 0x10: [mem 0xf7124000-0xf71240ff 64bit]
[ 0.391981] pci 0000:00:1f.3: reg 0x20: [io 0xf000-0xf01f]
[ 0.392074] pci 0000:00:01.0: PCI bridge to [bus 02]
[ 0.392123] pci 0000:01:00.0: [10de:1080] type 00 class 0x030000
[ 0.392131] pci 0000:01:00.0: reg 0x10: [mem 0xfa000000-0xfaffffff]
[ 0.392138] pci 0000:01:00.0: reg 0x14: [mem 0xc8000000-0xcfffffff 64bit pref]
[ 0.392144] pci 0000:01:00.0: reg 0x1c: [mem 0xd0000000-0xd1ffffff 64bit pref]
[ 0.392149] pci 0000:01:00.0: reg 0x24: [io 0xe000-0xe07f]
[ 0.392154] pci 0000:01:00.0: reg 0x30: [mem 0xfb000000-0xfb07ffff pref]
[ 0.392216] pci 0000:01:00.1: [10de:0e09] type 00 class 0x040300
[ 0.392223] pci 0000:01:00.1: reg 0x10: [mem 0xfb080000-0xfb083fff]
[ 0.398491] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 0.398494] pci 0000:00:02.0: bridge window [io 0xe000-0xefff]
[ 0.398496] pci 0000:00:02.0: bridge window [mem 0xfa000000-0xfb0fffff]
[ 0.398500] pci 0000:00:02.0: bridge window [mem 0xc8000000-0xd1ffffff 64bit pref]
[ 0.398546] pci 0000:03:00.0: [10de:1080] type 00 class 0x030000
[ 0.398553] pci 0000:03:00.0: reg 0x10: [mem 0xf8000000-0xf8ffffff]
[ 0.398559] pci 0000:03:00.0: reg 0x14: [mem 0xb8000000-0xbfffffff 64bit pref]
[ 0.398565] pci 0000:03:00.0: reg 0x1c: [mem 0xc0000000-0xc1ffffff 64bit pref]
[ 0.398569] pci 0000:03:00.0: reg 0x24: [io 0xd000-0xd07f]
[ 0.398573] pci 0000:03:00.0: reg 0x30: [mem 0xf9000000-0xf907ffff pref]
[ 0.398640] pci 0000:03:00.1: [10de:0e09] type 00 class 0x040300
[ 0.398648] pci 0000:03:00.1: reg 0x10: [mem 0xf9080000-0xf9083fff]
[ 0.406471] pci 0000:00:03.0: PCI bridge to [bus 03]
[ 0.406477] pci 0000:00:03.0: bridge window [io 0xd000-0xdfff]
[ 0.406482] pci 0000:00:03.0: bridge window [mem 0xf8000000-0xf90fffff]
[ 0.406500] pci 0000:00:03.0: bridge window [mem 0xb8000000-0xc1ffffff 64bit pref]
[ 0.406546] pci 0000:04:00.0: [10de:0a65] type 00 class 0x030000
[ 0.406553] pci 0000:04:00.0: reg 0x10: [mem 0xf6000000-0xf6ffffff]
[ 0.406559] pci 0000:04:00.0: reg 0x14: [mem 0xa0000000-0xafffffff 64bit pref]
[ 0.406565] pci 0000:04:00.0: reg 0x1c: [mem 0xb0000000-0xb1ffffff 64bit pref]
[ 0.406569] pci 0000:04:00.0: reg 0x24: [io 0xc000-0xc07f]
[ 0.406573] pci 0000:04:00.0: reg 0x30: [mem 0xf7000000-0xf707ffff pref]
[ 0.406639] pci 0000:04:00.1: [10de:0be3] type 00 class 0x040300
[ 0.406647] pci 0000:04:00.1: reg 0x10: [mem 0xf7080000-0xf7083fff]
[ 0.414468] pci 0000:00:03.2: PCI bridge to [bus 04]
[ 0.414474] pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]
[ 0.414479] pci 0000:00:03.2: bridge window [mem 0xf6000000-0xf70fffff]
[ 0.414498] pci 0000:00:03.2: bridge window [mem 0xa0000000-0xb1ffffff 64bit pref]
[ 0.414548] pci 0000:00:11.0: PCI bridge to [bus 05]
[ 0.414602] pci 0000:00:1c.0: PCI bridge to [bus 06]
[ 0.414679] pci 0000:07:00.0: [1b21:1042] type 00 class 0x0c0330
[ 0.414709] pci 0000:07:00.0: reg 0x10: [mem 0xfb700000-0xfb707fff 64bit]
[ 0.414858] pci 0000:07:00.0: PME# supported from D3hot D3cold
[ 0.422467] pci 0000:00:1c.1: PCI bridge to [bus 07]
[ 0.422475] pci 0000:00:1c.1: bridge window [mem 0xfb700000-0xfb7fffff]
[ 0.422564] pci 0000:08:00.0: [1b21:1042] type 00 class 0x0c0330
[ 0.422594] pci 0000:08:00.0: reg 0x10: [mem 0xfb600000-0xfb607fff 64bit]
[ 0.422742] pci 0000:08:00.0: PME# supported from D3hot D3cold
[ 0.430463] pci 0000:00:1c.2: PCI bridge to [bus 08]
[ 0.430471] pci 0000:00:1c.2: bridge window [mem 0xfb600000-0xfb6fffff]
[ 0.430559] pci 0000:09:00.0: [1b21:1042] type 00 class 0x0c0330
[ 0.430589] pci 0000:09:00.0: reg 0x10: [mem 0xfb500000-0xfb507fff 64bit]
[ 0.430738] pci 0000:09:00.0: PME# supported from D3hot D3cold
[ 0.438460] pci 0000:00:1c.3: PCI bridge to [bus 09]
[ 0.438469] pci 0000:00:1c.3: bridge window [mem 0xfb500000-0xfb5fffff]
[ 0.438559] pci 0000:0a:00.0: [1b21:1042] type 00 class 0x0c0330
[ 0.438589] pci 0000:0a:00.0: reg 0x10: [mem 0xfb400000-0xfb407fff 64bit]
[ 0.438737] pci 0000:0a:00.0: PME# supported from D3hot D3cold
[ 0.446456] pci 0000:00:1c.4: PCI bridge to [bus 0a]
[ 0.446465] pci 0000:00:1c.4: bridge window [mem 0xfb400000-0xfb4fffff]
[ 0.446549] pci 0000:0b:00.0: [1b21:0612] type 00 class 0x010601
[ 0.446569] pci 0000:0b:00.0: reg 0x10: [io 0xb050-0xb057]
[ 0.446581] pci 0000:0b:00.0: reg 0x14: [io 0xb040-0xb043]
[ 0.446593] pci 0000:0b:00.0: reg 0x18: [io 0xb030-0xb037]
[ 0.446605] pci 0000:0b:00.0: reg 0x1c: [io 0xb020-0xb023]
[ 0.446617] pci 0000:0b:00.0: reg 0x20: [io 0xb000-0xb01f]
[ 0.446629] pci 0000:0b:00.0: reg 0x24: [mem 0xfb300000-0xfb3001ff]
[ 0.454453] pci 0000:00:1c.5: PCI bridge to [bus 0b]
[ 0.454459] pci 0000:00:1c.5: bridge window [io 0xb000-0xbfff]
[ 0.454464] pci 0000:00:1c.5: bridge window [mem 0xfb300000-0xfb3fffff]
[ 0.454548] pci 0000:0c:00.0: [1b21:0612] type 00 class 0x010601
[ 0.454568] pci 0000:0c:00.0: reg 0x10: [io 0xa050-0xa057]
[ 0.454580] pci 0000:0c:00.0: reg 0x14: [io 0xa040-0xa043]
[ 0.454592] pci 0000:0c:00.0: reg 0x18: [io 0xa030-0xa037]
[ 0.454604] pci 0000:0c:00.0: reg 0x1c: [io 0xa020-0xa023]
[ 0.454616] pci 0000:0c:00.0: reg 0x20: [io 0xa000-0xa01f]
[ 0.454628] pci 0000:0c:00.0: reg 0x24: [mem 0xfb200000-0xfb2001ff]
[ 0.462450] pci 0000:00:1c.7: PCI bridge to [bus 0c]
[ 0.462456] pci 0000:00:1c.7: bridge window [io 0xa000-0xafff]
[ 0.462461] pci 0000:00:1c.7: bridge window [mem 0xfb200000-0xfb2fffff]
[ 0.462536] pci 0000:00:1e.0: PCI bridge to [bus 0d] (subtractive decode)
[ 0.462543] pci 0000:00:1e.0: bridge window [io 0x0000-0x03af] (subtractive decode)
[ 0.462544] pci 0000:00:1e.0: bridge window [io 0x03e0-0x0cf7] (subtractive decode)
[ 0.462545] pci 0000:00:1e.0: bridge window [io 0x03b0-0x03df] (subtractive decode)
[ 0.462546] pci 0000:00:1e.0: bridge window [io 0x0d00-0xffff] (subtractive decode)
[ 0.462547] pci 0000:00:1e.0: bridge window [mem 0x000a0000-0x000bffff] (subtractive decode)
[ 0.462548] pci 0000:00:1e.0: bridge window [mem 0x000c0000-0x000dffff] (subtractive decode)
[ 0.462549] pci 0000:00:1e.0: bridge window [mem 0xa0000000-0xffffffff] (subtractive decode)
[ 0.462820] ACPI: PCI Root Bridge [UNC0] (domain 0000 [bus ff])
[ 0.462822] acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[ 0.462836] acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
[ 0.462869] PCI host bridge to bus 0000:ff
[ 0.462870] pci_bus 0000:ff: root bus resource [bus ff]
[ 0.462876] pci 0000:ff:08.0: [8086:0e80] type 00 class 0x088000
[ 0.462914] pci 0000:ff:09.0: [8086:0e90] type 00 class 0x088000
[ 0.462950] pci 0000:ff:0a.0: [8086:0ec0] type 00 class 0x088000
[ 0.462981] pci 0000:ff:0a.1: [8086:0ec1] type 00 class 0x088000
[ 0.463012] pci 0000:ff:0a.2: [8086:0ec2] type 00 class 0x088000
[ 0.463042] pci 0000:ff:0a.3: [8086:0ec3] type 00 class 0x088000
[ 0.463073] pci 0000:ff:0b.0: [8086:0e1e] type 00 class 0x088000
[ 0.463102] pci 0000:ff:0b.3: [8086:0e1f] type 00 class 0x088000
[ 0.463131] pci 0000:ff:0c.0: [8086:0ee0] type 00 class 0x088000
[ 0.463159] pci 0000:ff:0c.1: [8086:0ee2] type 00 class 0x088000
[ 0.463187] pci 0000:ff:0c.2: [8086:0ee4] type 00 class 0x088000
[ 0.463217] pci 0000:ff:0d.0: [8086:0ee1] type 00 class 0x088000
[ 0.463247] pci 0000:ff:0d.1: [8086:0ee3] type 00 class 0x088000
[ 0.463275] pci 0000:ff:0d.2: [8086:0ee5] type 00 class 0x088000
[ 0.463306] pci 0000:ff:0e.0: [8086:0ea0] type 00 class 0x088000
[ 0.463336] pci 0000:ff:0e.1: [8086:0e30] type 00 class 0x110100
[ 0.463371] pci 0000:ff:0f.0: [8086:0ea8] type 00 class 0x088000
[ 0.463412] pci 0000:ff:0f.1: [8086:0e71] type 00 class 0x088000
[ 0.463453] pci 0000:ff:0f.2: [8086:0eaa] type 00 class 0x088000
[ 0.463494] pci 0000:ff:0f.3: [8086:0eab] type 00 class 0x088000
[ 0.463534] pci 0000:ff:0f.4: [8086:0eac] type 00 class 0x088000
[ 0.463575] pci 0000:ff:0f.5: [8086:0ead] type 00 class 0x088000
[ 0.463616] pci 0000:ff:10.0: [8086:0eb0] type 00 class 0x088000
[ 0.463658] pci 0000:ff:10.1: [8086:0eb1] type 00 class 0x088000
[ 0.463701] pci 0000:ff:10.2: [8086:0eb2] type 00 class 0x088000
[ 0.463742] pci 0000:ff:10.3: [8086:0eb3] type 00 class 0x088000
[ 0.463784] pci 0000:ff:10.4: [8086:0eb4] type 00 class 0x088000
[ 0.463826] pci 0000:ff:10.5: [8086:0eb5] type 00 class 0x088000
[ 0.463868] pci 0000:ff:10.6: [8086:0eb6] type 00 class 0x088000
[ 0.463909] pci 0000:ff:10.7: [8086:0eb7] type 00 class 0x088000
[ 0.463950] pci 0000:ff:13.0: [8086:0e1d] type 00 class 0x088000
[ 0.463979] pci 0000:ff:13.1: [8086:0e34] type 00 class 0x110100
[ 0.464009] pci 0000:ff:13.4: [8086:0e81] type 00 class 0x088000
[ 0.464038] pci 0000:ff:13.5: [8086:0e36] type 00 class 0x110100
[ 0.464069] pci 0000:ff:16.0: [8086:0ec8] type 00 class 0x088000
[ 0.464098] pci 0000:ff:16.1: [8086:0ec9] type 00 class 0x088000
[ 0.464127] pci 0000:ff:16.2: [8086:0eca] type 00 class 0x088000
[ 0.464218] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 10 *11 12 14 15)
[ 0.464251] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 *10 11 12 14 15)
[ 0.464283] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 11 12 14 *15)
[ 0.464314] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 *4 5 6 10 11 12 14 15)
[ 0.464345] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 10 11 12 *14 15)
[ 0.464376] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 *5 6 7 10 11 12 14 15)
[ 0.464407] ACPI: PCI Interrupt Link [LNKG] (IRQs *3 4 5 6 7 10 11 12 14 15)
[ 0.464438] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 *7 10 11 12 14 15)
[ 0.466723] ACPI: Enabled 3 GPEs in block 00 to 3F
[ 0.466804] vgaarb: setting as boot device: PCI:0000:01:00.0
[ 0.466806] vgaarb: device added: PCI:0000:01:00.0,decodes=io+mem,owns=io+mem,locks=none
[ 0.466808] vgaarb: device added: PCI:0000:03:00.0,decodes=io+mem,owns=none,locks=none
[ 0.466810] vgaarb: device added: PCI:0000:04:00.0,decodes=io+mem,owns=none,locks=none
[ 0.466814] vgaarb: loaded
[ 0.466815] vgaarb: bridge control possible 0000:04:00.0
[ 0.466816] vgaarb: bridge control possible 0000:03:00.0
[ 0.466816] vgaarb: bridge control possible 0000:01:00.0
[ 0.466857] PCI: Using ACPI for IRQ routing
[ 0.472217] PCI: pci_cache_line_size set to 64 bytes
[ 0.472319] e820: reserve RAM buffer [mem 0x0009e800-0x0009ffff]
[ 0.472320] e820: reserve RAM buffer [mem 0x9beb9000-0x9bffffff]
[ 0.472321] e820: reserve RAM buffer [mem 0x9d74c000-0x9fffffff]
[ 0.472322] e820: reserve RAM buffer [mem 0x9dc12000-0x9fffffff]
[ 0.472323] e820: reserve RAM buffer [mem 0x9e000000-0x9fffffff]
[ 0.472427] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
[ 0.472430] hpet0: 8 comparators, 64-bit 14.318180 MHz counter
[ 0.474466] Switched to clocksource hpet
[ 0.478098] pnp: PnP ACPI init
[ 0.478106] ACPI: bus type PNP registered
[ 0.478172] system 00:00: [mem 0xfc000000-0xfcffffff] has been reserved
[ 0.478174] system 00:00: [mem 0xfd000000-0xfdffffff] has been reserved
[ 0.478175] system 00:00: [mem 0xfe000000-0xfeafffff] has been reserved
[ 0.478176] system 00:00: [mem 0xfeb00000-0xfebfffff] has been reserved
[ 0.478177] system 00:00: [mem 0xfed00400-0xfed3ffff] could not be reserved
[ 0.478179] system 00:00: [mem 0xfed45000-0xfedfffff] has been reserved
[ 0.478180] system 00:00: [mem 0xfee00000-0xfeefffff] has been reserved
[ 0.478182] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
[ 0.478236] system 00:01: [mem 0xfbffc000-0xfbffdfff] could not be reserved
[ 0.478237] system 00:01: Plug and Play ACPI device, IDs PNP0c02 (active)
[ 0.478304] system 00:02: [io 0x0290-0x029f] has been reserved
[ 0.478306] system 00:02: Plug and Play ACPI device, IDs PNP0c02 (active)
[ 0.478329] pnp 00:03: Plug and Play ACPI device, IDs PNP0b00 (active)
[ 0.478377] system 00:04: [io 0x04d0-0x04d1] has been reserved
[ 0.478380] system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
[ 0.478538] system 00:05: [io 0x0400-0x0453] could not be reserved
[ 0.478539] system 00:05: [io 0x0458-0x047f] has been reserved
[ 0.478540] system 00:05: [io 0x1180-0x119f] has been reserved
[ 0.478541] system 00:05: [io 0x0500-0x057f] has been reserved
[ 0.478543] system 00:05: [mem 0xfed1c000-0xfed1ffff] has been reserved
[ 0.478544] system 00:05: [mem 0xfec00000-0xfecfffff] could not be reserved
[ 0.478546] system 00:05: [mem 0xff000000-0xffffffff] has been reserved
[ 0.478547] system 00:05: Plug and Play ACPI device, IDs PNP0c01 (active)
[ 0.478594] system 00:06: [io 0x0454-0x0457] has been reserved
[ 0.478596] system 00:06: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
[ 0.478764] pnp: PnP ACPI: found 7 devices
[ 0.478764] ACPI: bus type PNP unregistered
[ 0.484701] pci 0000:00:01.0: PCI bridge to [bus 02]
[ 0.484709] pci 0000:00:02.0: PCI bridge to [bus 01]
[ 0.484710] pci 0000:00:02.0: bridge window [io 0xe000-0xefff]
[ 0.484714] pci 0000:00:02.0: bridge window [mem 0xfa000000-0xfb0fffff]
[ 0.484716] pci 0000:00:02.0: bridge window [mem 0xc8000000-0xd1ffffff 64bit pref]
[ 0.484720] pci 0000:00:03.0: PCI bridge to [bus 03]
[ 0.484721] pci 0000:00:03.0: bridge window [io 0xd000-0xdfff]
[ 0.484725] pci 0000:00:03.0: bridge window [mem 0xf8000000-0xf90fffff]
[ 0.484727] pci 0000:00:03.0: bridge window [mem 0xb8000000-0xc1ffffff 64bit pref]
[ 0.484731] pci 0000:00:03.2: PCI bridge to [bus 04]
[ 0.484732] pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]
[ 0.484735] pci 0000:00:03.2: bridge window [mem 0xf6000000-0xf70fffff]
[ 0.484738] pci 0000:00:03.2: bridge window [mem 0xa0000000-0xb1ffffff 64bit pref]
[ 0.484741] pci 0000:00:11.0: PCI bridge to [bus 05]
[ 0.484752] pci 0000:00:1c.0: PCI bridge to [bus 06]
[ 0.484761] pci 0000:00:1c.1: PCI bridge to [bus 07]
[ 0.484764] pci 0000:00:1c.1: bridge window [mem 0xfb700000-0xfb7fffff]
[ 0.484770] pci 0000:00:1c.2: PCI bridge to [bus 08]
[ 0.484774] pci 0000:00:1c.2: bridge window [mem 0xfb600000-0xfb6fffff]
[ 0.484780] pci 0000:00:1c.3: PCI bridge to [bus 09]
[ 0.484784] pci 0000:00:1c.3: bridge window [mem 0xfb500000-0xfb5fffff]
[ 0.484790] pci 0000:00:1c.4: PCI bridge to [bus 0a]
[ 0.484794] pci 0000:00:1c.4: bridge window [mem 0xfb400000-0xfb4fffff]
[ 0.484800] pci 0000:00:1c.5: PCI bridge to [bus 0b]
[ 0.484802] pci 0000:00:1c.5: bridge window [io 0xb000-0xbfff]
[ 0.484805] pci 0000:00:1c.5: bridge window [mem 0xfb300000-0xfb3fffff]
[ 0.484812] pci 0000:00:1c.7: PCI bridge to [bus 0c]
[ 0.484813] pci 0000:00:1c.7: bridge window [io 0xa000-0xafff]
[ 0.484817] pci 0000:00:1c.7: bridge window [mem 0xfb200000-0xfb2fffff]
[ 0.484823] pci 0000:00:1e.0: PCI bridge to [bus 0d]
[ 0.484831] pci_bus 0000:00: resource 4 [io 0x0000-0x03af]
[ 0.484832] pci_bus 0000:00: resource 5 [io 0x03e0-0x0cf7]
[ 0.484833] pci_bus 0000:00: resource 6 [io 0x03b0-0x03df]
[ 0.484834] pci_bus 0000:00: resource 7 [io 0x0d00-0xffff]
[ 0.484835] pci_bus 0000:00: resource 8 [mem 0x000a0000-0x000bffff]
[ 0.484836] pci_bus 0000:00: resource 9 [mem 0x000c0000-0x000dffff]
[ 0.484837] pci_bus 0000:00: resource 10 [mem 0xa0000000-0xffffffff]
[ 0.484838] pci_bus 0000:01: resource 0 [io 0xe000-0xefff]
[ 0.484839] pci_bus 0000:01: resource 1 [mem 0xfa000000-0xfb0fffff]
[ 0.484840] pci_bus 0000:01: resource 2 [mem 0xc8000000-0xd1ffffff 64bit pref]
[ 0.484841] pci_bus 0000:03: resource 0 [io 0xd000-0xdfff]
[ 0.484842] pci_bus 0000:03: resource 1 [mem 0xf8000000-0xf90fffff]
[ 0.484843] pci_bus 0000:03: resource 2 [mem 0xb8000000-0xc1ffffff 64bit pref]
[ 0.484844] pci_bus 0000:04: resource 0 [io 0xc000-0xcfff]
[ 0.484845] pci_bus 0000:04: resource 1 [mem 0xf6000000-0xf70fffff]
[ 0.484846] pci_bus 0000:04: resource 2 [mem 0xa0000000-0xb1ffffff 64bit pref]
[ 0.484847] pci_bus 0000:07: resource 1 [mem 0xfb700000-0xfb7fffff]
[ 0.484848] pci_bus 0000:08: resource 1 [mem 0xfb600000-0xfb6fffff]
[ 0.484849] pci_bus 0000:09: resource 1 [mem 0xfb500000-0xfb5fffff]
[ 0.484850] pci_bus 0000:0a: resource 1 [mem 0xfb400000-0xfb4fffff]
[ 0.484851] pci_bus 0000:0b: resource 0 [io 0xb000-0xbfff]
[ 0.484852] pci_bus 0000:0b: resource 1 [mem 0xfb300000-0xfb3fffff]
[ 0.484853] pci_bus 0000:0c: resource 0 [io 0xa000-0xafff]
[ 0.484854] pci_bus 0000:0c: resource 1 [mem 0xfb200000-0xfb2fffff]
[ 0.484855] pci_bus 0000:0d: resource 4 [io 0x0000-0x03af]
[ 0.484856] pci_bus 0000:0d: resource 5 [io 0x03e0-0x0cf7]
[ 0.484857] pci_bus 0000:0d: resource 6 [io 0x03b0-0x03df]
[ 0.484858] pci_bus 0000:0d: resource 7 [io 0x0d00-0xffff]
[ 0.484859] pci_bus 0000:0d: resource 8 [mem 0x000a0000-0x000bffff]
[ 0.484860] pci_bus 0000:0d: resource 9 [mem 0x000c0000-0x000dffff]
[ 0.484860] pci_bus 0000:0d: resource 10 [mem 0xa0000000-0xffffffff]
...
[ 0.526512] pci 0000:01:00.0: Video device with shadowed ROM
[ 0.526967] PCI: CLS 64 bytes, default 64
[ 0.526999] Unpacking initramfs...
[ 0.715857] Freeing initrd memory: 16148K (ffff880036066000 - ffff88003702b000)
[ 0.716025] IOMMU 0 0xfbffc000: using Queued invalidation
[ 0.716028] IOMMU: Setting RMRR:
[ 0.716038] IOMMU: Setting identity map for device 0000:00:1a.0 [0x9c475000 - 0x9c482fff]
[ 0.716063] IOMMU: Setting identity map for device 0000:00:1d.0 [0x9c475000 - 0x9c482fff]
[ 0.716077] IOMMU: Prepare 0-16MiB unity mapping for LPC
[ 0.716085] IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
[ 0.716098] PCI-DMA: Intel(R) Virtualization Technology for Directed I/O
...
[ 0.720581] ioapic: probe of 0000:00:05.4 failed with error -22
[ 0.720588] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 0.720598] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
...
[ 0.725407] AMD IOMMUv2 driver by Joerg Roedel <joerg.roedel@amd.com>
[ 0.725409] AMD IOMMUv2 functionality not available on this system
...
[ 1.039938] ehci-pci 0000:00:1a.0: EHCI Host Controller
[ 1.039942] ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 9
[ 1.039953] ehci-pci 0000:00:1a.0: debug port 2
[ 1.043862] ehci-pci 0000:00:1a.0: cache line size of 64 is not supported
[ 1.043874] ehci-pci 0000:00:1a.0: irq 21, io mem 0xf7127000
[ 1.053896] ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00
...
[ 1.054233] ehci-pci 0000:00:1d.0: EHCI Host Controller
[ 1.054237] ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 10
[ 1.054247] ehci-pci 0000:00:1d.0: debug port 2
[ 1.058160] ehci-pci 0000:00:1d.0: cache line size of 64 is not supported
[ 1.058172] ehci-pci 0000:00:1d.0: irq 23, io mem 0xf7126000
[ 1.069906] ehci-pci 0000:00:1d.0: USB 2.0 started, EHCI 1.00
...
[ 18.164918] pci-stub: add 10DE:1080 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 18.164926] pci-stub 0000:01:00.0: claimed by stub
[ 18.164933] pci-stub 0000:03:00.0: claimed by stub
[ 18.164938] pci-stub: add 10DE:0E09 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 18.164944] pci-stub 0000:01:00.1: claimed by stub
[ 18.164950] pci-stub 0000:03:00.1: claimed by stub
...
[ 18.235038] nouveau 0000:04:00.0: enabling device (0004 -> 0007)
...
[ 18.235444] nouveau [ DEVICE][0000:04:00.0] BOOT0 : 0x0a8280b1
[ 18.235447] nouveau [ DEVICE][0000:04:00.0] Chipset: GT218 (NVA8)
[ 18.235448] nouveau [ DEVICE][0000:04:00.0] Family : NV50
[ 18.235470] nouveau [ VBIOS][0000:04:00.0] checking PRAMIN for image...
[ 18.235474] nouveau [ VBIOS][0000:04:00.0] ... signature not found
[ 18.235475] nouveau [ VBIOS][0000:04:00.0] checking PROM for image...
...
[ 18.348136] nouveau [ VBIOS][0000:04:00.0] ... appears to be valid
[ 18.348138] nouveau [ VBIOS][0000:04:00.0] using image from PROM
[ 18.348211] nouveau [ VBIOS][0000:04:00.0] BIT signature found
[ 18.348212] nouveau [ VBIOS][0000:04:00.0] version 70.18.64.00.05
[ 18.348385] nouveau [ DEVINIT][0000:04:00.0] adaptor not initialised
[ 18.348390] nouveau [ VBIOS][0000:04:00.0] running init tables
...
[ 18.416916] nouveau 0000:04:00.0: irq 103 for MSI/MSI-X
[ 18.416924] nouveau [ PMC][0000:04:00.0] MSI interrupts enabled
[ 18.416947] nouveau [ PFB][0000:04:00.0] RAM type: DDR3
[ 18.416948] nouveau [ PFB][0000:04:00.0] RAM size: 1024 MiB
[ 18.416949] nouveau [ PFB][0000:04:00.0] ZCOMP: 960 tags
[ 18.418379] nouveau [ VOLT][0000:04:00.0] GPU voltage: 900000uv
...
[ 18.936188] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:03.2/0000:04:00.1/sound/card1/input24
[ 18.936307] input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:03.2/0000:04:00.1/sound/card1/input25
[ 18.937044] input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:03.2/0000:04:00.1/sound/card1/input26
[ 18.937126] input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:03.2/0000:04:00.1/sound/card1/input27
[ 19.695122] nouveau [ PTHERM][0000:04:00.0] FAN control: none / external
[ 19.695129] nouveau [ PTHERM][0000:04:00.0] fan management: automatic
[ 19.695131] nouveau [ PTHERM][0000:04:00.0] internal sensor: yes
[ 19.695144] nouveau [ CLK][0000:04:00.0] 03: core 135 MHz shader 270 MHz memory 135 MHz
[ 19.695146] nouveau [ CLK][0000:04:00.0] 07: core 405 MHz shader 810 MHz memory 405 MHz
[ 19.695147] nouveau [ CLK][0000:04:00.0] 0f: core 589 MHz shader 1402 MHz memory 600 MHz
[ 19.695161] nouveau [ CLK][0000:04:00.0] --: core 405 MHz shader 810 MHz memory 405 MHz
[ 19.695327] [TTM] Zone kernel: Available graphics memory: 12373126 kiB
[ 19.695328] [TTM] Zone dma32: Available graphics memory: 2097152 kiB
[ 19.695329] [TTM] Initializing pool allocator
[ 19.695332] [TTM] Initializing DMA pool allocator
[ 19.695338] nouveau [ DRM] VRAM: 1024 MiB
[ 19.695339] nouveau [ DRM] GART: 1048576 MiB
[ 19.695342] nouveau [ DRM] TMDS table version 2.0
[ 19.695343] nouveau [ DRM] DCB version 4.0
[ 19.695344] nouveau [ DRM] DCB outp 00: 01000302 00020030
[ 19.695345] nouveau [ DRM] DCB outp 01: 02000300 00000000
[ 19.695346] nouveau [ DRM] DCB outp 02: 02011362 00020010
[ 19.695347] nouveau [ DRM] DCB outp 03: 01022310 00000000
[ 19.695348] nouveau [ DRM] DCB conn 00: 00001030
[ 19.695349] nouveau [ DRM] DCB conn 01: 00002161
[ 19.695350] nouveau [ DRM] DCB conn 02: 00000200
[ 19.712678] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[ 19.712679] [drm] Driver supports precise vblank timestamp query.
[ 19.731746] nouveau [ DRM] MM: using COPY for buffer copies
[ 19.803827] nouveau [ DRM] allocated 1920x1080 fb: 0x70000, bo ffff880641f6b800
[ 19.896487] Console: switching to colour frame buffer device 240x67
[ 19.899643] nouveau 0000:04:00.0: fb0: nouveaufb frame buffer device
[ 19.899644] nouveau 0000:04:00.0: registered panic notifier
[ 19.918915] [drm] Initialized nouveau 1.1.2 20120801 for 0000:04:00.0 on minor 0
...
[ 20.231080] vgaarb: device changed decodes: PCI:0000:04:00.0,olddecodes=io+mem,decodes=none:owns=none
...
[ 23.484645] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 262.576441] VFIO - User Level meta-driver version: 0.3
The only differences between the 01 and 03 cards are that 01 is selected by vgaarb as the boot device and gets the "Video device with shadowed ROM" message. Here's a grep of the 01 card:
[ 0.392123] pci 0000:01:00.0: [10de:1080] type 00 class 0x030000
[ 0.392131] pci 0000:01:00.0: reg 0x10: [mem 0xfa000000-0xfaffffff]
[ 0.392138] pci 0000:01:00.0: reg 0x14: [mem 0xc8000000-0xcfffffff 64bit pref]
[ 0.392144] pci 0000:01:00.0: reg 0x1c: [mem 0xd0000000-0xd1ffffff 64bit pref]
[ 0.392149] pci 0000:01:00.0: reg 0x24: [io 0xe000-0xe07f]
[ 0.392154] pci 0000:01:00.0: reg 0x30: [mem 0xfb000000-0xfb07ffff pref]
[ 0.392216] pci 0000:01:00.1: [10de:0e09] type 00 class 0x040300
[ 0.392223] pci 0000:01:00.1: reg 0x10: [mem 0xfb080000-0xfb083fff]
[ 0.466804] vgaarb: setting as boot device: PCI:0000:01:00.0
[ 0.466806] vgaarb: device added: PCI:0000:01:00.0,decodes=io+mem,owns=io+mem,locks=none
[ 0.466816] vgaarb: bridge control possible 0000:01:00.0
[ 0.526512] pci 0000:01:00.0: Video device with shadowed ROM
[ 18.164926] pci-stub 0000:01:00.0: claimed by stub
[ 18.164944] pci-stub 0000:01:00.1: claimed by stub
...vs a grep of the 03 card:
[ 0.398546] pci 0000:03:00.0: [10de:1080] type 00 class 0x030000
[ 0.398553] pci 0000:03:00.0: reg 0x10: [mem 0xf8000000-0xf8ffffff]
[ 0.398559] pci 0000:03:00.0: reg 0x14: [mem 0xb8000000-0xbfffffff 64bit pref]
[ 0.398565] pci 0000:03:00.0: reg 0x1c: [mem 0xc0000000-0xc1ffffff 64bit pref]
[ 0.398569] pci 0000:03:00.0: reg 0x24: [io 0xd000-0xd07f]
[ 0.398573] pci 0000:03:00.0: reg 0x30: [mem 0xf9000000-0xf907ffff pref]
[ 0.398640] pci 0000:03:00.1: [10de:0e09] type 00 class 0x040300
[ 0.398648] pci 0000:03:00.1: reg 0x10: [mem 0xf9080000-0xf9083fff]
[ 0.466808] vgaarb: device added: PCI:0000:03:00.0,decodes=io+mem,owns=none,locks=none
[ 0.466816] vgaarb: bridge control possible 0000:03:00.0
[ 18.164933] pci-stub 0000:03:00.0: claimed by stub
[ 18.164950] pci-stub 0000:03:00.1: claimed by stub
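Side note: excerpts like the two above are easy to pull out of the full log with a tiny filter. A sketch (the `grep_card` name is my own, not from this thread):

```shell
# Hypothetical helper: filter kernel-log lines for one PCI card,
# matching both of its functions (.0 = VGA, .1 = HDMI audio).
grep_card() {
    grep -E "0000:$1:00\.[01]"
}
# e.g.:  dmesg | grep_card 01
```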
Offline
Hi, I updated the qemu package today to version 2.2.0-2 and my win7 vm refuses to start now. I browsed through the last couple of pages here looking for a hint about a related change or problem, but there isn't one.
I'm running kernel 3.18.5-1-mainline from the OP, and the VM was fine until my upgrade today.
Here is what the terminal says:
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio_dma_map(0x7fb78cf646e0, 0x0, 0x80000000, 0x7fb65c000000) = -12 (Cannot allocate memory)
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio_dma_map(0x7fb78cf646e0, 0x100000000, 0x80000000, 0x7fb6dc000000) = -12 (Cannot allocate memory)
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: memory listener initialization failed for container
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: failed to setup container for group 13
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: failed to get group 13
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized
Qemu complains about something related to vfio now. Can anyone give me some background about this and how to fix this?
Regards apex
apex8 wrote: Hi, I updated the qemu package today to version 2.2.0-2 and my win7 vm refuses to start now. [...] Qemu complains about something related to vfio now. Can anyone give me some background about this and how to fix this?
Does dmesg show RLIMIT_MEMLOCK errors?
Last edited by aw (2015-02-08 22:22:32)
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
aw wrote: Does dmesg show RLIMIT_MEMLOCK errors?
Yes, it does
[ 1052.293809] vfio_pin_pages: RLIMIT_MEMLOCK (40960000) exceeded
[ 1052.293817] vfio_pin_pages: RLIMIT_MEMLOCK (40960000) exceeded
[ 1052.303814] vfio_pin_pages: RLIMIT_MEMLOCK (40960000) exceeded
[ 1052.303821] vfio_pin_pages: RLIMIT_MEMLOCK (40960000) exceeded
apex8 wrote: Yes, it does:
[ 1052.293809] vfio_pin_pages: RLIMIT_MEMLOCK (40960000) exceeded [...]
Nothing has changed here, the user always needs to be able to lock enough pages for the VM. You're not going to get very far with a locked memory limit of 40MB. If you use libvirt (and do not hide the vfio device in <qemu:arg>!) this happens automatically. If you run qemu by hand, the easiest solution is to run as root or use sudo.
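Before launching qemu by hand, it's easy to sanity-check whether the shell's limit can possibly cover the guest. A sketch (the `memlock_ok` helper and the 8192 MiB guest size are my own examples; `ulimit -l` reports KiB):

```shell
# Sketch: is the locked-memory limit (ulimit -l, in KiB, or "unlimited")
# large enough to pin all guest RAM? Function name and sizes are examples.
memlock_ok() {
    limit_kib="$1"   # output of `ulimit -l`
    vm_ram_mib="$2"  # guest memory size in MiB
    if [ "$limit_kib" = "unlimited" ]; then
        echo yes
    elif [ "$limit_kib" -ge $(( vm_ram_mib * 1024 )) ]; then
        echo yes
    else
        echo no
    fi
}
memlock_ok "$(ulimit -l)" 8192
```

With the ~40 MB limit from the dmesg output above, this would print "no" for any realistic guest size.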
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
aw wrote:Nothing has changed here, the user always needs to be able to lock enough pages for the VM. You're not going to get very far with a locked memory limit of 40MB. If you use libvirt (and do not hide the vfio device in <qemu:arg>!) this happens automatically. If you run qemu by hand, the easiest solution is to run as root or use sudo.
Ah, I remember that. I think I adjusted this via:
cat /etc/security/limits.conf | grep @users
@users soft memlock 4301000
@users hard memlock 4301000
This was enough to run qemu as non-root until now...
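Since limits.conf memlock values are in KiB while the guest size is given to qemu in MiB, a quick sanity check can save a failed launch. A minimal sketch of that check (the 4096 MiB guest size and 128 MiB headroom are example values, not from this thread):

```shell
#!/bin/sh
# Sanity-check the effective locked-memory limit against the guest size
# before launching qemu. limits.conf memlock entries are in KiB;
# /proc/self/limits reports the limit in bytes.
VM_MEM_MB=4096                              # example: set to your -m value
need_kb=$(( (VM_MEM_MB + 128) * 1024 ))     # guest RAM plus headroom, in KiB

# Soft limit of the current shell, in bytes (or "unlimited").
soft=$(awk '/Max locked memory/ { print $4 }' /proc/self/limits)

if [ "$soft" = "unlimited" ] || [ $(( soft / 1024 )) -ge "$need_kb" ]; then
    echo "memlock ok"
else
    echo "memlock too low: $(( soft / 1024 )) KiB < $need_kb KiB"
fi
```

Run this as the same user that will start qemu, since pam_limits applies the limits.conf values per session.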
Offline
Hi everyone, I've been reading a lot of posts here and wanted to try passthrough myself, but I've run into an error. I followed the guide step by step; here's my build:
Processor : Xeon E3-1200 (VT-d Enabled in BIOS)
ubuntu display GPU : Radeon HD4870
Passthrough GPU : GeForce GTX 560 Ti Hawx
Steps I've done so far:
1. Ubuntu Server 12.04, then do-release-upgrade to 14.04
2. Downloaded Kernel 3.18-5 from Linux homepage
3. Downloaded the 3.18-5 linux-mainline from nbhs's first post
4. Applied the ACS and i915_317 patches to linux-mainline from nbhs's post
5. Qemu 2.2.0, also Seabios standard and no patches
6. lspci | grep NVIDIA :
05:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF114 [GeForce GTX 560 Ti] [10de:1200] (rev a1)
05:00.1 Audio device [0403]: NVIDIA Corporation GF114 HDMI Audio Controller [10de:0e0c] (rev a1)
7. edited /etc/default/grub :
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on i915.enable_hd_vgaarb=1 pci-stub.ids=10de:1200,10de:0e0c"
and /proc/cmdline shows:
BOOT_IMAGE=/vmlinuz-3.18.5 root=/dev/mapper/SSLABVFIO--vg-root ro quiet splash intel_iommu=on i915.enable_hd_vgaarb=1 pci-stub.ids=10de:1200,10de:0e0c vt.handoff=7
8. Also blacklisted radeon in /etc/modprobe.d/blacklist.conf
9. dmesg shows pci-stub claiming the devices:
[ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-3.18.5 root=/dev/mapper/SSLABVFIO--vg-root ro quiet splash intel_iommu=on i915.enable_hd_vgaarb=1 pci-stub.ids=10de:1200,10de:0e0c vt.handoff=7
[ 0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-3.18.5 root=/dev/mapper/SSLABVFIO--vg-root ro quiet splash intel_iommu=on i915.enable_hd_vgaarb=1 pci-stub.ids=10de:1200,10de:0e0c vt.handoff=7
[ 2.787575] pci-stub: add 10DE:1200 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 2.787587] pci-stub 0000:05:00.0: claimed by stub
[ 2.787593] pci-stub: add 10DE:0E0C sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 2.787597] pci-stub 0000:05:00.1: claimed by stub
10. I checked lspci -vnn for the graphics cards:
03:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] R700 [Radeon HD 4870 X2] [1002:9441] (prog-if 00 [VGA controller])
Subsystem: ASUSTeK Computer Inc. Device [1043:0284]
Flags: bus master, fast devsel, latency 0, IRQ 11
Memory at d0000000 (64-bit, prefetchable) [size=256M]
Memory at f6220000 (64-bit, non-prefetchable) [size=64K]
I/O ports at b000 [size=256]
Expansion ROM at f6200000 [disabled] [size=128K]
Capabilities: [50] Power Management version 3
Capabilities: [58] Express Legacy Endpoint, MSI 00
Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
03:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] RV770 HDMI Audio [Radeon HD 4850/4870] [1002:aa30]
Subsystem: ASUSTeK Computer Inc. Device [1043:aa30]
Flags: bus master, fast devsel, latency 0, IRQ 36
Memory at f6230000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [50] Power Management version 3
Capabilities: [58] Express Legacy Endpoint, MSI 00
Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
Kernel driver in use: snd_hda_intel
04:00.0 Display controller [0380]: Advanced Micro Devices, Inc. [AMD/ATI] R700 [Radeon HD 4870 X2] [1002:9441]
Subsystem: ASUSTeK Computer Inc. Device [1043:0284]
Flags: bus master, fast devsel, latency 0, IRQ 11
Memory at c0000000 (64-bit, prefetchable) [size=256M]
Memory at f6120000 (64-bit, non-prefetchable) [size=64K]
I/O ports at a000 [size=256]
Expansion ROM at f6100000 [disabled] [size=128K]
Capabilities: [50] Power Management version 3
Capabilities: [58] Express Legacy Endpoint, MSI 00
Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
05:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF114 [GeForce GTX 560 Ti] [10de:1200] (rev a1) (prog-if 00 [VGA controller])
Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:2601]
Flags: fast devsel, IRQ 11
Memory at f4000000 (32-bit, non-prefetchable) [disabled] [size=32M]
Memory at e0000000 (64-bit, prefetchable) [disabled] [size=128M]
Memory at e8000000 (64-bit, prefetchable) [disabled] [size=64M]
I/O ports at e000 [disabled] [size=128]
Expansion ROM at f6000000 [disabled] [size=512K]
Capabilities: [60] Power Management version 3
Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [78] Express Endpoint, MSI 00
Capabilities: [b4] Vendor Specific Information: Len=14 <?>
Capabilities: [100] Virtual Channel
Capabilities: [128] Power Budgeting <?>
Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
Kernel driver in use: pci-stub
05:00.1 Audio device [0403]: NVIDIA Corporation GF114 HDMI Audio Controller [10de:0e0c] (rev a1)
Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:2601]
Flags: bus master, fast devsel, latency 0, IRQ 10
Memory at f6080000 (32-bit, non-prefetchable) [size=16K]
Capabilities: [60] Power Management version 3
Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [78] Express Endpoint, MSI 00
Kernel driver in use: pci-stub
11. After everything looked good, I created /etc/vfio-pci1.cfg containing:
0000:05:00.0
0000:05:00.1
12. VM boot script:
#!/bin/bash
configfile=/etc/vfio-pci1.cfg
vfiobind() {
dev="$1"
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
fi
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}
modprobe vfio-pci
cat $configfile | while read line;do
echo $line | grep ^# >/dev/null 2>&1 && continue
vfiobind $line
done

sudo qemu-system-x86_64 -vga none -M q35 -hda /home/sslab719/VMimages/VM.img -enable-kvm -m 2048 -cpu host,kvm=off \
-smp 2,sockets=1,cores=2,threads=1 \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-drive file=/home/sslab719/VMimages/VM.img,id=disk,format=qcow2 \
-device vfio-pci,host=05:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,romfile=/home/sslab719/MSI.GTX560Ti.1024.110825.rom \
-device vfio-pci,host=05:00.1,bus=root.1,addr=00.1 \
-net user,hostfwd=tcp::10022-:22 -net nic
#-boot menu=on
exit 0
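Before launching, it can help to confirm that the vfiobind loop actually left the devices on vfio-pci; a minimal sketch of such a check (SYSFS_ROOT is parameterized only so the logic can be exercised without the real hardware; the addresses match this setup):

```shell
#!/bin/sh
# Verify that each passthrough device is bound to vfio-pci before
# starting qemu. On a real system SYSFS_ROOT is simply /sys.
SYSFS_ROOT=${SYSFS_ROOT:-/sys}

check_bound() {
    dev="$1"
    link="$SYSFS_ROOT/bus/pci/devices/$dev/driver"
    # The 'driver' entry is a symlink to the bound driver's directory.
    drv=$(basename "$(readlink -f "$link" 2>/dev/null)")
    if [ "$drv" = "vfio-pci" ]; then
        echo "$dev ok"
    else
        echo "$dev bound to '${drv:-nothing}', not vfio-pci"
    fi
}

for dev in 0000:05:00.0 0000:05:00.1; do
    check_bound "$dev"
done
```

If a device still shows pci-stub (or nothing) here, qemu's vfio-pci device creation will fail regardless of the rest of the command line.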
At this point, with -vga none, I always get a black QEMU monitor showing "compat_monitor0 console":
QEMU 2.2.0 monitor - type 'help' for more information
If I remove -vga none, I can boot the VM normally, and lspci inside the guest shows the passed-through card, but it doesn't seem to have real GPU power: the device shows up, but apparently without GPU clock, shaders, or memory behind it. I installed the Unigine Heaven benchmark in the guest and it fails to run, so I'm guessing the GPU isn't really passed through; it's just a name showing up in the VM. I can still see 00:01.0 VGA compatible controller: Device 1234:1111 (the emulated adapter), with the passed-through GPU at 01:00.0.
From reading this thread I understand that -vga none is needed for proper passthrough, but for me it only produces that black qemu monitor where I can type qemu commands.
I tried -vga std and others, but those didn't work either.
So I checked dmesg when starting the VM to see whether vfio works correctly; here are the relevant parts:
[ 0.210689] vgaarb: setting as boot device: PCI:0000:03:00.0
[ 0.210691] vgaarb: device added: PCI:0000:03:00.0,decodes=io+mem,owns=io+mem,locks=none
[ 0.210695] vgaarb: device added: PCI:0000:05:00.0,decodes=io+mem,owns=none,locks=none
[ 0.210696] vgaarb: loaded
[ 0.210697] vgaarb: bridge control possible 0000:05:00.0
[ 0.210697] vgaarb: bridge control possible 0000:03:00.0
[ 0.210846] SCSI subsystem initialized
[ 0.210873] libata version 3.00 loaded.
[ 0.210889] ACPI: bus type USB registered
[ 0.210902] usbcore: registered new interface driver usbfs
[ 0.210908] usbcore: registered new interface driver hub
[ 0.210919] usbcore: registered new device driver usb
[ 0.211004] PCI: Using ACPI for IRQ routing
[ 0.212320] PCI: pci_cache_line_size set to 64 bytes
[ 0.212378] e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
[ 0.212380] e820: reserve RAM buffer [mem 0xbdf9c000-0xbfffffff]
[ 0.212381] e820: reserve RAM buffer [mem 0xbdfa3000-0xbfffffff]
[ 0.212382] e820: reserve RAM buffer [mem 0xbec81000-0xbfffffff]
[ 0.212383] e820: reserve RAM buffer [mem 0xbf000000-0xbfffffff]
[ 0.212384] e820: reserve RAM buffer [mem 0x83e000000-0x83fffffff]
[ 0.266687] pci 0000:03:00.0: Video device with shadowed ROM
[ 0.266730] PCI: CLS 64 bytes, default 64
[ 0.266768] Trying to unpack rootfs image as initramfs...
[ 2.132383] Freeing initrd memory: 156660K (ffff880024df6000 - ffff88002e6f3000)
[ 2.132421] DMAR: No ATSR found
[ 2.132438] IOMMU 0 0xfed90000: using Queued invalidation
[ 2.132439] IOMMU: Setting RMRR:
[ 2.132450] IOMMU: Setting identity map for device 0000:00:14.0 [0xbe51f000 - 0xbe53bfff]
[ 2.132468] IOMMU: Setting identity map for device 0000:00:1a.0 [0xbe51f000 - 0xbe53bfff]
[ 2.132481] IOMMU: Setting identity map for device 0000:00:1d.0 [0xbe51f000 - 0xbe53bfff]
[ 2.132489] IOMMU: Prepare 0-16MiB unity mapping for LPC
[ 2.132494] IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
[ 2.132571] PCI-DMA: Intel(R) Virtualization Technology for Directed I/O
[ 2.787575] pci-stub: add 10DE:1200 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 2.787587] pci-stub 0000:05:00.0: claimed by stub
[ 2.787593] pci-stub: add 10DE:0E0C sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 2.787597] pci-stub 0000:05:00.1: claimed by stub
[ 3.495732] VFIO - User Level meta-driver version: 0.3
[ 3.529871] FS-Cache: Loaded
[ 3.692679] RPC: Registered named UNIX socket transport module.
[ 3.692682] RPC: Registered udp transport module.
[ 3.692683] RPC: Registered tcp transport module.
[ 3.692684] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 3.722255] ppdev: user-space parallel port driver
[ 3.731469] parport_pc 00:05: reported by Plug and Play ACPI
[ 3.731516] parport0: PC-style at 0x378, irq 5 [PCSPP]
[ 3.816825] lp0: using parport0 (interrupt-driven).
[ 3.821640] wmi: Mapper loaded
[ 3.866631] init: avahi-cups-reload main process (565) terminated with status 1
[ 3.878995] ACPI Warning: SystemIO range 0x0000000000000428-0x000000000000042f conflicts with OpRegion 0x0000000000000400-0x000000000000047f (\PMIO) (20140926/utaddress-258)
[ 3.879000] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[ 3.879003] ACPI Warning: SystemIO range 0x0000000000000540-0x000000000000054f conflicts with OpRegion 0x0000000000000500-0x0000000000000563 (\GPIO) (20140926/utaddress-258)
[ 3.879005] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[ 3.879006] ACPI Warning: SystemIO range 0x0000000000000530-0x000000000000053f conflicts with OpRegion 0x0000000000000500-0x0000000000000563 (\GPIO) (20140926/utaddress-258)
[ 3.879007] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[ 3.879008] ACPI Warning: SystemIO range 0x0000000000000500-0x000000000000052f conflicts with OpRegion 0x0000000000000500-0x0000000000000563 (\GPIO) (20140926/utaddress-258)
[ 3.879010] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[ 3.879010] lpc_ich: Resource conflict(s) found affecting gpio_ich
[ 3.962690] device-mapper: multipath: version 1.7.0 loaded
[ 3.962999] mei_me 0000:00:16.0: irq 34 for MSI/MSI-X
[ 3.977288] AVX version of gcm_enc/dec engaged.
[ 3.977290] AES CTR mode by8 optimization enabled
[ 4.182963] snd_hda_intel 0000:00:1b.0: irq 35 for MSI/MSI-X
[ 4.183050] snd_hda_intel 0000:03:00.1: Handle VGA-switcheroo audio client
[ 4.183071] snd_hda_intel 0000:03:00.1: irq 36 for MSI/MSI-X
[ 4.202764] input: HDA ATI HDMI HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:04.0/0000:03:00.1/sound/card1/input5
[ 4.221910] sound hdaudioC0D0: autoconfig: line_outs=1 (0x14/0x0/0x0/0x0/0x0) type:line
[ 4.221914] sound hdaudioC0D0: speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
[ 4.221915] sound hdaudioC0D0: hp_outs=1 (0x1b/0x0/0x0/0x0/0x0)
[ 4.221916] sound hdaudioC0D0: mono: mono_out=0x0
[ 4.221917] sound hdaudioC0D0: dig-out=0x11/0x0
[ 4.221918] sound hdaudioC0D0: inputs:
[ 4.221919] sound hdaudioC0D0: Front Mic=0x19
[ 4.221921] sound hdaudioC0D0: Rear Mic=0x18
[ 4.221922] sound hdaudioC0D0: Line=0x1a
[ 4.234307] input: HDA Intel PCH Front Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input6
[ 4.234879] input: HDA Intel PCH Rear Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7
[ 4.234930] input: HDA Intel PCH Line as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8
[ 4.235658] input: HDA Intel PCH Line Out as /devices/pci0000:00/0000:00:1b.0/sound/card0/input9
[ 4.236186] input: HDA Intel PCH Front Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input10
[ 4.237295] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[ 4.258359] init: Failed to obtain startpar-bridge instance: Unknown parameter: INSTANCE
[ 4.291101] r8169 0000:06:00.0 eth0: link down
[ 4.291120] r8169 0000:06:00.0 eth0: link down
[ 4.291144] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 6.646476] r8169 0000:06:00.0 eth0: link up
[ 6.646484] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 9.469643] init: failsafe main process (690) killed by TERM signal
[ 9.511040] audit_printk_skb: 21 callbacks suppressed
[ 10.109377] init: plymouth-upstart-bridge main process ended, respawning
[ 10.278910] cgroup: systemd-logind (574) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
[ 10.278913] cgroup: "memory" requires setting use_hierarchy to 1 on the root
[ 10.676638] random: nonblocking pool is initialized
[ 162.674652] vfio-pci 0000:05:00.0: enabling device (0000 -> 0003)
[ 166.777854] kvm: zapping shadow pages for mmio generation wraparound
I see kvm zapping shadow pages here; these dmesg lines appear when running with -vga none.
When I remove -vga none and can enter the VM, dmesg shows:
[ 162.674652] vfio-pci 0000:05:00.0: enabling device (0000 -> 0003)
[ 166.777854] kvm: zapping shadow pages for mmio generation wraparound
[ 262.768505] kvm: zapping shadow pages for mmio generation wraparound
[ 269.342192] kvm [2329]: vcpu0 ignored rdmsr: 0x345
[ 269.342216] kvm [2329]: vcpu0 ignored wrmsr: 0x680 data 0
[ 269.342218] kvm [2329]: vcpu0 ignored wrmsr: 0x681 data 0
[ 269.342219] kvm [2329]: vcpu0 ignored wrmsr: 0x682 data 0
[ 269.342220] kvm [2329]: vcpu0 ignored wrmsr: 0x683 data 0
[ 269.342222] kvm [2329]: vcpu0 ignored wrmsr: 0x684 data 0
[ 269.342223] kvm [2329]: vcpu0 ignored wrmsr: 0x685 data 0
[ 269.342224] kvm [2329]: vcpu0 ignored wrmsr: 0x686 data 0
[ 269.342225] kvm [2329]: vcpu0 ignored wrmsr: 0x687 data 0
[ 269.342227] kvm [2329]: vcpu0 ignored wrmsr: 0x688 data 0
[ 269.342228] kvm [2329]: vcpu0 ignored wrmsr: 0x689 data 0
So here I'm stuck trying to get GPU passthrough working. Did I miss anything? Does my dmesg point to any incorrect step? And why does -vga none give a black monitor instead of going straight to the VM?
Could someone help with this case? I've been stuck for two weeks without any progress.
Thanks in advance!
Offline
NVIDIA GRID K2 passthrough successful, but with Code 43 after installing the NVIDIA driver.
Driver: 347.25-quadro-grid-desktop-notebook-win8-win7-64bit-international-whql
Here is my qemu command line:
#!/bin/sh
qemu-system-x86_64 -cpu host,kvm=off -smp 4,sockets=2,cores=2,threads=1 \
-m 8192 -M q35 -enable-kvm \
-rtc base=localtime,clock=host \
-acpitable file=/var/lib/libvirt/images/LENOVO-TC-90-MSFT-2.1.BIN \
-device virtio-scsi-pci,id=scsi \
-drive file=/var/lib/libvirt/images/win7_nvidia_k2_clean.img,cache=writeback,\
if=none,format=qcow2,aio=native,id=virtio-scsi-disk0 \
-device scsi-hd,drive=virtio-scsi-disk0 \
-net nic,model=virtio,macaddr=52:54:00:1a:2b:3c \
-net tap,ifname=tap0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
-vga none -nographic \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=04:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-monitor stdio
Offline
bpbastos wrote:aw wrote:Pin vCPUs and don't oversubscribe physical CPUs if you expect the guest to handle latency sensitive tasks. You may also need to move host device interrupts to other CPUs. Use isolcpus if you really want to have vCPU isolation guarantees.
Thank you aw.
I'm already using isolcpus=2-7 and pinning vcpus for my guest.
The only thing I'm not doing is moving my host device interrupts. Do you have any script to do it?

I don't have anything, but you want to manipulate /proc/irq/*/smp_affinity. You probably want to be careful only to do this for device interrupts (ie. things with IO-APIC or PCI-MSI in the type from /proc/interrupts). You'll also want to make sure irqbalance doesn't move interrupts back to your isolated CPUs; there's an IRQBALANCE_BANNED_CPUS environment variable that can be used for that.
EDIT: I doubt an E3 v3 has it, but if /sys/module/kvm_intel/parameters/enable_apicv reports 'Y', then by making sure assigned device interrupts arrive on a CPU not running the guest, KVM can inject the interrupt into the guest without forcing a VM exit. I also recall someone using (I think) the nohz_full= boot option to stop timer ticks on the isolated CPUs.
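A rough sketch of what aw describes, walking /proc/interrupts and pointing device IRQs at the host-reserved CPUs (the mask value 3, i.e. CPUs 0-1, is an example chosen to match isolcpus=2-7; run as root for the writes to take effect):

```shell
#!/bin/sh
# Move device interrupts (IO-APIC / PCI-MSI types only) onto CPUs 0-1,
# keeping the isolated CPUs 2-7 free for pinned vCPUs.
# smp_affinity takes a hex CPU bitmask; 3 = binary 0b11 = CPUs 0 and 1.
HOST_MASK=3

grep -E 'IO-APIC|PCI-MSI' /proc/interrupts | while read -r line; do
    irq=${line%%:*}                     # everything before the first colon
    irq=$(echo "$irq" | tr -d ' ')      # strip the leading padding spaces
    if [ -w "/proc/irq/$irq/smp_affinity" ]; then
        printf '%x\n' "$HOST_MASK" > "/proc/irq/$irq/smp_affinity"
    fi
done
```

Note that some IRQs refuse affinity changes (the write fails with EIO), and irqbalance will undo this unless banned from those CPUs as mentioned above.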
Thank you aw,
I did everything you suggested, but in the end, disabling the ASMedia SATA controller is what ended my audio lag problems.
Now I have another little problem: I successfully installed OpenELEC and passed through my GT 630, but shutting down or restarting the guest results in no video output.
The only solution I've found so far is to reboot the host.
Offline