Hello fellow Arch users! I've been trying to configure GPU passthrough on my system with one of my spare monitors, but I've run into an issue: the monitor is detected by the guest OS but will not wake up. I'm completely stumped and would be very grateful for any help!
Here's some basic information about my setup:
Host OS: Arch Linux, stock kernel (up to date, just ran pacman -Syu)
Guest OS: Windows 10
PCI device to passthrough: (Integrated GPU) Intel HD Graphics 630
The error message that Windows 10 reports is: "Display 2 isn't active" (Display 1 is the SpiceVMC display). The monitor outputs no video at all (neither in Windows 10 nor in the TianoCore firmware). Windows gives me an option to use the passthrough display / extend the displays, but when I select it, Windows reverts the change.
What I have tried so far:
Rebooting (duh...)
Rebooting with the monitor unplugged
Rebooting with the monitor plugged in
Power-cycling the monitor with the VM launched
Enabling/re-enabling the GPU in Windows 10
Setting the passthrough monitor to be the primary display in the BIOS. Interesting result: the BIOS screen persists on the passthrough monitor until the VM boots, at which point the monitor goes blank
Swapping the passthrough monitor (HDMI) for a DVI monitor
Removing the Spice virtual hardware
Verbally insulting the monitor
Technical info:
IOMMU is supported and enabled:
$ dmesg | grep -e DMAR -e IOMMU
[ 0.000000] ACPI: DMAR 0x00000000BA384B28 0000A8 (v01 ALASKA A M I 00000001 INTL 00000001)
[ 0.000000] DMAR: IOMMU enabled
(output truncated)
Integrated GPU appears to be in its own IOMMU group:
$ iommu.sh
IOMMU Group 0 00:00.0 Host bridge [0600]: Intel Corporation Intel Kaby Lake Host Bridge [8086:591f] (rev 05)
IOMMU Group 10 00:1c.0 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #3 [8086:a292] (rev f0)
IOMMU Group 11 00:1c.4 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #5 [8086:a294] (rev f0)
IOMMU Group 12 00:1d.0 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #9 [8086:a298] (rev f0)
IOMMU Group 13 00:1f.0 ISA bridge [0601]: Intel Corporation 200 Series PCH LPC Controller (H270) [8086:a2c4]
IOMMU Group 13 00:1f.2 Memory controller [0580]: Intel Corporation 200 Series PCH PMC [8086:a2a1]
IOMMU Group 13 00:1f.3 Audio device [0403]: Intel Corporation 200 Series PCH HD Audio [8086:a2f0]
IOMMU Group 13 00:1f.4 SMBus [0c05]: Intel Corporation 200 Series PCH SMBus Controller [8086:a2a3]
IOMMU Group 14 00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-V [8086:15b8]
IOMMU Group 15 05:00.0 PCI bridge [0604]: Integrated Technology Express, Inc. IT8892E PCIe to PCI Bridge [1283:8892] (rev 71)
IOMMU Group 15 06:00.0 FireWire (IEEE 1394) [0c00]: VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller [1106:3044] (rev 46)
IOMMU Group 1 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 05)
IOMMU Group 1 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050] [10de:1c81] (rev a1)
IOMMU Group 1 01:00.1 Audio device [0403]: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:0fb9] (rev a1)
IOMMU Group 2 00:02.0 Display controller [0380]: Intel Corporation HD Graphics 630 [8086:5912] (rev 04)
IOMMU Group 3 00:08.0 System peripheral [0880]: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th Gen Core Processor Gaussian Mixture Model [8086:1911]
IOMMU Group 4 00:14.0 USB controller [0c03]: Intel Corporation 200 Series PCH USB 3.0 xHCI Controller [8086:a2af]
IOMMU Group 5 00:16.0 Communication controller [0780]: Intel Corporation 200 Series PCH CSME HECI #1 [8086:a2ba]
IOMMU Group 6 00:17.0 RAID bus controller [0104]: Intel Corporation SATA Controller [RAID mode] [8086:2822]
IOMMU Group 7 00:1b.0 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #19 [8086:a2e9] (rev f0)
IOMMU Group 8 00:1b.3 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #20 [8086:a2ea] (rev f0)
IOMMU Group 9 00:1b.4 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #21 [8086:a2eb] (rev f0)
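The iommu.sh script itself isn't shown in the post; a minimal sketch of what such a script typically does (the function name list_iommu_groups and the optional base-path argument are my own additions, made so it can be exercised without real hardware) might look like:

```shell
#!/bin/sh
# list_iommu_groups: print each IOMMU group together with the PCI
# addresses of the devices it contains. Accepts an optional sysfs base
# path (default /sys/kernel/iommu_groups) so it can be tested offline.
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for dev in "$base"/*/devices/*; do
        [ -e "$dev" ] || continue
        group=${dev%/devices/*}          # .../iommu_groups/<N>
        printf 'IOMMU Group %s %s\n' "${group##*/}" "${dev##*/}"
    done
}

list_iommu_groups
```

Piping each printed address through `lspci -nns <address>` is what produces the human-readable device names seen in the listing above.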
GPU isolation configuration:
$ cat /etc/modprobe.d/vfio.conf
options vfio-pci ids=8086:5912
$ cat /etc/mkinitcpio.conf
...
MODULES="vfio_pci vfio vfio_iommu_type1 vfio_virqfd"
...
HOOKS="base udev resume autodetect modconf block filesystems keyboard fsck"
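One step worth double-checking that isn't shown in the post: after editing /etc/mkinitcpio.conf the initramfs has to be regenerated, otherwise the early-loaded vfio modules don't change. A sketch (the image path assumes the stock 'linux' kernel):

```shell
# Rebuild the initramfs for every installed kernel preset so the
# MODULES= change (early vfio-pci loading) takes effect on next boot.
mkinitcpio -P

# Optionally confirm the vfio modules actually landed in the image:
lsinitcpio /boot/initramfs-linux.img | grep vfio
```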
GPU Isolation appears to be working:
$ dmesg | grep -i vfio
[ 0.740166] VFIO - User Level meta-driver version: 0.3
[ 0.760289] vfio_pci: add [8086:5912[ffff:ffff]] class 0x000000/00000000
[ 80.436796] vfio-pci 0000:00:02.0: enabling device (0000 -> 0003)
[ 80.546828] vfio_ecap_init: 0000:00:02.0 hiding ecap 0x1b@0x100
$ lspci -nnk -d 8086:5912
00:02.0 Display controller [0380]: Intel Corporation HD Graphics 630 [8086:5912] (rev 04)
DeviceName: Onboard IGD
Subsystem: Gigabyte Technology Co., Ltd HD Graphics 630 [1458:d000]
Kernel driver in use: vfio-pci
Kernel modules: i915
libvirt configuration:
$ cat /etc/libvirt/qemu.conf
...
user = "root"
cgroup_device_acl = [
"/dev/input/by-id/usb-1017_Gaming_Keyboard-event-kbd",
"/dev/input/by-id/usb-G-SPY_USB_Gaming_Mouse-event-kbd",
"/dev/input/by-id/usb-G-SPY_USB_Gaming_Mouse-if01-event-mouse",
"/dev/input/event0",
"/dev/input/event1",
"/dev/input/event2",
"/dev/input/event3",
"/dev/input/event4",
"/dev/input/event5",
"/dev/input/event6",
"/dev/input/event7",
"/dev/input/event8",
"/dev/input/event9",
"/dev/input/event10",
"/dev/input/event11",
"/dev/input/event12",
"/dev/input/event13",
"/dev/input/event14",
"/dev/input/event15",
"/dev/input/event16",
"/dev/input/event17",
"/dev/input/event18",
"/dev/input/event19",
"/dev/input/event20",
"/dev/input/event21",
"/dev/input/event22",
"/dev/input/event23",
"/dev/input/event24",
"/dev/input/event25",
"/dev/input/event26",
"/dev/input/event27",
"/dev/input/event28",
"/dev/input/event29",
"/dev/input/event30",
"/dev/input/event31",
"/dev/input/event32",
"/dev/input/event33",
"/dev/input/event34",
"/dev/input/event35",
"/dev/input/event36",
"/dev/input/event37",
"/dev/input/event38",
"/dev/input/event39",
"/dev/input/mouse0",
"/dev/input/mice",
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc","/dev/hpet", "/dev/sev"
]
nvram = [
"/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd"
]
...
I specified user = "root" for evdev passthrough (permissions still didn't work, even though I was in the 'input' group). The big mess in the ACL is also for evdev passthrough.
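For what it's worth, a common gotcha with the input group is that new group memberships only apply to sessions started after the change. A tiny check (the helper name in_group is mine, not a standard tool):

```shell
# in_group NAME - succeed if the *current session* is in group NAME.
# usermod -aG input <user> only shows up after logging in again, which
# is a frequent reason evdev permissions appear broken.
in_group() {
    id -nG | tr ' ' '\n' | grep -qx "$1"
}

if in_group input; then
    echo "current session is in the input group"
else
    echo "NOT in input group for this session (log out and back in?)"
fi
```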
Last edited by nulldev (2018-10-16 01:32:33)
I've resorted to passing through my dedicated Nvidia GPU instead of the iGPU. For some reason this works perfectly, apart from some audio crackling (which was then resolved using Scream). I guess I'm just going to stick with this instead. I'd still welcome any ideas on why passing the iGPU doesn't work!
Last edited by nulldev (2018-10-17 00:51:53)
You likely have an intel/nvidia optimus system.
These systems have many limitations for selecting outputs and usually require the primary GPU to drive ALL outputs.
By setting the nvidia gpu to passthrough, you effectively got rid of the optimus part and now have a system with 2 independent videocards.
Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.
(A works at time B) && (time C > time B ) ≠ (A works at time C)
You likely have an intel/nvidia optimus system.
These systems have many limitations for selecting outputs and usually require the primary GPU to drive ALL outputs. By setting the nvidia gpu to passthrough, you effectively got rid of the optimus part and now have a system with 2 independent videocards.
Whoops, I forgot to mention it's a desktop (also not a pre-built). So yeah, it's not an optimus system. Just your typical Intel "gaming" setup.
Ahh, the very common misconception that hybrid graphics is only used on laptops.
Hybrid graphics, dynamic switching and nvidia optimus all describe the same situation:
a processor with an integrated gpu
a discrete video card
a motherboard that combines the igp & dgp through a framebuffer but lacks a hardware multiplexer to manage the outputs
switching outputs is controlled by software
Check the block diagrams for your motherboard to see how everything is connected.
Last edited by Lone_Wolf (2018-10-17 10:44:26)
Ahh, the very common misconception that hybrid graphics is only used on laptops.
Hybrid graphics, dynamic switching and nvidia optimus all describe the same situation:
a processor with an integrated gpu
a discrete video card
a motherboard that combines the igp & dgp through a framebuffer but lacks a hardware multiplexer to manage the outputs
switching outputs is controlled by software
Check the block diagrams for your motherboard to see how everything is connected.
Interesting, I wasn't actually aware that there were desktops with Optimus-like functionality!
But I'm still rather confident my system isn't an Optimus system. The motherboard (H270-HD3) was a super-budget, bottom-of-the-barrel model that definitely didn't advertise any sort of dynamic video card switching/hybrid graphics. I looked around and couldn't find any block diagrams for it either. Also, I installed the drivers for the mobo from the CD that came in the box onto a non-passthrough instance of Windows and wasn't able to find any option to dynamically switch the GPU there either.
Finally, passthrough works if I pass the discrete GPU instead of the integrated GPU so the primary GPU doesn't look like it's controlling all the outputs.
This motherboard https://www.gigabyte.com/Motherboard/GA … -rev-10#sp ?
It does appear to have a low price, but it supports several advanced technologies like Intel Optane and CrossFire/SLI.
Also check out the supported processor list.
It doesn't even list one processor without an integrated GPU.
All recent processors with an integrated gpu since approx 2016 use dynamic switching.
For intel procs that basically means only high-end and server motherboards might not have dynamic switching.
Fortunately AMD went a different path; e.g. the Ryzen 7 1xxx and 2xxx parts are all without an integrated GPU.
TL;DR :
any post-2016 system with an integrated gpu and an nvidia card uses optimus even if it's not specifically mentioned.
Fortunately AMD went a different path; e.g. the Ryzen 7 1xxx and 2xxx parts are all without an integrated GPU.
AMD is looking really, really good now but Ryzen came out a couple months after I built my PC.
This motherboard https://www.gigabyte.com/Motherboard/GA … -rev-10#sp ?
TL;DR :
any post-2016 system with an integrated gpu and an nvidia card uses optimus even if it's not specifically mentioned.
Wow, that's news to me! So I can plug my monitors into my motherboard VGA/HDMI/DP ports and still get the PCI GPU to do the rendering?
Also, this means I can't pass my integrated GPU individually right?
Thanks for all your help!
Also, this means I can't pass my integrated GPU individually right?
Depends on the motherboard firmware.
If you really want to passthrough the intel gpu to a guest :
attach a monitor to each card
set the nvidia one as primary gpu in the bios/uefi firmware.
boot to multi-user, set vfio to handle the intel card
reboot to multi-user : only the monitor attached to the nvidia card should show something
start X (may need changes to configuration) and test.
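The steps above could be sketched as commands like the following (the device ID 8086:5912 is taken from earlier in the thread; paths and the kernel preset are assumptions to adjust for your own setup):

```shell
# 1. Boot to the console instead of X (persistent until changed back):
systemctl set-default multi-user.target

# 2. Have vfio-pci claim the Intel iGPU early:
echo 'options vfio-pci ids=8086:5912' > /etc/modprobe.d/vfio.conf
mkinitcpio -P

# 3. Reboot; only the monitor on the nvidia card should show output.
reboot

# 4. After the reboot, confirm the iGPU is held by vfio-pci:
lspci -nnk -d 8086:5912 | grep 'Kernel driver in use'
```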
--------------------------------------------
So I can plug my monitors into my motherboard VGA/HDMI/DP ports and still get the PCI GPU to do the rendering?
With some limitations, yes.
Check the Nvidia Optimus wiki page. Follow the links to the PRIME, Bumblebee and nvidia-xrun pages.
-----------------------
Why do I specifically state: boot to multi-user?
systemd's default target is graphical, which tries to start X.
Changing the primary GPU through the firmware is similar to taking a video card out of the system and adding another one of a different brand.
*nixes can deal with a lot of changes, but X has more trouble with them
(especially if customization was used, as is common with the nvidia proprietary driver).
The multi-user target boots to a console mode that uses a framebuffer and works 99% of the time.
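For completeness, both the persistent and a one-off way of booting to multi-user (standard systemd mechanisms, sketched):

```shell
# Persistent: make console mode the default target...
systemctl set-default multi-user.target
# ...and later switch back to a graphical boot:
systemctl set-default graphical.target

# One-off: in the bootloader menu, append this to the kernel
# command line for a single boot instead:
#   systemd.unit=multi-user.target
```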
Last edited by Lone_Wolf (2018-10-19 10:01:03)