
#1 2019-04-09 18:21:59

Fran
Member
Registered: 2017-08-23
Posts: 11

qemu: is it possible to passthrough the Intel GPU (Haswell 4790K)?

I've been using passthrough of different AMD GPUs to virtual machines without problems for years, while keeping the Intel GPU for Linux.

Now I've decided to switch: use the AMD GPU in Linux and pass the integrated GPU to Windows, since I launch virtual machines very rarely (but still want decent performance from Windows when I do; virtual GPUs are dreadful).

However, I always get a black screen when passing the IGP through. Is what I want even possible with my Haswell CPU, or do I need a >=Broadwell CPU?

This is my setup:

IOMMU groups:

IOMMU Group 0 00:00.0 Host bridge [0600]: Intel Corporation 4th Gen Core Processor DRAM Controller [8086:0c00] (rev 06)
IOMMU Group 1 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller [8086:0c01] (rev 06)
IOMMU Group 1 01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Curacao XT / Trinidad XT [Radeon R7 370 / R9 270X/370X] [1002:6810]
IOMMU Group 1 01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series] [1002:aab0]
IOMMU Group 10 00:1c.3 PCI bridge [0604]: Intel Corporation 82801 PCI Bridge [8086:244e] (rev d0)
IOMMU Group 10 03:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge [1b21:1080] (rev 04)
IOMMU Group 11 00:1c.7 PCI bridge [0604]: Intel Corporation 9 Series Chipset Family PCI Express Root Port 8 [8086:8c9e] (rev d0)
IOMMU Group 12 00:1d.0 USB controller [0c03]: Intel Corporation 9 Series Chipset Family USB EHCI Controller #1 [8086:8ca6]
IOMMU Group 13 00:1f.0 ISA bridge [0601]: Intel Corporation Z97 Chipset LPC Controller [8086:8cc4]
IOMMU Group 13 00:1f.2 SATA controller [0106]: Intel Corporation 9 Series Chipset Family SATA Controller [AHCI Mode] [8086:8c82]
IOMMU Group 13 00:1f.3 SMBus [0c05]: Intel Corporation 9 Series Chipset Family SMBus Controller [8086:8ca2]
IOMMU Group 14 05:00.0 Network controller [0280]: Qualcomm Atheros AR9485 Wireless Network Adapter [168c:0032] (rev 01)
IOMMU Group 2 00:02.0 Display controller [0380]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller [8086:0412] (rev 06)
IOMMU Group 3 00:03.0 Audio device [0403]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller [8086:0c0c] (rev 06)
IOMMU Group 4 00:14.0 USB controller [0c03]: Intel Corporation 9 Series Chipset Family USB xHCI Controller [8086:8cb1]
IOMMU Group 5 00:16.0 Communication controller [0780]: Intel Corporation 9 Series Chipset Family ME Interface #1 [8086:8cba]
IOMMU Group 6 00:19.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I218-V [8086:15a1]
IOMMU Group 7 00:1a.0 USB controller [0c03]: Intel Corporation 9 Series Chipset Family USB EHCI Controller #2 [8086:8cad]
IOMMU Group 8 00:1b.0 Audio device [0403]: Intel Corporation 9 Series Chipset Family HD Audio Controller [8086:8ca0]
IOMMU Group 9 00:1c.0 PCI bridge [0604]: Intel Corporation 9 Series Chipset Family PCI Express Root Port 1 [8086:8c90] (rev d0)

/etc/mkinitcpio.conf:

MODULES=(vfio_pci vfio vfio_iommu_type1 vfio_virqfd amdgpu)

/etc/modprobe.d/vfio.conf

options vfio-pci ids=8086:0412

lspci -nnk -d 8086:0412

00:02.0 Display controller [0380]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller [8086:0412] (rev 06)
	DeviceName:  Onboard IGD
	Subsystem: ASUSTeK Computer Inc. Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller [1043:8534]
	Kernel driver in use: vfio-pci
	Kernel modules: i915

qemu command example:

sudo qemu-system-x86_64 \
    -machine q35,accel=kvm \
    -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,kvm=off \
    -smp sockets=1,cores=4,threads=2 \
    -m 1G \
    -vga none \
    -nographic \
    -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
    -device vfio-pci,host=00:02.0,bus=root.1,addr=00.0 \
    -drive file=/home/fran/isos/systemrescuecd-x86-5.1.1.iso,if=ide,media=cdrom

Things I've tried:

- Removing the ioh3420 device and not specifying bus=root.1,addr=00.0 (no difference)
- Using OVMF instead of SeaBIOS (no difference):

    -drive if=pflash,format=raw,readonly,file=/home/fran/qemu/ovmf/OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=/home/fran/qemu/ovmf/OVMF_VARS.fd \

Note that if I replace:

- Main GPU in the motherboard UEFI: PCI-E -> IGP
- mkinitcpio.conf: "amdgpu" -> "i915"
- modprobe.d/vfio.conf: "ids=8086:0412" -> "ids=1002:6810,1002:aab0"
- qemu command: "-device vfio-pci,host=00:02.0,bus=root.1,addr=00.0" -> "-device vfio-pci,host=01:00.0,multifunction=on,bus=root.1,addr=00.0,x-vga=on,romfile=Sapphire.R9270X.2048.131209.rom -device vfio-pci,host=01:00.1,multifunction=on,bus=root.1,addr=00.1"

... then the IGD is used by Linux and the AMD GPU is passed through to the virtual machine without problems, so VT-d works fine on my motherboard. Just not for the IGP.

Last edited by Fran (2019-04-09 18:23:18)


#2 2019-04-10 11:53:08

Lone_Wolf
Member
From: Netherlands, Europe
Registered: 2005-10-04
Posts: 11,911

Re: qemu: is it possible to passthrough the Intel GPU (Haswell 4790K)?

Maybe it's a Windows / Windows graphics driver issue.

Did you try multiple Windows versions?
Have you tried passing the IGP through to a Linux guest?


Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.


(A works at time B)  && (time C > time B ) ≠  (A works at time C)


#3 2019-04-10 12:38:53

Fran
Member
Registered: 2017-08-23
Posts: 11

Re: qemu: is it possible to passthrough the Intel GPU (Haswell 4790K)?

I don't get any output at any point, not even at boot from the VM BIOS... so it's not an OS/driver problem. I'm testing with a SystemRescueCd ISO.

I've been reading this: http://vfio.blogspot.com/2016/07/ and it seems it is definitely possible to pass through a Haswell IGD, as long as I use legacy mode. Following that and the mailing lists, I made some changes to what I was trying:

- Set the Intel GPU as the primary card in the host UEFI (I was setting the PCI-E AMD GPU as the main card, since that was the one I was using in Linux).
- Add video=efifb:off,vesafb:off to the boot options to make sure nothing touches the IGD (see the sketch after this list)
- Use pc (i440FX) instead of q35 as the machine type in the qemu command
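
For reference, a minimal sketch of where those boot options would go, assuming a GRUB setup (the exact file and regeneration command depend on the bootloader; intel_iommu=on is presumably already there):

# /etc/default/grub (assumed GRUB setup; adjust for your bootloader)
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on video=efifb:off,vesafb:off"

# regenerate the config afterwards
sudo grub-mkconfig -o /boot/grub/grub.cfg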

However, still nothing. If I add a serial device to debug the VM bios boot process I get this:

SeaBIOS (version 1.12.0-20181126_142135-anatol)
BUILD: gcc: (GCC) 8.2.1 20180831 binutils: (GNU Binutils) 2.31.1
No Xen hypervisor found.
RamSize: 0x40000000 [cmos]
Relocating init from 0x000d8a60 to 0x3ffac3c0 (size 80800)
Found QEMU fw_cfg
QEMU fw_cfg DMA interface supported
RamBlock: addr 0x0000000000000000 len 0x0000000040000000 [e820]
Moving pm_base to 0x600
=== PCI bus & bridge init ===
PCI: pci_bios_init_bus_rec bus = 0x0
=== PCI device probing ===
Found 6 PCI devices (max PCI bus is 00)
=== PCI new allocation pass #1 ===
PCI: check devices
=== PCI new allocation pass #2 ===
PCI: IO: c000 - c04f
PCI: 32: 0000000080000000 - 00000000fec00000
PCI: map device bdf=00:02.0  bar 4, addr 0000c000, size 00000040 [io]
PCI: map device bdf=00:01.1  bar 4, addr 0000c040, size 00000010 [io]
PCI: map device bdf=00:02.0  bar 0, addr fe400000, size 00400000 [mem]
PCI: map device bdf=00:02.0  bar 6, addr fe800000, size 00020000 [mem]
PCI: map device bdf=00:02.0  bar 2, addr e0000000, size 10000000 [prefmem]
PCI: init bdf=00:00.0 id=8086:1237
PCI: init bdf=00:01.0 id=8086:7000
PIIX3/PIIX4 init: elcr=00 0c
PCI: init bdf=00:01.1 id=8086:7010
PCI: init bdf=00:01.3 id=8086:7113
Using pmtimer, ioport 0x608
PCI: init bdf=00:02.0 id=8086:0412
Intel IGD OpRegion enabled at 0x3fffe000, size 8KB, dev 00:02.0
Intel IGD BDSM enabled at 0x3fd00000, size 2MB, dev 00:02.0
PCI: init bdf=00:1f.0 id=8086:8cc4
PCI: Using 00:02.0 for primary VGA
handle_smp: apic_id=0x3
handle_smp: apic_id=0x6
handle_smp: apic_id=0x5
handle_smp: apic_id=0x4
handle_smp: apic_id=0x1
handle_smp: apic_id=0x7
handle_smp: apic_id=0x2
Found 8 cpu(s) max supported 8 cpu(s)
Copying PIR from 0x3ffbfc60 to 0x000f5da0
Copying MPTABLE from 0x00006e10/3ffa30c0 to 0x000f5cc0
Copying SMBIOS entry point from 0x00006e10 to 0x000f5b10
WARNING - Timeout at wait_reg8:81!
WARNING - Timeout at wait_reg8:81!
Scan for VGA option rom
Running option rom at c000:0003

and it gets stuck there. I even dumped the Haswell VBIOS ROM and passed it as an option (romfile=vbios.rom), but I get the same result: stuck at "Running option rom at c000:0003"...
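
(For reference, the usual way to dump a GPU ROM is via sysfs, though whether this works for an IGD depends on the platform, since the VBIOS may live in the system firmware instead; a sketch, assuming the device at 00:02.0 and run as root:)

echo 1 > /sys/bus/pci/devices/0000:00:02.0/rom     # make the ROM readable
cat /sys/bus/pci/devices/0000:00:02.0/rom > vbios.rom
echo 0 > /sys/bus/pci/devices/0000:00:02.0/rom     # hide it again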

Last edited by Fran (2019-04-10 13:05:40)


#4 2019-04-11 11:34:51

Lone_Wolf
Member
From: Netherlands, Europe
Registered: 2005-10-04
Posts: 11,911

Re: qemu: is it possible to passthrough the Intel GPU (Haswell 4790K)?

-machine q35,accel=kvm \
-cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,kvm=off \

accel=kvm + kvm=off is mainly (only?) needed for Nvidia consumer cards [1].
Are you sure it's needed for Intel / AMD cards?




[1]
Nvidia's proprietary drivers only allow professional (Quadro and similar) cards to be used in a VM.
Consumer-grade cards like GeForce are blocked.
There's no technical reason for the block at all.

As far as I know, AMD and Intel don't impose such artificial limits.


Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.


(A works at time B)  && (time C > time B ) ≠  (A works at time C)


#5 2019-04-11 17:33:17

aldyrius
Member
Registered: 2015-12-31
Posts: 39

Re: qemu: is it possible to passthrough the Intel GPU (Haswell 4790K)?

I agree with Lone_Wolf, try kvm=on or just -enable-kvm.

Also, have you tried gfx_passthrough=on in your -machine line? This "Enables IGD GFX passthrough support for the chosen machine when available" and defaults to 0, which is bad news for legacy mode.


#6 2019-04-11 19:20:43

Fran
Member
Registered: 2017-08-23
Posts: 11

Re: qemu: is it possible to passthrough the Intel GPU (Haswell 4790K)?

Thanks, but I've tried all the combinations to no avail. I'm starting to think there is something wrong with my dumped vbios. I'll try to download some other vbios and see if it works...

This is what happens in dmesg when I start qemu:

[   30.902657] L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.
[   30.928823] DMAR: DRHD: handling fault status reg 3
[   30.928828] DMAR: [DMA Read] Request device [00:02.0] fault addr aba00000 [fault reason 06] PTE Read access is not set
[   30.929346] DMAR: DRHD: handling fault status reg 3
[   30.929351] DMAR: [DMA Read] Request device [00:02.0] fault addr ab200000 [fault reason 06] PTE Read access is not set
[   30.929358] DMAR: DRHD: handling fault status reg 3
[   30.929360] DMAR: [DMA Read] Request device [00:02.0] fault addr ab216000 [fault reason 12] non-zero reserved fields in PTE
[   30.929997] DMAR: DRHD: handling fault status reg 3
[   31.110291] resource sanity check: requesting [mem 0x000c0000-0x000dffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000d3fff window]
[   31.110295] caller pci_map_rom+0x6a/0x1b0 mapping multiple BARs
[   31.110301] resource sanity check: requesting [mem 0x000c0000-0x000dffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000d3fff window]
[   31.110302] caller pci_map_rom+0x6a/0x1b0 mapping multiple BARs
[   31.110308] resource sanity check: requesting [mem 0x000c0000-0x000dffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000d3fff window]
[   31.110310] caller pci_map_rom+0x6a/0x1b0 mapping multiple BARs
[   31.110316] resource sanity check: requesting [mem 0x000c0000-0x000dffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000d3fff window]
[   31.110318] caller pci_map_rom+0x6a/0x1b0 mapping multiple BARs
[   32.484920] vfio-pci 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[   32.484923] amdgpu 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none


#7 2019-10-28 00:54:59

jt730
Member
Registered: 2019-10-27
Posts: 1

Re: qemu: is it possible to passthrough the Intel GPU (Haswell 4790K)?

Yes, it is possible. There is documentation here https://raw.githubusercontent.com/qemu/ … assign.txt which says that legacy mode is the only mode available. That didn't work for me: I noticed that the Windows driver was being loaded without anything being displayed, and that it was poking around some strange memory locations. That memory location was the OpRegion that is mapped by your BIOS. To get a UEFI solution working I needed to change the kernel a little bit and also change qemu to map in the IGD region. I'm not saying it won't work some other way, but this still works for me today.

My qemu command line doesn't use the i440FX machine, but looks something like this:

qemu-system-x86_64 \
    -L /etc/bios \
    -drive if=pflash,format=raw,readonly,file=/etc/bios/OVMF.fd \
    -machine q35,accel=kvm,usb=off,vmport=off,kernel_irqchip=on \
    -cpu host,+kvm-pv-unhalt,+kvmclock,+kvm-asyncpf,+kvm-steal-time,+kvm-pv-eoi,+kvmclock-stable-bit,+invtsc,vmware-cpuid-freq=on,kvm=on,vendor=GenuineIntel \
    -smp cpus=4,sockets=1,cores=4,threads=1 \
    -device vfio-pci,host=00:00:02.0,x-vga=off,addr=0x2,rombar=1,x-igd-opregion=on

You also need to hack qemu and the vfio kernel module. The reason is that the IGD is not implemented like a plain PCI device, but as a PCI device that also has other memory mapped in from elsewhere. You will know this because you will get DMAR errors, and lots of them, if you don't map this memory.
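
(As a sketch, an easy way to watch for those faults while the VM is running:)

# follow the kernel log and show only the DMAR lines
sudo dmesg --follow | grep -i dmar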

I didn't bother automating the way this memory is found; it depends on your BIOS. If you look at your dmesg output you will find a line like:

PM: Registering ACPI NVS region [mem 0xc4b3b000-0xc5063fff] (5410816 bytes)

This region depends on your BIOS settings (the initial DVMT setting changes it on my BIOS), and its address will correlate with another address, which is the IGD OpRegion. To find that address you need to run, as root:

lspci -xxx -s 0:2 

This is what I get (yours will almost certainly be different, but it stays constant if you don't change your BIOS settings):

00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
00: 86 80 12 04 07 04 90 00 06 00 00 03 00 00 00 00
10: 04 00 80 f7 00 00 00 00 0c 00 00 d0 00 00 00 00
20: 01 f0 00 00 00 00 00 00 00 00 00 00 43 10 34 85
30: 00 00 00 00 90 00 00 00 00 00 00 00 0b 01 00 00
40: 09 00 0c 01 6d a0 00 62 d0 00 44 36 00 00 00 00
50: 21 02 00 00 39 00 00 00 00 00 00 00 01 00 20 c7
60: 00 00 02 01 00 00 00 00 00 00 00 00 00 00 00 00
70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
90: 05 d0 01 00 18 00 e0 fe 00 00 00 00 00 00 00 00
a0: 00 00 00 00 13 00 06 03 00 00 00 00 00 00 00 00
b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
d0: 01 a4 22 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 80 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 06 00 18 c0 d5 c4

If you take the last four octets and reverse their order (the value is little-endian), you get the IGD OpRegion address, i.e. 0xc4d5c018. This must fall inside the large ACPI NVS range found above, since we need to map that ACPI NVS region in the VFIO driver. To do this you need to patch the vfio kernel module to allow qemu to use this OpRegion, and also to make the iommu/DMAR range accessible to the qemu process that is using the IGD via vfio.
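(As a sketch, both addresses can also be read without counting hex dump columns: setpci reads the 32-bit OpRegion pointer from config register 0xFC, the same ASLS register the patches below read, and /proc/iomem lists the ACPI NVS ranges. Assuming the IGD at 00:02.0, as root:)

setpci -s 00:02.0 fc.l                        # OpRegion address, e.g. c4d5c018
grep 'ACPI Non-volatile Storage' /proc/iomem  # the NVS range it must fall inside
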
Patch for vfio_pci_igd.c

diff --git a/drivers/vfio/pci/vfio_pci_igd.c b/drivers/vfio/pci/vfio_pci_igd.c
index 53d97f459252..6de28a319391 100644
--- a/drivers/vfio/pci/vfio_pci_igd.c
+++ b/drivers/vfio/pci/vfio_pci_igd.c
@@ -58,7 +58,9 @@ static int vfio_pci_igd_opregion_init(struct vfio_pci_device *vdev)
 	u32 addr, size;
 	void *base;
 	int ret;
-
+	u32 offset;
+	u32 offsetMask = ((1u << PAGE_SHIFT) - 1u);
+	u32 pageMask = ~offsetMask;
 	ret = pci_read_config_dword(vdev->pdev, OPREGION_PCI_ADDR, &addr);
 	if (ret)
 		return ret;
@@ -66,16 +68,22 @@ static int vfio_pci_igd_opregion_init(struct vfio_pci_device *vdev)
 	if (!addr || !(~addr))
 		return -ENODEV;
 
-	base = memremap(addr, OPREGION_SIZE, MEMREMAP_WB);
+	base = memremap(addr & pageMask, 8192, MEMREMAP_WB);
+	printk("MASK is 0x%08x", pageMask);
+
+	if (!base)
+		base = memremap(addr & pageMask, 8192, MEMREMAP_WB);
 	if (!base)
 		return -ENOMEM;
 
-	if (memcmp(base, OPREGION_SIGNATURE, 16)) {
+	offset = addr & offsetMask;
+	printk("OFFSET is 0x%08x", offset);
+	if (memcmp(base + offset, OPREGION_SIGNATURE, 16)) {
 		memunmap(base);
 		return -EINVAL;
 	}
 
-	size = le32_to_cpu(*(__le32 *)(base + 16));
+	size = le32_to_cpu(*(__le32 *)(base + 16 + offset ));
 	if (!size) {
 		memunmap(base);
 		return -EINVAL;
@@ -93,7 +101,7 @@ static int vfio_pci_igd_opregion_init(struct vfio_pci_device *vdev)
 	ret = vfio_pci_register_dev_region(vdev,
 		PCI_VENDOR_ID_INTEL | VFIO_REGION_TYPE_PCI_VENDOR_TYPE,
 		VFIO_REGION_SUBTYPE_INTEL_IGD_OPREGION,
-		&vfio_pci_igd_regops, size, VFIO_REGION_INFO_FLAG_READ, base);
+		&vfio_pci_igd_regops, 8192,  VFIO_REGION_INFO_FLAG_MMAP | VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE, base);
 	if (ret) {
 		memunmap(base);
 		return ret;

Patch for vfio_pci.c, with device 8086:0412 hardcoded, which is the IGD present in your Haswell i7-4790K CPU.

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 703948c9fbe1..d0bd7b62dcf0 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -327,10 +327,10 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev)
 	if (!vfio_vga_disabled() && vfio_pci_is_vga(pdev))
 		vdev->has_vga = true;

-
-	if (vfio_pci_is_vga(pdev) &&
-	    pdev->vendor == PCI_VENDOR_ID_INTEL &&
+	if (
+	    pdev->vendor == PCI_VENDOR_ID_INTEL && (pdev->device == 0x0412 || pdev->device ==  0x0166) &&
 	    IS_ENABLED(CONFIG_VFIO_PCI_IGD)) {
+		printk("initializing vfio_pci_igd_init");
 		ret = vfio_pci_igd_init(vdev);
 		if (ret) {
 			pci_warn(pdev, "Failed to setup Intel IGD regions\n");
@@ -1197,7 +1197,17 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
 		return -EINVAL;
 	if ((vma->vm_flags & VM_SHARED) == 0)
 		return -EINVAL;
-	if (index >= VFIO_PCI_NUM_REGIONS) {
+	if (index == VFIO_PCI_NUM_REGIONS) {
+		u32 addr;
+        	ret = pci_read_config_dword(vdev->pdev, 0xfcu, &addr);
+		if(ret != 0) {
+			printk("Could not read OpReg address");
+			return ret;
+		}
+		return remap_pfn_range(vma,vma->vm_start, addr >> PAGE_SHIFT,
+			       8192, vma->vm_page_prot);
+	}
+	if (index > VFIO_PCI_NUM_REGIONS) {
 		int regnum = index - VFIO_PCI_NUM_REGIONS;
 		struct vfio_pci_region *region = vdev->region + regnum;

This next patch needs to be modified for YOUR BIOS and machine settings, using the address range described above.

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 054391f30fa8..ec0a25d6b2f1 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1474,6 +1474,10 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_domain;
 
+	ret = iommu_map(domain->domain, 0xc4000000u,0xc4000000u, 0xC000000,  IOMMU_READ | IOMMU_WRITE);
+	if (ret)
+		goto out_domain;
+
 	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
 
 	INIT_LIST_HEAD(&domain->group_list);

Notice that I have rounded down the start address and added a lot more to the end address. This was due to DMAR errors when using the GPU for accelerated graphics, so it's a bit of a trial and error thing.
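
(For example, with the NVS range from the dmesg line above, rounding the start down to a 64 MiB boundary and using a generous 192 MiB size reproduces the values hardcoded in the patch; a sketch of the arithmetic, assuming that alignment:)

start=0xc4b3b000                                   # start of the ACPI NVS region from dmesg
printf '0x%x\n' $(( start & ~0x3ffffff ))          # round down to 64 MiB -> 0xc4000000
printf '0x%x\n' $(( 0xc4000000 + 0xC000000 - 1 ))  # end of the 192 MiB mapping -> 0xcfffffff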

And these are the changes to qemu version 4.1 to allow the OpRegion to be found by windows.

--- orig/hw/vfio/pci-quirks.c.o	2017-10-02 21:23:41.000000000 +0000
+++ mods/hw/vfio/pci-quirks.c	2017-12-26 03:07:39.599038434 +0000
@@ -1060,11 +1060,17 @@
                                struct vfio_region_info *info, Error **errp)
 {
     int ret;
+    hwaddr offsetMask = (1u << 12u) - 1u;
+    hwaddr offsetOffset = 0x0u;
+    hwaddr opregPos = 0xee000u;

+    offsetOffset = pci_get_long(vdev->pdev.config + IGD_ASLS) & offsetMask;
+    fprintf(stderr,"address is %08x\n", offsetOffset);
     vdev->igd_opregion = g_malloc0(info->size);
     ret = pread(vdev->vbasedev.fd, vdev->igd_opregion,
-                info->size, info->offset);
-    if (ret != info->size) {
+                info->size, info->offset + offsetOffset);
+
+    if (ret <= 0) {
         error_setg(errp, "failed to read IGD OpRegion");
         g_free(vdev->igd_opregion);
         vdev->igd_opregion = NULL;
@@ -1087,9 +1093,25 @@
     fw_cfg_add_file(fw_cfg_find(), "etc/igd-opregion",
                     vdev->igd_opregion, info->size);

+    MemoryRegion *system_memory = get_system_memory();
+    MemoryRegion *opr = g_new(MemoryRegion, 1);
+    void *p = NULL;
+    char buf[16];
+    //memory_region_init_io(opr, OBJECT(vdev), &vfio_opregion_ops, vdev, "opr", 8192);
+    p = mmap(NULL, 8192, PROT_WRITE | PROT_READ, MAP_SHARED, vdev->vbasedev.fd, info->offset);
+    memcpy(buf, p + offsetOffset, 16);
+    buf[15] = '\0';
+    fprintf(stderr, "MMAP %d returned for offset %llx ret %p %s\n", vdev->vbasedev.fd, info->offset, p, buf);
+    memory_region_init_ram_ptr(opr, OBJECT(vdev),"opr", 8192, p);
+
+    memory_region_add_subregion_overlap(system_memory,
+                                        opregPos,
+                                        opr,
+                                        1);
+    pci_set_long(vdev->pdev.config + IGD_ASLS, opregPos + offsetOffset);
     trace_vfio_pci_igd_opregion_enabled(vdev->vbasedev.name);

-    pci_set_long(vdev->pdev.config + IGD_ASLS, 0);
+//    pci_set_long(vdev->pdev.config + IGD_ASLS, 0);
     pci_set_long(vdev->pdev.wmask + IGD_ASLS, ~0);
     pci_set_long(vdev->emulated_config_bits + IGD_ASLS, ~0);

So don't forget to use your own address in the iommu_map call in drivers/vfio/vfio_iommu_type1.c. And if you want to run macOS, or don't want to boot up to a blank screen, there is a specific UEFI BIOS for Haswells with HD 4600 IGDs available for download here: https://github.com/jam3st/edk2/releases … VMF.fd.zst. Use this instead of the OVMF that Arch supplies (i.e. point -drive if=pflash,format=raw,readonly,file=/etc/bios/OVMF.fd at this BIOS). It's just EDK2 built with the Intel IGD drivers from Coreboot, something I did for a bit of fun, seeing Ada for the first time.
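
(A sketch of wiring that firmware up, assuming the downloaded file is named OVMF.fd.zst and is placed under /etc/bios as in the command line above:)

zstd -d OVMF.fd.zst -o /etc/bios/OVMF.fd   # decompress the zstd-compressed image

# then boot the VM with, as above:
# -drive if=pflash,format=raw,readonly,file=/etc/bios/OVMF.fd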

Hope you find this of some use.


#8 2020-01-06 07:52:04

antiarch
Banned
Registered: 2020-01-06
Posts: 3

Re: qemu: is it possible to passthrough the Intel GPU (Haswell 4790K)?

Hello,

I tried doing this, but I have a problem. Everything works well except for these messages in the kernel logs:

Jan 06 07:35:42 arch kernel: dmar_fault: 56 callbacks suppressed
Jan 06 07:35:42 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:35:42 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:36:00 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:36:00 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:36:17 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:36:17 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:36:19 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:36:19 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:36:28 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:36:28 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:36:29 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:36:29 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:36:29 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:36:29 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:36:32 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:36:38 arch kernel: dmar_fault: 2 callbacks suppressed
Jan 06 07:36:38 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:36:38 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:37:23 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:37:23 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:37:40 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:37:40 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:37:51 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:37:51 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:37:51 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:37:51 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:37:51 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:37:51 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:37:51 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:38:41 arch kernel: dmar_fault: 8 callbacks suppressed
Jan 06 07:38:41 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:38:41 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:38:50 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:38:50 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:39:00 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:39:00 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:44:07 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:44:07 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:45:17 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:45:17 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set
Jan 06 07:45:25 arch kernel: DMAR: DRHD: handling fault status reg 2
Jan 06 07:45:25 arch kernel: DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 64577d8000 [fault reason 05] PTE Write access is not set

What does this mean? And how do I fix it?

