evilsephiroth wrote:so no one passing through intel gpu?
You need KvmGT (which hasn't been publicly released yet) or XenGT.
So they intentionally excluded iGPUs from passthrough...
Now it is clear...
I think I will rely on another PCIe GPU...
Offline
evilsephiroth wrote:so no one passing through intel gpu?
You need KvmGT (which hasn't been publicly released yet) or XenGT.
Heh. Heh-heh. Cutting-edge technology.
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
nbhs wrote:That's what the acs override patch is for
Yes that is what I understand as well... I have installed your version of the kernel (3.17 + acs override patch + i915 vga arbiter fixes), but it still gives me the same error.
Don't know if anybody else can help me...
When running the test, I get the following error:
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error, group 1 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: failed to get group 1
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized
In the same iommu_group are the following:
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller [8086:0c01] (rev 06)
00:01.1 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x8 Controller [8086:0c05] (rev 06)
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 770] [10de:1184] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:13c0] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbb] (rev a1)
Of course I can't just bind my main card that is running on the host (the GTX 770) to vfio: it just crashes my system and I have to force restart by pressing the power button.
As I said, I'm running the kernel provided in the first post, so the acs patch is enabled.
Offline
PureTryOut wrote:nbhs wrote:That's what the acs override patch is for
Yes that is what I understand as well... I have installed your version of the kernel (3.17 + acs override patch + i915 vga arbiter fixes), but it still gives me the same error.
Don't know if anybody else can help me...
When running the test, I get the following error:
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error, group 1 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: failed to get group 1
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized
In the same iommu_group are the following:
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller [8086:0c01] (rev 06)
00:01.1 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x8 Controller [8086:0c05] (rev 06)
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 770] [10de:1184] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:13c0] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbb] (rev a1)
Of course I can't just bind my main card that is running on the host (the GTX 770) to vfio: it just crashes my system and I have to force restart by pressing the power button.
As I said, I'm running the kernel provided in the first post, so the acs patch is enabled.
You need to enable it with a kernel parameter; it's mentioned on almost every page of this thread.
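To see what has to move together, something like this lists every IOMMU group and its devices (a sketch; it assumes the standard sysfs layout, and the function name is ours):

```shell
#!/bin/sh
# List each IOMMU group and every device inside it. Everything grouped
# with the GPU must be bound to vfio-pci (PCI bridges excepted).
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for g in "$base"/*; do
        [ -d "$g/devices" ] || continue
        printf 'Group %s:\n' "${g##*/}"
        ls "$g/devices"
    done
    return 0
}
list_iommu_groups
```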
Offline
I'm sorry I am a newb and I feel like a newb...
Searching for a few pages the only thing I could find was this article explaining iommu groups, and the kernel parameter "acs_override=downstream", which did nothing.
I'm sorry if I am irritating you with these dumb questions, but I really want to fix this.
Could you give me the right parameter I have to use?
Offline
I'm sorry I am a newb and I feel like a newb...
Searching for a few pages the only thing I could find was this article explaining iommu groups, and the kernel parameter "acs_override=downstream", which did nothing.
I'm sorry if I am irritating you with these dumb questions, but I really want to fix this.
Could you give me the right parameter I have to use?
I believe the correct parameter is "pcie_acs_override=downstream"
pcie_acs_override =
[PCIE] Override missing PCIe ACS support for:
downstream
All downstream ports - full ACS capabilities
multifunction
All multifunction devices - multifunction ACS subset
id:nnnn:nnnn
Specific device - full ACS capabilities
Specified as vid:did (vendor/device ID) in hex
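For reference, the parameter goes on the kernel command line; with GRUB that means editing /etc/default/grub along these lines and regenerating grub.cfg (a sketch; the iommu option shown assumes an Intel board, and your existing options should be kept):

```shell
# /etc/default/grub (sketch) -- keep your existing options and append:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream"
# then regenerate the config:
# grub-mkconfig -o /boot/grub/grub.cfg
```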
Last edited by nbhs (2014-10-15 22:26:08)
Offline
To everyone who is interested in running OVMF:
You need a separate OVMF.bin file for each VM you are running.
Took me 2 hours to realize that every VM modifies "/usr/share/ovmf/x64/ovmf_x64.bin" in a way that makes it incompatible with other VMs.
So I made 3 copies of the ovmf_x64.bin file and named one after each VM I had:
ovmf_main.bin
ovmf_gaming.bin
ovmf_winsrv.bin
and pointed each VM to its corresponding OVMF file, and they all worked.
If you're using a split OVMF package, you need a separate copy of ovmf_vars_x64.bin for each VM, while the ovmf_code_x64.bin can be used for multiple VMs safely.
If you're indeed using the split package, use:
-drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/ovmf_code_x64.bin \
-drive if=pflash,format=raw,file=/usr/share/ovmf/x64/A_COPY_OF_ovmf_vars_x64.bin_FOR_THIS_SPECIFIC_VM.bin \
and you should be good to go.
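A tiny helper in that spirit (a sketch; the function name and destination paths are made up, the template path is from the split Arch package):

```shell
#!/bin/sh
# Give a VM its own private copy of the OVMF vars image. The CODE image
# stays shared read-only; an existing copy (the VM's NVRAM) is kept.
make_vm_vars() {
    template="$1"   # e.g. /usr/share/ovmf/x64/ovmf_vars_x64.bin
    vm_copy="$2"    # e.g. /var/lib/ovmf/gaming_vars.bin
    [ -e "$vm_copy" ] || cp "$template" "$vm_copy"
}
```

Refusing to clobber an existing copy matters: the vars file is where the VM stores its UEFI boot entries, so overwriting it resets the VM's firmware settings.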
Last edited by Denso (2014-10-16 01:21:53)
Offline
I believe the correct parameter is "pcie_acs_override=downstream"
pcie_acs_override =
[PCIE] Override missing PCIe ACS support for:
downstream
All downstream ports - full ACS capabilities
multifunction
All multifunction devices - multifunction ACS subset
id:nnnn:nnnn
Specific device - full ACS capabilities
Specified as vid:did (vendor/device ID) in hex
Thanks! That worked perfectly, and I can now successfully run the test.
Now on to the next newb question/problem (sorry):
At this moment I have a dualboot configuration:
lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 ext4 a0e2f143-41fc-4a4f-905d-6a12ac5b0237 /
├─sda2 ntfs System Reserved 8E4242894242764D
│ └─md0
│ ├─md0p1 ntfs System Reserved 8E4242894242764D
│ └─md0p2 ntfs 7AC4495FC4491EAF
└─sda3 ntfs 7AC4495FC4491EAF
└─md0
├─md0p1 ntfs System Reserved 8E4242894242764D
└─md0p2 ntfs 7AC4495FC4491EAF
sdb
├─sdb1 ext4 Home 34e2c056-db9e-45a1-879d-90bd5840929f /home
├─sdb2 ext4 ExtraLinux 5b069a30-2bf2-4bc5-991b-1f3a54c03a69 /games2
└─sdb3 swap Swap 137477b2-bc03-4029-b579-a9120ed058ee [SWAP]
sdc
├─sdc1 ntfs Games Windows 4DAA5B4E7792ADD1
└─sdc3 ntfs Data Windows 472386C540D36DE7
loop0
└─md0
├─md0p1 ntfs System Reserved 8E4242894242764D
└─md0p2 ntfs 7AC4495FC4491EAF
sda1 is my Linux root, sda2 is the Windows Recovery and sda3 is the Windows partition.
I would like to virtualize the physical installation, so I can benefit from the SSD read and write speeds.
As you can see I successfully set up NRAID, which I can mount and browse files on.
Now I want to add this RAID and the whole sdc disk to the VM, but I'm having some problems.
I use the following command:
sudo qemu-system-x86_64 -enable-kvm -M q35 -m 1024 -cpu host \
-smp 6,sockets=1,cores=6,threads=1 \
-bios /usr/share/qemu/bios.bin -vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \
-cdrom "/home/bart/Documents/ISO's/32 & 64-bit Windows 7 Ultimate - Service Packet 1 - Multilanguage.iso" -boot order=d \
-drive file=/dev/md0,id=diskboot,format=raw -device ide-hd,bus=ahci.0,drive=diskboot \
-drive file=/dev/sdc,id=diskdata,format=raw -device ide-hd,bus=ahci.1,drive=diskdata
Now I guess I just don't really understand what "bus" and "ahci" mean, but it's giving me the following error:
qemu-system-x86_64: -device ide-hd,bus=ahci.0,drive=diskboot: Bus 'ahci.0' not found
If I try to run without "bus=ahci.0" (and with only one disk, otherwise it says the bus supports only one unit), it just gives me the QEMU console, in which I have no clue what to do.
I'm really sorry for bothering you with all these newbie (and probably simple) questions, but I'm almost there and it would be a shame if I dropped it now.
Offline
nbhs wrote:I believe the correct parameter is "pcie_acs_override=downstream"
pcie_acs_override =
[PCIE] Override missing PCIe ACS support for:
downstream
All downstream ports - full ACS capabilities
multifunction
All multifunction devices - multifunction ACS subset
id:nnnn:nnnn
Specific device - full ACS capabilities
Specified as vid:did (vendor/device ID) in hex
Thanks! That worked perfectly, and I can now successfully run the test.
Now on to the next newb question/problem (sorry):
At this moment I have a dualboot configuration:
lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 ext4 a0e2f143-41fc-4a4f-905d-6a12ac5b0237 /
├─sda2 ntfs System Reserved 8E4242894242764D
│ └─md0
│ ├─md0p1 ntfs System Reserved 8E4242894242764D
│ └─md0p2 ntfs 7AC4495FC4491EAF
└─sda3 ntfs 7AC4495FC4491EAF
└─md0
├─md0p1 ntfs System Reserved 8E4242894242764D
└─md0p2 ntfs 7AC4495FC4491EAF
sdb
├─sdb1 ext4 Home 34e2c056-db9e-45a1-879d-90bd5840929f /home
├─sdb2 ext4 ExtraLinux 5b069a30-2bf2-4bc5-991b-1f3a54c03a69 /games2
└─sdb3 swap Swap 137477b2-bc03-4029-b579-a9120ed058ee [SWAP]
sdc
├─sdc1 ntfs Games Windows 4DAA5B4E7792ADD1
└─sdc3 ntfs Data Windows 472386C540D36DE7
loop0
└─md0
├─md0p1 ntfs System Reserved 8E4242894242764D
└─md0p2 ntfs 7AC4495FC4491EAF
sda1 is my Linux root, sda2 is the Windows Recovery and sda3 is the Windows partition.
I would like to virtualize the physical installation, so I can benefit from the SSD read and write speeds. As you can see I successfully set up NRAID, which I can mount and browse files on.
Now I want to add this RAID and the whole sdc disk to the VM, but I'm having some problems. I use the following command:
sudo qemu-system-x86_64 -enable-kvm -M q35 -m 1024 -cpu host \
-smp 6,sockets=1,cores=6,threads=1 \
-bios /usr/share/qemu/bios.bin -vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \
-cdrom "/home/bart/Documents/ISO's/32 & 64-bit Windows 7 Ultimate - Service Packet 1 - Multilanguage.iso" -boot order=d \
-drive file=/dev/md0,id=diskboot,format=raw -device ide-hd,bus=ahci.0,drive=diskboot \
-drive file=/dev/sdc,id=diskdata,format=raw -device ide-hd,bus=ahci.1,drive=diskdata
Now I guess I just don't really understand what "bus" and "ahci" mean, but it's giving me the following error:
qemu-system-x86_64: -device ide-hd,bus=ahci.0,drive=diskboot: Bus 'ahci.0' not found
If I try to run without "bus=ahci.0" (and with only one disk, otherwise it says the bus supports only one unit), it just gives me the QEMU console, in which I have no clue what to do.
I'm really sorry for bothering you with all these newbie (and probably simple) questions, but I'm almost there and it would be a shame if I dropped it now.
I don't know where you got ahci.0 from; the correct bus using the default q35 AHCI controller is ide.*
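With the stock q35 machine, the disk part of that command would then look something like this (a sketch; if=none is added so each drive is attached only once, through its -device):

```shell
# q35's built-in SATA controller is exposed as "ide", ports ide.0..ide.5
-drive file=/dev/md0,id=diskboot,format=raw,if=none \
-device ide-hd,bus=ide.0,drive=diskboot \
-drive file=/dev/sdc,id=diskdata,format=raw,if=none \
-device ide-hd,bus=ide.1,drive=diskdata
```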
Offline
Can anyone who has tried using OVMF confirm that the initial OVMF bootup messages and EFI shell are shown on a monitor connected to a passed-through video card? I can't seem to get even this basic thing to work with OVMF.
Command line I've tried:
qemu-system-x86_64 -enable-kvm -machine pc-i440fx-2.1,accel=kvm,usb=off -device vfio-pci,host=01:00.0,bus=pci.0,addr=7,multifunction=on,romfile=/home/user/NV280MS1-EFI.rom -device vfio-pci,host=01:00.1,bus=pci.0,addr=7.1 -drive if=pflash,format=raw,readonly,file=/tmp/usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd -vga none -nographic
All messages, including the EFI shell, are shown on QEMU's curses console. Nothing is shown on the passed-through video card. Does it mean the EFI ROM that performs video card initialization doesn't run properly? If I type the `pci' command in the EFI shell, I see that the correct VID/PID pair is listed for the video card, thus it is clearly detected.
To answer my own question since no one bothered: yes, OVMF messages are shown both on QEMU's curses console and a physical monitor once the video card is correctly initialized. Also `drivers' EFI shell command should list proper driver being loaded, e.g.:
T D
Y C I
P F A
DRV VERSION E G G #D #C DRIVER NAME IMAGE PATH
=== ======== = = = === === =================================== ==========
89 0001000B B N N 2 5 NVIDIA GPU UEFI Driver (80.07.35.00 PciRoot(0x0)/Pci(0x3,0x0)/Offset(0xFC00,0x1F9FF)
Will try to install EFI-compatible OS next.
The ROM file I use (NV280MS1-EFI.rom) does have an EFI part:
Valid ROM signature found @400h, PCIR offset 190h
PCIR: type 0, vendor: 10de, device: 0fc6, class: 030000
PCIR: revision 0, vendor revision: 1
Valid ROM signature found @10000h, PCIR offset 1ch
PCIR: type 3, vendor: 10de, device: 0fc6, class: 030000
PCIR: revision 3, vendor revision: 0
EFI: Signature Valid
Last image
The video card in question is an NVIDIA GTX 650 (MSI N650 PE 1GD5/OC). Initially it came with a hybrid (EFI/Legacy) BIOS flashed; a later update was Legacy-only. I'm trying to use the original ROM I fortunately backed up before reflashing.
It took me a while to discover that video BIOS images read and written by nvflash.exe tool can't be directly passed to QEMU's vfio-pci device `romfile=' option. The part starting with `NVGI' signature up to the first `0x55 0xaa' signature has to be skipped (that was 0x400 bytes in my case). Strange that QEMU didn't complain about invalid ROM.
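For anyone else with such a dump, the trimming can be done with dd; a sketch (the helper name and file names are made up, and the 1024-byte offset matches my dump — it will differ for other cards):

```shell
#!/bin/sh
# Drop the leading NVGI wrapper so the ROM image starts at the 0x55 0xaa
# signature QEMU expects. $3 is the offset of that signature, in bytes.
strip_rom_header() {
    dd if="$1" of="$2" bs=1 skip="$3" 2>/dev/null
}
# Example: strip_rom_header nvflash_dump.rom gtx650.rom 1024
```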
Offline
I don't know where you got ahci.0 from; the correct bus using the default q35 AHCI controller is ide.*
Ooh well I got it from your post in the "Using a physical disk or partition" section
Well it seems to work? Gives no errors but only shows me a console.
I've tried "boot order=d" (got it from the Arch Wiki) and "boot menu=on", none made any difference.
Offline
nbhs wrote:I don't know where you got ahci.0 from; the correct bus using the default q35 AHCI controller is ide.*
Ooh well I got it from your post in the "Using a physical disk or partition" section
Well it seems to work? Gives no errors but only shows me a console.
I've tried "boot order=d" (got it from the Arch Wiki) and "boot menu=on", none made any difference.
First: I guess you should have Windows' loader (NTLDR before Vista, BCD or whatever it was after) somewhere on disk. Usually it goes into the MBR. You could also chainload it via GRUB. There may be major differences if using GPT.
Second: I think you could benefit from using virtio, either blk or scsi. But...
Third: You'll need to migrate your Windows installation correctly. Windows won't boot, giving BSOD 7B (if I remember correctly), when you change the disk controller to something that wasn't present at system installation. If you would like to use QEMU's AHCI, Windows should have drivers for it; you'll just need to activate them. If you wish to use virtio (it does improve my disk performance somewhat, though my image resides in a file on my HDD), you must attach the virtio driver ISO via the most compatible interface and install the drivers manually.
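A sketch of that last approach (the image paths here are made up; the scratch disk just forces Windows to load the virtio driver before you switch the real disk over):

```shell
# Step 1: boot the existing install from the emulated controller as-is,
# but also attach a small scratch disk on virtio plus the driver ISO:
-drive file=/tmp/scratch.img,id=vdisk,format=raw,if=none \
-device virtio-blk-pci,drive=vdisk \
-drive file=/root/virtio-win.iso,id=vcd,format=raw,if=none \
-device ide-cd,bus=ide.1,drive=vcd
# Step 2: install the virtio storage driver in the guest, shut down,
# then switch the real disk's -device from ide-hd to virtio-blk-pci.
```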
Offline
OK, so let's restart from the basics...
I'm getting some problems setting up a VM (using OVMF) with Windows 8.1 and a GeForce GTX 650 passed through.
Software setup:
- Debian 7 (headless pc, managed through ssh)
- kernel 3.16.6 + ACS patch + i915 arbiter patch + VGA arbiter patch
- boot parameters "i915.enable_hd_vgaarb=1 intel_iommu=on pci-stub.ids=10de:0fc6,10de:0e1b"
- qemu 2.1.2
- seabios 1.7.5
- OVMF latest-git
Hardware setup:
- MSI ZH77A-G43
- Intel i5-3470
- IGD as primary VGA (for host)
- nVidia GeForce GTX 650 as secondary VGA (for guest)
Steps I followed:
- enabled VT-d in motherboard
- installed Windows 8.1 through VNC using "-vga std" (keeping GTX 650 as secondary graphics card in guest)
- installed tightVNC inside Windows 8.1
- installed nVidia graphics driver (version 340.52)
- booted Windows 8.1 with "-vga none"
- cannot connect to VNC server inside Windows 8.1 (without any error from qemu)
Script used to start VM:
# Unbind the GPU and its audio function, then bind both to vfio-pci
DEVICES="0000:01:00.0 0000:01:00.1"
for dev in $DEVICES ; do
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
fi
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
done
# Start QEMU with UEFI (OVMF)
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host,hv_time,kvm=off \
-smp 2,sockets=1,cores=2,threads=1 \
-vga none -rtc base=localtime \
-device vfio-pci,host=01:00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=01:00.1 \
-drive file=/dev/sdc,id=disk,format=raw,if=none -device ide-hd,bus=ide.0,drive=disk \
-drive file=/root/Windows8.1.iso,id=isocd,format=raw,if=none -device ide-cd,bus=ide.1,drive=isocd \
-drive if=pflash,format=raw,readonly,file=/usr/share/qemu/OVMF.fd \
-drive if=pflash,format=raw,file=/root/OVMF_vars.fd \
-k it -boot order=dc,menu=on \
-netdev tap,ifname=qemu0,id=qemu_tap -device e1000,netdev=qemu_tap,mac=00:16:3E:12:34:56 \
-usb -usbdevice tablet
Output after launching qemu:
vfio-pci 0000:01:00.0: enabling device (0000 -> 0003)
vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x900
kvm: zapping shadow pages for mmio generation wraparound
Some dmesg messages about vga:
# dmesg | grep -i vga
Command line: BOOT_IMAGE=/boot/vmlinuz-3.16.6 root=UUID=a4837900-ca63-43d4-bc2c-cf3f37cff7c6 ro i915.enable_hd_vgaarb=1 intel_iommu=on pci-stub.ids=10de:0fc6,10de:0e1b quiet
Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.16.6 root=UUID=a4837900-ca63-43d4-bc2c-cf3f37cff7c6 ro i915.enable_hd_vgaarb=1 intel_iommu=on pci-stub.ids=10de:0fc6,10de:0e1b quiet
Console: colour VGA+ 80x25
vgaarb: setting as boot device: PCI:0000:00:02.0
vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
vgaarb: device added: PCI:0000:01:00.0,decodes=io+mem,owns=none,locks=none
vgaarb: loaded
vgaarb: bridge control possible 0000:01:00.0
vgaarb: no bridge control possible 0000:00:02.0
[drm] Replacing VGA console driver
vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io:owns=io+mem
Output from rom-parser:
$ ./rom-parser gtx650.rom
Valid ROM signature found @0h, PCIR offset 190h
PCIR: type 0, vendor: 10de, device: 0fc6, class: 030000
PCIR: revision 0, vendor revision: 1
Valid ROM signature found @f000h, PCIR offset 1ch
PCIR: type 3, vendor: 10de, device: 0fc6, class: 030000
PCIR: revision 3, vendor revision: 0
EFI: Signature Valid
Last image
I'm quite sure the VM is booting because I can arping its ip, but I can't connect to its VNC server (as I said above, I installed tightVNC into Windows).
I will also try connecting a physical monitor to the GTX 650 ASAP (hoping to see some graphical output).
Any help/thought is appreciated, thanks.
Offline
OK, so let's restart from the basics...
I'm getting some problems setting up a VM (using OVMF) with Windows 8.1 and a GeForce GTX 650 passed through. Script used to start VM:
# Start QEMU with UEFI (OVMF)
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host,hv_time,kvm=off \
-smp 2,sockets=1,cores=2,threads=1 \
-vga none -rtc base=localtime \
-device vfio-pci,host=01:00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=01:00.1 \
-drive file=/dev/sdc,id=disk,format=raw,if=none -device ide-hd,bus=ide.0,drive=disk \
-drive file=/root/Windows8.1.iso,id=isocd,format=raw,if=none -device ide-cd,bus=ide.1,drive=isocd \
-drive if=pflash,format=raw,readonly,file=/usr/share/qemu/OVMF.fd \
-drive if=pflash,format=raw,file=/root/OVMF_vars.fd \
-k it -boot order=dc,menu=on \
-netdev tap,ifname=qemu0,id=qemu_tap -device e1000,netdev=qemu_tap,mac=00:16:3E:12:34:56 \
-usb -usbdevice tablet
Your launch command seems OK, but I don't think you need "x-vga=on" with OVMF. I'm running this VM without it. I don't know if it's related to your issue, but try removing the x-vga=on option and see if it works.
EDIT:
Also, try configuring tightVNC to accept connections without passwords; it's easier.
Last edited by Denso (2014-10-17 15:33:17)
Offline
I get this message when I try to start qemu:
qemu-system-x86_64 -enable-kvm -M q35 -m 1024 -cpu host -smp 3 -bios /usr/share/qemu/bios.bin -vga none -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on -device vfio-pci,host=02:00.1,bus=root.1,addr=00.1
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error no iommu_group for device
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized
I use the standard kernel (3.16.4-1-ARCH) with neither the acs override patch nor the i915 vga arbiter fixes patch. Can this be the issue? I do not want to compile the kernel myself unless really needed.
zgrep CONFIG_VFIO_PCI_VGA /proc/config.gz
CONFIG_VFIO_PCI_VGA=y
Offline
I get this message when I try to start qemu:
qemu-system-x86_64 -enable-kvm -M q35 -m 1024 -cpu host -smp 3 -bios /usr/share/qemu/bios.bin -vga none -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on -device vfio-pci,host=02:00.1,bus=root.1,addr=00.1
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error no iommu_group for device
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized
I use the standard kernel (3.16.4-1-ARCH) with neither the acs override patch nor the i915 vga arbiter fixes patch. Can this be the issue? I do not want to compile the kernel myself unless really needed.
zgrep CONFIG_VFIO_PCI_VGA /proc/config.gz
CONFIG_VFIO_PCI_VGA=y
Check whether IOMMU is supported by your CPU/motherboard, and that it is properly supported/enabled in the BIOS. Also make sure you added intel_iommu=on OR amd_iommu=on to your boot parameters.
Offline
- qemu 2.1.2
Could you please try out the latest git master instead of 2.1.2 (My qemu gives "2.1.50" on "-version")? May not help, but I have never tried this method with any of the official releases.
- OVMF latest-git
Their git repo is just a mirror of the actual development subversion trunk, so it may lag behind.
If the git repo does lag behind, you might want to try the latest svn trunk.
- kernel 3.16.6 + ACS patch + i915 arbiter patch + VGA arbiter patch
- boot parameters "i915.enable_hd_vgaarb=1 intel_iommu=on pci-stub.ids=10de:0fc6,10de:0e1b"
This may not be the solution, but it should reduce complexity:
First, you should not need these patches when going the UEFI way with OVMF, so please try to use a vanilla 3.16.6 kernel. I've used both unpatched 3.16.3 and 3.17.1 with OVMF successfully (unpatched meaning without these three patches; they did have Gentoo-specific patches).
Second, the boot parameter "i915.enable_hd_vgaarb=1" should also not be necessary when going the UEFI way with OVMF; I only have "intel_iommu=on" and "pci-stub.ids=[GPU and GPU AUDIO]".
-drive file=/dev/sdc,id=disk,format=raw,if=none -device ide-hd,bus=ide.0,drive=disk \
-drive file=/root/Windows8.1.iso,id=isocd,format=raw,if=none -device ide-cd,bus=ide.1,drive=isocd \
Try to use SCSI instead of IDE for these two drives
-boot order=dc,menu=on \
Try to remove this. OVMF is a UEFI firmware, so IIRC it should store the UEFI boot options in your VM's individual copy of the OVMF_vars.fd template file.
Output from rom-parser:
$ ./rom-parser gtx650.rom
Valid ROM signature found @0h, PCIR offset 190h
PCIR: type 0, vendor: 10de, device: 0fc6, class: 030000
PCIR: revision 0, vendor revision: 1
Valid ROM signature found @f000h, PCIR offset 1ch
PCIR: type 3, vendor: 10de, device: 0fc6, class: 030000
PCIR: revision 3, vendor revision: 0
EFI: Signature Valid
Last image
Please forgive the potentially stupid question, but how did you get this rom file? After I flashed my GTX 660 TI with the Asus tool to a hybrid firmware (which worked and I can use the card perfectly fine) the
# echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
# cat /sys/bus/pci/devices/0000:01:00.0/rom
just returns the following for me:
cat: rom: Input/output error
So I assumed that this way doesn't work with slightly older NVIDIA cards. Or does this still work for you?
I'm quite sure the VM is booting because I can arping its ip,
What does the serial console show? Your VM might hang at the EFI shell (with the emulated network card already initialised and connected to the TAP device). To see the serial console, open the popup window that appears when you execute the qemu command and navigate to the menu item "View -> serial0". If this shows a bunch of slightly garbled yellow text on black, your VM might hang at the EFI shell.
Also, if none of the above helps you, I would like to know what the monitor you wrote you intend to connect to the graphics card shows from the moment you boot the machine.
Offline
Your launching code seems ok , but I don't think you need "x-vga=on" with OVMF .
You're both right and wrong about that if I understand this correctly (third to last paragraph).
You need the quirks this option enables if you have an NVIDIA card like him (and me) regardless of whether you're using OVMF or not, but if your QEMU has this commit, then you don't need to specify the option explicitly, because the commit ensures that the quirks are automatically enabled for NVIDIA cards. I haven't checked, but it might be that this commit is not in qemu 2.1.2 (in which case he needs to specify it).
Last edited by Calrama (2014-10-17 16:02:47)
Offline
Denso wrote:Your launching code seems ok , but I don't think you need "x-vga=on" with OVMF .
You're both right and wrong about that if I understand this correctly (third to last paragraph).
You need the quirks this option enables if you have an NVIDIA card like him (and me) regardless of whether you're using OVMF or not, but if your QEMU has this commit, then you don't need to specify the option explicitly, because the commit ensures that the quirks are automatically enabled for NVIDIA cards. I haven't checked, but it might be that this commit is not in qemu 2.1.2 (in which case he needs to specify it).
That's right, this is why anyone trying to do nvidia+ovmf should be using qemu.git until QEMU 2.2. I haven't used, and don't plan to try, x-vga=on with ovmf.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
According to that commit, the needed quirks are activated regardless of x-vga=on, so anyone using a recently compiled qemu-git should be fine without it when using OVMF.
Offline
According to that commit, the needed quirks are activated regardless of x-vga=on, so anyone using a recently compiled qemu-git should be fine without it when using OVMF.
Yes, but he's using qemu 2.1.2, not qemu-git, which is why I wrote
I haven't checked, but it might be that this commit is not in qemu 2.1.2 (in which case he needs to specify it).
at the end (and also one of the reasons I wrote in my reply to him, that he should try qemu-git instead).
Last edited by Calrama (2014-10-17 16:27:17)
Offline
Check whether IOMMU is supported by your CPU/motherboard, and that it is properly supported/enabled in the BIOS. Also make sure you added intel_iommu=on OR amd_iommu=on to your boot parameters.
Thank you for your quick reply.
I have a Phenom II X4 and an ASUS M4A89GTD PRO.
This mainboard has an AMD 890GX chipset.
If I believe this list: https://en.wikipedia.org/wiki/AMD_800_c … ries#890GX
My chipset does not support IOMMU.
Sad. Thanks for your help
Edit: Is there no other way except buying a new mainboard?
Last edited by Rommy (2014-10-17 16:55:48)
Offline
Your launch command seems OK, but I don't think you need "x-vga=on" with OVMF. I'm running this VM without it. I don't know if it's related to your issue, but try removing the x-vga=on option and see if it works.
EDIT :
Also, try configuring tightVNC to accept connections without passwords; it's easier.
Tried both, but nothing changed.
Could you please try out the latest git master instead of 2.1.2 (My qemu gives "2.1.50" on "-version")?
I would like to stay on "-stable", so next test will be with 2.1.3 or 2.2.0.
Their git repo is just a mirror of the actual development subversion trunk, so it may lag behind.
If the git repo does lag behind, you might want to try the latest svn trunk.
I think it's not a real problem to be a couple of commits behind HEAD.
This may not be the solution, but it should reduce complexity:
First, you should not need these patches when going the UEFI way with OVMF, so please try to use a vanilla 3.16.6 kernel. I've used both unpatched 3.16.3 and 3.17.1 with OVMF successfully (unpatched meaning without these three patches; they did have Gentoo-specific patches).
Second, the boot parameter "i915.enable_hd_vgaarb=1" should also not be necessary when going the UEFI way with OVMF
If you say it's not a problem to have them enabled, I will leave them untouched (just in case I need to switch back to Windows 7).
Maybe I will try to remove "i915.enable_hd_vgaarb=1" option.
Try to use SCSI instead of IDE
I tried installing with SCSI, but couldn't get the drives recognized, so I switched back to IDE.
Is it a big problem, or simply performance related?
how did you get this rom file?
I got the file with the method you described:
# echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
# cat /sys/bus/pci/devices/0000:01:00.0/rom > /tmp/file.rom
# echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom
The important thing is to NOT set the card as the primary graphics output.
What does the serial console show?
It prompts a "(qemu)" terminal.
Also, if none of the above helps you, I would like to know what the monitor your wrote you intent to connect to the graphics card shows from the moment you boot the machine.
I'll keep you informed.
Thanks for all the help.
Offline
Calrama wrote:Could you please try out the latest git master instead of 2.1.2 (My qemu gives "2.1.50" on "-version")?
I would like to stay on "-stable", so next test will be with 2.1.3 or 2.2.0.
Please try out qemu-git. The only people I know of who got OVMF to work used qemu-git. There should be no need to leave Debian stable; simply do the following as your normal user:
mkdir -p ~/qemu-git/root
cd ~/qemu-git/
git clone git://git.qemu-project.org/qemu.git src
cd src
./configure --prefix="$HOME/qemu-git/root" --enable-gtk --enable-sdl --enable-kvm --audio-drv-list=alsa,pa
make
make install
Now ~/qemu-git/root/bin/qemu-system-x86_64 -version should return "2.1.50" and you can simply use ~/qemu-git/root/bin/qemu-system-x86_64 instead of qemu-system-x86_64 for starting your VM.
Calrama wrote:Their git repo is just a mirror of the actual development subversion trunk, so it may lag behind.
If the git repo does lag behind, you might want to try the latest svn trunk.
I think it's not a real problem to be a couple of commits behind head.
If it's only a few commits over the last few days, sure. If it lags behind enough that the latest git commit is older than last Saturday, I cannot make any guess about it, because that's the earliest svn trunk I tried. You might want to check.
Calrama wrote:This may not be the solution, but it should reduce complexity:
First, you should not need these patches when going the UEFI way with OVMF, so please try to use a vanilla 3.16.6 kernel. I've used both unpatched 3.16.3 and 3.17.1 with OVMF successfully (unpatched meaning without these three patches; they did have Gentoo-specific patches). Second, the boot parameter "i915.enable_hd_vgaarb=1" should also not be necessary when going the UEFI way with OVMF.
If you say it's not a problem to have them enabled, I will leave them untouched (just in case I need to switch back to Windows 7).
You could always have two kernels, one patched, one unpatched. And just because it didn't change anything for me doesn't necessarily mean the same for you. There are enough differences in our setups that it might be worth a try.
Calrama wrote:Try to use SCSI instead of IDE
I tried installing with SCSI, but can't get drives recognized, so switched back to IDE.
Is it a big problem, or simply performance related?
Mostly performance, but I had some trouble with OVMF not recognising non-scsi devices, so I have everything on scsi now.
Calrama wrote:how did you get this rom file?
I got the file with the method you described:
# echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
# cat /sys/bus/pci/devices/0000:01:00.0/rom > /tmp/file.rom
# echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom
Important thing is to NOT set the card as primary graphic output.
Okay, thanks.
Calrama wrote:What does the serial console show?
It prompts a "(qemu)" terminal.
Sorry if this sounds condescending, but are you sure you're on the serial console and not the compat monitor? That's what the compat monitor should be showing, not the serial console.
You should have three views to choose from (if your qemu is compiled with GTK and SDL at least):
View -> compatmonitor0
View -> serial0 <---- This is the one
View -> parallel0
Last edited by Calrama (2014-10-17 18:09:44)
Offline
Please try out qemu-git. The only people I know of who got OVMF to work used qemu-git. There should be no need to leave your debian stable, simply do the following as your normal user:
mkdir -p ~/qemu-git/root
cd ~/qemu-git/
git clone git://git.qemu-project.org/qemu.git src
cd src
./configure --prefix="$HOME/qemu-git/root" --enable-gtk --enable-sdl --enable-kvm --audio-drv-list=alsa,pa
make
make install
Now ~/qemu-git/root/bin/qemu-system-x86_64 -version should return "2.1.50" and you can simply use ~/qemu-git/root/bin/qemu-system-x86_64 instead of qemu-system-x86_64 for starting your VM.
Ok, I will give it a try.
If it's only a few commits over the last few days, sure. If it lags behind enough that the latest git commit is older than last Saturday, I cannot make any guess about it, because that's the earliest svn trunk I tried. You might want to check.
Checked, and git seems not too far behind svn (the latest commit is "ArmVirtualizationPkg: FdtPL011SerialPortLib: support UEFI_APPLICATION", which should be even with svn-r16219).
You could always have two kernels, one patched, one unpatched. And just because it didn't change anything for me doesn't necessarily mean the same for you. There are enough differences in our setups that it might be worth a try.
Ok.
Mostly perfomance, but I had some trouble with OVMF not recognising non-scsi devices, so I have everything on scsi now.
That's strange: I had the opposite problem.
If I want to use SCSI, will I also need "-device virtio-scsi-pci,id=scsi" option?
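For what it's worth, the usual virtio-scsi pattern (a sketch with placeholder ids and paths, not taken from this thread) is: yes, you add the controller once, then attach each disk as a scsi-hd device bound to a backing -drive with if=none:

```shell
# Hypothetical disk path; "scsi0" and "hd0" are placeholder ids.
# The guest needs virtio-scsi drivers installed to see the disk.
qemu-system-x86_64 \
    -enable-kvm -m 4G \
    -device virtio-scsi-pci,id=scsi0 \
    -drive file=/path/to/disk.qcow2,format=qcow2,if=none,id=hd0 \
    -device scsi-hd,drive=hd0,bus=scsi0.0
```

Additional disks just repeat the -drive/-device scsi-hd pair on the same controller.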
Sorry if this sounds condescending, but are you sure you're on the serial console and not the compat monitor? That's what the compat monitor should be showing, not the serial console.
You should have three views to choose from (if your qemu is compiled with GTK and SDL at least):
View -> compatmonitor0
View -> serial0 <---- This is the one
View -> parallel0
My qemu is compiled without SDL or GTK.
The reason is that qemu resides on a headless PC (Xorg not installed at all) and all the work is done through ssh.
EDIT: just found this, so I can say I'm accessing the qemu monitor console (hope it can be useful).
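On a headless build without GTK/SDL, one way (my suggestion, not from the thread) to reach both the monitor and the guest's serial console from an ssh session is to redirect them to separate local telnet sockets instead of the qemu window's View menu:

```shell
# Hypothetical port numbers; connect from the ssh session with
# e.g. "telnet 127.0.0.1 4444". -monitor gives the (qemu) prompt,
# -serial exposes what "View -> serial0" would show in the GTK UI.
qemu-system-x86_64 \
    ... \
    -monitor telnet:127.0.0.1:4444,server,nowait \
    -serial telnet:127.0.0.1:4445,server,nowait
```

That way the "(qemu)" prompt and the serial output can't be confused with each other.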
Again, thanks for all this info.
Offline