aw wrote:@carterghill
Remove the <graphics> and <video> sections.
Sorry, I should've clarified: I have tried without them as well. I only had them in just now so that I could actually use Windows. Without those tags, there is still no output on my second monitor.
Isn't it kvm hidden state='on', not 'off'? And you should get output from the passthrough GPU regardless of whether the driver is working.
aw wrote:@gg
I can't really figure out whether the patch is an improvement for you. There's really not much difference for VM shutdown vs VM reset as far as the types of resets that occur for the device. My testing on a Bonaire seemed to indicate that resets vs shutdown/restart had about the same success rate, ie. mostly working but occasionally not. The SMC firmware must be running for the reset to work, otherwise it reverts to previous behavior, which in my experience gives you a black screen rather than bsod.
Before using the patch on qemu 2.3 I was using 2.1 and was not once able to restart or shutdown|start a guest machine without suspending or rebooting the host. It seems from what I'm experiencing that the patch has finally provided a reset (under certain conditions). Is there something else in qemu, the kernel, and the vfio-pci module in particular, that might be making this possible if not the patch you provided for qemu?
Nope, if there's an improvement, it's definitely from this patch. I just couldn't figure out from the previous post whether things were actually better for you with the patch. Thanks
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
gg wrote:aw wrote:@gg
I can't really figure out whether the patch is an improvement for you. There's really not much difference for VM shutdown vs VM reset as far as the types of resets that occur for the device. My testing on a Bonaire seemed to indicate that resets vs shutdown/restart had about the same success rate, ie. mostly working but occasionally not. The SMC firmware must be running for the reset to work, otherwise it reverts to previous behavior, which in my experience gives you a black screen rather than bsod.
Before using the patch on qemu 2.3 I was using 2.1 and was not once able to restart or shutdown|start a guest machine without suspending or rebooting the host. It seems from what I'm experiencing that the patch has finally provided a reset (under certain conditions). Is there something else in qemu, the kernel, and the vfio-pci module in particular, that might be making this possible if not the patch you provided for qemu?
Nope, if there's an improvement, it's definitely from this patch. I just couldn't figure out from the previous post whether things were actually better for you with the patch. Thanks
Yes, was definitely a huge step. Thanks!
carterghill wrote:aw wrote:@carterghill
Remove the <graphics> and <video> sections.
Sorry, I should've clarified: I have tried without them as well. I only had them in just now so that I could actually use Windows. Without those tags, there is still no output on my second monitor.
Isn't it kvm hidden state='on', not 'off'? And you should get output from the passthrough GPU regardless of whether the driver is working.
Yes, hidden state='on'. Good catch
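For anyone mapping between the two syntaxes, libvirt's kvm-hidden feature corresponds to QEMU's kvm=off cpu flag — a reference sketch:

```shell
# libvirt domain XML:
#   <features>
#     <kvm><hidden state='on'/></kvm>
#   </features>
# is equivalent to passing QEMU:
-cpu host,kvm=off
```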
slis wrote:carterghill wrote:Sorry, I should've clarified: I have tried without them as well. I only had them in just now so that I could actually use Windows. Without those tags, there is still no output on my second monitor.
Isn't it kvm hidden state='on', not 'off'? And you should get output from the passthrough GPU regardless of whether the driver is working.
Yes, hidden state='on'. Good catch
Ah okay, I changed that and there's still no output on my second monitor. I sort of expected this, seeing as I purposely tried with earlier drivers initially and had no output then either. So we know that it's being passed through because I can see it in Device Manager when I view it through VNC, right? And since apparently I should get output regardless of drivers (I should've known that...), what else could be causing this issue? Oh, and thanks for your help so far, you guys
Last edited by carterghill (2015-05-01 05:38:55)
If you don't have output, either your i915 patch is not working, or you need to patch the host nvidia driver for VGA, or your GPU does not have a UEFI BIOS for OVMF.
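On that last point: whether the card's ROM includes a UEFI image can be checked with rom-parser — a sketch, assuming you've built the tool from awilliam's rom-parser repo and substitute your own GPU's PCI address:

```shell
# Dump the card's ROM via sysfs (adjust 01:00.0 to your GPU's address)
cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom            # enable reading the ROM
cat rom > /tmp/image.rom
echo 0 > rom            # disable again
# "type 3 (EFI)" in the output means the ROM carries a UEFI image;
# only "type 0 (x86 PC-AT)" means legacy-only, i.e. SeaBIOS territory
./rom-parser /tmp/image.rom
```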
Hello, I've finished my simple config.
IT WORKS...
Sound is terrible, crackling *^&$$. I am working on it.
Unigine Valley and Heaven benchmarks pass.
MSI Z77 + i7-3770 VT-d
NVIDIA EVGA GTX660 ( rom flash to support UEFI )
Host on 3.19.3-3-ARCH with QEMU 2.2.1 + OVMF + Intel GPU + 2 monitors (an old one and a newer 27'' model with multiple inputs)
Guest OS: Windows Server 2012 x64 with GTX660 + 1 monitor (shared with the host OS)
I have 2 sets of mouse and keyboard, one for each OS.
my simple bash script:
#!/bin/bash
export QEMU_ALSA_DAC_BUFFER="512"
export QEMU_ALSA_DAC_PERIOD_SIZE="1024"
export QEMU_AUDIO_DRV="alsa"
cp -f /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_var.fd
qemu-system-x86_64 \
-enable-kvm \
-cpu host,kvm=off,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff \
-smp 4,sockets=1,cores=4,threads=1 \
-m 4096 \
-soundhw hda \
-vga none \
-device vfio-pci,host=01:00.0 -device vfio-pci,host=01:00.1 \
-drive if=pflash,format=raw,readonly,file="/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd" \
-drive if=pflash,format=raw,file="/tmp/my_var.fd" \
-hda ./winbox01.img \
-cdrom /dev/sr0 \
-usb -usbdevice host:046d:c316 -usbdevice host:1bcf:0005
What do you think, will -M q35 be better for my VM?
Last edited by ZaoX64 (2015-05-01 19:26:05)
[MSI Z77A-G43 i7-3770 32GB GTX660]
-cpu host,kvm=off,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff \
Are you running an older Nvidia driver to be able to use the hyper-v extensions? Generally all those hv_foo options need to be removed to avoid Code 43 on latest Nvidia driver.
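Concretely, the difference looks like this — a sketch based on the driver versions discussed in this thread:

```shell
# Guest driver older than 338.77: kvm=off not yet required, hv_* enlightenments fine
-cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff
# Guest driver 344.11 or newer: drop all hv_* flags; kvm=off hides KVM from the driver
-cpu host,kvm=off
```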
What do you think, will -M q35 be better for my VM?
100% no
If you don't have output, either your i915 patch is not working, or you need to patch the host nvidia driver for VGA, or your GPU does not have a UEFI BIOS for OVMF.
i915 patch shouldn't be relevant since they're not using IGD, but I do agree that something is wrong with vga arbitration/routing since there isn't any output to the screen. I'd try a newer kernel.
slis wrote:If you don't have output, either your i915 patch is not working, or you need to patch the host nvidia driver for VGA, or your GPU does not have a UEFI BIOS for OVMF.
i915 patch shouldn't be relevant since they're not using IGD, but I do agree that something is wrong with vga arbitration/routing since there isn't any output to the screen. I'd try a newer kernel.
I am on the latest kernel (to my knowledge), 3.19. Maybe I should try the i915 patch again. How exactly do I run the patch? I've had a lot of trouble finding a decent guide that shows how to apply a patch, and most of these threads just say where to find the patch, not what to do with it... I know that the GTX 760 should work, I've seen other people succeed with it. How would I patch my host nvidia driver for VGA?
Last edited by carterghill (2015-05-01 15:56:16)
Are you running an older Nvidia driver to be able to use the hyper-v extensions? Generally all those hv_foo options need to be removed to avoid Code 43 on latest Nvidia driver.
Currently my NV driver version is 331.58 (I know, too old, but safe), but I'm planning to successively update the driver to the highest version possible.
I don't know the highest working version.
(Maybe someday all NV drivers 350+ will work.)
I would like to use this environment to develop my multiplatform game.
Last edited by ZaoX64 (2015-05-01 19:17:25)
aw wrote:Are you running an older Nvidia driver to be able to use the hyper-v extensions? Generally all those hv_foo options need to be removed to avoid Code 43 on latest Nvidia driver.
Currently my NV driver version is 331.58 (I know, too old, but safe), but I'm planning to successively update the driver to the highest version possible.
I don't know the highest working version.
(Maybe someday all NV drivers 350+ will work.) I would like to use this environment to develop my multiplatform game.
At 331.58, you don't even need kvm=off, that change didn't happen until 338.77. If you go up to 344.11 you'll need to remove all those hv_foo enablers. IME, the graphics performance gain from newer drivers trumps the benefit of the hyper-v extensions. If graphics performance isn't your #1 priority then maybe there are cases for using older drivers.
ZaoX64 wrote:aw wrote:Are you running an older Nvidia driver to be able to use the hyper-v extensions? Generally all those hv_foo options need to be removed to avoid Code 43 on latest Nvidia driver.
Currently my NV driver version is 331.58 (I know, too old, but safe), but I'm planning to successively update the driver to the highest version possible.
I don't know the highest working version.
(Maybe someday all NV drivers 350+ will work.) I would like to use this environment to develop my multiplatform game.
At 331.58, you don't even need kvm=off, that change didn't happen until 338.77. If you go up to 344.11 you'll need to remove all those hv_foo enablers. IME, the graphics performance gain from newer drivers trumps the benefit of the hyper-v extensions. If graphics performance isn't your #1 priority then maybe there are cases for using older drivers.
I removed Hyper-V and use 347.88, with good results.
I will try 350...
Last edited by ZaoX64 (2015-05-01 19:50:41)
Hey just wanted to post a small success story,
Intel 5960X w/ Asus Strix 970 GTX (Radeon HD as primary). The latest nvidia drivers work when using kvm=off. I cannot get GeForce Experience to recognize that I have an nvidia card, but the driver works; has anyone gotten past this? Edit: it just started working, weird, maybe some Windows update...
Win 8.1 using OVMF (ovmf-git/svn) and qemu 2.3.50 (built from git 3 nights ago)
GTA 5 running quite smoothly @ 4K High, ~47fps
However, there seems to be a 60 fps limitation: even if I set the Heaven and Valley benchmarks to a ridiculously low resolution with all settings on low or off, they cannot get past 60 fps?!
Thanks for the good job guys, now I can have a gaming rig, docker host, and dev setup on the same computer
Host:
Kernel: 4.0.1-1-ARCH (no patches!)
MB: Gigabyte X99 Gaming 5
CPU: Intel 5960X
Guest GFX: ASUS STRIX Geforce 970 GTX
Host GFX: Radeon HD4350 (set as primary in BIOS)
Last edited by toxster (2015-05-04 13:21:06)
Same as above, I want to share the progress I've made so far.
Hardware: msi z97 gaming-5, i5-4690k, evga gtx 750 ti
What has succeeded:
stock kernel, qemu-system-x86_64, ovmf, windows 8.1 64bit, nvidia driver 338
linux-vfio, qemu-system-x86_64, seabios, windows 8.1 64bit, nvidia driver 338
linux-vfio, qemu-system-i386, seabios, windows 8.1 32bit, nvidia driver 338
stock kernel, qemu-system-x86_64, ovmf, windows 10 64bit 9926 build, nvidia driver 350
linux-vfio, qemu-system-x86_64, seabios, windows 10 64bit 9926 build, nvidia driver 350
linux-vfio, qemu-system-i386, seabios, windows 10 32bit 9926 build, nvidia driver 350
What has failed:
Windows 10 32bit/64bit build 10041 fails to install in all conditions, with a BSOD "system_thread_exception_not_handled". The install went through without vfio passthrough under both seabios and ovmf (still not able to boot if switching to vfio after installation is done).
Windows 10 9926 build with 338 driver has some glitches during my planetside 2 gaming. It got fixed after upgrading to 344 driver, although I'm not sure if it's caused by the 338 driver or something else. Haven't got a chance to verify.
Nvidia 352 beta under Windows 10 build 9926 gets the Code 43 error with kvm=off and no hv_* (the arms race has begun?).
It seems ovmf under qemu-system-i386 doesn't work with vfio enabled, that's why 32bit with ovmf is missing in the list.
Thanks to all the people working on qemu and vfio project, especially to the OP for working on this guide. Really blows my mind how much potential power I can get from my archlinux PC.
Sorry about my English, as a non-native speaker.
Update: from the post on page 209, Windows 10 beyond build 9926 needs the CPU set to either core2duo or kvm64 with +lahf_lm,+cx16. Nvidia drivers 352 and 358 require a 10041+ build to run.
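In -cpu terms, that workaround would look something like this — a sketch per the report above; I'm assuming kvm=off is still wanted for Nvidia guests:

```shell
# Windows 10 builds beyond 9926, per the page-209 report:
-cpu core2duo,+lahf_lm,+cx16,kvm=off
# or
-cpu kvm64,+lahf_lm,+cx16,kvm=off
```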
Last edited by shawnt (2015-05-31 07:23:11)
With your help, I can run Battlefield 4 and Battlefield Hardline in a KVM Win 7 Pro x64 guest on Debian 8.0 Jessie, without patches or special modifications, with good FPS: 50-60 in FHD and 30-40 in UHD.
here is a short video https://youtu.be/N9QwDmYBRgg
Info:
Host:
Debian 8.0 Jessie 01.05.2015
Guest:
KVM Win7 Pro amd64 with Nvidia whql Driver 350.12 64bit
Hardware:
CPU i7 4820K socket 2011
Mainboard Gigabyte GA-X79-UP4 Rev. 1.0
Graphic card:
Host: Gigabyte GV-N75TOC-2GI
Guest: Asus GTX780-DC2OC-3GD5
Patches:
none, Debian standard
Modifications:
/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pci_stub.ids=(ids from GTX780)"
/etc/initramfs-tools/modules
add "pci_stub" at the end
after change, start this as root:
update-initramfs -u
update-grub2
and reboot
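After rebooting, it's worth verifying that pci_stub actually claimed the guest card before going further (adjust the address to your card's):

```shell
# Confirm the stub (not nvidia/nouveau) owns the guest GPU after boot
lspci -nnk -s 04:00.0
# Look for a line like:
#   Kernel driver in use: pci-stub
```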
After boot:
First bind graphic card to vfio:
lspci give this information for my GTX780
04:00.0 VGA compatible controller: NVIDIA Corporation GK110 GeForce GTX 780 (rev a1)
04:00.1 Audio device: NVIDIA Corporation GK110 HDMI Audio (rev a1)
vfio-bind from https://bbs.archlinux.org/viewtopic.php?id=162768
as root start "vfio-bind 0000:04:00.0 0000:04:00.1"
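For reference, the linked vfio-bind script is essentially the following — a from-memory sketch, so treat the version in the linked thread as authoritative:

```shell
#!/bin/bash
# Bind the given PCI devices (e.g. 0000:04:00.0 0000:04:00.1) to vfio-pci
modprobe vfio-pci
for dev in "$@"; do
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    # unbind from whatever driver currently owns the device
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    # tell vfio-pci to claim this vendor:device pair
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
done
```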
Now you can start the virtual machine:
qemu-system-x86_64 -enable-kvm -m 8192 -cpu host,kvm=off \
-smp 4,sockets=1,cores=4,threads=1 \
-machine q35,accel=kvm \
-soundhw hda \
-device vfio-pci,host=0x:00.0,rombar=1,x-vga=on \
-device vfio-pci,host=0x:00.1 \
-bios /usr/share/seabios/bios.bin \
-device virtio-net-pci,netdev=user.0,mac=52:54:00:03:02:01 \
-netdev user,id=user.0 \
-drive file=win7-x64_system.qcow2,if=none,id=drive-virtio-disk0,format=qcow2 \
-device virtio-blk-pci,scsi=off,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-drive file=win7-games.qcow2,if=none,id=drive-virtio-disk1,format=qcow2 \
-device virtio-blk-pci,scsi=off,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1 \
-rtc base=localtime,driftfix=slew \
-usbdevice host:x.x \
-usbdevice host:y.y \
-vga none
Thanks for your Help.
Last edited by scoobydog (2015-05-03 12:24:51)
aw wrote:gg wrote:Before using the patch on qemu 2.3 I was using 2.1 and was not once able to restart or shutdown|start a guest machine without suspending or rebooting the host. It seems from what I'm experiencing that the patch has finally provided a reset (under certain conditions). Is there something else in qemu, the kernel, and the vfio-pci module in particular, that might be making this possible if not the patch you provided for qemu?
Nope, if there's an improvement, it's definitely from this patch. I just couldn't figure out from the previous post whether things were actually better for you with the patch. Thanks
Yes, was definitely a huge step. Thanks!
Today I'm using a fresh install of Windows 8.1 (and kernel 4.1-rc2, qemu 2.3, and OVMF) and the R9 290 is resetting on each reboot whether or not a Quadro 6000 is also passed-through.
My longstanding issue with resetting Hawaii appears to be largely resolved!
Today I'm using a fresh install of Windows 8.1 (and kernel 4.1-rc2, qemu 2.3, and OVMF) and the R9 290 is resetting on each reboot whether or not a Quadro 6000 is also passed-through.
My longstanding issue with resetting Hawaii appears to be largely resolved!
QEMU 2.3 or qemu.git? The reset patch didn't go in until after 2.3.
gg wrote:Today I'm using a fresh install of Windows 8.1 (and kernel 4.1-rc2, qemu 2.3, and OVMF) and the R9 290 is resetting on each reboot whether or not a Quadro 6000 is also passed-through.
My longstanding issue with resetting Hawaii appears to be largely resolved!
QEMU 2.3 or qemu.git? The reset patch didn't go in until after 2.3.
That was with qemu.git branch/tag v2.3.0 and your patch. I've now built again from qemu.git head, which includes your patch, and it also works the same (well).
I am going now to move the guest-assigned card to another PCI-e x16 slot and attempt again, not expecting any success.
It seems this has fixed my issue completely. I am now successfully passing through an AMD R9 290 while leaving an NVIDIA 9800 GT as the host device (nouveau driver).
For future reference, this is the exact hardware I am using:
Athlon X4 860K - AMD AD860KXBJABOX
Radeon R9 290 - Sapphire "tri-x oc" 100362-2SR
A88X Bolton D4 motherboard - ASRock "fatal1ty" 90-MXGT60-A0UAY1Z
I had previously had my card installed in the PCI-e x16 3.0 slot; my fix was to move it to the x4 2.0 slot (physically x16, labeled as "x4 mode"). The labels in the motherboard user guide for these slots are PCIE2 and PCIE4 respectively. It is unfortunate that I could not get the R9 into the 3.0 slot since the other card can't do 3.0, and I don't do much graphics-wise on the host. Still, I am very pleased with the results.
This is my current working QEMU script, using the older windows install which had been previously running on a bare metal AM2+ phenom system
#!/bin/sh
export QEMU_AUDIO_DRV=sdl
export QEMU_AUDIO_DAC_FIXED_SETTINGS=1
export QEMU_AUDIO_DAC_FIXED_FREQ=44100
export QEMU_AUDIO_DAC_FIXED_FMT=S16
export QEMU_SDL_SAMPLES=1024
qemu-system-x86_64 \
-machine pc-i440fx-2.1,accel=kvm,usb=off \
-cpu host \
-drive file=OVMF.fd,format=raw,if=pflash \
-drive file=/dev/mapper/mass-win--uefi,format=raw,if=virtio \
-m 4096 \
-smp 3 \
-device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5 \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 \
-device usb-host,id=usb-keyboard-mouse,bus=usb.0,vendorid=0x13ba,productid=0x0018 \
-netdev user,id=net \
-device rtl8139,netdev=net,bus=pci.0,addr=0x6 \
-device vfio-pci,host=02:00.0,bus=pci.0,addr=0x8 \
-device vfio-pci,host=02:00.1,bus=pci.0,addr=0x9 \
-vga none \
-serial none \
-soundhw hda \
-nographic \
$*
Thanks for this thread, and thanks Duelist for providing a lot of help and information.
...
I had previously had my card installed in the PCI-e x16 3.0 slot; my fix was to move it to the x4 2.0 slot (physically x16, labeled as "x4 mode"). The labels in the motherboard user guide for these slots are PCIE2 and PCIE4 respectively. It is unfortunate that I could not get the R9 into the 3.0 slot since the other card can't do 3.0, and I don't do much graphics-wise on the host. Still, I am very pleased with the results.This is my current working QEMU script, using the older windows install which had been previously running on a bare metal AM2+ phenom system
...
Thanks for this thread, and thanks Duelist for providing a lot of help and information.
AFAIR, all pci-e slots on Kaveri C/APUs are 3.0, and x1 3.0 has the same bandwidth as x16 2.0 or something. Trinity and Richland, on the other hand, can't do PCI-E 3.0 at all.
Try running lspci -vv as root to see the actual pci-e bandwidth speed. I think it won't bottleneck much.
LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 5GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Not Supported, TimeoutDis-, LTR-, OBFF Not Supported
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance De-emphasis: -6dB
This is HD7750 as 01:00.0 @ first 16x 2.0 on 750K Trinity in F2A55(Hudson D2?..).
LnkCap: Port #1, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Not Supported, TimeoutDis-, LTR-, OBFF Not Supported
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance De-emphasis: -6dB
This is the second HD7750 as 02:00.0 @ the second 16x 2.0 slot. It's routed as 8x, although I can't recall if there was any warning about it being only an x4 port. Since all the power lines are fed through the first 12 or so pins, it'd be weird to route the port but not use its full bandwidth.
LnkCap: Port #0, Speed 2.5GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <256ns, L1 <4us
ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp-
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR-, OBFF Not Supported
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance De-emphasis: -6dB
And this is GT610 as 04:00.0 @ first 1x 2.0 through the powered(12V line with molex aka IDE power connector) 1x->16x riser.
I'm just curious what it looks like in your system. And if you consider putting the R9 290 on a riser card, could you please give me a photo of the result? It's quite a lengthy card, so I suppose it'd be sitting in some weird place and/or position inside the case. I've stuffed my GT610 in such a way that its metal bracket is a centimeter away from the CPU fan, which gives me some lulz.
Also, how did you move the baremetal setup to the VM? Was there BSOD 7B? Migrating to ESP+EFI loader?
OH! And don't forget to fix your entry!
Last edited by Duelist (2015-05-05 13:34:53)
Try running lspci -vv as root to see the actual pci-e bandwidth speed. I think it won't bottleneck much.
Capabilities: [78] Express (v1) Endpoint, MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <512ns, L1 <4us
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+
MaxPayload 128 bytes, MaxReadReq 512 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
LnkCap: Port #0, Speed 2.5GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <512ns, L1 <1us
ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
LnkCtl: ASPM Disabled; RCB 128 bytes Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
This is the 9800 GT host card as 01:00.0 @ x16 3.0
Capabilities: [58] Express (v2) Legacy Endpoint, MSI 00
DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd- ExtTag+ PhantFunc- AuxPwr- NoSnoop+
MaxPayload 256 bytes, MaxReadReq 512 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
LnkCap: Port #1, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Not Supported, TimeoutDis-, LTR-, OBFF Not Supported
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance De-emphasis: -6dB
LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete-, EqualizationPhase1-
EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
This is the R9 290 as 02:00.0 @ x16 2.0 "x4 mode". Looking at this, it seems like it's actually running at x1? That's even worse.
The board's user manual itself says this:
PCIE2 (PCIe 3.0 x16 slot) is used for PCI Express x16 lane width graphics cards.
PCIE4 (PCIe 2.0 x16 slot) is used for PCI Express x4 lane width graphics cards.
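To put rough numbers on those LnkSta lines: usable bandwidth is roughly link rate × encoding efficiency × lane count. A small sketch (my assumptions: 8b/10b encoding below 8 GT/s, 128b/130b at 8 GT/s; `lnksta_bw` is just a name I made up):

```shell
# Approximate usable PCIe bandwidth from a LnkSta "Speed X GT/s, Width xN" pair.
lnksta_bw() {
    # $1 = speed in GT/s (2.5, 5 or 8), $2 = lane count
    awk -v s="$1" -v w="$2" 'BEGIN {
        eff = (s + 0 >= 8) ? 128 / 130 : 8 / 10   # encoding efficiency
        printf "%.2f GB/s\n", s * eff * w / 8     # GT/s per lane -> GB/s total
    }'
}
lnksta_bw 5 16    # x16 trained at 5 GT/s (PCIe 2.0) -> 8.00 GB/s
lnksta_bw 2.5 1   # x1 trained down to 2.5 GT/s     -> 0.25 GB/s
```

So a link stuck at 2.5 GT/s x1 has roughly 0.25 GB/s to work with, which would be a very real bottleneck for a big GPU.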
And if you will consider putting R9 290 in a riser card, please, could you give me a photo of result? It's quite a lenghty card, so i suppose it'd be sitting is some weird place and/or position inside the case.
The case is pretty big; there are 10 slots in the back, so I just use the spare slots and stick a piece of cardboard under the riser to insulate it. I had a string of cable ties through the plastic fan cooling assembly thing to hold the card straight with more than just the backplate, since the riser sits loose in the case. I'll take a picture of it when I get around to trying that setup again.
Also, how did you move the baremetal setup to the VM? Was there BSOD 7B? Migrating to ESP+EFI loader?
The original setup was a ~80 G partition on a 1 TB MBR drive, with some kind of 100M system partition alongside it. Ignoring the 100M partition, I created a new LVM volume just large enough for the first partition and an MBR partition table. I used fdisk on that and then copied the original partition right in. Apparently you can't just dd NTFS filesystems around wherever you want; there's something in the NTFS header which specifies the first drive sector/block/whatever for the filesystem, so I ended up modifying some part of the FS header with a hex editor to get it to work. Since I only copied that partition and not the 100M other thing, I used some boot repair tools on the win7 install disc which copied in the boot manager garbage and got it working after a few runs. I fixed the 0x7B BSOD by making a registry edit using regedit from the install disc. I did something similar to this http://jsmcomputers.biz/wp/?p=249, and the same trick to get virtio working, but changing the virtio key in that location.
Switching to UEFI, I made a new 100G LV, formatted it in fdisk as gpt and added the original partition to the start and a 100M "EFI System Partition" to the end, which is apparently fat32 with a special UUID. Running the same boot fixing thing on the win7 disc a couple times got that working, I don't remember running any command line programs but I might have. This time I used gparted instead of pv (or dd) which probably did make some changes to the partition table NTFS header, so I didn't need to use a hex editor this time.
I have to say the windows boot stuff is really rather ugly. I suppose it's better than it was for 2k and XP, but it's still pretty bad.
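For anyone attempting the same hex edit: I believe the field in question is the 4-byte little-endian "hidden sectors" value at offset 0x1C of the NTFS boot sector, which records the partition's starting LBA — that's my assumption about what's being patched above, so verify against NTFS layout documentation first. A harmless demo on a dummy file:

```shell
# Create a dummy 512-byte "boot sector" to patch (never do this blindly to a live disk)
dd if=/dev/zero of=vbr.bin bs=512 count=1 2>/dev/null
# Write a new starting LBA of 2048 (0x00000800, little-endian) at offset 0x1C (= 28)
printf '\000\010\000\000' | dd of=vbr.bin bs=1 seek=28 conv=notrunc 2>/dev/null
# Inspect the 4 patched bytes
od -An -tx1 -j28 -N4 vbr.bin   # 00 08 00 00
```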
OH! And don't forget to fix your entry!
Interesting; I don't think I submitted anything, so it was probably added by someone else? The main page says the sheet is locked because of vandalism or something, so I don't think I can edit it myself.
Last edited by ughman (2015-05-05 14:53:43)
Hi,
I am new to this topic. So I have some questions.
What is "better"? The qemu default chipset or q35? Which one is better for Windows 8?
Which bios should I use? Seabios or OVMF? Which one is better for Windows 8? (I want to import my Windows 8 Key from the ACPI table)
Which combination would have the best performance?
In a lot of posts of qemu commands I saw something like this:
qemu-system-x86_64 --enable-kvm -M q35 -vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=07:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=07:00.1,bus=root.1,addr=00.1
What is the function of the bus and the addr parameter?
I read somewhere that if I use the OVMF bios I wouldn't need the following parameters. Is that right?
multifunction=on,x-vga=on
Thank you
Edit:
One more question: Why should I add a rom parameter to the graphics card?
Last edited by wulfspider (2015-05-05 15:17:14)
There is no "better"; performance is the same, and 440fx is recommended.
Seabios is for the old legacy VGA mode (maybe needs a kernel patch), OVMF is UEFI (no patches needed).
Bus and addr tell qemu where to put the device on the virtual bus/port, or something like that.
Yes, x-vga can be avoided using OVMF.
Sometimes rebooting the VM doesn't work well, and using a rom helps. It did with my old 560ti.
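As an example of that rom parameter (the path here is hypothetical — you'd dump your own card's ROM, e.g. via sysfs or rom-parser):

```shell
# Supply a ROM image so the guest re-executes a clean copy on each VM reboot
-device vfio-pci,host=01:00.0,romfile=/path/to/card.rom
```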
5000! (sorry, had to do it)