Thanks for the great guide here. I've been trying to set this up lately and have had some success.
Hardware:
Asus Z97-A motherboard
Intel Core i7-4790K cpu
AMD Radeon HD 7850
Host system:
Arch Linux x86_64
default kernel and qemu, unpatched
Windows 7 test, with default BIOS:
Command line: QEMU_AUDIO_DRV=alsa QEMU_ALSA_DAC_SIZE_IN_USEC=100 QEMU_ALSA_DAC_DEV=dac qemu-system-x86_64 -m 4G -enable-kvm -machine type=pc,accel=kvm -cpu host -smp cores=4,threads=2 -drive file=w7.img,media=disk,cache=none,format=raw,discard=unmap,detect-zeroes=unmap,if=virtio -vga none -nographic -net nic,model=virtio -net user -soundhw hda -device vfio-pci,host=01:00.0,x-vga=on -usb -device usb-host,hostbus=3,hostport=13 -device usb-host,hostbus=3,hostport=14 -monitor telnet::4444,server,nowait
Results: Mostly flawless. The GPU is fully recognized by its drivers and games run well. 3DMark crashes, but I'm not really concerned about that. The only issue is an occasional crackle/skip in the audio. It doesn't happen frequently enough to really bother me, but I'd still like to get rid of it if possible.
Windows 8 test, with OVMF (using https://aur.archlinux.org/packages/ovmf-bin/ ):
Command line: QEMU_AUDIO_DRV=alsa QEMU_ALSA_DAC_SIZE_IN_USEC=100 QEMU_ALSA_DAC_DEV=dac qemu-system-x86_64 -m 4G -enable-kvm -machine type=pc,accel=kvm -cpu host -smp cores=4,threads=2 -drive file=w8.img,media=disk,cache=none,format=raw,discard=unmap,detect-zeroes=unmap,if=virtio -vga none -nographic -net nic,model=virtio -net user -soundhw hda -device vfio-pci,host=01:00.0,x-vga=on -usb -device usb-host,hostbus=3,hostport=13 -device usb-host,hostbus=3,hostport=14 -monitor telnet::4444,server,nowait
Results: No video from the AMD GPU when using GPU passthrough with vfio, just a black screen. No errors from QEMU or in dmesg. The VM boots fine without GPU passthrough, with -vga std. Leaving out x-vga=on didn't change anything.
Audio:
I get an occasional crackle/skip in the audio. It isn't a huge bother, but what I'd really like to do is avoid going through ALSA and instead pass my USB DAC to the VM using USB passthrough. That doesn't seem to work, though; QEMU gives me this:
qemu-system-x86_64: libusb_claim_interface: -5 [NOT_FOUND]
qemu-system-x86_64: libusb_set_configuration: -6 [BUSY]
qemu-system-x86_64: libusb_set_configuration: -6 [BUSY]
qemu-system-x86_64: libusb_set_configuration: -6 [BUSY]
And in dmesg I get lots of this:
[ +0.201511] usb 3-4: reset full-speed USB device number 2 using xhci_hcd
[ +0.211482] xhci_hcd 0000:00:14.0: xHCI xhci_drop_endpoint called with disabled ep ffff8804396d53c0
[ +0.169216] usb 3-4: usbfs: interface 0 claimed by usbfs while 'qemu-system-x86' sets config #1
[ +0.158978] usb 3-4: reset full-speed USB device number 2 using xhci_hcd
[ +0.211597] xhci_hcd 0000:00:14.0: xHCI xhci_drop_endpoint called with disabled ep ffff8804396d53c0
[ +0.194835] usb 3-4: reset full-speed USB device number 2 using xhci_hcd
[ +0.211542] xhci_hcd 0000:00:14.0: xHCI xhci_drop_endpoint called with disabled ep ffff8804396d53c0
Any ideas on this? The DAC shouldn't be in use by anything after I've removed the snd-usb-audio module. Inside the VM, I get "This device cannot start (Code 10)."
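For reference, a minimal sketch of an alternative way to attach the DAC, addressing it by vendor/product ID instead of bus/port; the IDs below are placeholders, so substitute whatever lsusb reports for the DAC, and make sure no host driver still claims its interfaces:
lsusb | grep -i audio                    # note the ID vvvv:pppp of the DAC
modprobe -r snd-usb-audio                # keep the host audio driver off the device
qemu-system-x86_64 ... \
    -usb -device usb-host,vendorid=0x0d8c,productid=0x0102 ...   # placeholder IDs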
Offline
After avoiding Arch Linux out of fear of having to do too much fine-tuning to make things work, and testing with Debian, Ubuntu and Proxmox instead, I was pleasantly surprised. In those other distributions I couldn't get passthrough to work, or ran into issues when it did.
@nbhs, thank you for the guide. With the kernel you provide and the settings you suggest, everything works more or less "out of the box".
I am not savvy with custom kernels though; how likely is it that updating the host will break the setup?
I have tested two Windows 7 VMs (one for each graphics card). Both work fine, and benchmark results are great, not far off from the ones I got with VMware ESXi 5.5.
my hardware:
i7-4790k
Asrock z97 Pro4
Radeon R9 280, msi twin frozr
Radeon 4850, asus
The only issue: as soon as HD Tune (in Windows) is started, the VM locks up and the host reports a segfault. Can anyone point me toward the best place to report this behaviour/bug, or even how to fix it?
Last edited by gmk (2015-03-08 11:21:25)
Offline
gmk wrote:After avoiding Arch Linux out of fear of having to do too much fine-tuning to make things work, and testing with Debian, Ubuntu and Proxmox instead, I was pleasantly surprised. In those other distributions I couldn't get passthrough to work, or ran into issues when it did.
@nbhs, thank you for the guide. With the kernel you provide and the settings you suggest, everything works more or less "out of the box".
I am not savvy with custom kernels though; how likely is it that updating the host will break the setup?
I have tested two Windows 7 VMs (one for each graphics card). Both work fine, and benchmark results are great, not far off from the ones I got with VMware ESXi 5.5.
my hardware:
i7-4790k
Asrock z97 Pro4
Radeon R9 280, msi twin frozr
Radeon 4850, asus
You can have as many kernels installed on Arch as you want; upgrading is not an issue, it won't break anything.
gmk wrote:The only issue: as soon as HD Tune (in Windows) is started, the VM locks up and the host reports a segfault. Can anyone point me toward the best place to report this behaviour/bug, or even how to fix it?
Post your logs; I assume you've done this already:
echo "options kvm ignore_msrs=1" >> /etc/modprobe.d/kvm.conf
Offline
hurenkam wrote:Passing through a secondary device (from the host's perspective) is indeed easier.
However, after some fiddling, I managed to pass through the primary card on my system, but it does involve some extra work.
Apart from the usual driver blacklisting (radeon in my case), I had to do the following as well:
- Switch GRUB to text-only mode by adding "GRUB_GFXPAYLOAD_LINUX=text" to /etc/default/grub
- Switch the Linux boot to text-only mode by adding "nomodeset nofb" to GRUB_CMDLINE_LINUX_DEFAULT=... in /etc/default/grub
After adding these parameters, I was able to pass through the host's primary graphics to the QEMU guest in the same way as I pass through my secondary graphics.
Note that you will completely lose your host console when you do this, so make sure you can ssh in, and perhaps have a serial terminal ready as well.
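For reference, a minimal sketch of what the two quoted edits might look like in /etc/default/grub; the existing contents of GRUB_CMDLINE_LINUX_DEFAULT will differ per system, and the config has to be regenerated afterwards:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset nofb"
GRUB_GFXPAYLOAD_LINUX=text
# then regenerate the config:
grub-mkconfig -o /boot/grub/grub.cfg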
Hi. I am also interested in running a headless server.
I tried your method, but the whole machine hangs as soon as I start the VM that uses the host's primary GPU.
I blacklisted nouveau and added "nomodeset nofb" to my kernel options. Still no go.
I'm using a UEFI host with the Gummiboot boot manager, if that matters.
EDIT: Setting rombar=0 solves the hanging issue! The VM boots fine and I can even use VNC to access it, but there's no HDMI output.
I must obtain a ROM file and test ASAP.
EDIT 2: Still no luck:
vfio-pci 0000:01:00.0: BAR 3: can't reserve [mem 0xd8000000-0xd9ffffff 64bit pref]
Last edited by Denso (2015-03-08 17:35:26)
Offline
hurenkam wrote:Passing through a secondary device (from the host's perspective) is indeed easier.
However, after some fiddling, I managed to pass through the primary card on my system, but it does involve some extra work.
Apart from the usual driver blacklisting (radeon in my case), I had to do the following as well:
- Switch GRUB to text-only mode by adding "GRUB_GFXPAYLOAD_LINUX=text" to /etc/default/grub
- Switch the Linux boot to text-only mode by adding "nomodeset nofb" to GRUB_CMDLINE_LINUX_DEFAULT=... in /etc/default/grub
After adding these parameters, I was able to pass through the host's primary graphics to the QEMU guest in the same way as I pass through my secondary graphics.
Note that you will completely lose your host console when you do this, so make sure you can ssh in, and perhaps have a serial terminal ready as well.
Hi. I am also interested in running a headless server.
I tried your method, but the whole machine hangs as soon as I start the VM that uses the host's primary GPU.
I blacklisted nouveau and added "nomodeset nofb" to my kernel options. Still no go.
I'm using a UEFI host with the Gummiboot boot manager, if that matters.
EDIT: Setting rombar=0 solves the hanging issue! The VM boots fine and I can even use VNC to access it, but there's no HDMI output.
I must obtain a ROM file and test ASAP.
EDIT 2: Still no luck:
vfio-pci 0000:01:00.0: BAR 3: can't reserve [mem 0xd8000000-0xd9ffffff 64bit pref]
Boot with video=efifb:off. I think I explained how I did it somewhere in the previous 100 pages or so.
Edit: here https://bbs.archlinux.org/viewtopic.php … 2#p1427202
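Since the previous poster is on Gummiboot rather than GRUB, the parameter would go on the options line of the loader entry instead; a sketch, where the file name and root= value are only examples:
# /boot/loader/entries/arch.conf
title   Arch Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=/dev/sda2 rw video=efifb:off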
Last edited by nbhs (2015-03-08 17:44:06)
Offline
aw wrote:Specifying a CPU topology with threads is generally only recommended in combination with pinning. Using 100% of the host CPU resources for a guest is also likely to cause glitches when the host needs some CPU resources. There's plenty of information online for vCPU pinning. libvirt can do this for you and is detailed on their site.
Could you elaborate on that? I always assumed the point of CPU pinning was to reserve/restrict CPU usage, regardless of topology; for example, to prevent a test VM from stealing resources from a production VM.
The reason I'm asking is that I'm also using threads without pinning. When I first tried KVM with vfio, I did some benchmarks (Cinebench), and threads=8,cores=1,sockets=1 gave me the best results for a quad-core CPU with HT.
It's a complicated issue, probably worthy of research papers, but the thing to keep in mind is that physically, threads on the same core share execution units. A process that can run at 100% on an otherwise idle core may only run at 75% performance if the other thread of that core is under load. We can theoretically run two instances of that process simultaneously on a single core with hyperthreading, but our aggregate throughput would be only 150% whereas running those two instances on separate cores would result in 200% throughput. This is where we rely on the scheduler knowing the difference between threads on the same core and separate cores.
Moving that to a virtual machine, the host doesn't implicitly know how any given vCPU is exposed in the guest, they're all equal processes for host scheduling. By exposing threads to the guest without pinning, the guest is making scheduling decisions that don't necessarily make sense. Maybe these work out, maybe they don't.
If we pin vCPUs in the host and expose the topology in the guest, then the topology we expose to the guest actually holds true to the vCPU processes on the host. The guest also has more visibility than the host to running process threads from the same application on vCPU threads and cores that share cache. The host can't know that vCPUs are running related code and therefore may make poor decisions when migrating a vCPU task to another core or thread.
In your case, you've told the guest that it has a single core with 8 threads, which I suspect the guest scheduler would handle the same as a single 8-core processor or the same as 8 single-core processors. IOW, it doesn't really matter that they're threads vs cores vs sockets because they're all equal. In your case, maybe it is worthwhile to ignore the locality of threads to cores since the guest can't make any useful scheduling decisions between cores with un-pinned vCPUs anyway. Exposing 8-cores or 8-sockets to the guest also implies to the guest that each execution unit is independent of others, which is not actually true of the physical hardware. The 8-core or 8-socket model would effectively be over-committing the host resources whereas 8-threads seems closer to accurate. Maybe you're on to something, but I'd still expect pinning with an accurate physical topology exposed to the guest to provide a slight advantage.
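As a concrete illustration of the pinning described above, here is a sketch using virsh; the domain name "win7" and the sibling layout (host CPUs 0/4, 1/5, 2/6, 3/7 as thread pairs) are assumptions, so check lscpu -e or /proc/cpuinfo for the real topology:
# pin 8 vCPUs so that guest thread pairs land on real host sibling threads
virsh vcpupin win7 0 0    # vCPU 0 -> host CPU 0 (core 0, thread 0)
virsh vcpupin win7 1 4    # vCPU 1 -> host CPU 4 (core 0, thread 1)
virsh vcpupin win7 2 1
virsh vcpupin win7 3 5
virsh vcpupin win7 4 2
virsh vcpupin win7 5 6
virsh vcpupin win7 6 3
virsh vcpupin win7 7 7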
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
gmk wrote:The only issue: as soon as HD Tune (in Windows) is started, the VM locks up and the host reports a segfault. Can anyone point me toward the best place to report this behaviour/bug, or even how to fix it?
Post your logs; I assume you've done this already:
echo "options kvm ignore_msrs=1" >> /etc/modprobe.d/kvm.conf
yes, "ignore_msrs=1" is in use as it was required for passmark to work, otherwise it would cause a bsod (but no segfault/crash of qemu)
the relevant logs I found:
Mar 08 20:07:57 archy kernel: qemu-system-x86[9157]: segfault at 7ffffcb17000 ip 00007f8643f8c5be sp 00007ffffcb15c88 error 6 in libc-2.21.so[7f8643e65000+199000]
Mar 08 20:08:05 archy systemd-coredump[9190]: Coredump of 9157 (qemu-system-x86) is larger than configured processing limit, refusing.
Mar 08 20:08:05 archy systemd-coredump[9190]: Process 9157 (qemu-system-x86) of user 0 dumped core.
The VM is using a raw image file that resides on an ext4 partition (dd'ing that image to an LVM partition yielded the same result).
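If the full dump is wanted despite the size limit in the second log line, systemd-coredump's processing limit can be raised; a sketch, with 4G as an arbitrary example value:
# /etc/systemd/coredump.conf
[Coredump]
ProcessSizeMax=4G
ExternalSizeMax=4G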
Offline
Boot with video=efifb:off, i think i explained somewhere in the previous 100 pages or so how i did it.
Edit: here https://bbs.archlinux.org/viewtopic.php … 2#p1427202
Hi. Thanks for sharing this!
Indeed, "video=efifb:off" solves the hanging issue without editing the VM script, but now I'm faced with this:
vfio-pci 0000:01:00.0: Invalid ROM contents
vfio-pci 0000:01:00.0: Invalid ROM contents
I used a ROM file dumped with GPU-Z, as well as one downloaded from TPU, but without any luck.
Trying to dump the ROM from the host while the GPU isn't initialized produces a 0-byte file.
EDIT: This is a GTX 770, by the way.
Last edited by Denso (2015-03-08 20:28:24)
Offline
nbhs wrote:Boot with video=efifb:off. I think I explained how I did it somewhere in the previous 100 pages or so.
Edit: here https://bbs.archlinux.org/viewtopic.php … 2#p1427202
Hi. Thanks for sharing this!
Indeed, "video=efifb:off" solves the hanging issue without editing the VM script, but now I'm faced with this:
vfio-pci 0000:01:00.0: Invalid ROM contents
vfio-pci 0000:01:00.0: Invalid ROM contents
I used a ROM file dumped with GPU-Z, as well as one downloaded from TPU, but without any luck.
Trying to dump the ROM from the host while the GPU isn't initialized produces a 0-byte file.
EDIT: This is a GTX 770, by the way.
I had this problem before with my GTX 470. Reboot your PC, start your VM (the first VM boot should start OK), dump your ROM with GPU-Z, transfer it to your host, then add romfile=... to your script.
Last edited by nbhs (2015-03-08 20:38:54)
Offline
nbhs wrote:I had this problem before with my GTX 470. Reboot your PC, start your VM (the first VM boot should start OK), dump your ROM with GPU-Z, transfer it to your host, then add romfile=... to your script.
GPU-Z won't produce a full "pre-POST" ROM. The only way to do so is to install the card as a secondary card on the host, then dump the image from there.
Please feel free to correct me if I'm wrong.
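For reference, dumping the ROM from the host with the card installed as a secondary, uninitialized device usually goes through sysfs; a sketch, assuming the card sits at 01:00.0:
cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom               # enable reads of the ROM BAR
cat rom > /root/gpu.rom    # dump the image
echo 0 > rom               # disable reads again
# then point the guest at the dumped image:
# -device vfio-pci,host=01:00.0,x-vga=on,romfile=/root/gpu.rom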
Offline
Hi.
Can anyone confirm that the ASRock Z97 Extreme3 motherboard supports VT-d?
I know that the Pro4 supports it (and the Extreme4/6 probably do too), but I would like to buy the Extreme3 because of its better layout.
Last edited by n0rv (2015-03-08 22:14:29)
Offline
Hi.
Can anyone confirm that the ASRock Z97 Extreme3 motherboard supports VT-d?
I know that the Pro4 supports it (and the Extreme4/6 probably do too), but I would like to buy the Extreme3 because of its better layout.
A user named noctavian put together a comprehensive list of success/failure cases here:
https://docs.google.com/spreadsheet/ccc … _web#gid=0
Check for your motherboard and GPU on that list.
- Peter
Offline
Audio:
I get an occasional crackle/skip in the audio. It isn't a huge bother, but what I'd really like to do is avoid going through ALSA and instead pass my USB DAC to the VM using USB passthrough.
I used to think that I needed a dedicated sound card because of a stutter issue with ICH6 and ICH9. It turns out many people are having issues with the ICH6/ICH9 emulation, apparently something to do with a small buffer size. I switched over to AC97 and it works fine. In Win7, I had to install the Realtek AC97 driver. Win8.1 barfs at the Realtek AC97 driver because it is unsigned, but you can disable that protection and it will install fine. Once I switched to AC97, sound works flawlessly.
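For anyone wanting to try the same switch, the QEMU-side change is just the emulated audio device; a sketch using the same -soundhw syntax as the command lines earlier in the thread:
# replace "-soundhw hda" with:
qemu-system-x86_64 ... -soundhw ac97 ...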
- Peter
Last edited by pkim (2015-03-09 04:37:10)
Offline
Hmm, thanks. I thought it supported it, since it was in the BIOS options.
EDIT: Can someone please tell me why the 3570 supports VT-d but the 3570K doesn't? How does this make any sense?
There is really no good reason other than that Intel is trying to be strategic. They consider VT-d a "corporate feature", along with vPro and so on, and since the "K" processors are targeted at overclock-happy gamers, the VM features supposedly don't belong on those "gamer" processors - so the logic goes. I kind of got lucky: I really wanted the VM features, but the non-K processors were much lower on the bang-per-buck ratio. I bought an i7-4790K anyway (not knowing how good or bad the VM support would be), and it turns out VT-d is one of the VM features it supports - for the first time on a K processor. I don't know why Intel decided to start including VT-d on a K processor; maybe they thought VM features are becoming more mainstream. Either way, I'm happy with my 4790K.
- Peter
Offline
EDIT 3: I finally tried using -vga none and -nographic. Now the black QEMU monitor shows up, and when I plug my monitor into my secondary GPU (the passthrough GPU), I am finally able to see my VM using -vga none. But this VM has Windows 8 installed without any NVIDIA drivers yet. Also, this only works on the first boot of the VM: say I kill the VM, change the script, and start the VM again - the output won't show up on the passed-through monitor anymore. The only way to make it show up again is to reboot the host, and after rebooting I can only boot the VM once; if I boot the VM twice, it won't show output again until I reboot one more time.
Any help on this please! I think I'm almost there.
I had the exact same problem with my 560 Ti card: it would work in the guest once, but after shutting down the guest it wouldn't work again - probably a failure to cleanly reset the hardware. I also couldn't get it to work on Win8.1 with UEFI; the firmware wasn't compatible with UEFI. I ended up throwing a GTX 970 at the guest, and it solved both problems: Win8.1 UEFI compatibility, and I can now shut down and restart the guest any number of times. I sold the 560 Ti and some old computer hardware on eBay and bought a GTX 960 for the host, so I now have a very nice setup - with the VT-d processor and IOMMU motherboard, I really have two machines in one. The new GTX 960 is also more power efficient than the 560 Ti (by about 20 W - I estimate about $36 per year in energy savings), so more money saved there. A quick note to those of you who are overclocking your CPU thinking you are saving money on the CPU price and getting more computation power: think again. Power consumption ramps up much faster than linearly with clock frequency and supply voltage, and the money you saved on the CPU will quickly be eaten up by the electricity bill.
If you're sticking with your 560 Ti card, know that I was able to get mine to reset with a host pm-suspend - still a bit inconvenient, but it beats rebooting.
- Peter
Offline
Yikes.
So I've had my Windows 7 system installed on bare metal, in a partition on my hard drive.
Then I made a Windows 8.1 virtual machine and connected that partition to the virtual machine via virtio-blk-pci, just for data-transfer needs.
That way, the following cases are possible:
1. The Linux host is offline while Windows 7 is booted, accessing the partition.
2. The Linux host is online, and the partition is mounted in the host system for file transfers.
3. The Linux host is online, the partition is unmounted, and the VM boots, grabbing access to that partition. The Linux host cannot mount the partition then, since mount prevents me from shooting myself in the foot.
Common sense tells me that EVERYTHING should be fine as long as no two systems use the same partition concurrently.
But Windows 8.1 defies common sense. It seems it starts defragging that darn partition when the user is idle (like when I'm doing something on the host). I'll omit the heavy I/O issues generated by that, but...
I shut down the guest the regular, fully legitimate way. I thought it would finish whatever it was doing with the disk and leave it clean.
NOPE
I started my Windows 7 system, and a wild CHKDSK message appeared. Let it go, I thought, and... CHKDSK uses filesystem checking! It's not very effective... 40 gigabytes of data is now corrupted.
YAY FOR WINDOWS!
Interesting: if I had two PCs and connected a physical hard drive in a similar way (came to work, plugged my drive into the Windows 8 machine, shut it down, came home, plugged the drive back into the Windows 7 machine), would it go nuts as well?
Moral:
You shouldn't let a VM touch your dual-boot partition, especially if it is a Windows 8 VM.
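If the partition really must stay attached to the guest, one mitigation (a sketch only, and the device path is an example) is to hand it over read-only so the guest cannot write to it at all:
# attach the shared data partition read-only to the guest
qemu-system-x86_64 ... \
    -drive file=/dev/sda3,format=raw,if=virtio,readonly=on ...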
Last edited by Duelist (2015-03-09 18:03:08)
Offline
pkim wrote:A quick note to those of you who are overclocking your CPU thinking you are saving money on the CPU price and getting more computation power: think again. Power consumption ramps up much faster than linearly with clock frequency and supply voltage, and the money you saved on the CPU will quickly be eaten up by the electricity bill.
But what if my CPU was free when I got it?
What if the electricity in my area costs less than $0.02 per kWh?
Offline
I had this problem before with my GTX 470. Reboot your PC, start your VM (the first VM boot should start OK), dump your ROM with GPU-Z, transfer it to your host, then add romfile=... to your script.
GPU-Z won't produce a full "pre-POST" ROM. The only way to do so is to install the card as a secondary card on the host, then dump the image from there.
Please feel free to correct me if I'm wrong.
Indeed, but it can still work anyway. At least it worked for my NVIDIA cards.
Offline
I am having the most bizarre problem with vfio and a NIC device. My motherboard has two network interfaces, an Intel e1000e as well as an Atheros AR8161, so I planned on passing the Atheros device to the guest along with the GPU. However, when the guest is not running, the Atheros NIC is held by the pci-stub/vfio-pci driver, and it brings my entire home network down when the ethernet cable is plugged in. What could possibly be going on here? Has anyone experienced anything similar, or have an idea what could be happening? Thanks!
Also, aw, on your vfio.blogspot.com FAQ, the use of vfio-pci.ids= is mentioned. Is this option no longer valid? It seems I could not get it to work the way pci-stub.ids= does.
Last edited by mutiny (2015-03-10 05:40:36)
Offline
Indeed, but it can still work anyway. At least it worked for my NVIDIA cards.
It doesn't work when trying to pass through the host's primary GPU, unfortunately :(
Offline
Hi.
Can anyone confirm that the ASRock Z97 Extreme3 motherboard supports VT-d?
I know that the Pro4 supports it (and the Extreme4/6 probably do too), but I would like to buy the Extreme3 because of its better layout.
I am running Ubuntu 14.10 as the host on an ASRock Z97 Extreme3 with an i7-4790K, and Win8.1 as the VM, with the Intel HD graphics as primary and an NVIDIA GTX 970 as the passthrough adapter.
Everything runs well out of the box with a fresh build of OVMF (UEFI) instead of SeaBIOS, so I am able to use the stock kernel without any patches.
Offline
I am having the most bizarre problem with vfio and a NIC device. My motherboard has two network interfaces, an Intel e1000e as well as an Atheros AR8161, so I planned on passing the Atheros device to the guest along with the GPU. However, when the guest is not running, the Atheros NIC is held by the pci-stub/vfio-pci driver, and it brings my entire home network down when the ethernet cable is plugged in. What could possibly be going on here? Has anyone experienced anything similar, or have an idea what could be happening? Thanks!
Also, aw, on your vfio.blogspot.com FAQ, the use of vfio-pci.ids= is mentioned. Is this option no longer valid? It seems I could not get it to work the way pci-stub.ids= does.
Sorry, vfio-pci.ids was complete misinformation, but I'm working to make it true. I've never heard of a NIC affecting the network when simply bound to pci-stub or vfio-pci. The e1000e may be a better target for device assignment; I can't say I'm surprised that an Atheros NIC has issues.
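For reference, the variant that does work is the pci-stub form on the kernel command line; a sketch, where the vendor:device IDs are examples and should be taken from lspci -nn:
# bind the GPU and its HDMI audio function to pci-stub at boot
pci-stub.ids=1002:6819,1002:aab0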
Offline
I wonder if anyone has been using the Asus A88X-Pro with KVM. I had it set up with Xen 4.4.3 and was able to pass through an R9 270 and a couple of other ATI cards. I was using BIOS version 1201 and have since flashed the latest BIOS, 1701. I can boot Windows 8.1 without ATI drivers, and with -vga none my R9 270 displays output, but the moment I install the ATI drivers the system only boots to a blue screen; it gets stuck and does not show the desktop. I had to use the option vfio_iommu_type1 disable_hugepages=1 to work around a BIOS bug.
Any tips on getting that motherboard working with KVM? How can I get UEFI emulation working under KVM? So far I got it to load UEFI, but I don't see any option to boot from my hard drive.
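The hugepages workaround mentioned above is a module option, so it can be made persistent the same way as the ignore_msrs setting earlier in the thread; a sketch:
echo "options vfio_iommu_type1 disable_hugepages=1" >> /etc/modprobe.d/vfio.conf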
Offline
Sorry if this is a hijack, but I felt it was parallel to the OP.
tl;dr: Does anyone have a way of asking Gerd Hoffmann to build OVMF with this patch: http://sourceforge.net/p/edk2/mailman/message/30377799/ ?
After a ton of digging to what felt like the bottom of the internet, I found out that Windows 7 on OVMF (UEFI) won't boot to a vfio-bound GPU because it needs something called int 10h. I think Windows 7 works on UEFI outside of vfio/OVMF because CSM provides the legacy int 10h interface, or something like that, to move Windows along. Someone suggested making it possible for OVMF to provide this without(?) CSM in the link above.
My problem is that I totally suck at building EDK2, and after a couple of days of failing I'd like to see if anyone knows how to contact the owner of that repo to try a special build.
Offline
Sorry if this is a hijack, but I felt it was parallel to the OP.
tl;dr: Does anyone have a way of asking Gerd Hoffmann to build OVMF with this patch: http://sourceforge.net/p/edk2/mailman/message/30377799/ ?
After a ton of digging to what felt like the bottom of the internet, I found out that Windows 7 on OVMF (UEFI) won't boot to a vfio-bound GPU because it needs something called int 10h. I think Windows 7 works on UEFI outside of vfio/OVMF because CSM provides the legacy int 10h interface, or something like that, to move Windows along. Someone suggested making it possible for OVMF to provide this without(?) CSM in the link above.
My problem is that I totally suck at building EDK2, and after a couple of days of failing I'd like to see if anyone knows how to contact the owner of that repo to try a special build.
That's where I begin to feel really, REALLY old. Int 10h is, basically, "call a BIOS function": you set up some registers, execute int 10h, and the machine does stuff.
The key point is that there are certain things UEFI really wouldn't like, in my opinion. Using int 10h you could output text (say hi to VGA, maybe?) or change the video mode, if I recall correctly. And I do recall correctly: http://en.wikipedia.org/wiki/INT_10H http://en.wikipedia.org/wiki/BIOS_interrupt_call
I guess you can't have int 10h working with GOP.
I can't comprehend the patch's internals, but this is very, very interesting.
Also, I recently tried to install Windows 7 on a hardware UEFI platform, and failed just like I failed on the VM: it gets stuck when Windows tries to output something to the screen.
I guess if you could feed it some video drivers before the install (there are ways to do this, but I'm no Windows magician), it might work.
So far my guess seems to be proving true: Windows 7 doesn't work with GOP; it can't output graphics that way. If you feed it drivers which output video in their own magic way (you can't change the resolution on Windows 8 until you install drivers), it should work.
Last edited by Duelist (2015-03-10 20:47:34)
Offline