Thanks to everyone who contributed to this thread, but I need a bit of assistance. I've been at this for hours, but I can't get this setup working no matter which modules or options I use. There are no errors when I bind the card using the script in the OP or when I run QEMU, but I get no signal.
Hardware:
Motherboard: Intel DB85FL
Processor: Intel i7-4770
Graphics (guest): EVGA GeForce GTX 760
Graphics (host): Intel
I'm using the latest kernel in the repo at the moment (3.17.6), as I am under the impression that the patches in the OP are now included in the official Arch kernel. Is this correct, or did I skip the most important step?
Here are some command outputs.
$ zcat /proc/config | grep VFIO
CONFIG_VFIO_IOMMU_TYPE1=m
CONFIG_VFIO=m
CONFIG_VFIO_PCI=m
CONFIG_VFIO_PCI_VGA=y
CONFIG_KVM_VFIO=y
$ cat /proc/cmdline
\boot\vmlinuz-linux root=/dev/sda2 rootfstype=ext4 rw intel_iommu=on i915.enable_hd_vgaarb=1 initrd=boot\initramfs-linux.img
$ cat /etc/modprobe.d/vgapassthru.conf
blacklist nouveau
blacklist nvidia
options kvm ignore_msrs=1
options pci-stub ids=10de:1187,10de:0e0a
options vfio_iommu_type1 allow_unsafe_interrupts=1
$ lsmod | grep vfio
vfio_iommu_type1 17118 0
vfio_pci 35525 0
vfio 18477 2 vfio_iommu_type1,vfio_pci
sudo qemu-system-x86_64 \
-enable-kvm -M q35 -m 8G -cpu host -smp 4,sockets=1,cores=4,threads=1 \
-cpu kvm=off \
-mem-prealloc \
-bios /usr/share/qemu/bios.bin -vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=01:00.1,bus=root.1,addr=00.1
The monitor is connected to the graphics card via a DVI cable, if that's relevant.
Anyone have any ideas on how to get my card working? I would very much appreciate it as I would like to finally boot Arch full-time.
Last edited by TwosComplement (2015-01-03 22:47:07)
TwosComplement wrote:Thanks to everyone who contributed to this thread, but I need a bit of assistance. I've been at this for hours, but I can't get this setup working no matter which modules or options I use. There are no errors when I bind the card using the script in the OP or when I run QEMU, but I get no signal.
Hardware:
Motherboard: Intel DB85FL
Processor: Intel i7-4770
Graphics (guest): EVGA GeForce GTX 760
Graphics (host): Intel
I'm using the latest kernel in the repo at the moment (3.17.6), as I am under the impression that the patches in the OP are now included in the official Arch kernel. Is this correct, or did I skip the most important step?
<ding, ding, ding> We have a winner! The i915 patch is not upstream and is not destined to ever be upstream. If you want to use an unmodified kernel, you either need to get rid of Intel graphics on the host or use OVMF in the guest, which your video card should support.
Here are some command outputs.
$ zcat /proc/config | grep VFIO
CONFIG_VFIO_IOMMU_TYPE1=m
CONFIG_VFIO=m
CONFIG_VFIO_PCI=m
CONFIG_VFIO_PCI_VGA=y
CONFIG_KVM_VFIO=y
$ cat /proc/cmdline
\boot\vmlinuz-linux root=/dev/sda2 rootfstype=ext4 rw intel_iommu=on i915.enable_hd_vgaarb=1 initrd=boot\initramfs-linux.img
$ cat /etc/modprobe.d/vgapassthru.conf
blacklist nouveau
blacklist nvidia
options kvm ignore_msrs=1
options pci-stub ids=10de:1187,10de:0e0a
options vfio_iommu_type1 allow_unsafe_interrupts=1
This vfio option is probably not needed. pci-stub is a poor choice to make a module, you generally want it builtin so that it can claim devices before host drivers. Shouldn't be an issue in this case since you're blacklisting the relevant host drivers.
sudo qemu-system-x86_64 \
-enable-kvm -M q35 -m 8G -cpu host -smp 4,sockets=1,cores=4,threads=1 \
-cpu kvm=off \
-mem-prealloc \
-bios /usr/share/qemu/bios.bin -vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=01:00.1,bus=root.1,addr=00.1
You don't specify your guest OS, but I'll repeat for the billionth time, if it's Windows, don't bother with q35, just use 440fx. You also don't need to specify a bios and -mem-prealloc does nothing in this case.
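For comparison, a minimal 440fx version of the command above might look like this (a sketch: -M q35, the ioh3420 root port, the explicit -bios, and -mem-prealloc are dropped, and the two GPU functions are placed in one slot; addr=05.0 is just an example of a free slot, not something from this thread):

```
sudo qemu-system-x86_64 \
-enable-kvm -m 8G -cpu host,kvm=off -smp 4,sockets=1,cores=4,threads=1 \
-vga none \
-device vfio-pci,host=01:00.0,addr=05.0,multifunction=on,x-vga=on \
-device vfio-pci,host=01:00.1,addr=05.1
```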
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Thank you so much for your help! I really need to be more careful about reading, sorry about the misunderstanding. I finally have it working with OVMF and 440fx. Do you accept tips?
@aw: I'm curious if you have any feedback on my post, with the setup I'm using. I've made a few edits since I originally posted it, based on more reading and some suggestions relating to the changes others have made.
@aw: I'm curious if you have any feedback on my post, with the setup I'm using. I've made a few edits since I originally posted it, based on more reading and some suggestions relating to the changes others have made.
The vfio-bind script is unnecessary in your configuration. With managed="yes" libvirt should automatically bind the devices to vfio-pci when you start the VM.
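For reference, a managed hostdev entry in the domain XML looks roughly like this (a sketch; the address values are an example for a card at 01:00.0 and must match your hardware):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

With managed='yes', libvirt detaches the device from its host driver and binds it to vfio-pci when the VM starts, then reverses that at shutdown.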
It looks like your motherboard has 4 x16 slots, the top 3 sourced from the CPU, the bottom (and the x1 slot and PCI slots via the crappy ASM1083) sourced from the PCH. If that's true, then the top 3 x16 slots will always be grouped together because Intel doesn't feel like supporting ACS on Xeon E3-1200 series. If you could find a GPU that can fit in that bottom slot for the host, then you could at least run a test w/o needing the ACS override hackery. Asus boards are pretty poor about letting you pick the primary graphics though. I wouldn't rule out lack of isolation between GPUs as a source of the black lines, there is a reason the ACS patch is not upstream. Maybe there are Crossfire settings in the BIOS that can be disabled. Trying a non-Radeon card for host or guest would also be a good experiment.
The vfio-bind script is unnecessary in your configuration. With managed="yes" libvirt should automatically bind the devices to vfio-pci when you start the VM.
Ah, that's easier if I don't need to do the extra step to bind it as well, since I've already done the unbinding and rebinding over to pci-stub.
It looks like your motherboard has 4 x16 slots, the top 3 sourced from the CPU, the bottom (and the x1 slot and PCI slots via the crappy ASM1083) sourced from the PCH. If that's true, then the top 3 x16 slots will always be grouped together because Intel doesn't feel like supporting ACS on Xeon E3-1200 series. If you could find a GPU that can fit in that bottom slot for the host, then you could at least run a test w/o needing the ACS override hackery. Asus boards are pretty poor about letting you pick the primary graphics though. I wouldn't rule out lack of isolation between GPUs as a source of the black lines, there is a reason the ACS patch is not upstream. Maybe there are Crossfire settings in the BIOS that can be disabled. Trying a non-Radeon card for host or guest would also be a good experiment.
Yeah, I found there was a large IOMMU group containing the PCIe slots, which would require too many devices to be passed through. What are Asrock boards like as an alternative, at least from the perspective of picking primary graphics? I've considered replacing the board with the Asrock C226-WS. I'll see if I can find a better setup/arrangement in the BIOS, but another issue was that the cooler I'm using intrudes ever so slightly into the first x16 slot, so I ended up using it for the USB 3 card. Perhaps a different board with a different layout would give me the flexibility to move the cards elsewhere without needing to use the ACS patch.
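The grouping can be checked without any patches by walking sysfs; a small sketch (it assumes a kernel booted with the IOMMU enabled, and simply prints nothing if no groups exist):

```shell
#!/bin/sh
# List each IOMMU group and the PCI devices inside it, straight from sysfs.
for group in /sys/kernel/iommu_groups/*; do
    [ -e "$group" ] || continue          # no groups: the glob stays literal
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        echo "    ${dev##*/}"            # e.g. 0000:01:00.0
    done
done
```

Devices that share a group with the GPU generally have to be assigned together, which is exactly the problem described above.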
As for the black lines, I tested a GTX 460 and an R7 250 and didn't have any issues with lines, only with the R5 230, so I'm not entirely sure what the issue is there.
Thanks for the feedback though, it gives me some stuff to work with and see what I can change or what might work better!
Last edited by Myranti (2015-01-04 06:53:19)
I tried installing Windows 7 using OVMF but I could never get it to work; I believe it needs the CSM present, so I gave up and installed Windows 8 with a patched OVMF to allow it to boot directly from the AMD SATA controller.
OVMF-sata.tar.gz
Also, once you're done installing Windows 8, disable fast boot.
Hi, I found your post from early December. Can you tell me how you patched the OVMF file with the AMD SATA controller? I have 3 SATA controllers available and I'd like to try the Intel one and boot from it. How can I patch OVMF myself with the Intel SATA controller driver? Is this easy?
Thank you very much for your help!
Last edited by 4kGamer (2015-01-04 13:34:59)
nbhs wrote:I tried installing Windows 7 using OVMF but I could never get it to work; I believe it needs the CSM present, so I gave up and installed Windows 8 with a patched OVMF to allow it to boot directly from the AMD SATA controller.
OVMF-sata.tar.gz
Also, once you're done installing Windows 8, disable fast boot.
Hi, I found your post from early December. Can you tell me how you patched the OVMF file with the AMD SATA controller? I have 3 SATA controllers available and I'd like to try the Intel one and boot from it. How can I patch OVMF myself with the Intel SATA controller driver? Is this easy?
Thank you very much for your help!
Just google "reza sata patch v3". It's not just for AMD controllers, you can also use QEMU's emulated one. And no, it's not that easy; you could use the binaries I provided.
Last edited by nbhs (2015-01-04 13:59:36)
Thank you!
One question: since it's not that easy, can I just use the patched OVMF you provided, boot to Windows with it, and afterwards install the Intel SATA drivers? I am not sure if I'd lose performance that way.
Thank you!
One question: since it's not that easy, can I just use the patched OVMF you provided, boot to Windows with it, and afterwards install the Intel SATA drivers? I am not sure if I'd lose performance that way.
You need to use the patched binaries if you plan on booting from the controller; without the patch, OVMF won't see your drives and you simply can't boot from it unless you dump/get an Intel UEFI driver (not the Windows driver) for your controller.
Here's a guide on how to dump a UEFI driver from memory: https://cs.nyu.edu/~gazzillo/xeniac/ext … river.html
But you won't need to do this if you're using my binaries.
Last edited by nbhs (2015-01-04 15:13:30)
So I guess it just doesn't impact performance if I use those binaries and update the SATA drivers in Windows... Thank you for the link, but I will just use yours. And it worked! Thank you!
Now what I realised is that the only issue I am having is low IOPS (30,000+ compared to bare metal's 90,000+).
Is there something I can do about it or is SATA passthrough simply not good enough?
Last edited by 4kGamer (2015-01-04 16:59:37)
Yeah, I found there was a large IOMMU group containing the PCIe slots, which would require too many devices to be passed through. What are Asrock boards like as an alternative, at least from the perspective of picking primary graphics? I've considered replacing the board with the Asrock C226-WS. I'll see if I can find a better setup/arrangement in the BIOS, but another issue was that the cooler I'm using intrudes ever so slightly into the first x16 slot, so I ended up using it for the USB 3 card. Perhaps a different board with a different layout would give me the flexibility to move the cards elsewhere without needing to use the ACS patch.
Gigabyte boards seem to be the most configurable for setting primary graphics in the BIOS. Asrock annoyed me by being more eager to tell me they don't support Linux than to answer my technical question back when I was looking for an AMD-Vi system, so I don't have any experience with them. I doubt the USB3 card in the top slot is really hurting anything, it's just not making very good use of the bandwidth available in that slot. Are you currently ordering the cards to make the host GPU the primary graphics?
As for the black lines, I tested a GTX 460 and an R7 250 and didn't have any issues with lines, only with the R5 230, so I'm not entirely sure what the issue is there.
Could certainly be some peer-to-peer enabled when multiple Radeon cards are installed. Good for enabling Crossfire, bad for isolation with a VM. Maybe this is why Intel won't support isolation on "client" CPU root ports. You might look at motherboards that would allow you to install the guest GPU in a PCH root port. These are often identifiable as the PCIe2.0 slots rather than PCIe3.0. Assigning the onboard USB3 card to the guest and using the plugin card for the host may also provide more flexibility.
Gigabyte boards seem to be the most configurable for setting primary graphics in the BIOS. Asrock annoyed me by being more eager to tell me they don't support Linux than to answer my technical question back when I was looking for an AMD-Vi system, so I don't have any experience with them. I doubt the USB3 card in the top slot is really hurting anything, it's just not making very good use of the bandwidth available in that slot. Are you currently ordering the cards to make the host GPU the primary graphics?
Yeah, the cards are ordered so that the host GPU (the R5 230) is primary graphics in the next usable slot after the one being taken up by the USB 3 card.
Could certainly be some peer-to-peer enabled when multiple Radeon cards are installed. Good for enabling Crossfire, bad for isolation with a VM. Maybe this is why Intel won't support isolation on "client" CPU root ports. You might look at motherboards that would allow you to install the guest GPU in a PCH root port. These are often identifiable as the PCIe2.0 slots rather than PCIe3.0. Assigning the onboard USB3 card to the guest and using the plugin card for the host may also provide more flexibility.
I think it might be worth seeing what alternatives there are for motherboards. The main concern is still getting enough PCIe slots, seeing as I'll have no use for PCI slots with my setup.
Edit: It looks like Supermicro have some workable boards with Primary Display selection (looking at the X10SAT), although from what I can see there may very well be the same sort of issue with IOMMU groups as what I have now.
Last edited by Myranti (2015-01-06 13:27:21)
Hi all, I want to ask: are there performance benefits if I use kernel 3.18 instead of my 3.15? Or any other reasons to upgrade?
DelusionalLogic wrote:I followed the guide, installed Windows 8, and it kinda worked, except for the Nvidia driver. I enabled "kvm=off" on the cpu parameter, and installed the driver (from a Cirrus VGA adapter, with the card passed through as secondary). It installed fine, but now I'm getting BSODs from the VM. The error is "SYSTEM_SERVICE_EXCEPTION" and it's originating from the Nvidia driver.
I then tried installing Windows 7, but that's just giving me this whenever I try to boot. The installation went fine (this is without any passthrough).
KVM internal error. Suberror: 1
emulation failure
EAX=00000010 EBX=00000080 ECX=00000000 EDX=00000080
ESI=0025db2a EDI=0007db2a EBP=00007c00 ESP=00000200
EIP=000000ca EFL=00010002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0020 00000200 0000ffff 00009300
CS =b000 000b0000 0000ffff 00009f00
SS =0020 00000200 0000ffff 00009300
DS =0020 00000200 0000ffff 00009300
FS =0020 00000200 0000ffff 00009300
GS =0020 00000200 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     002b0000 0000001f
IDT=     00000000 000003ff
CR0=00000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000000
Code=ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff <ff> ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Here's my script for launching the vm:
vfio-bind 0000:03:00.0 0000:03:00.1
qemu-system-x86_64 -enable-kvm -M q35 -m 1024 -cpu host,kvm=off \
-smp 6,sockets=1,cores=6,threads=1 \
-bios /usr/share/qemu/bios.bin -vga cirrus \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=03:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=03:00.1,bus=root.1,addr=00.1 \
-drive file=/home/delusional/vm/windows.img,id=disk,format=raw -device ide-hd,bus=ide.0,drive=disk \
-drive file=/home/delusional/vm/install.iso,id=isocd -device ide-cd,bus=ide.1,drive=isocd \
-usb -usbdevice host:1038:1361 -usbdevice host:1532:010d \
-net nic -net bridge,br=br0 \
-boot menu=on
I am aware that this would not enable the passthrough anyway (seeing as the Nvidia card is secondary), but at least it should boot, right?
I was having the same error before I patched the kernel.
Which patch fixed it? I can't see the VGA arbiter patch having much effect since I'm not using the i915 driver. Also what OS are you running in the VM?
Last edited by DelusionalLogic (2015-01-06 15:22:19)
I've got my Win10 guest to boot with my GTX 760 passed through, though I am unable to get USB working:
qemu-system-x86_64: util/qemu-option.c:387: qemu_opt_get_bool_helper: Assertion `opt->desc && opt->desc->type == QEMU_OPT_BOOL' failed.
Searching for this error, it appears the latest commit of QEMU introduced it; the Google results are all from the last two weeks. Originally my host was set up for Xen, but I disabled those features and booted off your linux-mainline kernel. On installing QEMU-GIT it had to uninstall regular QEMU. Does pulling down QEMU-GIT also grab the latest QEMU source? Could this issue be from an unknown failure of the uninstall?
I've got my Win10 guest to boot with my GTX 760 passed through, though I am unable to get USB working:
qemu-system-x86_64: util/qemu-option.c:387: qemu_opt_get_bool_helper: Assertion `opt->desc && opt->desc->type == QEMU_OPT_BOOL' failed.
Searching for this error, it appears the latest commit of QEMU introduced it; the Google results are all from the last two weeks. Originally my host was set up for Xen, but I disabled those features and booted off your linux-mainline kernel. On installing QEMU-GIT it had to uninstall regular QEMU. Does pulling down QEMU-GIT also grab the latest QEMU source? Could this issue be from an unknown failure of the uninstall?
If you're unwilling to deal with breakages, you should probably be building from a released tag in qemu.git and not just the current development head. git checkout v2.2.0
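Checking out a release tag is a single command once you have a clone; against the real tree it's just a git clone of qemu.git followed by git checkout v2.2.0 inside the source directory. Since qemu.git itself is large, the mechanism is demonstrated below on a throwaway local repository (the repo path, user identity, and commit messages are made up for illustration):

```shell
# Demonstrate pinning a checkout to a released tag rather than the branch
# head, using a disposable local repo (for QEMU the real sequence is:
# clone qemu.git, then `git checkout v2.2.0` inside it).
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "v2.2.0 release"
git -C "$repo" tag v2.2.0                     # released, known-good point
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "development head"
git -C "$repo" checkout -q v2.2.0             # detach HEAD at the tag
git -C "$repo" log -1 --format=%s             # prints: v2.2.0 release
```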
How did you all compile the modified kernel? The "ACS override" patch doesn't seem to be compatible with the 3.17 quirks.c.
The_Moves wrote:I've got my Win10 guest to boot with my GTX 760 passed through, though I am unable to get USB working:
qemu-system-x86_64: util/qemu-option.c:387: qemu_opt_get_bool_helper: Assertion `opt->desc && opt->desc->type == QEMU_OPT_BOOL' failed.
Searching for this error, it appears the latest commit of QEMU introduced it; the Google results are all from the last two weeks. Originally my host was set up for Xen, but I disabled those features and booted off your linux-mainline kernel. On installing QEMU-GIT it had to uninstall regular QEMU. Does pulling down QEMU-GIT also grab the latest QEMU source? Could this issue be from an unknown failure of the uninstall?
If you're unwilling to deal with breakages, you should probably be building from a released tag in qemu.git and not just the current development head. git checkout v2.2.0
Yeah, that makes sense. I'll need to recompile with the correct build. I've only been using Arch for three days now and it is a lot different from what I'm used to (RHEL 5/6, Solaris, AIX). I'm actually liking it quite a lot.
I assume I'm supposed to do v2.2.0, but how do I pull that specific version instead of automatically getting the latest and greatest?
Last edited by The_Moves (2015-01-06 18:50:37)
aw wrote:The_Moves wrote:I've got my Win10 guest to boot with my GTX 760 passed through, though I am unable to get USB working:
qemu-system-x86_64: util/qemu-option.c:387: qemu_opt_get_bool_helper: Assertion `opt->desc && opt->desc->type == QEMU_OPT_BOOL' failed.
Searching for this error, it appears the latest commit of QEMU introduced it; the Google results are all from the last two weeks. Originally my host was set up for Xen, but I disabled those features and booted off your linux-mainline kernel. On installing QEMU-GIT it had to uninstall regular QEMU. Does pulling down QEMU-GIT also grab the latest QEMU source? Could this issue be from an unknown failure of the uninstall?
If you're unwilling to deal with breakages, you should probably be building from a released tag in qemu.git and not just the current development head. git checkout v2.2.0
Yeah, that makes sense. I'll need to recompile with the correct build. I've only been using Arch for three days now and it is a lot different from what I'm used to (RHEL 5/6, Solaris, AIX). I'm actually liking it quite a lot.
I assume I'm supposed to do v2.2.0, but how do I pull that specific version instead of automatically getting the latest and greatest?
Maybe you shouldn't be using git. http://wiki.qemu.org/Download I assume there are ways to get what you need from Arch too.
Thanks, you are correct. I was making it harder than it was supposed to be.
So I applied the "ACS override" patch and tried running it again (that was a lot of work). Sadly it changed nothing.
To reiterate the issue:
I installed Windows 8 and the Nvidia drivers using the Cirrus drivers with my Nvidia card as a secondary passthrough. That part works. The problem occurs when I start the VM with the Nvidia drivers installed (the card is still passed through as secondary). It just immediately causes a BSOD with a "SYSTEM_SERVICE_EXCEPTION".
I have the ACS override patch applied and the IOMMU enabled (with some custom patches to the BIOS).
My hardware is:
Intel i7 920
Asus P6T deluxe V2 (Yes, I know it's broken. I patched the BIOS, and the IOMMU does enable)
Nvidia GTX 460 (Passthrough)
Nvidia GTX 660ti (Main)
So I applied the "ACS override" patch and tried running it again (that was a lot of work). Sadly it changed nothing.
To reiterate the issue:
I installed Windows 8 and the Nvidia drivers using the Cirrus drivers with my Nvidia card as a secondary passthrough. That part works. The problem occurs when I start the VM with the Nvidia drivers installed (the card is still passed through as secondary). It just immediately causes a BSOD with a "SYSTEM_SERVICE_EXCEPTION".
I have the ACS override patch applied and the IOMMU enabled (with some custom patches to the BIOS).
My hardware is:
Intel i7 920
Asus P6T deluxe V2 (Yes, I know it's broken. I patched the BIOS, and the IOMMU does enable)
Nvidia GTX 460 (Passthrough)
Nvidia GTX 660ti (Main)
Where is it claimed that GeForce cards work as secondary in the guest?
DelusionalLogic wrote:So I applied the "ACS override" patch and tried running it again (that was a lot of work). Sadly it changed nothing.
To reiterate the issue:
I installed Windows 8 and the Nvidia drivers using the Cirrus drivers with my Nvidia card as a secondary passthrough. That part works. The problem occurs when I start the VM with the Nvidia drivers installed (the card is still passed through as secondary). It just immediately causes a BSOD with a "SYSTEM_SERVICE_EXCEPTION".
I have the ACS override patch applied and the IOMMU enabled (with some custom patches to the BIOS).
My hardware is:
Intel i7 920
Asus P6T deluxe V2 (Yes, I know it's broken. I patched the BIOS, and the IOMMU does enable)
Nvidia GTX 460 (Passthrough)
Nvidia GTX 660ti (Main)
Where is it claimed that GeForce cards work as secondary in the guest?
Ohh, so I'm an idiot... The good news is that it actually works. I got the long-unsupported, nonworking Asus board to work.
Thanks.
EDIT:
I was a bit too quick. It boots fine, but the graphics performance is absolutely abysmal. GPU-Z reports that the driver is installed and loaded (ForceWare 347); it's also reporting the right card (GeForce 460). The problem is that it's reporting a 0 MHz clock rate and 0 MB of memory. This obviously means that it's not communicating correctly with the card. What could cause that?
EDIT2: Indeed, even the control panel won't launch. I guess the driver is completely borked. (Exception code 0x40000015)
Last edited by DelusionalLogic (2015-01-07 00:37:52)
Duelist wrote:Also, if you're sure that your host disks are okay, try preparing your Windows for migration to virtio-blk-pci instead of the SCSI way. That involves creating a dummy drive on a virtio-blk-pci device, feeding Windows the drivers from virtio.iso, and then "reconnecting" the drive.
The disk seems good to me; which QEMU options do I need to use virtio-blk-pci?
First, you've got to create a -device entry with virtio-blk-pci and drive=null, where the drive is -drive file=/dev/null,id=null,if=none,format=raw.
Second, you boot Windows with virtio.iso plugged into it and load the drivers for that disk controller.
Third, you change the drive of the given virtio-blk-pci device to your Windows drive and it should boot. If it gets BSOD 0x7B during startup, the drivers for the disk controller aren't installed.
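The dummy-drive step might look like this as QEMU options (a sketch only; the virtio-win.iso filename and the IDs are examples, not taken from this thread):

```
-drive file=/dev/null,id=null,if=none,format=raw \
-device virtio-blk-pci,drive=null \
-drive file=virtio-win.iso,id=vcd,if=none,media=cdrom \
-device ide-cd,bus=ide.1,drive=vcd
```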
Ansa89 wrote:PS: what are the benefits of using virtio-blk instead of virtio-scsi?
With virtio-blk you can use x-data-plane=on, which increases IOPS in the guest and lowers overhead on the host. In a Windows guest I wasn't able to pass TRIM operations through virtio-blk, as opposed to virtio-scsi. Later I found a workaround by using detect-zeroes=on with discard='unmap' and writing zeros to the desired sectors (zeroing out unused disk space inside the guest).
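Since libvirt has no native XML element for the experimental x-data-plane property, one commonly cited route is the qemu command-line passthrough namespace; a sketch, assuming the target disk's alias is virtio-disk0 (check the alias in your own domain XML):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... name, memory, devices, etc. ... -->
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
  </qemu:commandline>
</domain>
```

Note that the xmlns:qemu declaration must be added to the root domain element, or libvirt will reject the qemu: elements.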
Can someone please share how I can enable virtio-blk data-plane in the libvirt domain XML?
I couldn't transform these posts into my XML.
I also found some slides from Red Hat, but they don't work either. What does the relevant code look like in XML?
I've been trying for some days now without luck.
Thank you very much.
Last edited by 4kGamer (2015-01-07 10:27:35)