That is my point. What I'm trying to say is that it is intentional that on consumer Chipsets you get a single group with the PCI Controller Ports and everything else below it. On C226 I got it working out of the box since it's a Workstation/Server Chipset; I didn't have to apply patches or anything to see nearly everything in its own group.
I'm sorry to blow your theory, but you got it to work on C226 because quirks are already in place for your PCH root ports and you apparently don't have or aren't using multiple processor-based root ports. There's no difference between the consumer 8-series chipsets and the "workstation" version with respect to isolation.
It seems that what you're applying quirks to is forcing it enabled anyway (similar to how Intel only allows you to use the Unlocked Multiplier of their K Series Processors on Z Chipset Motherboards, while some manufacturers discovered ways to bypass that to overclock on cheaper Chipsets: http://www.xbitlabs.com/news/mainboards … oards.html ). Since you were unaware that Intel sells VT-d support as an extra feature in some Chipsets in the first place, it's probable that these quirks are related to that.
What I'm missing is a lot of IOMMU Group examples on different platforms. I know how mine looks on a C226. I know the guy with the Z97 that I posted (who I think mentioned not using the ACS Override patch), and I recall a few more consumer platforms with that same arrangement on other Desktop 8-Series. It is not in-depth enough to confirm this, but I'm inclined to believe that that is what VT-d support on the Chipset means for Intel. It seems that the IOMMU isolation capabilities are exclusive to the Chipset PCIe Controller instead of being global.
Trust me, I know IOMMU groups. You don't need to apply the ACS patch because a) there's already a kernel-based workaround in place for PCH root ports, and b) you're not using the processor-based root ports in a way that exposes the lack of isolation. You have not discovered some secret decoder for device isolation in ark. Don't believe me? Run 'dmesg | grep "Intel PCH root port ACS workaround enabled"' <-- that's why your IOMMU groups look the way they do. You're welcome.
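If you want to compare arrangements across machines, here's a quick sketch that dumps the groups from sysfs. The base path is a parameter only so the function can be exercised against a fake tree; on a real system just call it with no argument.

```shell
#!/bin/bash
# list_iommu_groups: print one line per device, tagged with its IOMMU group.
list_iommu_groups() {
    local base="${1:-/sys/kernel/iommu_groups}"
    local dev group
    for dev in "$base"/*/devices/*; do
        [ -e "$dev" ] || continue            # no groups -> glob didn't expand
        group="${dev#"$base"/}"              # strip the base path
        group="${group%%/devices/*}"         # keep only the group number
        echo "IOMMU group $group: $(basename "$dev")"
    done | sort -V
}

list_iommu_groups
```

Devices that share a group number can only be assigned to a VM together, which is exactly what the single-big-group consumer layouts show.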
aw wrote:along with workstation and server class chipsets like X79 and X99.
These are high end Desktop. Workstation and Server class are their same-silicon and more expensive C counterparts, C202, C204 and C206 for Sandy Bridge-E/Ivy Bridge-E, and C216 for Haswell-E.
No, if anything it's the other way, the C series are just rebranded desktop chipsets, the X series actually add more high-end features and are different silicon.
Duelist wrote:If I recall correctly, the IOMMU feature was present and even working on the Athlon 64 X2 series of CPUs, which was, like, eight years ago. That sparks my curiosity to try using vfio on these platforms and see how it works. (Sadly, my local hardware flea market was closed some time ago due to fire safety concerns.)
AMD-Vi was first supported on Desktop Chipsets with the 890FX, during the Phenom II era. The 990FX supports it too. However, Bulldozer and derived processors also incorporate their own IOMMU, so you don't need the Chipset's (similar to Intel).
This we can agree on, I think.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
aw wrote:False, while AMD did manage to get ACS on root ports, we still have ACS quirks for other AMD devices.
Did you mean this?
aw wrote:You certainly won't find it on Athlon 64 hardware with AMD-Vi.
What about the K10.5 (Kuma) Athlons, which have their microarchitecture "backported" from the Phenom II? Except they were AM2+ socket-based, so their chipsets were only the 7XX series, which, I believe, do not support the IOMMU?
Well, now it makes sense. The FX line is semi-dead (hail Opteron-X, it is an APU), and all APUs have a Unified North Bridge and will only differ by the FCH, which isn't so important for the IOMMU, right?
So if Intel released new chipsets as rarely as AMD does, there wouldn't be such a great difference in quirks each time.
P.S.
Why do I remember the quirks.c file having only seven lines?..
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Hello all, I have the following setup:
1. intel motherboard with Xeon CPU (E3)
2. NVidia Quadro 2000 (no UEFI support)
With the IGD as primary, I tried applying the patches on an Ubuntu server installation with an older kernel (3.13) as explained here: http://ubuntuforums.org/showthread.php?t=2266916 . Everything works fine until I assign an additional PCI device to the VM, in which case the host completely hangs after the VM shuts down. If I don't assign extra PCI devices to the machine, it works fine, reboots as expected, etc.
I added an AMD/ATI graphics card and set it as the primary display. I disabled the NVIDIA display, and when I launch the VM everything works fine, but the VM doesn't survive a reboot. It seems that at the first reboot the NVidia card doesn't properly reset.
Any hint on what to investigate?
Thanks!
I did a lot of experiments with the patches and an extra VGA. The issue boils down to this, and maybe a bug (in QEMU?):
If the VM starts, the GPU gets into a "weird" internal state. If I kill the VM before Windows starts the NVidia driver, the VM won't boot anymore until the host is reset. What is worse, any attempt to reboot the guest VM causes the host to apparently freeze (I don't know if it's just the graphics; I will try to ssh to the host in the frozen state). If Windows starts the NVidia driver, the GPU is properly put into a "bootable" state and all is nice and fine.
Later I will try to see what happens by using virsh to boot and shutdown the VM.
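As a quick host-side check, here's a sketch to see whether the kernel exposes a reset hook for the card at all. The device address is just an example, and the sysfs root is parameterized only so the function can be tried against a fake tree.

```shell
#!/bin/bash
# has_reset_hook: true if sysfs advertises a reset mechanism for the device.
has_reset_hook() {
    local sysfs="${2:-/sys/bus/pci/devices}"
    [ -f "$sysfs/$1/reset" ]
}

# Example (the address is a placeholder for the Quadro's):
if has_reset_hook "0000:01:00.0"; then
    echo "kernel can attempt a function/bus reset"
else
    echo "no reset hook exposed; only a host reboot fully resets the device"
fi
```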
Has anybody experienced anything similar?
Thanks (more details to follow).
aw wrote:False, while AMD did manage to get ACS on root ports, we still have ACS quirks for other AMD devices.
Did you mean this?
Yeah, that was added to a set of quirks that were already there for devices attached to the AMD southbridge.
aw wrote:You certainly won't find it on Athlon 64 hardware with AMD-Vi.
What about the K10.5 (Kuma) Athlons, which have their microarchitecture "backported" from the Phenom II? Except they were AM2+ socket-based, so their chipsets were only the 7XX series, which, I believe, do not support the IOMMU?
I'd guess not.
Well, now it makes sense. The FX line is semi-dead (hail Opteron-X, it is an APU), and all APUs have a Unified North Bridge and will only differ by the FCH, which isn't so important for the IOMMU, right?
Yeah, they look a lot more similar to an Intel processor+PCH model. Isolation is not required for the IOMMU itself, but we require isolation to make device assignment safe.
paperinick wrote:Hello all, I have the following setup:
1. intel motherboard with Xeon CPU (E3)
2. NVidia Quadro 2000 (no UEFI support)
With the IGD as primary, I tried applying the patches on an Ubuntu server installation with an older kernel (3.13) as explained here: http://ubuntuforums.org/showthread.php?t=2266916 . Everything works fine until I assign an additional PCI device to the VM, in which case the host completely hangs after the VM shuts down. If I don't assign extra PCI devices to the machine, it works fine, reboots as expected, etc.
I added an AMD/ATI graphics card and set it as the primary display. I disabled the NVIDIA display, and when I launch the VM everything works fine, but the VM doesn't survive a reboot. It seems that at the first reboot the NVidia card doesn't properly reset.
Any hint on what to investigate?
Thanks!
I did a lot of experiments with the patches and an extra VGA. The issue boils down to this, and maybe a bug (in QEMU?):
If the VM starts, the GPU gets into a "weird" internal state. If I kill the VM before Windows starts the NVidia driver, the VM won't boot anymore until the host is reset. What is worse, any attempt to reboot the guest VM causes the host to apparently freeze (I don't know if it's just the graphics; I will try to ssh to the host in the frozen state). If Windows starts the NVidia driver, the GPU is properly put into a "bootable" state and all is nice and fine.
Later I will try to see what happens by using virsh to boot and shutdown the VM.
Has anybody experienced anything similar?
Thanks (more details to follow).
You have a Quadro 2000, just assign it as a secondary device and be done with it. https://access.redhat.com/documentation … e-GPU.html
If it doesn't work, update your kernel and/or QEMU. Note that only K-series Quadros are actually supported, but Fermi cards are likely to work. Quadro cards do not work well when configured as the primary display for a VM.
aw wrote:There's no difference between the consumer 8-series chipsets and the "workstation" version with respect to isolation.
There MUST be a difference somewhere. Both ASUS and Supermicro have previously made comments about that. In the case of ASUS, their excuse for never bothering to make proper DMAR Tables for Intel Motherboards was that non-Q/C Chipsets didn't "fully support VT-d", which they stated numerous times quite adamantly. Examples here:
http://rog.asus.com/forum/showthread.ph … post412844
That's because the chipset does not officially support it either. Some of the tests will pass, so some motherboard vendors will claim to have support, BUT if you search you'll find the same vendors have issues with certain Vt-d features also. We don't want to play that game.
For an absolute guarantee of Vt-d one needs to purchase a Q series chipset.
http://www.overclock.net/t/1488891/asus … t_22328891
Just how Intel have done it for a few generations now. I have seen no documentation (yet) that suggests that Vt-d IO is supported on the Z series of chipsets. Z87 was the same. Q series only. If that has changed, it's news to me (but not impossible).
http://ark.intel.com/products/82012 Does not show any Vt-d support there, does it?
Some of the other vendors have claimed Vt-D for a few generations, but all 50 Vt-d tests did not pass on their boards. We won't claim support when that happens. Other vendors have more lax rules.
http://www.overclock.net/t/1488891/asus … t_22329055
The Vt-d support was not full directed IO, and all tests would not validate on the Z series chipsets in the past. So take that for what you will.
Nothing has changed as far as I am aware.
I end up having to go in this same circle with users on every gen. The CPU supporting it and Intel fully enabling it on the chipset are two different things.
And yes, any user finding it works partially in their usage scenario is lucky, as all tests will not validate - so it's a crapshoot. For us, the risk in claiming partial support outweighs being honest about it.
I'm missing one more post from him where he said that non-Q Chipsets "fail a battery of 25 of 50 tests" or something like that.
When I asked Supermicro mail support about VT-d support on a Z87 Motherboard, they replied with this:
Since Z87 chipset does not support VT-d, onboard LAN will not support it either because it is connected to PCH PCIe port. One workaround is to use a VT-d enabled PCIe device and plug it into CPU based PCIe-port on board. Along with a VT-d enabled CPU the above workaround should work per Intel.
Also, I just checked the 8-Series Chipset Datasheet, here: http://www.intel.com/content/dam/www/pu … asheet.pdf
It mentions this about VT-d support in Chipsets (Page 253):
5.29.2 Intel® VT-d Features Supported
• The following devices and functions support FLR in the PCH:
— High Definition Audio (Device 27: Function 0)
— SATA Host Controller 1 (Device 31: Function 2)
— SATA Host Controller 2 (Device 31: Function 5)
— USB2 (EHCI) Host Controller 1 (Device 29: Function 0)
— USB2 (EHCI) Host Controller 2 (Device 26: Function 0)
— GbE Lan Host Controller (Device 25: Function 0)
• Interrupt virtualization support for IOxAPIC
• Virtualization support for HPETs
However, VT-d itself does not appear on the feature matrix at Page 52. It mentions this instead:
2. Table above shows feature differences between the PCH SKUs. If a feature is not listed in the table it is
considered a Base feature that is included in all SKUs.
...But that contradicts the previous statements.
It may be possible that the IOMMU Group isolation is unrelated to the Q/C Chipsets; however, they are NOT the same. There is a difference somewhere. If you have good contacts at Intel, it may be useful to get a definitive answer about this, so you can put to rest a collection of 3-year-old claims.
aw wrote:No, if anything it's the other way, the C series are just rebranded desktop chipsets, the X series actually add more high-end features and are different silicon.
You have two C-Series Chipset lines per generation: the ones based on the consumer platform (LGA 1155/1150), like my C226 for Haswell, which you could say are rebranded consumer Chipsets with a slightly different feature set, and the ones for LGA 2011/2011-v3, which are identical silicon to their X counterparts (C612 and X99 for Haswell-E, C602/604/606 and X79 for Sandy Bridge-E/Ivy Bridge-E). However, in that case, the X79/X99 is the more crippled of the two, since they disable the enterprise-level features: http://ark.intel.com/compare/81761,81759
Besides, you will not find Dual Socket X99 Motherboards; that's the C612's segment, asserting its high-end status.
Also, here is another example in a Datasheet: http://www.intel.com/content/dam/www/pu … asheet.pdf
On Pages 49 and 50 you see mentions of VT-d, with X79 NOT supporting it, reinforcing that it was supposed to be a Server feature. Moreover, some of the C6xx chipsets had an integrated SAS Controller, which X79 did not. A Motherboard manufacturer even managed to hack their way into including it in an X79-based product: http://www.legitreviews.com/ecs-enables … board_1833
Yes, I know that you can get VT-d working anyway. But there is a missing piece in the puzzle.
Last edited by zir_blazer (2015-04-25 12:12:00)
@aw:
Googling around revealed that AER errors are quite common when using Nvidia on X99.
Also, it is reported that using this kernel parameter makes the error go away:
pci=nommconf
Does this option affect the working of VFIO & GPU passthrough?
@aw:
Googling around revealed that AER errors are quite common when using Nvidia on X99.
Also, it is reported that using this kernel parameter makes the error go away:
pci=nommconf
Does this option affect the working of VFIO & GPU passthrough?
mmconf is the mechanism to get to PCI extended configuration space. AER is an extended capability, so disabling access to it disables AER. nommconf is a rather large hammer though, why not just disable AER with pci=noaer? nommconf won't prevent VFIO, but ACS is also an extended capability, so you may find more isolation issues using it.
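As a concrete check, ACS appears in lspci -vvv output as an extended capability, so you can just grep for it; a small sketch (the filter only pattern-matches the text, and the address in the usage comment is an example):

```shell
#!/bin/bash
# has_acs: read `lspci -vvv` text on stdin, succeed if the ACS extended
# capability is listed. Run lspci as root, or extended caps may be hidden.
has_acs() {
    grep -q 'Access Control Services'
}

# Live usage (address is an example):
#   sudo lspci -s 00:1c.0 -vvv | has_acs && echo "ACS present"
```

With pci=nommconf the extended capability list is not reachable at all, which is why both AER and ACS disappear together.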
Yup. In fact, pci=nommconf results in Code 43.
Will try pci=noaer now.
EDIT: There are a couple of PCI-E options in the BIOS that I enabled now. Hopefully they will make the issue go away. If not, I will try pci=noaer.
EDIT 2: When I launch both my GT610 VM and GTX770 VM, I notice that the GTX770 gets an ecap hiding message in dmesg:
vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x900
but there is nothing similar for the GT610. Could this be a cause of the reset issues I'm having with the GT610?
Last edited by Denso (2015-04-25 16:15:23)
EDIT 2: When I launch both my GT610 VM and GTX770 VM, I notice that the GTX770 gets an ecap hiding message in dmesg:
vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x900
but there is nothing similar for the GT610. Could this be a cause of the reset issues I'm having with the GT610?
That's just an informational message. If you don't see the same on the GT610, it's because the card doesn't have that capability. I can't imagine how it would contribute to the reset problem.
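If you're curious which extended capabilities each card actually has (and therefore which vfio_ecap_init messages to expect), you can filter them out of lspci -vvv text; a sketch that relies on extended capabilities printing with offsets of 0x100 or higher, i.e. three or more hex digits:

```shell
#!/bin/bash
# list_ecaps: print only the extended-capability lines from `lspci -vvv`
# text on stdin (offsets 0x100 and up; ordinary caps have 2-digit offsets).
list_ecaps() {
    grep -E 'Capabilities: \[[0-9a-f]{3,} v[0-9]+\]'
}

# Live usage (address is an example):
#   sudo lspci -s 01:00.0 -vvv | list_ecaps
```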
Thanks!
It seems that getting rid of it and getting a 900 series GPU is the next move
Hello everyone,
For the past 4 days I have been trying to install Windows 7 using the method in the OP's post, with VGA passthrough working and also passing through the SATA controller. So far the VGA passthrough works as it is supposed to, but with the SATA controller, every time the VM resets, the controller is no longer detected and the VM hangs, just saying SeaBIOS (version 1.7.5-20140531_171129-lamiak), and I have to restart the entire PC. Any ideas?
I have totally run out of ideas myself. I have made sure that the SATA controllers in the BIOS are set to AHCI, and I even turned hotswap off.
The init script I use is as follows:
#!/bin/bash
configfile=/etc/vfio-pci1.cfg

# Unbind a device from its current driver and register it with vfio-pci
vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}

modprobe vfio-pci

# Bind every device listed in the config file, skipping # comment lines
cat $configfile | while read line;do
    echo $line | grep ^# >/dev/null 2>&1 && continue
    vfiobind $line
done
sudo qemu-system-x86_64 -vga none -enable-kvm -M q35 -m 4096 -cpu host,kvm=off \
-boot menu=on \
-smp 4,sockets=1,cores=1,threads=1 \
-usb -usbdevice host:046d:c313 \
-device vfio-pci,host=02:00.0,bus=pcie.0,x-vga=on \
-device vfio-pci,host=02:00.1 \
-device vfio-pci,host=0a:00.0 \
-device virtio-scsi-pci,id=scsi \
-drive id=windcd,format=raw,media=cdrom,file=/home/cassandra/Downloads/en_windo$
-drive id=virtio,format=raw,media=cdrom,file=/home/cassandra/Downloads/virtio-w$
-net nic,model=virtio -net user \
-vga none \
-boot menu=on
exit 0
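For reference, the /etc/vfio-pci1.cfg read by the loop above is just a list of full PCI addresses, one per line, with # lines skipped; something like this (the addresses here are examples, not my actual ones):

```shell
# /etc/vfio-pci1.cfg
# guest GPU + its audio function
0000:02:00.0
0000:02:00.1
# SATA controller
0000:0a:00.0
```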
The PC specs are:
MOTHERBOARD : Asus sabertooth x79
Processor : Intel Core-I7-3820 @ 3.60 (VT-d enabled)
ram : 16gb ddr3
HOST gfx : Nvidia geforce 7950
guest GFX : Nvidia geforce GTX 660
guest sata : Asmedia ASM1062 Serial ATA controller
guest HDD : hitachi 120gb sata (connected to eSATA)
Hello everyone,
For the past 4 days I have been trying to install Windows 7 using the method in the OP's post, with VGA passthrough working and also passing through the SATA controller. So far the VGA passthrough works as it is supposed to, but with the SATA controller, every time the VM resets, the controller is no longer detected and the VM hangs, just saying SeaBIOS (version 1.7.5-20140531_171129-lamiak), and I have to restart the entire PC. Any ideas?
I have totally run out of ideas myself. I have made sure that the SATA controllers in the BIOS are set to AHCI, and I even turned hotswap off.
The init script I use is as follows:
#!/bin/bash
configfile=/etc/vfio-pci1.cfg
vfiobind() {
dev="$1"
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
fi
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}
modprobe vfio-pci
cat $configfile | while read line;do
echo $line | grep ^# >/dev/null 2>&1 && continue
vfiobind $line
done
sudo qemu-system-x86_64 -vga none -enable-kvm -M q35 -m 4096 -cpu host,kvm=off \
-boot menu=on \
-smp 4,sockets=1,cores=1,threads=1 \
-usb -usbdevice host:046d:c313 \
-device vfio-pci,host=02:00.0,bus=pcie.0,x-vga=on \
-device vfio-pci,host=02:00.1 \
-device vfio-pci,host=0a:00.0 \
-device virtio-scsi-pci,id=scsi \
-drive id=windcd,format=raw,media=cdrom,file=/home/cassandra/Downloads/en_windo$
-drive id=virtio,format=raw,media=cdrom,file=/home/cassandra/Downloads/virtio-w$
-net nic,model=virtio -net user \
-vga none \
-boot menu=on
exit 0
The PC specs are:
MOTHERBOARD : Asus sabertooth x79
Processor : Intel Core-I7-3820 @ 3.60 (VT-d enabled)
ram : 16gb ddr3
HOST gfx : Nvidia geforce 7950
guest GFX : Nvidia geforce GTX 660
guest sata : Asmedia ASM1062 Serial ATA controller
guest HDD : hitachi 120gb sata (connected to eSATA)
Use virtio-blk or virtio-scsi instead of passing through the controller. It works just fine.
I'm almost there, I guess, but now Windows doesn't seem to play along. Windows starts to boot, but shortly after the startup screen appears (the one looking like this: http://upload.wikimedia.org/wikipedia/c … ooting.png ), the VM (or at least the video output) freezes (i.e. the dots stop spinning and nothing else happens - no bluescreen).
Did anyone else have similar issues and could point me in the right direction?
-------- details --------
The Windows installation was straightforward. After installing the virtio drivers the HDD was found, and the installer created partitions and copied files. I later mounted the data partition to check its contents and it superficially looked like a common Windows installation.
Once the installer was done, it rebooted. Then, according to the message on the following startup screen, it did some further "setting up devices". At the end of that process, it seemed like the VM had already hung, but after a few seconds it rebooted again. And now it hangs for real...
I'm running Ubuntu 15.04 (3.16.0-36-generic) on the host and Windows 8.1 Professional in the VM. When setting everything up I largely followed http://ubuntuforums.org/showthread.php?t=2266916, supplemented with info from this thread.
The VM is launched with:
qemu-system-x86_64 \
-enable-kvm \
-M q35 \
-m 6G \
-mem-path /dev/hugepages \
-cpu host,kvm=off \
-smp 4,sockets=1,cores=4,threads=1 \
-vga none \
-usb -usbdevice host:046d:c52b \
-bios /home/user/vm/OVMF-pure-efi.fd \
-device virtio-scsi-pci,id=scsi0 \
-drive file=/dev/sdc,id=disk,format=raw,if=none -device scsi-hd,bus=scsi0.0,drive=disk \
-drive file=/home/user/iso/de_windows_8_1_x64_dvd_xxxxxxx.iso,id=isowindows,if=none -device scsi-cd,bus=scsi0.0,drive=isowindows \
-drive file=/home/user/iso/virtio-win-0.1-100.iso,id=isovirtio,if=none -device ide-cd,bus=ide.1,drive=isovirtio \
-device vfio-pci,host=01:00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=01:00.1 \
-boot order=cd,menu=on \
-k de
Hardware:
Xeon E5-2630v3
X99 board
ATI R5 230 (host)
nvidia GTX960 (passed through)
SSD solely for VM (sdc)
USB mouse/keyboard combo passed to VM
Bronek wrote:aw wrote:AMD Bonaire and Hawaii users with reset issues, please try this QEMU patch (against v2.3.0-rc4): http://fpaste.org/214053/ Report back your GPU device ID and whether resets are resolved. As noted in the patch, I expect the ultimate resolution will be a device specific reset in the kernel, this is just a prototype of that. If you do not have an AMD GPU affected by reset problems (ie. one guest boot per host boot), please don't bother with this patch.
Do you think that could help with reset of AMD W7100 ?
Dunno, try it. That's a Tonga GPU, who knows, maybe the bug extends further than Bonaire/Hawaii.
is it independent from http://article.gmane.org/gmane.linux.kernel.pci/40663 , or should I patch my kernel 3.18.12 as well? Or perhaps try with kernel 4.0?
Last edited by Bronek (2015-04-26 20:06:45)
is it independent from http://article.gmane.org/gmane.linux.kernel.pci/40663 , or should I patch my kernel 3.18.12 as well? Or perhaps try with kernel 4.0?
Not at all, it should be independent of kernel version. I've also decided that the reset is not reliable enough for the kernel, so this is the patch proposed for QEMU: https://lists.gnu.org/archive/html/qemu … 03128.html
EDIT: Note that this version only enables the reset for specific device IDs. If you're looking to try it on something else, you'll need to add the ID for your hardware to the list.
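To find the ID to add, lspci -n prints it in vendor:device form; a sketch that pulls it out of the text (the address and ID in the example are placeholders, not a device from the patch's list):

```shell
#!/bin/bash
# pci_id: extract the first vendor:device pair (xxxx:xxxx) from `lspci -n`
# text fed on stdin.
pci_id() {
    grep -oE '[0-9a-f]{4}:[0-9a-f]{4}' | head -n 1
}

# Live usage (address is an example):
#   lspci -n -s 03:00.0 | pci_id
```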
Last edited by aw (2015-04-26 20:11:30)
I'm currently trying to get this working on Ubuntu 14.10 with kernel 3.19 and am having a lot of issues. When I try to boot it with QEMU, I see a small black window pop up with:
compat_monitor0 console
QEMU 2.2.1 monitor - type 'help' for more information
(qemu)
and no output on the second monitor (my TV). I have also tried using Virtual Machine Manager, which, after passing through my GTX 760, gives me a virtual window on my main monitor. I installed Windows 8.1 on it and saw that I had a Code 43 on my GTX 760. I tried installing drivers for it on the guest machine (a few different releases, back to 335.23) and none of them solved the issue.
Here's my execution script:
#!/bin/bash
configfile=/etc/vfio-pci1.cfg
vfiobind() {
dev="$1"
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
fi
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}
modprobe vfio-pci
cat $configfile | while read line;do
echo $line | grep ^# >/dev/null 2>&1 && continue
vfiobind $line
done
sudo qemu-system-x86_64 -enable-kvm -M q35 -m 4096 -cpu host,kvm=off \
-smp 4,sockets=1,cores=1,threads=1 \
-bios /usr/share/seabios/bios.bin -vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=07:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,rombar=0,romfile=/home/carter/Downloads/EVGA.GTX760.4096.130607.rom \
-device vfio-pci,host=07:00.1,bus=root.1,addr=00.1 \
-drive file=/home/carter/windows8.img,cache=writeback,if=none,id=drive0,aio=native \
-device virtio-blk-pci,drive=drive0,ioeventfd=on,bootindex=1 \
-device virtio-scsi-pci,id=scsi \
-drive file=/home/carter/Windows_8.1_Pro_X64_Activated.iso,id=iso_install,if=none \
-device scsi-cd,drive=iso_install \
-boot menu=on
exit 0
My output of dmesg | grep pci-stub:
[ 0.514450] pci-stub: add 10DE:0E0A sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 0.514462] pci-stub 0000:07:00.1: claimed by stub
[ 0.514469] pci-stub: add 10DE:1187 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 0.514474] pci-stub 0000:07:00.0: claimed by stub
The contents of /etc/modules:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.
pci_stub
vfio
vfio_iommu_type1
vfio_pci
kvm
kvm_intel
The contents of /etc/default/grub:
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'

GRUB_DEFAULT="0"
#GRUB_HIDDEN_TIMEOUT="0"
GRUB_HIDDEN_TIMEOUT_QUIET="true"
GRUB_TIMEOUT="10"
GRUB_DISTRIBUTOR="`lsb_release -i -s 2> /dev/null || echo Debian`"
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=downstream i915.enable_hd_vgaarb=1"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL="console"

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE="640x480"

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID="true"

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
I have tried the ACS and i915 patches, neither of which helped. However, I'm not entirely sure I did it right because I haven't found a clear guide on how to apply them. I tried it the way it is done in this guide.
My specs are:
Gigabyte H77 motherboard (with vt-d turned on)
NVIDIA GTX 560 for the host
EVGA GTX 760 for the guest
Intel i5 Quad Core
Would anybody care to help me out?
Has anybody actually had success following the Ubuntu guide? Seems like a flood of people following whatever misinformation it contains and landing here.
m5a97 pro
fx-8320
hd 6850
8 gb
qemu-2.2.1
seabios-1.8.1
Windows works; no problem launching some games or benchmarks, but...
There are a lot of hangs during startup (it looks like it boots successfully maybe 1 time in 10; the other times there's just the logo, or a black screen after it), with nothing in dmesg or /var/log that could be helpful. If I look in virt-manager, it shows CPU usage as a straight line from the moment of the hang.
Could changing the motherboard help me? What about the Sabertooth 990FX R2.0? Or better to try Intel?
<domain type='kvm'>
<name>windows</name>
<uuid>80831171-02e1-4c05-be41-011401d13184</uuid>
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<memoryBacking>
<hugepages/>
</memoryBacking>
<vcpu placement='static'>6</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-2.2'>hvm</type>
<bootmenu enable='yes'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='custom' match='exact'>
<model fallback='allow'>Opteron_G5</model>
<vendor>AMD</vendor>
<topology sockets='1' cores='6' threads='1'/>
<feature policy='require' name='perfctr_core'/>
<feature policy='require' name='monitor'/>
<feature policy='require' name='skinit'/>
<feature policy='require' name='tce'/>
<feature policy='require' name='mmxext'/>
<feature policy='require' name='osxsave'/>
<feature policy='require' name='vme'/>
<feature policy='require' name='topoext'/>
<feature policy='require' name='fxsr_opt'/>
<feature policy='require' name='bmi1'/>
<feature policy='require' name='ht'/>
<feature policy='require' name='cr8legacy'/>
<feature policy='require' name='ibs'/>
<feature policy='require' name='wdt'/>
<feature policy='require' name='extapic'/>
<feature policy='require' name='osvw'/>
<feature policy='require' name='nodeid_msr'/>
<feature policy='require' name='perfctr_nb'/>
<feature policy='require' name='cmp_legacy'/>
<feature policy='require' name='lwp'/>
<feature policy='require' name='invtsc'/>
</cpu>
<clock offset='localtime'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/qemu-wrapper</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='directsync' io='threads'/>
<source file='/var/lib/libvirt/images/vm1.qcow2'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='directsync' io='threads'/>
<source file='/home/walkindude/driveD.qcow2'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/walkindude/Downloads/en_windows_10_pro_10061_x64_dvd.iso'/>
<target dev='hda' bus='ide'/>
<readonly/>
<boot order='3'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/walkindude/virtio-win-0.1-100.iso'/>
<target dev='hdb' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<master startport='4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:c3:bc:db'/>
<source network='network'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<sound model='ac97'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</sound>
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<vendor id='0x09da'/>
<product id='0x0260'/>
</source>
<boot order='2'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<vendor id='0x09da'/>
<product id='0xf643'/>
</source>
</hostdev>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</memballoon>
</devices>
</domain>
Last edited by walkindude (2015-04-26 21:31:24)
@carterghill
rombar=0,romfile=/home/carter/Downloads/EVGA.GTX760.4096.130607.rom
Do not pass Go, do not collect $200. This is a bogus combination of arguments that causes QEMU to load the ROM into a legacy ROM space, disassociating it from the device. I'd remove both arguments and only add back romfile if you need it.
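If the ROM file does turn out to be necessary, a hedged sketch of passing it on its own so it stays attached to the device (the host address 03:00.0 is illustrative; the romfile path is the one from your post):

```
-device vfio-pci,host=03:00.0,romfile=/home/carter/Downloads/EVGA.GTX760.4096.130607.rom
```

Note there is no rombar=0 here; that's what disassociated the ROM from the device in the first place.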
You don't need the i915 patch since you don't have Intel host graphics, which means you also don't need this: i915.enable_hd_vgaarb=1
Chances are you also don't need this: vfio_iommu_type1.allow_unsafe_interrupts=1
Personally, given your guest and hardware combination, I'd use a standard 440FX machine with OVMF guest firmware, and you should be able to manage it all with libvirt.
You might even be able to move one of your cards to a PCH root port (probably listed as a PCIe 2.0 port) and avoid needing the ACS override patch.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
@han5
Try installing TightVNC server in the guest so you can get to the desktop w/o the display and determine whether the guest is running or really hung. You also shouldn't need x-vga=on since you're using OVMF (and your multifunction=on option isn't really doing anything since you're not specifying the addr= to put a secondary function on the device).
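For reference, if you do want a secondary function on the same guest slot, a hedged sketch of what that usually looks like (host addresses 01:00.0/01:00.1 and guest slot 05 are illustrative):

```
-device vfio-pci,host=01:00.0,addr=05.0,multifunction=on \
-device vfio-pci,host=01:00.1,addr=05.1
```

Without an explicit addr= placing both devices in the same slot, multifunction=on has nothing to act on.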
@walkindude
It looks like you're running an 8G guest on an 8G host. Device assignment VMs cannot be overcommitted. See if you still have problems after reducing the VM to 6G or 7G. The host could be swapping like crazy, and timing glitches resulting from that could cause the problems you're seeing.
@aw
Sorry, I wasn't clear myself:
Host: 16G
Guest: 8G
Also in /etc/sysctl.d/99-sysctl.conf:
vm.nr_hugepages = 4400
hugepage size: 2048 kB
I tried setting it to 5500, for example.
After boot:
/proc/meminfo
HugePages_Total: 5500
HugePages_Free: 1404
HugePages_Rsvd: 0
HugePages_Surp: 0
This remained the same across reboots. For now the refreshed Windows boots as expected, but for how long?
Last edited by walkindude (2015-04-27 05:18:48)
Bronek wrote: is it independent from http://article.gmane.org/gmane.linux.kernel.pci/40663, or should I patch my kernel 3.18.12 as well? Or perhaps try with kernel 4.0?
Not at all, it should be independent of kernel version. I've also decided that the reset is not reliable enough for the kernel, so this is the patch proposed for QEMU: https://lists.gnu.org/archive/html/qemu … 03128.html
EDIT: Note that this version only enables the reset for specific device IDs. If you're looking to try it on something else, you'll need to add the ID for your hardware to the list.
Thanks for the heads up, I found this:
+ switch (vendor) {
+ case 0x1002:
+ switch (device) {
I will amend it if necessary for the W7100 (I don't remember its device code ATM) and will let you know if it worked. FWIW, http://article.gmane.org/gmane.linux.kernel.pci/40663 patched onto kernel 3.18.11 (with qemu 2.2.1) has not fixed my problems with W7100 reset, so I'm keen to try something else.
I wouldn't have this problem if only the R9 290X fit in a single PCIe slot (and one bracket, too).
Nope: I just turned on my PC and got a black screen on the first VM boot. But:
HugePages_Total: 5500
HugePages_Free: 1404
HugePages_Rsvd: 0
HugePages_Surp: 0
free -m:
              total   used   free   shared  buff/cache  available
Mem:          15977  11962   2550       86        1466       3608
Could it be errors in the DSDT? Some time ago I tried https://wiki.archlinux.org/index.php/DSDT out of scientific interest. I had fixed the errors and warnings, but then during my first passthrough attempts I got nothing after SeaBIOS - a black screen. So I went back to doing without it. The system boots fine for a while, like 4-8 times, and then things break. I didn't install too much: just Visual Studio, Qt, Steam...
Also in dmesg (I don't always see this, so maybe it's the culprit):
[ 5.173608] kvm: Nested Virtualization enabled
[ 5.173615] kvm: Nested Paging enabled <- tried with and without, nothing seems changed
[ 83.226853] kvm: zapping shadow pages for mmio generation wraparound
[ 92.263251] kvm [3109]: vcpu3 unimplemented perfctr wrmsr: 0xc0010004 data 0xffffffffffffd8f0
[ 92.263255] kvm [3109]: vcpu4 unimplemented perfctr wrmsr: 0xc0010004 data 0xffffffffffffd8f0
[ 92.263256] kvm [3109]: vcpu0 unimplemented perfctr wrmsr: 0xc0010000 data 0x530076
[ 92.263259] kvm [3109]: vcpu5 unimplemented perfctr wrmsr: 0xc0010000 data 0x530076
[ 92.263260] kvm [3109]: vcpu1 unimplemented perfctr wrmsr: 0xc0010000 data 0x530076
[ 92.263262] kvm [3109]: vcpu4 unimplemented perfctr wrmsr: 0xc0010000 data 0x530076
[ 92.263296] kvm [3109]: vcpu3 unimplemented perfctr wrmsr: 0xc0010000 data 0x530076
[ 451.669332] kvm [3109]: vcpu3 unimplemented perfctr wrmsr: 0xc0010004 data 0xffffffffffffd8f0
[ 451.669335] kvm [3109]: vcpu4 unimplemented perfctr wrmsr: 0xc0010004 data 0xffffffffffffd8f0
[ 451.669337] kvm [3109]: vcpu0 unimplemented perfctr wrmsr: 0xc0010004 data 0xffffffffffffd8f0
[ 451.669341] kvm [3109]: vcpu2 unimplemented perfctr wrmsr: 0xc0010004 data 0xffffffffffffd8f0
[ 451.669345] kvm [3109]: vcpu5 unimplemented perfctr wrmsr: 0xc0010000 data 0x530076
[ 451.669347] kvm [3109]: vcpu4 unimplemented perfctr wrmsr: 0xc0010000 data 0x530076
[ 451.669349] kvm [3109]: vcpu0 unimplemented perfctr wrmsr: 0xc0010000 data 0x530076
[ 451.669352] kvm [3109]: vcpu2 unimplemented perfctr wrmsr: 0xc0010000 data 0x530076
[ 451.669361] kvm [3109]: vcpu3 unimplemented perfctr wrmsr: 0xc0010000 data 0x530076
Last edited by walkindude (2015-04-27 12:43:16)