Hi all, is it possible to pass through other PCIe devices? A PCIe USB controller, a PCIe sound card, a PCIe network card?
Hi all, is it possible to pass through other PCIe devices? A PCIe USB controller, a PCIe sound card, a PCIe network card?
Yes. Have a look at my VM:
qemu-system-x86_64 -name main -nographic \
-enable-kvm -m 8192 -cpu host,kvm=off -smp 4,sockets=1,cores=4,threads=1 \
-vga none -nodefconfig \
-drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/ovmf_code_x64.bin \
-drive if=pflash,format=raw,file=/VMs/ovmf_main.bin \
-device vfio-pci,host=04:00.0,multifunction=on \
-device vfio-pci,host=04:00.1 \
-device vfio-pci,host=07:00.0 \
-device vfio-pci,host=00:1b.0 \
-drive file=/VMs/Win_Main.img,cache=writeback,format=raw,if=none,id=drive0,aio=threads \
-device virtio-blk-pci,drive=drive0,ioeventfd=on,bootindex=1 \
-device virtio-scsi-pci,id=scsi \
-drive file=/VMs/Win7.iso,id=iso_install,if=none \
-device scsi-cd,drive=iso_install \
-cdrom /VMs/virtio.iso \
-localtime \
-net nic,model=virtio,macaddr=38:29:21:5F:C4:7D -net bridge,br=br0 \
-netdev type=tap,ifname=tap_s,id=net1,vhost=on,vhostforce=on,queues=4,script= \
-device virtio-net-pci,netdev=net1,mq=on,vectors=9 \
-monitor unix:/tmp/vm_main,server,nowait &
As you can see, I'm passing through a total of 4 PCI devices: my GPU + HDMI audio, a USB 3.0 PCIe card, and the onboard Intel HD Audio controller.
Hi all, is it possible to pass through other PCIe devices? A PCIe USB controller, a PCIe sound card, a PCIe network card?
That's mostly the purpose of vfio; GPU assignment with vfio is just a fun hack.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
THX Denso and aw
flack wrote: Hi all, is it possible to pass through other PCIe devices? A PCIe USB controller, a PCIe sound card, a PCIe network card?
That's mostly the purpose of vfio; GPU assignment with vfio is just a fun hack.
I ask because I'm using a USB network card and a USB sound card in QEMU. But when I play over the internet in the virtual machine I get some latency lag. I think this can be reduced if I use a PCIe USB card and then put all the USB devices on that PCIe USB controller.
THX Denso and aw
aw wrote: flack wrote: Hi all, is it possible to pass through other PCIe devices? A PCIe USB controller, a PCIe sound card, a PCIe network card?
That's mostly the purpose of vfio; GPU assignment with vfio is just a fun hack.
I ask because I'm using a USB network card and a USB sound card in QEMU. But when I play over the internet in the virtual machine I get some latency lag. I think this can be reduced if I use a PCIe USB card and then put all the USB devices on that PCIe USB controller.
Yeeeah, that's usually what people do when they're dealing with USB latency: they pass through the whole USB controller as a PCIe device. As shown in Denso's example, it's done the same way as GPU passthrough, only much easier.
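For reference, a hedged sketch of what that looks like at the sysfs level on a kernel with driver_override support (3.16+). The address 0000:00:14.0 and the CONFIRM latch are made-up examples; find your own controller with `lspci -nn | grep USB`:

```shell
#!/bin/sh
# Validate a full PCI address like 0000:00:14.0 before poking sysfs.
is_pci_addr() {
    echo "$1" | grep -Eq '^[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$'
}

DEV=0000:00:14.0   # hypothetical xHCI controller address

# CONFIRM=yes is a safety latch so pasting this does nothing by accident.
if [ "${CONFIRM:-no}" = yes ] && is_pci_addr "$DEV"; then
    modprobe vfio-pci
    # driver_override makes vfio-pci the only driver allowed to claim the
    # device; unbind whatever holds it now, then let the PCI core re-probe.
    echo vfio-pci > "/sys/bus/pci/devices/$DEV/driver_override"
    echo "$DEV" 2>/dev/null > "/sys/bus/pci/devices/$DEV/driver/unbind"
    echo "$DEV" > /sys/bus/pci/drivers_probe
fi
```

After that, `-device vfio-pci,host=00:14.0` hands the whole controller, and every USB device plugged into it, to the guest.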
THX Denso and aw
aw wrote: flack wrote: Hi all, is it possible to pass through other PCIe devices? A PCIe USB controller, a PCIe sound card, a PCIe network card?
That's mostly the purpose of vfio; GPU assignment with vfio is just a fun hack.
I ask because I'm using a USB network card and a USB sound card in QEMU. But when I play over the internet in the virtual machine I get some latency lag. I think this can be reduced if I use a PCIe USB card and then put all the USB devices on that PCIe USB controller.
TBH virtio-net is probably much better (and faster) than your USB NIC.
TBH virtio-net is probably much better (and faster) than your USB NIC.
Agreed
As reported countless times in this thread, the latest Nvidia driver is incompatible with hyper-v extensions, including hv_time. The driver doesn't seem to detect these extensions on every boot, which can lead to confusing issues like this, but hv_time and friends should never be recommended for Nvidia users.
Got it, looks like some boost functions can't be used with the NVIDIA driver.
Also, I found out the delay problem I had was just a Windows 8 problem. After updating, everything is fine.
(P.S. Windows Update ran from 4 p.m. to 3 a.m.; how 'wonderful' Windows 8 is.)
Now I'm trying to install Ubuntu 14.04.1 in an OVMF VM.
It gets into the GRUB menu, but when I select 'Install Ubuntu' I only get a black screen.
Is my GTX 980 too new, so that Ubuntu's driver doesn't support it yet?
Maybe I should switch back to the GTX 480, but I guess the GTX 480's BIOS doesn't support UEFI mode.
Ok, I got Yosemite working in KVM with GPU passthrough!
Essentially I created a pure Mac install USB with:
sudo /Applications/Install\ OS\ X\ Yosemite.app/Contents/Resources/createinstallmedia --volume /Volumes/Name-des-USB-Sticks/ --applicationpath /Applications/Install\ OS\ X\ Yosemite.app/ --nointeraction
I booted SeaBIOS and used Chameleon 2510 as the kernel. Installation worked quite nicely on a q35 machine, but after install 440fx also works, so I use that.
One needs to play with slots for the e1000-82545em to get it working (some strange IRQ issue, apparently).
GPU passthrough works without any kvm=off stuff. On Mac I can even pass my Titan as a secondary card besides VGA. Go figure. USB 3.0 (ASM1042) works with the driver from MultiBeast. Passthrough of the 82574L also works with a driver from MultiBeast, but performance is abysmal. Hopefully virtio-net will work on Yosemite soon, then there's no need to pass a real card anymore. But e1000 is not bad for now.
One last problem is the CPU. It works with model core2duo, but this disables some SSE instructions and all the AES, AVX and other features my Xeon E5-1650 v1 (think i7-3930K with ECC) has. Model SandyBridge kinda works, but OS X can't figure out the right clock and thinks it runs at 133MHz, so the realtime clock runs 20 times faster. Does anyone know how to fix this? Passing the CPU as model host ends in a kernel panic on boot.
I also ran some tests on the Mac VM, the Ubuntu 14.04 VM and the Windows 7 VM, and also on the host. All VMs have 4 pinned HT cores (8 logical). The Windows VM is additionally guarded by cset; Ubuntu and Mac are not. Hugepages are on.
No hv enhancements on the Windows VM due to the latest driver.
Geekbench (single-core result first, then multi-core):
Baseline Host Ubuntu 12.04 3.17.1: http://browser.primatelabs.com/geekbench3/1848887 3218 17638
Windows 7 VM: http://browser.primatelabs.com/geekbench3/1848676 3221 12821
Ubuntu 14.04: http://browser.primatelabs.com/geekbench3/1848847 3230 13215
Mac 10.10.2: http://browser.primatelabs.com/geekbench3/1848531 2940 12055
The lower result on the Mac is almost fully attributable to the lack of AES on core2duo, so somehow getting the Mac VM working with SandyBridge or even host would be great.
Also done Unigine 4.0 benches
Windows 7 VM
FPS: 61.3 Score: 1544 Min FPS: 8.6 Max FPS: 141.3
Windows 7 (build 7601, Service Pack 1) 64bit CPU model: Intel(R) Xeon(R) CPU E5-1650 0 @ 3.20GHz (3999MHz) x4 GPU model: NVIDIA GeForce GTX TITAN 9.18.13.4725 (4095MB) x1
Settings Render: Direct3D11 Mode: 1600x900 8xAA windowed Preset Extreme
Ubuntu 14.04 VM
FPS: 63.8 Score: 1608 Min FPS: 9.3 Max FPS: 132.5
Linux 3.13.0-45-generic x86_64 CPU model: Intel(R) Xeon(R) CPU E5-1650 0 @ 3.20GHz (3999MHz) x8 GPU model: GeForce GTX TITAN PCI Express 346.35 (6144MB) x1
Settings Render: OpenGL Mode: 1600x900 8xAA windowed Preset Extreme
Mac OS X 10.10.2 (Stock Nvidia Driver)
FPS: 59.1 Score: 1490 Min FPS: 8.5 Max FPS: 122.1
Darwin 14.1.0 x86_64 CPU model: Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz (3999MHz) x8 GPU model: NVIDIA GeForce GTX TITAN (6144MB) x1
Settings Render: OpenGL Mode: 1600x900 8xAA windowed Preset Extreme
I'd say not bad; the differences are not that big.
Maybe I'll also do a bare-metal run of Heaven to compare, but that needs a reboot and a new xorg.conf.
So apart from some difficulties in setting this up, there is no reason not to do it.
Last edited by lordleto (2015-02-05 15:28:18)
$ find /sys/kernel/iommu_groups
/sys/kernel/iommu_groups
/sys/kernel/iommu_groups/0
/sys/kernel/iommu_groups/0/devices
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1
/sys/kernel/iommu_groups/1/devices
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/2
/sys/kernel/iommu_groups/2/devices
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/3
/sys/kernel/iommu_groups/3/devices
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
/sys/kernel/iommu_groups/3/devices/0000:00:03.3
/sys/kernel/iommu_groups/4
/sys/kernel/iommu_groups/4/devices
/sys/kernel/iommu_groups/4/devices/0000:00:1a.0
/sys/kernel/iommu_groups/4/devices/0000:00:1a.1
/sys/kernel/iommu_groups/4/devices/0000:00:1a.2
/sys/kernel/iommu_groups/4/devices/0000:00:1a.7
/sys/kernel/iommu_groups/5
/sys/kernel/iommu_groups/5/devices
/sys/kernel/iommu_groups/5/devices/0000:00:1b.0
/sys/kernel/iommu_groups/6
/sys/kernel/iommu_groups/6/devices
/sys/kernel/iommu_groups/6/devices/0000:00:1c.0
/sys/kernel/iommu_groups/6/devices/0000:00:1c.1
/sys/kernel/iommu_groups/6/devices/0000:00:1c.2
/sys/kernel/iommu_groups/6/devices/0000:00:1c.3
/sys/kernel/iommu_groups/6/devices/0000:03:00.0
/sys/kernel/iommu_groups/6/devices/0000:04:00.0
/sys/kernel/iommu_groups/6/devices/0000:05:00.0
/sys/kernel/iommu_groups/7
/sys/kernel/iommu_groups/7/devices
/sys/kernel/iommu_groups/7/devices/0000:00:1d.0
/sys/kernel/iommu_groups/7/devices/0000:00:1d.1
/sys/kernel/iommu_groups/7/devices/0000:00:1d.2
/sys/kernel/iommu_groups/7/devices/0000:00:1d.7
/sys/kernel/iommu_groups/8
/sys/kernel/iommu_groups/8/devices
/sys/kernel/iommu_groups/8/devices/0000:00:1e.0
/sys/kernel/iommu_groups/8/devices/0000:15:00.0
/sys/kernel/iommu_groups/8/devices/0000:15:00.2
/sys/kernel/iommu_groups/8/devices/0000:15:00.4
/sys/kernel/iommu_groups/8/devices/0000:15:00.5
/sys/kernel/iommu_groups/9
/sys/kernel/iommu_groups/9/devices
/sys/kernel/iommu_groups/9/devices/0000:00:1f.0
/sys/kernel/iommu_groups/9/devices/0000:00:1f.2
/sys/kernel/iommu_groups/9/devices/0000:00:1f.3
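For readability, the same information can be printed one device per line with names attached. A hedged sketch; group_of is plain string surgery on the sysfs path, nothing hardware-specific:

```shell
#!/bin/sh
# Extract the group number from an iommu_groups device path.
group_of() {   # e.g. .../iommu_groups/6/devices/0000:04:00.0 -> 6
    p=${1%/devices/*}
    printf '%s\n' "${p##*/}"
}

for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue      # unmatched glob: no IOMMU groups here
    addr=${dev##*/}
    # lspci supplies the human-readable name; fall back to the bare address.
    name=$(lspci -nns "${addr#0000:}" 2>/dev/null) || name=$addr
    printf 'group %2s  %s\n' "$(group_of "$dev")" "$name"
done
```

Devices that share a group, like everything behind the 00:1c.* root ports in group 6 above, generally can't be split between host and guest without the ACS override patch.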
What is better/easier: blacklisting, or using pci-stub? Do I understand it right that you suggest going the pci-stub route because the VGA BIOS is integrated into the system firmware?
I plan to use the Intel GPU for the host system and give the Radeon entirely to qemu. Not sure if that's what you meant by
you probably don't need the i915 VGA arbitration patch if you do end going the primary head/VGA route.
?
Also, when I am not running qemu, is there a way to keep using radeon (occasionally) with DRI_PRIME?
EDIT:
I booted with that custom kernel from the OP, but it has a negative influence on the Radeon GPU temperature. It skyrockets to ~90°C even when not in use, and the fan runs like crazy. Perhaps it disables power management for the card?
@aw, could you please comment on my post?
So guys, can you tell me why we are compiling linux-mainline instead of the stock Arch package? Is it possible that everything works if I compile the linux package with the patches applied?
So guys, can you tell me why we are compiling linux-mainline instead of the stock Arch package? Is it possible that everything works if I compile the linux package with the patches applied?
I guess it's because not all kernels include the VFIO driver; the Ubuntu generic kernel doesn't include it yet, for example. So we usually install a kernel from kernel.org.
And I don't know what you mean by 'all things work'; the patches you need depend on your hardware. For example, if you have Intel graphics on your computer, you need the i915 patch.
What is better/easier: blacklisting, or using pci-stub?
I tend to prefer pci-stub.ids, but neither is really better or easier.
Do I understand it right that you suggest going the pci-stub route because the VGA BIOS is integrated into the system firmware?
No, that's unrelated
I plan to use the Intel GPU for the host system and give the Radeon entirely to qemu. Not sure if that's what you meant by
you probably don't need the i915 VGA arbitration patch if you do end going the primary head/VGA route.
?
That's really your only option. Theoretically the existing VGA arbitration in the i915 driver actually works for the generation of hardware you have.
Also, when I am not running qemu, is there a way to keep using radeon (occasionally) with DRI_PRIME?
Maybe. Graphics drivers often don't work well with binding and unbinding the device. GPUs aren't hotplugged nearly as often as NICs.
EDIT:
I booted with that custom kernel from the OP, but it has a negative influence on the Radeon GPU temperature. It skyrockets to ~90°C even when not in use, and the fan runs like crazy. Perhaps it disables power management for the card?
How are you reading the GPU temperature without a driver? Is the exhaust hot? It's possible that the system relies on the driver for reporting the temperature and freaks out when there isn't one. These are just some of the fun problems you're going to have trying to make this work on a laptop.
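On the DRI_PRIME question: the rebind itself is just sysfs writes, so it's cheap to experiment with. A hedged sketch; the Radeon address here is a made-up example, and whether the radeon driver copes with rebinding is exactly the open question:

```shell
#!/bin/sh
# Report which driver currently owns a sysfs device dir, or "none".
driver_of() {
    d=$(readlink "$1/driver" 2>/dev/null) || { echo none; return; }
    echo "${d##*/}"
}

DEV=/sys/bus/pci/devices/0000:01:00.0   # hypothetical Radeon address

if [ -e "$DEV" ] && [ "$(driver_of "$DEV")" = vfio-pci ]; then
    # Hand the card back to radeon after the guest shuts down.
    echo "${DEV##*/}" > "$DEV/driver/unbind"
    echo "${DEV##*/}" > /sys/bus/pci/drivers/radeon/bind
    # Then check: DRI_PRIME=1 glxinfo | grep 'OpenGL renderer'
fi
```

If the radeon driver hangs or mismanages power after the rebind, that matches the caveat above about graphics drivers and unbinding.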
Utku1Kan wrote: So guys, can you tell me why we are compiling linux-mainline instead of the stock Arch package? Is it possible that everything works if I compile the linux package with the patches applied?
I guess it's because not all kernels include the VFIO driver; the Ubuntu generic kernel doesn't include it yet, for example.
Now you've gone and made me install an ubuntu VM just to check. Not true, both 14.04 and 14.10 have vfio enabled in the kernel. I run a stock Fedora kernel since I don't need either the i915 patch (using OVMF) or the ACS override.
AKSN74 wrote: Utku1Kan wrote: So guys, can you tell me why we are compiling linux-mainline instead of the stock Arch package? Is it possible that everything works if I compile the linux package with the patches applied?
I guess it's because not all kernels include the VFIO driver; the Ubuntu generic kernel doesn't include it yet, for example.
Now you've gone and made me install an ubuntu VM just to check. Not true, both 14.04 and 14.10 have vfio enabled in the kernel. I run a stock Fedora kernel since I don't need either the i915 patch (using OVMF) or the ACS override.
Sorry, my information was old. I'm installing the 3.18.5 kernel now.
After that, I'll keep testing why the Ubuntu 14.04 installation can't start with OVMF.
Now I have another problem while installing Ubuntu 14.04.1 with OVMF.
(Sorry for so many problems.)
On the first installation I used the GTX 980, but I only got a blank screen (monitor power on) after selecting 'Install Ubuntu'.
Then I changed to the GTX 480 to install. In the OVMF messages it shows:
No suitable video mode.
Enter into blind mode.
But I could see the installation wizard, so I installed Ubuntu and rebooted. OVMF shows the same message, but I get a blank screen with the monitor powered off.
Then I switched back to the GTX 980; same as at the beginning, a blank screen with the monitor powered on.
Now it's still blank, and I can reboot the VM with Ctrl+Alt+Delete then Enter.
Before this, I made sure I can install and boot Ubuntu with the GTX 980 on a real computer, but there I didn't install in UEFI mode.
Does anyone know how to solve it?
EDIT: Fixed with nomodeset. Seems like Ubuntu has a problem with the GTX 980 while in UEFI mode.
Last edited by AKSN74 (2015-02-06 05:39:20)
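For anyone hitting the same black screen: to make nomodeset permanent after the install, rather than editing the GRUB entry by hand each boot, something like this inside the guest should work (a sketch of the usual Ubuntu approach):

```shell
# /etc/default/grub (inside the guest)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
# then regenerate grub.cfg:
#   sudo update-grub
```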
AKSN74 wrote: Utku1Kan wrote: So guys, can you tell me why we are compiling linux-mainline instead of the stock Arch package? Is it possible that everything works if I compile the linux package with the patches applied?
I guess it's because not all kernels include the VFIO driver; the Ubuntu generic kernel doesn't include it yet, for example.
Now you've gone and made me install an ubuntu VM just to check. Not true, both 14.04 and 14.10 have vfio enabled in the kernel. I run a stock Fedora kernel since I don't need either the i915 patch (using OVMF) or the ACS override.
So you're actually using Fedora 21 with the stock 3.18.3? How can you compile the ACS patch then?
How can you compile the ACS patch then?
He doesn't need it on his hardware.
Last edited by Denso (2015-02-06 12:20:56)
devianceluka wrote: How can you compile the ACS patch then?
He doesn't need it on his hardware.
I understand he doesn't need it. I would like to know how he can compile one. A few pages back I posted that I cannot compile the ACS patch on my Fedora and did not get an answer, while on the previous Fedora I compiled it flawlessly for more than 6 months. Looks like I will not get an answer this time either.
I have redone the Unigine test.
On the first attempt I forgot to disable CUDA double precision, and switching input during the run caused spikes, probably due to USB disconnects.
So the clean results are:
Host Ubuntu 12.04
FPS: 67.7 Score: 1704 Min FPS: 39.3 Max FPS: 144.2
System Platform: Linux 3.17.1-031701-generic x86_64
CPU model: Intel(R) Xeon(R) CPU E5-1650 0 @ 3.20GHz (3999MHz) x12
GPU model: GeForce GTX TITAN PCI Express 340.32 (6144MB) x1
Settings Render: OpenGL Mode: 1600x900 8xAA windowed Preset Extreme
Windows 7 KVM
FPS: 70.2 Score: 1768 Min FPS: 27.4 Max FPS: 153.6
System Platform: Windows 7 (build 7601, Service Pack 1) 64bit
CPU model: Intel(R) Xeon(R) CPU E5-1650 0 @ 3.20GHz (3999MHz) x4
GPU model: NVIDIA GeForce GTX TITAN 9.18.13.4725 (4095MB) x1
Settings Render: Direct3D11 Mode: 1600x900 8xAA windowed Preset Extreme
Ubuntu 14.04 KVM
FPS: 66.8 Score: 1684 Min FPS: 38.8 Max FPS: 138.6
System Platform: Linux 3.13.0-45-generic x86_64
CPU model: Intel(R) Xeon(R) CPU E5-1650 0 @ 3.20GHz (3999MHz) x8
GPU model: GeForce GTX TITAN PCI Express 346.35 (6144MB) x1
Settings Render: OpenGL Mode: 1600x900 8xAA windowed Preset Extreme
Mac OS X 10.10.2 KVM
FPS: 61.3 Score: 1544 Min FPS: 37.2 Max FPS: 133.5
System Platform: Darwin 14.1.0 x86_64
CPU model: Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz (3999MHz) x8
GPU model: NVIDIA GeForce GTX TITAN (6144MB) x1
Settings Render: OpenGL Mode: 1600x900 8xAA windowed Preset Extreme
So I will now try to boot Mac OS through OVMF and Clover; maybe then I can use a proper CPU model (i.e. SandyBridge). Host probably will never work, since there never was a Mac with Sandy Bridge-E Xeons.
Hello guys,
I'm dealing with a problem I can't solve. I have successfully passed through an AMD R7 250 to my Windows 8.1 guest, but I'm getting a lot of audio hiccups every time another guest starts an IO-intensive job.
The VMs are stored on a ZFS dataset.
My setup is the following:
CPU: Intel Xeon E3-1245V3
MOBO: Supermicro X10SAE-O
RAM: 16GB ECC
VIDEO: AMD R7 250
Any help would be welcome.
Thanks.
Hello guys,
I'm dealing with a problem I can't solve. I have successfully passed through an AMD R7 250 to my Windows 8.1 guest, but I'm getting a lot of audio hiccups every time another guest starts an IO-intensive job.
The VMs are stored on a ZFS dataset.
My setup is the following:
CPU: Intel Xeon E3-1245V3
MOBO: Supermicro X10SAE-O
RAM: 16GB ECC
VIDEO: AMD R7 250
Any help would be welcome.
Thanks.
Pin vCPUs and don't oversubscribe physical CPUs if you expect the guest to handle latency sensitive tasks. You may also need to move host device interrupts to other CPUs. Use isolcpus if you really want to have vCPU isolation guarantees.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
bpbastos wrote:Hello guys,
I'm dealing with a problem I can't solve. I have successfully passed through an AMD R7 250 to my Windows 8.1 guest, but I'm getting a lot of audio hiccups every time another guest starts an IO-intensive job.
The VMs are stored on a ZFS dataset.
My setup is the following:
CPU: Intel Xeon E3-1245V3
MOBO: Supermicro X10SAE-O
RAM: 16GB ECC
VIDEO: AMD R7 250
Any help would be welcome.
Thanks.
Pin vCPUs and don't oversubscribe physical CPUs if you expect the guest to handle latency sensitive tasks. You may also need to move host device interrupts to other CPUs. Use isolcpus if you really want to have vCPU isolation guarantees.
Thank you aw.
I'm already using isolcpus=2-7 and pinning vCPUs for my guest.
The only thing I'm not doing is moving my host device interrupts. Do you have any script to do it?
Here is my Windows 8.1 XML:
<domain type='kvm' id='2'>
<name>htpc-sala-windows</name>
<uuid>89abba7f-cbb4-4360-8dc2-a1187be73e1c</uuid>
<memory unit='KiB'>3145728</memory>
<currentMemory unit='KiB'>3145728</currentMemory>
<vcpu placement='static'>2</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='2'/>
<vcpupin vcpu='1' cpuset='3'/>
</cputune>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
</os>
<features>
<acpi/>
<apic/>
<pae/>
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
</hyperv>
</features>
<cpu mode='host-passthrough'>
</cpu>
<clock offset='localtime'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
<timer name='hypervclock' present='yes'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/sbin/qemu-system-x86_64.vgaon</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='writeback' io='threads'/>
<source file='/var/lib/libvirt/images/htpc-sala-windows.qcow2'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<boot order='1'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
<disk type='block' device='cdrom'>
<driver name='qemu' type='raw'/>
<backingStore/>
<target dev='hda' bus='ide'/>
<readonly/>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<alias name='usb0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<alias name='usb0'/>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<alias name='usb0'/>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<alias name='usb0'/>
<master startport='4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:ed:4b:fb'/>
<source bridge='br0'/>
<target dev='vnet0'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<vendor id='0x046d'/>
<product id='0xc52b'/>
<address bus='3' device='6'/>
</source>
<alias name='hostdev0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</source>
<alias name='hostdev1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
</source>
<alias name='hostdev2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<vendor id='0x0471'/>
<product id='0x060d'/>
<address bus='3' device='5'/>
</source>
<alias name='hostdev3'/>
</hostdev>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</memballoon>
</devices>
</domain>
cmdline:
BOOT_IMAGE=/boot/vmlinuz-linux root=UUID=1ff9fe0c-8a92-42b2-97ac-0892e610d9ad rw quiet intel_iommu=on i915.enable_hd_vgaarb=1 isolcpus=2-7 pci-stub.ids=1002:6610,1002:aab0
aw wrote:Pin vCPUs and don't oversubscribe physical CPUs if you expect the guest to handle latency sensitive tasks. You may also need to move host device interrupts to other CPUs. Use isolcpus if you really want to have vCPU isolation guarantees.
Thank you aw.
I'm already using isolcpus=2-7 and pinning vCPUs for my guest.
The only thing I'm not doing is moving my host device interrupts. Do you have any script to do it?
I don't have anything, but you want to manipulate /proc/irq/*/smp_affinity. You probably want to be careful to only do this for device interrupts (i.e. things with IO-APIC or PCI-MSI in the type column of /proc/interrupts). You'll also want to make sure irqbalance doesn't move interrupts back to your isolated CPUs; there's an IRQBALANCE_BANNED_CPUS environment variable that can be used for that.
EDIT: I doubt an E3 v3 has it, but if /sys/module/kvm_intel/parameters/enable_apicv reports 'Y', then by making sure assigned device interrupts go to a CPU not running the guest, KVM can inject the interrupt into the guest without forcing a VM exit. I also recall someone was using (I think) the nohz_full= boot option to stop timer ticks on the isolated CPUs.
Last edited by aw (2015-02-06 17:42:29)
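The smp_affinity juggling above can be sketched as follows. cpu_mask is pure arithmetic (a CPU list like "0,1" or "2-7" in, hex mask out), so it can be sanity-checked before touching real IRQs; the APPLY latch and the 0-1/2-7 split are assumptions matching the isolcpus=2-7 cmdline above:

```shell
#!/bin/sh
# Turn a CPU list ("0,1", "2-7", "0,2-3") into the hex bitmask that
# /proc/irq/*/smp_affinity expects.
cpu_mask() {
    mask=0
    for part in $(echo "$1" | tr ',' ' '); do
        case $part in
            *-*) lo=${part%-*}; hi=${part#*-} ;;
            *)   lo=$part; hi=$part ;;
        esac
        i=$lo
        while [ "$i" -le "$hi" ]; do
            mask=$((mask | (1 << i)))
            i=$((i + 1))
        done
    done
    printf '%x\n' "$mask"
}

HOST_MASK=$(cpu_mask 0-1)   # device interrupts stay on the host CPUs

# APPLY=yes is a safety latch; run as root to actually move the IRQs.
if [ "${APPLY:-no}" = yes ]; then
    # Only touch real device interrupts (IO-APIC / PCI-MSI lines).
    for irq in $(awk '/IO-APIC|PCI-MSI/ {sub(":","",$1); print $1}' /proc/interrupts); do
        echo "$HOST_MASK" 2>/dev/null > "/proc/irq/$irq/smp_affinity"
    done
fi
```

Pair it with IRQBALANCE_BANNED_CPUS=fc (the mask cpu_mask 2-7 produces) so irqbalance leaves the isolated CPUs alone.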