aw wrote:Sure, but that's not what you want. You want to pin a single vCPU to a single pCPU such that the scheduler has zero migration to do and the caches are hot. If you're benchmarking a VM, it's probably also best to set the performance governor in the host since ondemand can add latency to the guest's ability to react to load.
So... perhaps a script like this:
VAR=2 && QEMUPID=`pidof /usr/bin/qemu-system-x86_64` && THREADLIST=`ps -o lwp --no-headers -p $QEMUPID -L | grep -v $QEMUPID` && for i in $THREADLIST ; do taskset -cp $VAR $i ; VAR=$((VAR+1)) ; done
Seems like it might work; the vCPUs should be the first N threads spawned by the main QEMU pid. BTW, you seem to be starting at physical CPU 2, but you were originally using -smp cpus=8,sockets=1,cores=8,threads=1 on an 8-core processor. Are you also scaling back to a 6-core guest?
Yeppers. The idea is having 2 pCPUs for the host, using isolcpus on the other 6, and then using the script above to mask/pin/affinity (whatever) the threads to certain pCPUs, starting with 2 and ending on 7. The list should contain 7 TIDs, and then one of them is removed by the grep command. That is the qemu parent pid, which can happily stay on the host... right? I mean, that's a good question: should the parent pid go over too? I would assume not... but you know what happens when people assume.
*And yes, I would probably change the command line to: -smp cpus=6,sockets=1,cores=6,threads=1
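For completeness, the host side of that plan would be something like this (a GRUB sketch, assuming cores numbered 0-7):
# /etc/default/grub -- keep the scheduler off cores 2-7 so only
# explicitly pinned tasks (the qemu threads) run there
GRUB_CMDLINE_LINUX_DEFAULT="... isolcpus=2,3,4,5,6,7"
# after a reboot, everything should default to cores 0-1:
taskset -cp 1
# pid 1's current affinity list: 0,1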
Last edited by orcephrye (2014-11-17 21:16:36)
Offline
orcephrye wrote:So... perhaps a script like this: [...] Yeppers. The idea is having 2 pCPUs for the host. And use isolcpus on the other 6. [...] That is the qemu parent pid, which can happily stay on the host... right?
Sure, I'm not sure how much study there's been regarding whether it's better to pin the parent process to a non-vCPU processor or just let it float around. If you had multiple NUMA nodes you'd certainly want to pin it to the node, but otherwise I'm not sure how much it matters.
Offline
Hi,
I am trying to pass through an ASUS GTX 980 (on an ASRock X99 Professional, with a Radeon as the primary card), installing Windows 2012 R2; the host is Linux 3.16.0-24-generic #32-Ubuntu.
I have partial success, but now I've run into a few problems I don't understand. Perhaps someone could explain them to me or point me to useful articles, because I am a complete newbie at all of this.
BTW, I've used this article: http://www.pugetsystems.com/labs/articl … 4-KVM-585/
1. The monitor/card refused to turn on. I fixed it by turning off VGA ROM initialization in the BIOS (thanks to ASRock for the option). Now it works, but (because the Radeon is not initialized either) I completely miss everything before X on the primary card, plus I get an error beep from the speaker at boot. I've tried the "romfile=" parameter, but it does not help; moreover, it prevents the monitor from turning on. So I don't understand what "romfile=" is for? It simply breaks something.
I am disabling the NVIDIA card by adding "pci_stub ids=" to /etc/initramfs-tools/modules. Is there any "stronger" way to prevent the card's ROM from loading? Why is it actually a problem?
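For reference, the initramfs part on Ubuntu looks like this (the IDs here are examples; take yours from lspci -nn):
# /etc/initramfs-tools/modules -- pci-stub claims the GPU and its HDMI
# audio function before nvidia/nouveau can bind them
pci_stub ids=10de:13c0,10de:0fbb
# rebuild the initramfs afterwards:
update-initramfs -u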
Disabling VGA in the BIOS works, but I don't like it, you understand.
edit: the "UEFI only" BIOS option works better. I guess my HD 5450 does not have a UEFI ROM, so the NVIDIA card works during boot, then the Radeon turns on with the desktop. It looks like a non-UEFI ROM can't be loaded twice - into host and guest.
2. I've tried the latest downloadable qemu (2.1.91 aka 2.2.0rc1) (not sources) from the site, but was unable to use it due to a disk problem.
When I use the -hda/-cdrom options, the Windows installer simply can't see the disks.
I've read that I need AHCI for Windows to recognize the disks, but when I use ahci together with a -drive/-device combination linked via the "id/drive" fields, qemu says "this <diskname> is already defined" or something like that (I can reproduce it and quote it exactly if you want).
This went away after downgrading to qemu 2.1.2, but I don't like it, you understand.
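If anyone knows better please correct me, but from what I've read, the usual cause of that message is a -drive without if=none: qemu attaches the drive itself and then the explicit -device tries to use it a second time. A sketch of the combination that should avoid it:
-device ahci,bus=pcie.0,id=ahci \
-drive file=windows1.img,if=none,id=disk,format=raw \
-device ide-hd,bus=ahci.0,drive=disk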
3. The card is connected to a virtual ioh3420. Could someone explain why? Why do I actually need the ioh3420? I've tried connecting the card directly to pcie.0; it works, and I feel no difference. So why do I actually need an additional virtual device? I don't mind, but I don't like useless virtual devices, you understand.
4. When I view "System Information" in the NVIDIA Control Panel I see "Bus: PCI Express x1" (or even "x0" without the ioh3420). This sounds scary. It is supposed to be "PCIe 3.0 x16".
So do I really suffer from a slow connection between the video card and (hm, not even sure what) the CPU? Or does it magically run this virtual PCIe 1.0 x1 link at physical 3.0 x16 speeds?
I have not scientifically checked the performance difference yet, but I don't like x1 instead of x16, you understand me.
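At least the physical link can be checked from the host; what the guest reports is only the emulated slot. A sketch (01:00.0 is the card in my command line below):
lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
# LnkCap = what the card/slot supports (e.g. 8GT/s, x16)
# LnkSta = the currently negotiated speed/width of the real link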
Besides that, everything works fine. No "!" marks in Device Manager; the virtual machine restarts, starts and stops without problems; latest NVIDIA drivers 344.65. Sound via HDMI works fine.
qemu-system-x86_64 -enable-kvm -M q35 -m 4096 -cpu host,kvm=off \
-smp 4,sockets=1,cores=4,threads=1 \
-bios /home/biasha/Downloads/bios.bin-1.7.5 -vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=01:00.0,bus=root.1,addr=0.0,multifunction=on,x-vga=on \
-device vfio-pci,host=01:00.1,bus=root.1,addr=0.1 \
-device ahci,bus=pcie.0,id=ahci \
-drive file=/home/biasha/Downloads/windows1.img,id=disk,format=raw -device ide-hd,bus=ahci.0,drive=disk \
-drive file=/home/biasha/Downloads/en_windows_server_2012_r2_vl_with_update_x64_dvd_4065221.iso,id=isocd -device ide-cd,bus=ahci.1,drive=isocd \
-usb -usbdevice host:046d:c52b -usbdevice host:0738:1709 \
-boot menu=on
Last edited by biasha (2014-11-18 02:56:57)
Offline
...
All my quotes and stuff
aw wrote:Sure, I'm not sure how much study there's been regarding whether it's better to pin the parent process to a non-vCPU processor or just let it float around. [...]
So... got it working. But for your information, you NEED to pin the parent process along with the rest of them. Without doing that I got a lot of stuttering... in everything: audio, mouse movements, rendering. I also discovered that qemu spawns a lot more subprocesses than just the vCPU threads. So I run qemu like so:
taskset -c 2,3,4,5,6,7 qemu-system-x86_64 [....]
And then I run:
VAR=2 && QEMUPID=`pidof /usr/bin/qemu-system-x86_64` && THREADLIST=`ps -o lwp --no-headers -p $QEMUPID -L | grep -v $QEMUPID` && for i in $THREADLIST ; do taskset -cp $VAR $i ; if [ $VAR -eq 7 ] ; then VAR=1; fi ; VAR=$((VAR+1)) ; done
Notice the added if statement in the for loop. This resets the CPU counter. (I am horrible at naming variables... shame, shame on me.) In either case, this now doesn't care how many processes spawn at run time; it spreads them all out. I ran qemu several times while testing and the thread count went anywhere from around 7 (6 cores) to 12, and while it was running it would spawn more every now and again. The first taskset command makes sure all future processes get to run on at least one of the 6 CPUs, and the fancy for loop with tasksets makes sure that each starting thread gets its own CPU. Sorta... although the first threads that spawned were the threads for each vCPU every time I ran it, which was about 10 to 15 times (I lost count). Basically it's safe to assume that the first 6 threads from the one-liner above will be the qemu vCPUs.
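Here is the same one-liner spread out and commented, for anyone who wants to adapt it (same behavior, just readable):
#!/bin/bash
# Pin every qemu thread to one of pCPUs 2-7, round-robin.
# The parent pid is filtered out because the launch-time
# "taskset -c 2,3,4,5,6,7 qemu-system-x86_64 ..." already confines it.
CPU=2
QEMUPID=$(pidof qemu-system-x86_64)
for TID in $(ps -o lwp --no-headers -p "$QEMUPID" -L | grep -v "$QEMUPID"); do
    taskset -cp "$CPU" "$TID"
    CPU=$((CPU+1))
    [ "$CPU" -gt 7 ] && CPU=2   # wrap back to the first guest core
done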
Anywho... that gave a pretty nice performance boost, although what was even bigger was setting the governor to performance. I am used to working with RHEL 5/6 and the cpuspeed service on Dell 2950s/R710s/R720s, where normally all you need to do is disable the energy-saving features in the BIOS and call it a day, so I had already done that. But apparently I was horribly wrong. I used cpupower to set the governor to performance and that made a pretty big difference as well. Combine that with the CPU pinning and... well, I am gaming at near 90% of bare-metal performance.
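For reference, the governor part was just this (plus a check that it stuck):
cpupower frequency-set -g performance
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor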
@biasha
I checked my NVIDIA card, which I am using on Windows, and it too shows PCI x0. However, I am getting good performance (now) in my games. That field may not reflect the actual bandwidth; it may simply be displaying what it sees from the emulated controller.
Also... I made the card that Linux runs on the primary card on the motherboard. I did this because I had no other choice: you cannot pass through your primary card if your BIOS doesn't support switching or changing the VGA settings. I have an MSI 990FXA-GD80 and it doesn't have that feature. I recommend placing the card your Windows guest will be using in a different PCIe slot; most motherboards have more than one PCIe x16 slot. You would then be able to watch the whole boot process and not wait in mystery.
I am also not using ioh3420 or any such thing. Here is my current config for your reference:
qemu-system-x86_64 -nographic -monitor telnet:127.0.0.1:1234,server,nowait \
-name gaming -enable-kvm -M q35 \
-balloon none -m 16386 -mem-path /dev/hugepages -mem-prealloc \
-cpu host,hv-time -smp cpus=4,sockets=1,cores=4,threads=1 \
-bios /usr/share/qemu/bios.bin -vga none \
-device vfio-pci,host=06:00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=06:00.1 \
-device vfio-pci,host=07:06.0 \
-device virtio-scsi-pci,id=scsi \
-drive file=/dev/sdb,id=disk0,format=raw -device scsi-hd,drive=disk0 \
-drive file=/dev/sdc,id=disk1,format=raw -device scsi-hd,drive=disk1 \
-net nic,model=virtio,macaddr=00:16:35:AF:94:4B -net bridge,br=br0 \
-usb -usbdevice host:1532:0109 -usbdevice host:046d:c531 -usbdevice host:1a40:0101 \
-rtc base=localtime -monitor unix:/tmp/vm_gaming,server,nowait
PS: Much thanks to aw, Denso and sinny for their help.
Last edited by orcephrye (2014-11-18 05:29:06)
Offline
orcephrye wrote:PS: Much thanks to aw, Denso and sinny for their help.
You're welcome.
Pssst: stealing everyone else's cmdlines and trying them was a big help for me, so many thanks to aw and everyone else for taking the time to help!
---------------------------------------------
@aw:
I noticed that the ACS patch causes these errors:
[Thu Nov 13 03:13:21 2014] pcieport 0000:00:03.0: AER: Multiple Corrected error received: id=0018
[Thu Nov 13 03:13:21 2014] pcieport 0000:00:03.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0018(Receiver ID)
[Thu Nov 13 03:13:21 2014] pcieport 0000:00:03.0: device [8086:2f08] error status/mask=00000040/00002000
[Thu Nov 13 03:13:21 2014] pcieport 0000:00:03.0: [ 6] Bad TLP
I recompiled 3.18-rc5 without the ACS patch, added the X99 quirks manually to drivers/pci/quirks.c, and I no longer receive them.
Last edited by Denso (2014-11-18 08:05:39)
Offline
Denso wrote:I noticed that the ACS patch causes these errors:
Seems like my guess was true. Devices aren't isolated properly. Good luck.
Offline
Can anyone help me with why my Windows guest cannot boot anymore? I also tried an Ubuntu guest, and the second time I rebooted it, it wouldn't start xinit! It looks like the GPU isn't being reset properly so that it can be reused/reinitialized. This is with SeaBIOS, and it was working flawlessly until now. Using Fedora with kernel 3.17.2, SeaBIOS 1.7.5 git and QEMU 2.1.2 git.
Offline
Duelist wrote:Seems like my guess was true. Devices aren't isolated properly. Good luck.
Umm... that sounds tragic. But my host AND VMs are working flawlessly (aside from a VM reboot hanging the host, which I believe is caused by Nvidia's drivers)... so I don't see the issue here?
Last edited by Denso (2014-11-18 12:30:18)
Offline
I get an error when I try to bind a device to vfio-pci.
[ 395.815169] vfio-pci: probe of 0000:01:00.0 failed with error -22
[ 395.815178] vfio-pci: probe of 0000:01:00.1 failed with error -22
Here is my kernel command line:
initrd=\initramfs-linux.img keydevice=UUID=daabf52e-3e81-499a-af71-e5a2887840e9:key homedevice=UUID=cc29c820-32e3-45b6-8bf4-a152bd1a29a1:home cryptdevice=UUID=16ccebd1-dadb-4180-abd3-854a23871ac0:root_enc:allow-discards cryptkey=/dev/mapper/key:0:527 root=/dev/mapper/root_enc rw pci-stub.ids=10de:06c0,10de:0be5 intel_iommu=1 i915.enable_hd_vgaarb=1
I think IOMMU is enabled:
[ 0.032533] dmar: IOMMU 0: reg_base_addr fed90000 ver 1:0 cap c0000020e60262 ecap f0101a
[ 0.032537] dmar: IOMMU 1: reg_base_addr fed91000 ver 1:0 cap c9008020660262 ecap f0105a
[ 0.032607] IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
My devices get claimed by pci-stub
[ 0.412398] pci-stub: add 10DE:06C0 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 0.412414] pci-stub 0000:01:00.0: claimed by stub
[ 0.412423] pci-stub: add 10DE:0BE5 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 0.412431] pci-stub 0000:01:00.1: claimed by stub
Intel Virtualization Tech and Intel VT-d Tech are enabled in my MSI Z77A-G45. I use an i7-2600: http://ark.intel.com/de/products/52213/ … o-3_80-GHz
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
stepping : 7
microcode : 0x29
cpu MHz : 1599.992
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid xsaveopt
bugs :
bogomips : 6802.96
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
At boot I get the following errors; I don't know if they have anything to do with it:
[ 28.354419] [drm:ilk_display_irq_handler] *ERROR* Pipe B FIFO underrun
[ 28.354429] [drm:cpt_set_fifo_underrun_reporting] *ERROR* uncleared pch fifo underrun on pch transcoder B
[ 28.354430] [drm:cpt_serr_int_handler] *ERROR* PCH transcoder B FIFO underrun
Any ideas?
Offline
While researching processor thermal management and thermal monitoring, I found an interesting document:
http://amd-dev.wpengine.netdna-cdn.com/ … _Guide.pdf
(I have an AMD Athlon X4 750K, FM2 socket, family 15h model 10h.)
There are plenty of bugs in the IOMMU:
IOMMU Event Not Flagged when DTE Reserved Bits Are Not Zero
IOMMU Event Log Not Generated for Invalid DTE GCR3 Table Root Pointer
IOMMU Event Log Not Generated for Not Present Host Intermediate Page Tables
Incorrect Translation with IOMMU v1 512 GB Page Table
IOMMU Event Log Ordering Violation
IOMMU PPR Log Ordering Violation
IOMMU Interrupt May Be Lost
Well, there is nothing I can do but read and try to understand what the hell is happening here.
But I have a note for everyone using AMD FM2 socket based C/APUs:
For CPB (turbo) to work, two things are needed: 1. TCTL must be below 74C; 2. some (I think two) cores must be disabled by the OS's power management software.
There are two problems:
1. The actual critical temperature is waaay higher than 74C; it is actually ~120C. Only in that case will the CPU do THERMTRIP (CPB is disabled) and turn the system off almost immediately.
2. Linux does not do that yet, so your CPU will stay at its base frequency, never entering turbo mode.
In the real case, a VM won't load all 4 cores to 100%, meaning there will be a lot of P-state transitions. Every P-state transition is either a hang-up, a crash or something else nasty (look over the revision guide; P-states are buggy too), or at best a minor slowdown. The slowdown effect is amplified by the VM, and I've actually noticed heavy CPU performance drawbacks on dynamic CPU-load jobs, like games. On static CPU-load jobs, like mining or brute-forcing, the performance hit is not as significant. The actual numbers will come tomorrow or so, as I don't have the time to test it all yet.
EDIT:
Yep. The raw, full-load CPU performance is almost the same, sometimes even doing majestic stuff.
I ran hashcat-cliXOP.bin(.exe) with MD5 APR (-m 1600), -a 3 and a huge mask of ?a?a?a?a?a on a bogus hash, and I got 37k/second inside the VM and 37k/second on the host system.
Fun part: I can run hashcat on the host and guest systems simultaneously, getting 27k and 27k on the performance meters. That feels weird.
The P-state transition part: when I run hashcat on three or fewer cores instead of four, the performance decrease is not linear (same thing for the plain 64, AVX and AVX2 versions, apart from XOP), and there are a lot of P-state transitions. The actual process fluctuates between two (or more) cores, each of which needs to "speed up", and we get lags.
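If you want to watch it happen, something like this works (a sketch; paths are for acpi-cpufreq, and the transition counter needs cpufreq stats compiled in):
# per-core frequencies bouncing around under a 3-core load:
watch -n1 'grep "cpu MHz" /proc/cpuinfo'
# P-state transition counter for core 0:
cat /sys/devices/system/cpu/cpu0/cpufreq/stats/total_trans
# forcing the performance governor avoids most of the transitions:
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance > $g; done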
Last edited by Duelist (2014-11-19 15:42:50)
Offline
Hmmmm, after all the pinning work done by orcephrye, I just wonder how libvirt/virt-manager's CPU pinning works,
and whether it's more optimal (performance-wise) to use manual CPU pinning or libvirt/virt-manager's.
P.S. For now I am totally satisfied with the performance I get using the "dumb" libvirt/virt-manager approach, but having more info on the matter would be great (in case someone has a clear view of things).
Offline
To add some information: I compiled the kernel myself with the PKGBUILD. I only added the two patches, and I modified them because they did not apply cleanly to the current kernel 3.17.3.
i915_vga_arbiter_fixes_mod.patch:
http://sprunge.us/HWaY
acs_override_mod.patch:
http://sprunge.us/UiVa
$ lsmod | grep stub
pci_stub 12429 0
nouveau is NOT loaded. nouveau is also blacklisted.
The first time I run ./vfio-bind there are 3 messages in dmesg; after that, every time I run it there are 4 messages. I don't really get it. (I echoed X to /dev/kmsg after every vfio-bind.)
[ 60.559359] usb 1-1.6: USB disconnect, device number 5
[ 892.895082] VFIO - User Level meta-driver version: 0.3
[ 892.900616] vfio-pci: probe of 0000:01:00.0 failed with error -22
[ 892.902868] vfio-pci: probe of 0000:01:00.0 failed with error -22
[ 892.902878] vfio-pci: probe of 0000:01:00.1 failed with error -22
[ 919.366858] X
[ 925.888402] vfio-pci: probe of 0000:01:00.0 failed with error -22
[ 925.888414] vfio-pci: probe of 0000:01:00.1 failed with error -22
[ 925.890925] vfio-pci: probe of 0000:01:00.0 failed with error -22
[ 925.890935] vfio-pci: probe of 0000:01:00.1 failed with error -22
[ 942.445180] X
[ 943.751225] vfio-pci: probe of 0000:01:00.0 failed with error -22
[ 943.751236] vfio-pci: probe of 0000:01:00.1 failed with error -22
[ 943.753664] vfio-pci: probe of 0000:01:00.0 failed with error -22
[ 943.753674] vfio-pci: probe of 0000:01:00.1 failed with error -22
[ 1013.824028] X
[ 1014.636678] vfio-pci: probe of 0000:01:00.0 failed with error -22
[ 1014.636689] vfio-pci: probe of 0000:01:00.1 failed with error -22
[ 1014.639130] vfio-pci: probe of 0000:01:00.0 failed with error -22
[ 1014.639141] vfio-pci: probe of 0000:01:00.1 failed with error -22
[ 1015.453419] X
[ 1016.252234] vfio-pci: probe of 0000:01:00.0 failed with error -22
[ 1016.252245] vfio-pci: probe of 0000:01:00.1 failed with error -22
[ 1016.254664] vfio-pci: probe of 0000:01:00.0 failed with error -22
[ 1016.254675] vfio-pci: probe of 0000:01:00.1 failed with error -22
Offline
To add some information:
snip
[  60.559359] usb 1-1.6: USB disconnect, device number 5
[ 892.895082] VFIO - User Level meta-driver version: 0.3
[ 892.900616] vfio-pci: probe of 0000:01:00.0 failed with error -22
[ 892.902868] vfio-pci: probe of 0000:01:00.0 failed with error -22
[ 892.902878] vfio-pci: probe of 0000:01:00.1 failed with error -22
-EINVAL likely means the IOMMU is not enabled. Look at the groups in /sys/kernel/iommu_groups/. If there's nothing there, vfio can't work. You probably need intel_iommu=on, or your kernel config may be broken.
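A quick way to look:
find /sys/kernel/iommu_groups/ -type l
# each symlink is a device in a group, e.g.
#   /sys/kernel/iommu_groups/1/devices/0000:00:02.0
# no output at all means no IOMMU, and vfio-pci probes fail with -22 as above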
Offline
Hmmmm, after all the pinning work done by orcephrye, I just wonder how libvirt/virt-manager's CPU pinning works, and whether it's more optimal (performance-wise) to use manual CPU pinning or libvirt/virt-manager's. [...]
libvirt provides vCPU pinning options equivalent to what orcephrye is doing; check the libvirt domain XML spec for the correct options (I've never had any luck figuring out how to specify it via the virt-manager UI).
Offline
aw wrote:-EINVAL likely means the IOMMU is not enabled. Look at the groups in /sys/kernel/iommu_groups/. [...]
Thank you for your answer.
You are right, /sys/kernel/iommu_groups is an empty directory.
I used the config.x86_64 from the PKGBUILD: https://projects.archlinux.org/svntogit … ages/linux
zcat /proc/config.gz | grep CONFIG_VFIO_PCI_VGA
CONFIG_VFIO_PCI_VGA=y
Here is the /proc/config.gz: http://sprunge.us/XSCA
What else must be enabled?
Offline
aw wrote:You probably need intel_iommu=on
What else must be enabled?
That is a kernel boot parameter aw mentioned. Please check if it's enabled.
Offline
intel_iommu=1
Just looked at the code; this is not valid. It needs to be "on".
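In other words, something like this on the kernel command line (a GRUB example; adjust for your bootloader):
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="... intel_iommu=on"
# then regenerate the config and reboot:
grub-mkconfig -o /boot/grub/grub.cfg
# afterwards /sys/kernel/iommu_groups/ should no longer be empty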
Offline
oh, thank you very very much!!!! :-)
OMG, I don't know where I got the 1 from :-x
Offline
Again: I start in X as root:
qemu-system-x86_64 -enable-kvm -M q35 -m 1024 -cpu host -smp 3,sockets=1,cores=3,threads=1 -bios /usr/share/qemu/bios.bin -vga none -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1
I get this output:
Device option ROM contents are probably invalid (check dmesg).
Skip option ROM probe with rombar=0, or load from file with romfile=
dmesg says
[ 1137.081643] vfio-pci 0000:01:00.0: Invalid ROM contents
[ 1137.473305] kvm: zapping shadow pages for mmio generation wraparound
dwe11er says:
https://bbs.archlinux.org/viewtopic.php … 7#p1431277
The command-line option is set (with =1, not with =on): i915.enable_hd_vgaarb=1
Is the "vbios file" a file I get with nvflash, and how do I have to pass it to qemu?
Is it reasonable to use rombar=0, and again: how do I have to pass it to qemu?
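From what I've gathered so far, both are just extra options on the -device vfio-pci line, and the ROM can also be dumped from sysfs instead of nvflash (a sketch; I haven't verified the dump on my card):
# dump the ROM on the host while the card is idle:
cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom
cat rom > /tmp/gpu.rom
echo 0 > rom
# then either load it explicitly:
-device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,romfile=/tmp/gpu.rom
# or skip the ROM probe entirely:
-device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,rombar=0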
Offline
I've been able to get this working on my main desktop system (FX-8350, Gigabyte 990FXA-UD3, R9 290 for the guest, GTX 750 Ti for the host) but have been having more trouble with the less-friendly hardware used in my secondary/guest computer:
Motherboard: Intel DP67BG
CPU: i5-2500
Host graphics: Gigabyte HD6670
Guest graphics: Gigabyte GTX 650 Ti (GV-N65TOC-1GI rev 1.0, has the original BIOS-only firmware)
I've been able to get the guest to boot up OK with the NVIDIA card passed through, installed the drivers etc. (with the ACS override patch), but I get the notorious 'Code 43' error. I've got KVM hiding enabled, and as far as I can tell none of the Hyper-V extensions are enabled, but even if I use the 335 drivers I still get the same error in Device Manager and no output on the card.
Disabling/enabling the card in Device Manager removes the error for that boot, but the card still doesn't work, and the NVIDIA Control Panel crashes if I attempt to open it.
I've also tried the card in both the primary and secondary x16 slots without any luck; the BIOS does have an option to select which card to use as primary, but this does not help either.
Any ideas, or am I SOL?
Also, assuming I'm not entirely SOL, I'm interested in trying out the OVMF method, but as mentioned the card I want to pass through doesn't have EFI-capable firmware. There is a firmware update listed for it on Gigabyte's site that apparently does, but I'm not sure how to flash it from Linux...
Offline
Hey guys.
I made an account just so I could post my findings and what I needed to do to make this work.
First, following the rest of the directions did work: I got my display working pretty quickly and easily. But my problem was that once Windows installed the video drivers it would blue-screen, and it went away so fast I couldn't read the error.
Finally it stayed up long enough, and I found out it was being caused by the Windows ATI drivers. I'm using a Radeon HD 2600 XT. What I needed to do was boot into safe mode, uninstall the Windows drivers and install the ATI drivers I had downloaded.
Once that was done the machine booted up correctly and has been working great since.
Thanks for all the great work.
Offline
Congratulations @Eric! Nice to hear more success stories. Cross your fingers for me now.
@Alex (aw): thanks a lot for your suggestion. Switching to MSI worked exactly as you said. The scratching in the sound is gone! Thank you so much.
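(For anyone searching later: as I understand it, the switch happens inside the Windows guest via the registry, and you can check from the host that it took. Treat the details as a sketch; <your device> is whatever instance path your GPU's audio function has.)
Windows guest, regedit, then reboot the guest:
HKLM\SYSTEM\CurrentControlSet\Enum\PCI\<your device>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties
MSISupported (DWORD) = 1
Host-side check (use the address from your -device vfio-pci line):
lspci -vv -s 01:00.1 | grep 'MSI:'
# "MSI: Enable+" means message-signaled interrupts are active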
Is it possible that (in one of your setups) you have an i5-3470T?
Because that's the CPU I am running, and I couldn't manage to find the best solution regarding CPU tuning.
Would you be so kind as to post that part of the .xml?
That would be great.
Last edited by 4kGamer (2014-11-19 21:16:37)
Offline
I'm using a Radeon HD 2600 XT.
Wow, I think that's the oldest Radeon card we've heard of working. Thanks for the report.
Offline
4kGamer wrote:Is it possible that (in one of your setups) you have an i5-3470T? [...] Would you be so kind as to post that part of the .xml?
I do have an i5-3470T, but all the tuning I do is pretty generic.
First you can set hugepages:
<memoryBacking>
<hugepages/>
<nosharepages/>
</memoryBacking>
You'll need to pre-allocate the hugepages via the kernel command line or procfs; libvirt won't do it for you (yet).
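For example, for a 4GB guest with 2MB pages (the count is roughly guest memory / 2MB):
# kernel command line:
hugepages=2048
# or at runtime, before starting the guest:
echo 2048 > /proc/sys/vm/nr_hugepages
grep HugePages /proc/meminfo   # confirm HugePages_Total actually went up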
Basic CPU pinning:
<vcpu placement='static'>4</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='1'/>
<vcpupin vcpu='2' cpuset='2'/>
<vcpupin vcpu='3' cpuset='3'/>
</cputune>
CPU type and topology:
<cpu mode='host-passthrough'>
<topology sockets='1' cores='2' threads='2'/>
</cpu>
This is equivalent to -cpu host with a topology matching the physical CPU.
And if you're using AMD or have otherwise freed yourself of Nvidia restrictions:
<features>
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
</hyperv>
</features>
<clock offset='localtime'>
<timer name='hypervclock' present='yes'/>
</clock>
And as noted recently, use the performance CPU governor on the host for benchmarking/gaming.
The i5-3470T is a little on the under-powered side for gaming, even on bare metal. For a VM, you don't really have any cores or even threads to spare for the host. I think things like Steam In-Home Streaming recommend a quad-core. That doesn't stop me from using it with NVIDIA and AMD VMs running simultaneously, but I either need to partition them as 1 core + 1 thread each, with pinning to the correct physical CPUs, or I need to serialize the workload between them to avoid choppiness.
Offline
Thank you for posting your configuration. My settings are just about the same. Looks like the i5 is simply not good enough, as you already stated, but I haven't set hugepages yet.
That I'll try next.
For 4K gaming, of course, it's a no-go. I guess it needs to be the i7-4790K, which is next on my agenda.
Offline