thearcherblog wrote:OMG... Today will be a party day!!!!! It's working!!!
All of you are the masters of knowledge and the kings of wisdom!!!!
There are not enough words to express my happiness right now!
If there's a place to donate I will do it right now, and if not, please let me know if any of you is in Norway and I will invite him/her to a beer!!!!
Amazing!!!!!! Thanks a lot!!!!!
So setting those options seems necessary on new Intel platforms (X99-Z97).
Happy to see that you got it working
I'm no expert so I can't be sure if it's necessary on these chipsets only, but now the solution is on this forum for eternity
Really, I'm the happiest guy in the world right now, and it's all thanks to all of you. I made my donation, really happy! Please... never change! Best community ever!
I've noticed that my disk subsystem is bottlenecking my game.
I'm using a virtio-blk-pci device with a 60GB raw image, cache=none (I thought Windows would cache the disk adequately; foolish me), on my 750GB, 5-years-old, few-bad-sectors, louder-than-a-CNC-mill WD Caviar Black HDD. So I created a new 10GB image in a directory mounted on the system SSD, attached it as another virtio-blk-pci device to my Windows VM and...
The disk image had 10GB of zeroes inside, so Windows suggested initializing it in diskmgmt.msc, allowing me to choose between MBR and GPT (Windows 7!). That again proves I'm right when I say you do not need Windows 8 to run on pure EFI. So I moved my Steam library to the new disk image, and I want to show you a neat little trick: Windows has symlinks. Kind of. You can attach a drive as an empty NTFS folder, which is useful for applications that load their data from a fixed location (like the user's documents folder), but the folder must be empty.
Now the disk IO lags are gone, yay for me.
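The image-creation step above can be sketched like this; note the paths and the small size are illustrative, not from the post (which used a 10GB image on an SSD mount point). A fully pre-allocated raw image avoids fragmentation-driven stalls:

```shell
# Sketch: create a pre-allocated raw image for a virtio-blk disk.
# Paths and the 100M size are placeholders for illustration.
IMG="$(mktemp -d)/steam.img"
fallocate -l 100M "$IMG"        # reserve every block up front
stat -c %s "$IMG"               # prints 104857600
# qemu-img equivalent for a raw image:
# qemu-img create -f raw -o preallocation=falloc steam.img 10G
```

Windows will then see the attached device as an uninitialized disk, exactly as described above.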
Put stuff on LVM instead of file based images.
Screenshot. I've got LVM on top of dm-crypt on top of mdadm's RAID1 (2x 1TB Barracuda ES.2). All of the disks (windows and steam are separate) are:
raw images
cache set to none
aio set to native
ioeventfd set to on
x-data-plane set to on
steam drive has contiguous allocation
Last edited by dwe11er (2014-11-24 09:59:01)
dwe11er wrote:Put stuff on LVM instead of file based images. (raw images, cache=none, aio=native, ioeventfd=on, x-data-plane=on, contiguous allocation for the steam drive)
I am also using LVM and can confirm that it works really well. However, can you post your qemu command line please? I'd like to try some of the optimizations you are using, and I'd also like to see how you "call" your LVM partition (i.e. I'm only doing this: "-drive file=/dev/guest/win7,id=data,format=raw -device scsi-hd,drive=data").
Nesousx wrote:Can you post your qemu command-line please? I'd like to try some of the optimizations you are using, and I'd also like to see how you "call" your LVM partition.
I'm using libvirt to start my VMs. Anyway, here is libvirt's command line (I've truncated the irrelevant parts):
/usr/sbin/qemu-system-x86_64 -name windows -S -machine pc-i440fx-2.1,accel=kvm,usb=off -cpu host,hv_time,-kvm_pv_eoi,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff
-m 8192 -realtime mlock=off -smp 6,sockets=1,cores=3,threads=2 -nographic -no-user-config -nodefaults
-rtc base=localtime,clock=vm,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown
...
-drive file=/dev/storage/windows-ovmf,if=none,id=drive-virtio-disk0,format=raw,aio=native
-device virtio-blk-pci,ioeventfd=on,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive file=/dev/storage/steam,if=none,id=drive-virtio-disk1,format=raw,aio=native
-device virtio-blk-pci,ioeventfd=on,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1
...
-device vfio-pci,host=00:1b.0,id=hostdev0,bus=pci.0,addr=0x2
-device vfio-pci,host=01:00.0,id=hostdev1,bus=pci.0,multifunction=on,addr=0x7
-device vfio-pci,host=01:00.1,id=hostdev2,bus=pci.0,addr=0x7.0x1
-device usb-host,hostbus=2,hostaddr=5,id=hostdev3
...
-set device.virtio-disk0.scsi=off -set device.virtio-disk0.x-data-plane=on -set device.virtio-disk1.scsi=off -set device.virtio-disk1.x-data-plane=on -msg timestamp=on
Thanks dwe11er!
If I remember correctly, the OP (or someone else) compiled OVMF from git and posted a link to the binaries. Could anyone share this link please?
Last edited by Nesousx (2014-11-24 11:51:48)
Thanks nbhs. @dwe11er, after a quick test here, I get much better disk performance if I do not specify "aio=native,cache=none". I will do more tests if I have time and report here. I just wanted to let you know that you may be able to get better performance too, unless the values you get are already very close to your bare-metal config.
Denso wrote:So setting those options seems necessary on new Intel platforms (X99-Z97). I'm no expert so I can't be sure if it's necessary on these chipsets only, but now the solution is on this forum for eternity.
Well, I'd prefer to have a bit more closure than instructions to apply a long list of arbitrary module options. Any chance you could whittle down the list of options and figure out what fixed it?
options kvm ignore_msrs=1
options kvm_intel emulate_invalid_guest_state=0
options kvm_intel nested=1
options kvm_intel enable_shadow_vmcs=1
options kvm_intel enable_apicv=1
options kvm_intel ept=1
options vfio_pci nointxmask=1
options vfio_iommu_type1 disable_hugepages=1
options vfio_iommu_type1 allow_unsafe_interrupts=1
Most of these seem completely unnecessary to me, especially the vfio options. APICv is already enabled by default when available, so that's a no-op. nested=1, why? You're not doing nesting. ignore_msrs=1 is the only thing I have set on my box.
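For contrast, the pared-down configuration this advice amounts to would be a one-line modprobe.d file (filename illustrative):

```
# /etc/modprobe.d/kvm.conf -- minimal set per the advice above
options kvm ignore_msrs=1
```

Everything else (APICv, EPT, etc.) is left at the kernel's defaults.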
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Nesousx wrote:After a quick test here, I have way better disk perf if I do not specify "aio=native,cache=none".
Might be. My main testing method was "no sound stuttering during heavy I/O", and it's OK with these settings
Hi, I just want to share my success story.
I'm successfully running Arch Linux with the Nvidia driver on the host, and a QEMU machine with Windows 7. My CPU is an i5-3550, motherboard an ASRock Z77 Extreme4, host GPU a GeForce 9800 GTX+, guest GPU a Radeon 4850. Working just great, using (obviously) the mainline kernel from the OP's PKGBUILD.
Some write-up with my thoughts if anyone's interested: http://tech.draiser.net/2014/11/23/kvmq … g-machine/
aw wrote:Well, I'd prefer to have a bit more closure than instructions to apply a long list of arbitrary module options. Any chance you could whittle down the list of options and figure out what fixed it?
Ok, I will test them one by one to find what fixed it
But give me 24 hours or so
And thanks a lot!!!
Most of these seem completely unnecessary to me, especially the vfio options. APICv is already enabled by default when available, so that's a no-op. nested=1, why? You're not doing nesting. ignore_msrs=1 is the only thing I have set on my box.
I know. There was a time when I commented out every one of those options and the VMs booted just fine, but they weren't stable after hours of operation. Plus, with them on, both of my VMs can boot fine at the same time (2 separate GPUs, one for each VM).
The masking option is needed in my case, otherwise the whole VM goes into slow-motion mode.
I'll comment out the APICv + nested + shadow_vmcs + ept options from now on though. I already commented out the disable_hugepages one a month or so ago, no problem.
I didn't need any of these options when I was using Z77. X99 is more troublesome to set up. A LOT. And there is no way I can reboot any VM with a GPU assigned to it without crashing the host. It is the ONLY remaining issue I have now.
They reboot fine until I install Nvidia's drivers, and then they crash the host whenever I reboot them.
Last edited by Denso (2014-11-24 18:31:54)
Denso wrote:There was a time when I commented out every one of those options and the VMs booted just fine, but they weren't stable after hours of operation. Plus, with them on, both of my VMs can boot fine at the same time (2 separate GPUs, one for each VM).
I do this too, one AMD + one Nvidia. AFAIK, I only need ignore_msrs so that passmark performance test works
The masking option is needed in my case, otherwise the whole VM goes into slow-motion mode.
That's specific to a device you're assigning, not likely related to the chipset. The option overrides our detection of whether the device supports INTxDisable masking and masks the interrupt at the APIC instead of at the device. The huge downside is that it requires the device to have an exclusive interrupt, because of the difference in where we mask the interrupt. Also note:
$ modinfo vfio-pci
...
parm: nointxmask:Disable support for PCI 2.3 style INTx masking. If this resolves problems for specific devices, report lspci -vvvxxx to linux-pci@vger.kernel.org so the device can be fixed automatically via the broken_intx_masking flag. (bool)
Have you done that? This option would apply automatically to just that device if you did...
I'll comment out the APICv + nested + shadow_vmcs + ept options from now on though. I already commented out the disable_hugepages one a month or so ago, no problem.
I didn't need any of these options when I was using Z77. X99 is more troublesome to set up. A LOT. And there is no way I can reboot any VM with a GPU assigned to it without crashing the host. It is the ONLY remaining issue I have now.
They reboot fine until I install Nvidia's drivers, and then they crash the host whenever I reboot them.
Aside from the two Radeon reset issues (R7790 and similar can only be used once per boot and HD8570 and similar mistakenly report NoSoftRst-) I don't know of any other GPU reset issues, nor do I see why X99 would be more or less problematic than Z77 in that respect.
EDIT: Nvidia cards have pretty terrible DisableINTx support on the audio device (esp. Quadro and even fake Quadro), but the better option there is typically to make the guest driver use MSI, not to run with nointxmask
Last edited by aw (2014-11-24 18:54:49)
Have you done that? This option would apply automatically to just that device if you did...
Reported just now
Will try commenting out nointxmask=1 and see if the reboot issues are resolved.
nor do I see why X99 would be more or less problematic than Z77 in that respect.
I don't think it's X99's fault, but rather ASUS's.
EDIT :
Confirmed. Commenting out nointxmask=1 solves the reboot issue in one VM. The second one reboots fine and Windows loads fine until it reaches the login screen, then it goes into a black screen. The good news is that the host doesn't crash anymore
Now I just have to wait for my USB PCI-E card to be fixed so I won't need nointxmask=1.
EDIT 2 :
Enabling MSI in the second 8.1 VM fixed its reboot issue completely! Thanks Alex! One more question: I'm about to reinstall Windows 8.1 on this particular VM. Can I export the "MessageSignaledInterruptProperties" key to a .reg file and just apply it to the newly installed OS? Or should I just do it manually again? EDIT: Yes, it works when exported to a .reg file.
Last edited by Denso (2014-11-24 20:48:28)
Denso wrote:Now I just have to wait for my USB PCI-E card to be fixed so I won't need nointxmask=1.
You can add a patch like this one to your kernel to try it:
http://git.kernel.org/cgit/linux/kernel … 3cb30b73ad
Copy the two added lines and replace the vendor & device ID with your own.
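For orientation, such a change follows the kernel's standard PCI quirk pattern. A sketch (the 0x1234/0x5678 ID pair below is a placeholder, not a real device; substitute the vendor/device IDs from lspci -nn):

```c
/* drivers/pci/quirks.c -- register a device against the existing
 * quirk_broken_intx_masking handler, so vfio-pci applies the INTx
 * workaround automatically for just that device.
 * 0x1234/0x5678 is a placeholder vendor/device pair. */
DECLARE_PCI_FIXUP_HEADER(0x1234, 0x5678, quirk_broken_intx_masking);
```

After rebuilding the kernel, the nointxmask=1 module option should no longer be needed globally.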
Denso wrote:EDIT 2: Enabling MSI in the second 8.1 VM fixed its reboot issue completely! Thanks Alex! ... EDIT: Yes, it works when exported to a .reg file.
Great! And neat trick with the exported reg file, please add it as a comment on the blog.
You can add a patch like this one to your kernel to try it:
http://git.kernel.org/cgit/linux/kernel … 3cb30b73ad
Copy the two lines of code added and replace with your vendor & device ID.
I'll do that next time I compile, for sure.
Great! And neat trick with the exported reg file, please add it as a comment on the blog.
Will do so in a minute
EDIT :
Errrr... I can't publish my comment to your blog because I don't have any of the accounts required to post. Can you publish it? Thank you.
Hi.
You can export the "MessageSignaledInterruptProperties" key to a ".reg" file and use it to enable MSI easily when you reinstall Windows.
Navigate to the "MessageSignaledInterruptProperties" key, right-click it, choose "Export", give it a name and save it anywhere you like. You will end up with a ".reg" file; you can double-click it to merge it into the registry when you reinstall Windows.
You can also combine multiple ".reg" files into one big unified file to make things even easier.
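For anyone who would rather type it than export it, the exported key looks roughly like this in a .reg file. The long device-instance path below is a made-up example; your own path will differ, so check the device's location in Device Manager or regedit first:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_0E08&SUBSYS_00000000&REV_A1\4&12345678&0&0009\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001
```

Double-clicking the file merges the key and re-enables MSI for that device after a reinstall.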
EDIT 2 :
Why can't I enable MSI for Nvidia's HDMI audio device?
Figured it out. The device path shown in the device properties doesn't point to the correct location. I enabled MSI for ALL devices with vendor ID 10de (Nvidia's ID), and now it works beautifully without "nointxmask=1". (Yes, the culprit was Nvidia's audio device and NOT the USB card as I initially assumed.)
EDIT 3 :
Here is the ".reg" file for fixing the Nvidia GT610's audio and reboot issues (I exported each key to a separate file, then combined them into one unified file with a quick copy/paste in Notepad):
http://www.pastebin.ca/2877860
If you have a GT610 and want to try it, just copy the text to a file, save it with the ".reg" extension and double-click it to merge it into your registry.
Hope this helps
Last edited by Denso (2014-11-25 08:16:19)
Denso wrote:If you have a GT610 and want to try it, just copy the text to a file, save it with the ".reg" extension and double-click it to merge it into your registry.
Huh, you're passing through a GT610?
I have a GT610 too, but I'm using it for host system output. Is there any testing I could help you with? My GT610 is the silent 1GB version from Gigabyte.
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Just to inform:
Having cpupower on ondemand versus powersave takes me from 12 fps to 75 running the Valley benchmark, so... be careful.
Between ondemand and performance I didn't find any relevant difference.
Regards,
TheArcher
Last edited by thearcherblog (2014-11-25 12:33:56)
thearcherblog wrote:Having cpupower on ondemand versus powersave takes me from 12 fps to 75 running the Valley benchmark, so... be careful. Between ondemand and performance I didn't find any relevant difference.
The powersave governor statically runs the processor at the lowest speed regardless of load, so yes, it will have a dramatic effect on performance. The performance governor runs the processor at the highest speed regardless of load. The ondemand governor adjusts the processor speed based on load, which is where the guest can experience response latency.
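The governors described above are switched per-CPU through the cpufreq sysfs interface. A sketch, using a scratch directory standing in for /sys so it can run anywhere (the real host paths and the cpupower shortcut are in the comments):

```shell
# cpufreq exposes one scaling_governor file per CPU; writing a governor
# name switches it. We mimic the layout under a temp dir for illustration.
SYS="$(mktemp -d)"
mkdir -p "$SYS/cpu0/cpufreq"
echo ondemand > "$SYS/cpu0/cpufreq/scaling_governor"
# On a real host the equivalent write would be:
#   echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# or simply: cpupower frequency-set -g performance
echo performance > "$SYS/cpu0/cpufreq/scaling_governor"
cat "$SYS/cpu0/cpufreq/scaling_governor"   # prints performance
```

Pinning performance before benchmarking removes the ondemand ramp-up latency aw describes.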
Thank you, all is working now except sound and reboot.
I hear sound, but I get scratching/stuttering. I use ALSA and tried the parameters the OP mentioned ("QEMU_ALSA_DAC_BUFFER_SIZE=512 QEMU_ALSA_DAC_PERIOD_SIZE=170 QEMU_AUDIO_DRV=alsa"), but with or without them the result is the same.
Reboot:
I found this method: http://blog.ktz.me/?p=219
All is working; when I start the logon script, the screen turns black and comes back. But when I reboot with the logoff script, I get "Invalid ROM contents" and starting the VM does not work.
Any ideas?
aw wrote:The powersave governor statically runs the processor at the lowest speed regardless of load, so yes, it will have a dramatic effect on performance. The performance governor runs the processor at the highest speed regardless of load. The ondemand governor adjusts the processor speed based on load, which is where the guest can experience response latency.
However, this isn't true for intel_pstate (now the default for Sandy Bridge and upwards), which supports only the powersave and performance governors. They behave more like ondemand, with some changes in power-consumption characteristics in the long run.
Last edited by dwe11er (2014-11-25 15:41:14)
dwe11er wrote:However, this isn't true for intel_pstate (now the default for Sandy Bridge and upwards), which supports only the powersave and performance governors. They behave more like ondemand.
That's true, but I needed to disable intel_pstate because my processor was running at 4GHz all the time...
I have a GT610 too, but I'm using it for host system output. Is there any testing I could help you with? My GT610 is the silent 1GB version from Gigabyte.
This card sucks. Or maybe my X99-Deluxe sucks. This card used to work with zero tweaking on my AMD 990FXA board. Now every time I think I've finally got rid of the reboot issue, a couple of reboots later BAAAM, it hangs the VM again. Sigh... I'm really tired. It works pretty darn well until I install the Nvidia drivers, then it's the same all over again. I'm seriously thinking of running the VM in full screen on top of a Linux DE. Will QXL give me 60fps when watching movies and surfing? Because if it would, I will happily go that route.
Umm... would it be possible to manually install JUST the graphics/audio driver in Windows without running the Nvidia installer? It'd be worth a try!
Denso wrote:I'm seriously thinking of running the VM in full screen on top of a Linux DE. Will QXL give me 60fps when watching movies and surfing? Because if it would, I will happily go that route.
Uhm, running a VM for surfing/video playback?
I always thought both of those were far more comfortable in Linux, but I just might have missed something ))
This card sucks.
Well, you know, it's a PLUG to fill the PCI-E port.
BTW, does anyone know what those two or three recent SeaVGABIOS git updates brought us? Seems like I've got something borked:
the Unigine Heaven benchmark runs fine in CF mode, GPU-Z says CF is enabled and even renders, but when I start my game with the -nodx9ex switch, which uses DX9, it crashes with something related to an access violation, and then even the Heaven benchmark breaks. Yesterday it was working just fine. And there are no messages in the host dmesg.