jack_boss wrote:
aw wrote:
Maybe your ovmf package isn't installing the files where libvirt/virt-manager looks for them? On my Fedora 21 system the binaries are installed in /usr/share/edk2.git/ovmf-x64/. Your virt-manager would need to be pulled from the devel tree since last Wednesday to include the fix. AIUI = As I Understand It.
I'm stuck at this step too, since I started following your guide.
I found out that ovmf from the official Arch repository installs its firmware to /usr/share/ovmf/ovmf_x64.bin,
whereas ovmf-svn (installed today) puts it at
/usr/share/ovmf/64/ovmf_x64.bin
I made symlinks to point to the "correct" place. Didn't work.
You need to follow the instructions here:
https://wiki.archlinux.org/index.php/PC … stallation
The packages on aur won't work.
Edit: And to make it appear in virt-manager, add this to qemu.conf:
nvram = [
"/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd:/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd",
"/usr/share/edk2.git/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2.git/aarch64/vars-template-pflash.raw",
]
Now this is where I am unclear: I only included the second aarch64 line because libvirt fails to start if it isn't there, but there isn't anything at that location. I guess qemu.conf needs it defined even if you don't use it?
This is the right answer, in general. In more detail:
* The AUR builds are probably not okay -- I assume they don't package the separate (split) OVMF_CODE.fd (firmware binary) and OVMF_VARS.fd (varstore template) files. Gerd's packages do.
* You should edit the nvram stanza in /etc/libvirt/qemu.conf as described above, and restart libvirtd. This is documented in the OVMF whitepaper, section "Installation of OVMF guests with virt-manager and virt-install".
* Unless you give me the exact error message from libvirt, I won't know why it rejected the single-entry nvram stanza. (Refer to /var/log/messages or systemctl status libvirtd.) My guess is that you may have left a stray comma at the end of that sole entry; see the sketch below. (OTOH, as Alex points out, your example now has a stray comma after the second entry, and apparently libvirt doesn't complain... So please provide the exact error message.)
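For reference, my guess at a correct single-entry stanza, using the x64 paths from above (note: no trailing comma after the sole entry):

nvram = [
    "/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd:/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd"
]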
Offline
OK, so it looks like my issue was that I was commenting out a line in the middle of the configuration block with a #. If I remove the line completely, libvirtd restarts fine. My fault for being lazy.
This is what systemctl spits back at me with the # in the config block.
May 27 15:27:09 Desktop libvirtd[2264]: libvirt version: 1.2.15
May 27 15:27:09 Desktop libvirtd[2264]: internal error: Failed to find path for dmidecode binary
May 27 15:27:09 Desktop libvirtd[2264]: configuration file syntax error: /etc/libvirt/qemu.conf:516: expecting a value
May 27 15:27:09 Desktop libvirtd[2264]: Initialization of QEMU state driver failed: configuration file syntax error: /etc/libvirt/qemu.conf:516: expecting a value
May 27 15:27:09 Desktop libvirtd[2264]: Driver state initialization failed
Edit: And yeah, the AUR builds of ovmf just create one binary, so using Gerd's package is the correct option to get it working with virt-manager.
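For anyone else who hits this: the layout that bit me was roughly the following (a reconstruction, using the paths from above - libvirt's config parser evidently won't accept a commented-out line inside a multi-line list, hence the "expecting a value" error):

nvram = [
    "/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd:/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd",
#   "/usr/share/edk2.git/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2.git/aarch64/vars-template-pflash.raw",
]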
Last edited by Punkbob (2015-05-27 19:33:26)
Offline
@lersek thanks a lot, this is actually working:
1. Getting ovmf from Gerd Hoffmann's repo;
2. Editing qemu.conf.
I can now select OVMF from the virt-manager GUI. Not sure about the Hyper-V extensions though; I will probably have to turn them off manually (a sketch below). I'll keep you posted.
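If I do have to, I assume something like libvirt's <hyperv> feature flags would do it (a sketch only; the exact set of flags virt-manager added may differ):

<features>
  <hyperv>
    <relaxed state='off'/>
    <vapic state='off'/>
    <spinlocks state='off'/>
  </hyperv>
</features>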
Offline
update on the laptop experiment:
when I tried to get more info on the BSOD, it just installed the driver fine. I haven't changed anything since the BSOD; no idea why it didn't crash this time...
though I'm still in the state I was in with the previous install (the one where I installed the drivers while cirrus was primary) - it boots, shows the logo and "Starting Windows", then the screen goes black, and a while after that I hear the sound it makes when it has finished booting and is asking for the password.
I don't see anything interesting in the event log, and nothing in dmesg at that time.
the interesting thing is that Windows lists the card as "7400M series" (after installing the AMD driver). No idea if that's correct or not; I haven't installed Windows on this machine in some time, so I have nothing to check against. (The 6400 and 7400 are quite similar; Linux lists it as 6400M/7400M.)
the problem with not shutting down is still present, any ideas on that?
Offline
Is it possible to use cpu-passthrough mode with those 2 extra flags to make Windows 10 boot/work with the "real" CPU id?
Offline
Is it possible to use cpu-passthrough mode with those 2 extra flags to make Windows 10 boot/work with the "real" CPU id?
It's a problem with kvm *itself*, so it makes more sense to report a bug upstream.
Edit: I am debugging it now, the error I see spit back at me in the log files is:
kvm [5953]: vcpu0 unhandled wrmsr: 0x38d data 0
When I search for that, I find references on the kvm mailing list to processor functionality that the OS is trying to use, with kvm lacking the extra code to properly support those features.
I am going to build a test box and turn serial debugging on inside windows, and go from there to see what it complains about.
Last edited by Punkbob (2015-05-28 17:09:27)
Offline
AIUI Win7+OVMF+SMP is incompatible with hyper-v extensions, so you'll need to disable them even if you're not using Nvidia.
Errr... I had this working some time ago, but... now the manually-added hyper-v enlightenments are gone from my domain xml... I should try adding them back manually. Oh well. Yeah, now it hangs on the boot logo.
though I'm still in the state I was in with the previous install (the one where I installed the drivers while cirrus was primary) - it boots, shows the logo and "Starting Windows", then the screen goes black, and a while after that I hear the sound it makes when it has finished booting and is asking for the password.
Can you plug a screen into one of that GPU's video outputs? Like an HDMI port, or VGA, or whatever you have there? Chances are the GPU doesn't find anything to output to and just works as a 3D accelerator. Maybe you'll need to dig deeper into how your laptop is wired.
I am going to build a test box and turn serial debugging on inside windows, and go from there to see what it complains about.
Am I misunderstanding something, or wouldn't you achieve the same thing using QEMU's built-in GDB server?
Last edited by Duelist (2015-05-28 18:57:31)
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
@Duelist
From what I understand, if I want to see what is going on inside Windows, I need to enable serial debugging and then connect to it from another Windows machine, since Windows uses its own debugging protocol. I also need to look at kvm to see what calls are being made to it, but I was starting from the inside out.
If I am wrong please feel free to correct me, as it'd save me some time configuring windbg; a sketch of my plan is below.
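For the record, here's roughly what I plan to set up - a sketch only, assuming QEMU's TCP serial redirection and the standard bcdedit switches (the port number is one I picked arbitrarily):

# host side: expose the guest's COM1 over TCP
qemu-system-x86_64 ... -serial tcp:127.0.0.1:4555,server,nowait

# guest side (elevated prompt): enable kernel debugging over COM1
bcdedit /debug on
bcdedit /dbgsettings serial debugport:1 baudrate:115200

windbg on the second machine would then attach to that serial link as a kernel debugger (COM1, 115200 baud).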
Offline
So, completely useless, but if you enable debug mode in Windows, it will boot on Haswell just fine.
It appears to be an issue with setting a value on IA32_FIXED_CTR_CTRL, which relates to CPU performance monitoring on Nehalem-or-later processors.
A VMware article mentions what you have to do to enable this in ESX; it might be something that kvm needs to address.
http://kb.vmware.com/selfservice/micros … Id=2030221
Edit: Or I could be barking up the completely wrong tree, still need to debug the windows kernel.
Edit2: Solved it or at least got it to boot?
So I did 2 things, first I set the CPU to "host-passthrough" and then ran "echo 1 > /sys/module/kvm/parameters/ignore_msrs"
It basically tells kvm not to send a fault when the guest can't access or set an MSR value on the host CPU. The host-passthrough part is most likely not required, but it's something I tried that at least gave me a different MSR error (vcpu0 ignored rdmsr: 0x639), and it should also enable you to use Nvidia game-streaming, as it will report your exact CPU.
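Note that echoing into sysfs doesn't survive a reboot; if you want it permanent, the usual way (assuming a modprobe.d snippet - the file name here is just my choice) is:

# /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1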
Last edited by Punkbob (2015-05-29 03:35:43)
Offline
Nice, I will try it later today after work.
Offline
Edit2: Solved it or at least got it to boot?
So I did 2 things, first I set the CPU to "host-passthrough" and then ran "echo 1 > /sys/module/kvm/parameters/ignore_msrs"
It basically tells kvm not to send a fault when the guest can't access or set an MSR value on the host CPU. The host-passthrough part is most likely not required, but it's something I tried that at least gave me a different MSR error (vcpu0 ignored rdmsr: 0x639), and it should enable you to use Nvidia game-streaming.
This is also necessary to make the Passmark Performance Test work, so I've typically got it enabled already, FWIW.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
Also, when running CPU-Z, dmesg screams with unhandled rdmsr errors; I guess CPU-Z tries to determine the frequency and multiplier by accessing those MSRs, and since our CPUs are virtual, that fails too.
I don't think this detail is worth much, but let it be noted.
BTW:
What do you guys recommend for the storage subsystem?
I've added a small 8 GB raw file image, located on an old OCZ SSD, to my VM, connected via virtio-blk-pci.
I've tried using iothreads (which are related to x-data-plane, I guess) with virtio-blk-pci, but I get huge host CPU usage in benchmarks and small freezes in some weird games when loading heavy parts of a scene (the game is poorly written, but it's almost 8 years old). ATTO disk benchmark also reports a decrease in linear write performance starting somewhere around a 256k transfer size.
That compares poorly with virtio-scsi with four queues enabled, which gives me a significant increase in linear write performance; but the freezes aren't gone, and there's still huge host CPU usage while benchmarking (well, it is pushing 200-300 MB/s).
I remember aw said that he has x-data-plane enabled, but... either I've missed something in libvirt's domain xml, or virtio-scsi is simply better.
Also, it seems like cache=none performs better than cache=directsync when not using iothreads - Red Hat's optimization guide doesn't mention the latter caching mode or iothreads; maybe it's already outdated?..
But I definitely should find a good method of measuring disk performance, as ATTO disk benchmark doesn't count IOPS - maybe something like the fio sketch below.
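For IOPS I'll probably try something like fio (a sketch; the filename and sizes are placeholders I made up):

fio --name=randread --filename=/path/to/testfile --size=1G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based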
Last edited by Duelist (2015-05-29 05:07:46)
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
I remember aw said that he has x-data-plane enabled
Nope, I use virtio-scsi w/ multiqueue
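For reference, a minimal sketch of what that looks like in the domain XML (the queue count, disk path, and target name here are just examples, not my actual config):

<controller type='scsi' model='virtio-scsi'>
  <driver queues='4'/>
</controller>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/path/to/disk.img'/>
  <target dev='sda' bus='scsi'/>
</disk>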
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
Is it worth getting the i7-4790K over the i5-4690K for the extra vCPUs (hyperthreading)? What kind of benefits would I see?
My use case is simultaneous gaming on the VM, with moderate multitasking on the host (say 30-50 firefox tabs, etc).
Offline
Could you please test this? http://paste.fedoraproject.org/225883/98048143/
Now the kernel doesn't panic or throw any errors, but QEMU (stable) still hangs at shutdown; I will try the git version tomorrow
EDIT: Noticed that there is another version for upstream, I will test that one, sorry for the confusion
Last edited by Cubex (2015-06-01 01:24:48)
Offline
Is it worth getting the i7-4790K over the i5-4690K for the extra vCPUs (hyperthreading)? What kind of benefits would I see?
My use case is simultaneous gaming on the VM, with moderate multitasking on the host (say 30-50 firefox tabs, etc).
30-50 firefox tabs
LOL!
And that's only the QEMU group. As you can see, firefox is more memory-hungry.
And now seriously:
I have 4 cores in my cheap Athlon X4 750K, and I'm assigning all of them to the guest. Yeah, when either the host or the guest is loaded, the other system will stutter, lag and suffer from CPU starvation. But I'm very rarely that "productive".
You'll want additional cores to use CPU pinning.
So, for example, I could've cut three cores from the host and dedicated (pinned) them to the VM. That would give the guest very good CPU performance (almost native/bare-metal) and, more importantly, reduce latencies, which would theoretically fix the audio problems. But I don't like to leave my host with only one core, because... I feel sad for it.
So basically, more cores - more performance, obviously, and if getting an i7 isn't much of a financial burden for you, it would be great to use it.
BUT! Beware of VT-d: some K-series CPUs (the 4770K, for example) have VT-d disabled. That feature is crucial for passthrough to work.
Also, you could get an E3-series Xeon, like aw does (read the first part of the guide on his blog), or, better yet, an E5 Xeon. That'd be the ultimate choice, but ultimately pricey.
Last edited by Duelist (2015-05-31 21:57:32)
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
-Snip-
I have 4 cores in my cheap Athlon X4 750K, and I'm assigning all of them to the guest. Yeah, when either the host or the guest is loaded, the other system will stutter, lag and suffer from CPU starvation. But I'm very rarely that "productive".
You'll want additional cores to use CPU pinning.
So, for example, I could've cut three cores from the host and dedicated (pinned) them to the VM. That would give the guest very good CPU performance (almost native/bare-metal) and, more importantly, reduce latencies, which would theoretically fix the audio problems. But I don't like to leave my host with only one core, because... I feel sad for it.
So basically, more cores - more performance, obviously, and if getting an i7 isn't much of a financial burden for you, it would be great to use it.
-Snip-
How would I go about dedicating some cores to the VM? And could it be done dynamically (while the system and the VM are running)? I already have the vCPU cores pinned, but those cores are still being used by the host.
I'm having some problems: in the beginning it was with USB sound cards (they made the whole VM stutter while it was running). Now I'm using a virtual sound card and the whole system is just slightly slow.
Offline
How would I go about dedicating some cores to the VM? And could it be done dynamically (while the system and the VM are running)? I already have the vCPU cores pinned, but those cores are still being used by the host.
I'm having some problems: in the beginning it was with USB sound cards (they made the whole VM stutter while it was running). Now I'm using a virtual sound card and the whole system is just slightly slow.
There are a number of ways of doing so, but... I haven't used any of these:
cpuset
taskset
Libvirt's way - similar to cpuset, maybe?..
And there was a way of using isolcpus and switching the cores offline for the host (NoHZ mode?), only bringing them up for the guest system, but I can't find more info... But yeah, somewhere in the thread someone did that.
Dynamically... Well, dynamic allocation ruins the latencies, as I understand it. Using cpuset does exactly what you want - it makes a process run only on the specified core, without forcing all the other processes off that core.
I'm not so familiar with all this stuff, because I don't have "enough" CPU cores for experiments, but there's a sketch of the libvirt way below.
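As an illustration of the libvirt way (a sketch only - I haven't tested this; the core numbers are arbitrary, pinning three vCPUs to host cores 1-3 and leaving core 0 for the host):

<vcpu placement='static'>3</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='2'/>
  <vcpupin vcpu='2' cpuset='3'/>
</cputune>

Combine that with isolcpus=1-3 on the host kernel command line if you also want the host scheduler to keep ordinary tasks off those cores.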
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
*SNIP*
And that's only the QEMU group. As you can see, firefox is more memory-hungry.
And now seriously:
I have 4 cores in my cheap Athlon X4 750K, and I'm assigning all of them to the guest. Yeah, when either the host or the guest is loaded, the other system will stutter, lag and suffer from CPU starvation. But I'm very rarely that "productive".
You'll want additional cores to use CPU pinning.
So, for example, I could've cut three cores from the host and dedicated (pinned) them to the VM. That would give the guest very good CPU performance (almost native/bare-metal) and, more importantly, reduce latencies, which would theoretically fix the audio problems. But I don't like to leave my host with only one core, because... I feel sad for it.
So basically, more cores - more performance, obviously, and if getting an i7 isn't much of a financial burden for you, it would be great to use it.
Haha, point taken.
(As a side note, you should consider using tree-style tabs.)
I'm planning on reusing my old RAM (16 GB) and using the savings to get an i7-4790K, which does support VT-d: http://ark.intel.com/compare/80810,80811,80807,80806
Given that this has 4 real cores and 8 threads, would I still have to worry about CPU pinning to avoid stuttering? If so, I think I would just allocate 50% to each side.
Offline
Both patches work for fixing the null pointer at shutdown, thanks.
EDIT:
"dmar: DRHD: handling fault status reg 2"
is caused by the USB3 controller on my motherboard when using the qemu git version; with stable it works
Last edited by Cubex (2015-06-01 04:05:47)
Offline
Guys, some time ago I was told that UEFI GOP can't go beyond 1024x768 without device-specific drivers.
How does the Gigabyte Z87X-UD3H output 1920x1080 in its UEFI setup, then? Intel-specific hacks?
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
Hey all,
I've gone through some of the posts around pages 26-29 concerning running vga passthrough as a non-root user. Although I used `chown` on /dev/vfio for my user, I still got the following error:
qemu-system-x86_64 -enable-kvm -m 1024 -cpu host,kvm=off \
-smp 4,sockets=1,cores=4,threads=1 \
-device vfio-pci,host=03:00.0,x-vga=on -device vfio-pci,host=03:00.1 \
-vga none
#--- it returns this:
qemu-system-x86_64: -device vfio-pci,host=03:00.0,x-vga=on: vfio_dma_map(0x7f400232e9d0, 0x0, 0xc0000, 0x7f3fa0000000) = -12 (Cannot allocate memory)
qemu-system-x86_64: -device vfio-pci,host=03:00.0,x-vga=on: vfio_dma_map(0x7f400232e9d0, 0xc0000, 0x20000, 0x7f3fee600000) = -12 (Cannot allocate memory)
qemu-system-x86_64: -device vfio-pci,host=03:00.0,x-vga=on: vfio_dma_map(0x7f400232e9d0, 0x100000, 0x3ff00000, 0x7f3fa0100000) = -12 (Cannot allocate memory)
qemu-system-x86_64: -device vfio-pci,host=03:00.0,x-vga=on: vfio: memory listener initialization failed for container
qemu-system-x86_64: -device vfio-pci,host=03:00.0,x-vga=on: vfio: failed to setup container for group 15
qemu-system-x86_64: -device vfio-pci,host=03:00.0,x-vga=on: vfio: failed to get group 15
qemu-system-x86_64: -device vfio-pci,host=03:00.0,x-vga=on: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=03:00.0,x-vga=on: Device 'vfio-pci' could not be initialized
The same command runs fine as root. I have also chown'ed /dev/hugepages in addition -- to no avail. Does anyone have an answer to this problem? I apologize if the answer appeared earlier in the 200+ pages; I couldn't search them well.
Offline
Hey all,
I've gone through some of the posts around pages 26-29 concerning running vga passthrough as a non-root user. Although I used `chown` on /dev/vfio for my user, I still got the following error: [...]
The same command runs fine as root. I have also chown'ed /dev/hugepages in addition -- to no avail. Does anyone have an answer to this problem?
In order to assign a PCI device to a VM, all of the guest memory needs to be pinned (ie. locked) into host memory and mapped through the IOMMU. A normal user only has the ability to lock 64KB of memory (see ulimit). Your VM is probably bigger than that, therefore you need to increase the locked memory limit for the user to run the VM.
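You can check the current limit like so (the 64 shown is the usual default; the value is reported in kilobytes):

$ ulimit -l
64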
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
In order to assign a PCI device to a VM, all of the guest memory needs to be pinned (ie. locked) into host memory and mapped through the IOMMU. A normal user only has the ability to lock 64KB of memory (see ulimit). Your VM is probably bigger than that, therefore you need to increase the locked memory limit for the user to run the VM.
Thanks for the tip! How do I increase the locked memory limit?
Offline
aw wrote:
In order to assign a PCI device to a VM, all of the guest memory needs to be pinned (ie. locked) into host memory and mapped through the IOMMU. A normal user only has the ability to lock 64KB of memory (see ulimit). Your VM is probably bigger than that, therefore you need to increase the locked memory limit for the user to run the VM.
Thanks for the tip! How do I increase the locked memory limit?
/etc/security/limits.d/ (see limits.conf in the parent directory)
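A sketch of such a file (the user name, file name, and size are placeholders; the value is in kilobytes, so for the 1GB guest in the command above you'd want somewhat more than 1048576 to cover overhead):

# /etc/security/limits.d/10-memlock.conf
myuser    soft    memlock    1200000
myuser    hard    memlock    1200000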
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline