ghormoon wrote:I don't get where you mean to do that. On the web site, I do manually select (notebook GPU > HD series > 6xxx > Win 7 x64) and it gives me a 288MB Catalyst. That one only gives me a default/custom selection and a field for the path. On next (either default or custom), it goes through a few steps before allowing you to select things (at least on custom; default won't let you), and it crashes before that selection, just after saying "detecting graphics hardware" in the progress bar. I can't find any way to get a more specific driver than by the series 6xxx. What am I missing?
Have you tried manually installing the driver? Once downloaded, just extract it somewhere on your disk, but do not use the wizard. Then, from Windows Device Manager, right-click the graphics card (probably the one with the yellow warning sign) and install the driver from there by pointing to the path where the drivers were extracted.
I used to do it this way with my old setup (Xen + AMD cards), in order to avoid BSODs.
I doubt that this will work, since AMD Catalyst has gotten oh so complex in the last two years or so.
...
Whoa, what colourful glitches I have with hugepages enabled. Just awesome. It's covered in stripes. And I haven't even specified that the VM should use hugepages in the XML...
Well, seems like my issue with every-15-seconds-freeze isn't related to the VM, but is caused by the game.
Last edited by Duelist (2015-05-26 23:26:55)
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Seems that sometimes when I shut down the VM, QEMU hangs while doing so, and this appears in dmesg when force-killing qemu:
[ 8032.137126] BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
[ 8032.144976] IP: [<ffffffff8141ff71>] vfio_device_get+0x11/0x50
[ 8032.150251] pci-stub 0000:02:00.0: claimed by stub
[ 8032.150354] vfio-pci 0000:02:00.1: Relaying device request to user (#0)
[ 8032.162192] PGD 0
[ 8032.164217] Oops: 0000 [#1] PREEMPT SMP
[ 8032.168190] Modules linked in: udf crc_itu_t arc4 ecb md4 md5 hmac nls_utf8 cifs dns_resolver vhost_net vhost macvtap macvlan tun ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge stp llc cfg80211 rfkill gspca_zc3xx gspca_main videodev media snd_hda_codec_hdmi mousedev hid_logitech_hidpp hid_logitech_dj uas usb_storage snd_hda_codec_realtek snd_hda_codec_generic coretemp hwmon intel_rapl iosf_mbi x86_pkg_temp_thermal intel_powerclamp kvm_intel kvm fuse nvidia(O) nls_iso8859_1 crct10dif_pclmul crc32_pclmul nls_cp437 crc32c_intel vfat ghash_clmulni_intel iTCO_wdt fat iTCO_vendor_support mxm_wmi aesni_intel snd_usb_audio aes_x86_64 lrw gf128mul snd_usbmidi_lib snd_hda_intel snd_hda_controller snd_hda_codec snd_rawmidi e1000e snd_hda_core snd_seq_device r8169
[ 8032.239642] snd_hwdep snd_pcm evdev mii i2c_i801 mac_hid snd_timer ptp glue_helper mei_me ablk_helper drm snd cryptd sb_edac pcspkr serio_raw pps_core tpm_tis mei i2c_core soundcore psmouse lpc_ich edac_core shpchp tpm wmi processor button sch_fq_codel nfs lockd grace sunrpc fscache ext4 crc16 mbcache jbd2 hid_generic usbhid hid sr_mod cdrom sd_mod atkbd libps2 ahci libahci xhci_pci libata xhci_hcd scsi_mod usbcore usb_common i8042 serio
[ 8032.278086] CPU: 3 PID: 3125 Comm: qemu-system-x86 Tainted: G O 4.1.0-1-ARCH #1
[ 8032.286506] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./X99 Extreme6, BIOS P1.90 04/17/2015
[ 8032.296303] task: ffff88048cf0a8c0 ti: ffff880447034000 task.ti: ffff880447034000
[ 8032.303764] RIP: 0010:[<ffffffff8141ff71>] [<ffffffff8141ff71>] vfio_device_get+0x11/0x50
[ 8032.312023] RSP: 0018:ffff880447037ae8 EFLAGS: 00010296
[ 8032.317320] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[ 8032.324434] RDX: ffff88047cc71c40 RSI: ffff880447037bc0 RDI: 0000000000000000
[ 8032.331549] RBP: ffff880447037af8 R08: ffff88047cc71c40 R09: ffff88047cc71c40
[ 8032.338666] R10: ffffffff810d443a R11: 0000000000003029 R12: ffff88047cc71c40
[ 8032.345780] R13: ffffffff81423090 R14: ffff88048c032c00 R15: ffff88048c032c00
[ 8032.352903] FS: 00007f4bd6a70700(0000) GS:ffff88048f2c0000(0000) knlGS:0000000000000000
[ 8032.360971] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 8032.366701] CR2: 0000000000000018 CR3: 000000000180b000 CR4: 00000000001407e0
[ 8032.373815] Stack:
[ 8032.375824] ffff88048c065002 0000000000000000 ffff880447037b18 ffffffff8141ffcd
[ 8032.383267] ffff880447037b38 ffff880447037bc0 ffff880447037b38 ffffffff8142313d
[ 8032.390709] ffff880447037bd0 ffff880447037bd0 ffff880447037b58 ffffffff814230ef
[ 8032.398155] Call Trace:
[ 8032.400597] [<ffffffff8141ffcd>] vfio_device_get_from_dev+0x1d/0x30
[ 8032.406932] [<ffffffff8142313d>] vfio_pci_get_devs+0x3d/0x70
[ 8032.412662] [<ffffffff814230ef>] vfio_pci_walk_wrapper+0x5f/0x70
[ 8032.418742] [<ffffffff81304129>] pci_walk_bus+0x79/0xa0
[ 8032.424055] [<ffffffff81424a30>] vfio_pci_release+0x290/0x410
[ 8032.429878] [<ffffffff81423100>] ? vfio_pci_walk_wrapper+0x70/0x70
[ 8032.436127] [<ffffffff8141f9f0>] vfio_device_fops_release+0x20/0x40
[ 8032.442466] [<ffffffff811dfb6c>] __fput+0x9c/0x200
[ 8032.447337] [<ffffffff811dfd1e>] ____fput+0xe/0x10
[ 8032.452204] [<ffffffff81095bf4>] task_work_run+0xd4/0xf0
[ 8032.457597] [<ffffffff8107b593>] do_exit+0x3b3/0xba0
[ 8032.462641] [<ffffffff8107be0b>] do_group_exit+0x3b/0xb0
[ 8032.468024] [<ffffffff810877a6>] get_signal+0x246/0x670
[ 8032.473323] [<ffffffff810e6aa3>] ? hrtimer_try_to_cancel+0x93/0x120
[ 8032.479668] [<ffffffff81015577>] do_signal+0x37/0x770
[ 8032.484799] [<ffffffff810e7a54>] ? hrtimer_nanosleep+0xc4/0x1d0
[ 8032.490791] [<ffffffff810fb751>] ? SyS_futex+0x81/0x190
[ 8032.496102] [<ffffffffa061c4f7>] ? kvm_on_user_return+0x77/0x90 [kvm]
[ 8032.502612] [<ffffffff81015d10>] do_notify_resume+0x60/0x80
[ 8032.508258] [<ffffffff8158e0bc>] int_signal+0x12/0x17
[ 8032.513389] Code: ed eb d8 0f 1f 80 00 00 00 00 48 89 df e8 38 fe ff ff 48 89 d8 eb cb 0f 1f 00 0f 1f 44 00 00 55 48 89 e5 53 48 89 fb 48 83 ec 08 <48> 8b 7f 18 e8 16 fe ff ff b8 01 00 00 00 f0 0f c1 03 83 c0 01
[ 8032.533315] RIP [<ffffffff8141ff71>] vfio_device_get+0x11/0x50
[ 8032.539233] RSP <ffff880447037ae8>
[ 8032.542713] CR2: 0000000000000018
[ 8032.546021] ---[ end trace 276b0f1585740db3 ]---
[ 8032.550625] Fixing recursive fault but reboot is needed!
Cubex wrote:Seems that sometimes when I shut down the VM, QEMU hangs while doing so, and this appears in dmesg when force-killing qemu
Could you please test this? http://paste.fedoraproject.org/225883/98048143/
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Hello, maybe it's a bit late to jump on the bandwagon, but if I read the first post correctly, an attempt with the Intel IGP is legit if using OVMF.
In any case, I ran the OVMF test and received a black QEMU window.
It took a bit of trial and error (adding kernel parameters and changes in /etc/mkinitcpio.conf), due to my lack of seeing the obvious or due to my hardware. I'd like to share what I have...
My issue, before I go any deeper with the guide, is that I don't get a /etc/vfio-pci.cfg file.
/boot/loader/entries/arch.conf
options root=/dev/sdb2 rw intel_iommu=on pci-stub.ids=10de:1184,10de:0e0a vfio_iommu_type1.allow_unsafe_interrupts=1
/etc/mkinitcpio.conf
MODULES="kvm kvm_intel pci_stub"
/etc/systemd/system/vfio-bind.service
(Same as on initial post)
I ran systemctl enable vfio-bind.service.
Thank you.
Punkbob wrote:aw wrote:Running 352.86 here, no Code 43 problems. The NVIDIA Control Panel stopped working, but I see others reporting that in non-VM applications. It worked on 350.12. Code 43 is a pretty generic "something is wrong" error; hypervisor detection isn't the only reason for it to get thrown. I recently discovered that failure to plug in aux power to cards also generates a Code 43.
Hmm, I will look into it more then. Btw, are you running Windows 10 or 8.1? Because it might be an issue on my end, as I am running my VMs with Windows 10.
8.1, AIUI 10 has numerous issues running as a VM
@Punkbob
What build of Windows 10 are you using and did you need to do anything special to get it installed? I see one report that installing build 10041 requires at least one hyper-v extension to be enabled, which isn't going to sit well with Nvidia drivers.
Hello, maybe it's a bit late to jump on the bandwagon, but if I read the first post correctly, an attempt with the Intel IGP is legit if using OVMF.
In any case, I ran the OVMF test and received a black QEMU window.
It took a bit of trial and error (adding kernel parameters and changes in /etc/mkinitcpio.conf), due to my lack of seeing the obvious or due to my hardware. I'd like to share what I have...
My issue, before I go any deeper with the guide, is that I don't get a /etc/vfio-pci.cfg file.
Please try the howto guide in my sig; libvirt has come a long way, and vfio-bind scripts should no longer be necessary. intel_iommu=on and pci-stub.ids= should be sufficient — simply let libvirt handle the rest.
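For example, a sketch of that minimal setup (the device IDs here are just the ones quoted earlier in the thread — substitute your own GPU's vendor:device IDs):

```shell
# Kernel command line (sketch): enable the IOMMU and stub the GPU's
# vendor:device IDs so the host driver never grabs the card at boot.
intel_iommu=on pci-stub.ids=10de:1184,10de:0e0a
```

After a reboot, lspci -nnk should show "Kernel driver in use: pci-stub" for those devices, and libvirt rebinds them to vfio-pci when the VM starts.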
My issue, before I go any deeper with the guide, is that I don't get a /etc/vfio-pci.cfg file.
Ha! Well... You just create it manually, and in it you put something like DEVICES="device-ids,comma-separated". But that's not the recommended way of doing the vfio bind anymore; it's much simpler to use libvirt.
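If you do create it by hand anyway, it would look something like this (the IDs are placeholders, in the comma-separated format described above):

```shell
# /etc/vfio-pci.cfg -- placeholder device IDs, comma-separated as described
DEVICES="10de:1184,10de:0e0a"
```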
/boot/loader/entries/arch.conf
options root=/dev/sdb2 rw intel_iommu=on pci-stub.ids=10de:1184,10de:0e0a vfio_iommu_type1.allow_unsafe_interrupts=1
You must NOT enable unsafe interrupts unless dmesg tells you to do so. Read aw's blog and his VFIO FAQ.
Also, he posted a five-part series on doing GPU passthrough the right way, containing fresh instructions and thoughts. Well worth reading, especially given that he actually wrote all that vfio stuff.
EDIT: Huh, that message from aw wasn't there when I wrote this. Seems like my F5 key isn't working well anymore...
Last edited by Duelist (2015-05-27 13:33:49)
Hi,
I have a running Win 7 (non-OVMF) machine and a running Win 10 (OVMF) machine. The host is Arch Linux, with the following kernel from the AUR: 4.0.1-1-vfio.
I am using onboard intel as host graphics, and a GTX 970 as gaming card. Lately, I had issues with Kodi on host, and I had to switch "acceleration" from UXA to SNA.
In the end, I now need to reinstall my Win7 machine with OVMF, so that I can have DRI enabled on host. However, the Windows 7 installation hangs at the "Starting Windows" screen. Looks like some kind of "hardware" issue.
Here is my "install script" that worked with Win 10 but doesn't work with Win 7 (I only replaced the "win disk" line and the "iso file" line):
qemu-system-x86_64 -name windaube -enable-kvm -m 16384 -cpu host,kvm=off \
-smp 4,sockets=1,cores=4,threads=1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/ovmf_code_x64.bin \
-drive if=pflash,format=raw,file=/usr/share/ovmf/x64/ovmf_vars_x64.bin \
-device vfio-pci,host=01:00.0 -device vfio-pci,host=01:00.1 -device vfio-pci,host=00:1d.0 \
-vga none \
-device virtio-scsi-pci,id=scsi \
-drive file=/dev/guest/windaube,id=win,format=raw,if=none -device scsi-hd,drive=win \
-drive file=/home/nesousx/Isos/virtio-win-0.1-81.iso,id=isocd,if=none -device scsi-cd,drive=isocd \
-drive file=/home/nesousx/Isos/Win_7_Pro_x64.iso,id=virtiocd,if=none -device scsi-cd,drive=virtiocd \
-net nic,model=virtio,macaddr=52:54:00:7c:3c:42 -net bridge,br=br0 \
-nographic
I have tried to run it:
* with a small amount of RAM ;
* without assigning devices.
Nothing worked.
Does anyone have any idea why it is not working?
Thanks in advance.
Last edited by Nesousx (2015-05-27 16:38:22)
Have you tried the instructions on post 4470 on page 179? You need to use the qxl driver for the initial output...
Last edited by mostlyharmless (2015-05-27 14:29:23)
It's generally a good idea to get the VM installed first, then add an assigned GPU. I've also heard to use either qxl or vga devices for the install. Also, AIUI Win7+OVMF+SMP is incompatible with hyper-v extensions, so you'll need to disable them even if you're not using Nvidia.
EDIT: I believe virt-manager has support for this upstream, but it was just fixed last week. So, if you're pulling from the devel tree, hyper-v should automatically be disabled once you select OVMF and have specified a Windows 7 guest type.
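For anyone editing the domain XML by hand in the meantime, explicitly turning the enlightenments off looks roughly like this — a sketch; which elements your XML already contains depends on how the guest was created:

```xml
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='off'/>
    <vapic state='off'/>
    <spinlocks state='off'/>
  </hyperv>
</features>
```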
Last edited by aw (2015-05-27 14:48:19)
I was able to make a Radeon HD 7770 work in Win 7 x64 SP1 by first booting up with QXL graphics and then doing "device_add vfio-pci,host=01:00.0" and "device_add vfio-pci,host=01:00.1" from the QEMU console while the system was already up and running. Device Manager was showing the usual "Code 10" at this point.
The act of "hot-plugging" the GPU, however, made it possible to complete the installation of Catalyst 14.12 without immediately crashing the installer (like what usually happens).
After Catalyst was installed, I was able to add the -device parameters to my qemu command-line and the guest boots up using the QXL for VGA and switches output to the vfio'd GPU as soon as desktop is ready.
This way I don't have to use 'x-vga' or any sort of additional patches for the kernel v4.0.0. Also I'm able to use the integrated Intel HD4600 graphics for my host without having to disable DRI.
Rebooting/Suspending/Hibernation all seem to work just fine with this too.
I'm also passing through the integrated Intel HD Audio and a USB 3.0 PCIe card. They also work without any issues. I can even do low-latency ASIO stuff with the real sound card, which is really cool.
By using virtio networking and the TightVNC mirror driver with raw encoding, I get such good VNC performance between the host and guest that I don't miss SPICE. Even 3D works smoothly over VNC now. And of course there is Steam in-home streaming for a little better latency in realtime 3D gaming, or simply switching the inputs on the monitor.
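For reference, the hot-plug step from the QEMU monitor looks like this (the id= names are illustrative additions, not from the original post):

```
(qemu) device_add vfio-pci,host=01:00.0,id=gpu
(qemu) device_add vfio-pci,host=01:00.1,id=gpu-audio
```

Once the guest driver is installed, the same devices can go on the command line as -device vfio-pci,host=01:00.0 so they are present from boot, as described above.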
aw wrote:Punkbob wrote:Hmm, I will look into it more then. Btw, are you running Windows 10 or 8.1? Because it might be an issue on my end, as I am running my VMs with Windows 10.
8.1, AIUI 10 has numerous issues running as a VM
@Punkbob
What build of Windows 10 are you using and did you need to do anything special to get it installed? I see one report that installing build 10041 requires at least one hyper-v extension to be enabled, which isn't going to sit well with Nvidia drivers.
I am using build 9926 on my most recent VM, and I didn't change or enable anything different than I would with a normal Windows 10 install. When I first switched over to Windows 10 a few months ago, it initially didn't have any specific display drivers from Nvidia, so I just used the ones from Windows 8.1. However, once I tried drivers that were specifically released for Windows 10, the screen went black, and when I rebooted it just spat the Code 43 error back at me.
Edit: Let me try a later build real quick, I thought I was using one that was more recent, but I think I confused which iso I grabbed off my NAS.
Last edited by Punkbob (2015-05-27 16:03:06)
mostlyharmless wrote:Have you tried the instructions on post 4470 on page 179? You need to use the qxl driver for the initial output...
Thanks a lot, it seems to be working perfectly.
Last edited by Nesousx (2015-05-27 17:36:44)
Cubex wrote:Seems that sometimes when I shut down the VM, QEMU hangs while doing so, and this appears in dmesg when force-killing qemu
Could you please test this? http://paste.fedoraproject.org/225883/98048143/
Here's a more complete patch posted upstream; I'd appreciate testing in your scenario. Thanks. https://lkml.org/lkml/2015/5/27/546
Just as an update, as I haven't gotten to the point where I can try the Nvidia drivers yet: build 10074 of Windows 10 needs you to set your CPU model to core2duo in order to boot the install environment.
Win 10 needs this to boot...
<feature policy='force' name='lahf_lm'/>
<feature policy='force' name='cx16'/>
Win 10 needs this to boot...
<feature policy='force' name='lahf_lm'/>
<feature policy='force' name='cx16'/>
Tried this, didn't work. Unless I am doing it wrong:
<cpu mode='custom' match='exact'>
<model fallback='allow'>Haswell</model>
<feature policy='force' name='lahf_lm'/>
<feature policy='force' name='cx16'/>
</cpu>
Ah, for me it works with kvm64
<cpu mode='custom' match='exact'>
<model fallback='allow'>kvm64</model>
<topology sockets='1' cores='4' threads='1'/>
<feature policy='force' name='lahf_lm'/>
<feature policy='force' name='cx16'/>
</cpu>
Edit: the problem with this is that you can't enable/disable ShadowPlay in GeForce Experience for better streaming...
Last edited by slis (2015-05-27 17:24:53)
Ah, for me it works with kvm64
<cpu mode='custom' match='exact'>
<model fallback='allow'>kvm64</model>
<topology sockets='1' cores='4' threads='1'/>
<feature policy='force' name='lahf_lm'/>
<feature policy='force' name='cx16'/>
</cpu>
Edit: the problem with this is that you can't enable/disable ShadowPlay in GeForce Experience for better streaming...
Confirmed that this works, let me see what happens with latest Nvidia drivers.
Edit: I've seen mention that the Q35 version supposedly works out of the box, but I can't get it to see my drives in ovmf, and haven't really looked into it besides a quick test.
Edit2: This works, and the latest version of Nvidia drivers are running fine.
So the current state of Windows 10 in KVM is this:
If you want to run it with your host CPU model, you need to run the old 9926 build, and it will only run with Nvidia Win 8.1 drivers up to 347.88.
If you want the latest drivers, or even just to run the later builds at all in KVM, you either need to set your CPU model to core2duo, or use kvm64 with the additional options above.
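On a plain QEMU command line (rather than libvirt XML), the two workarounds would look roughly like this — a sketch, with the remaining options elided; kvm=off is the usual Nvidia hypervisor-hiding flag, shown here because the thread is about Nvidia guests:

```shell
# Option 1: present the guest CPU as a Core 2 Duo
qemu-system-x86_64 -enable-kvm -cpu core2duo,kvm=off ...

# Option 2: kvm64 with the two feature flags from the XML above forced on
qemu-system-x86_64 -enable-kvm -cpu kvm64,kvm=off,+lahf_lm,+cx16 ...
```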
Last edited by Punkbob (2015-05-27 18:36:58)
aw wrote:It's generally a good idea to get the VM installed first, then add an assigned GPU. I've also heard to use either qxl or vga devices for the install. Also, AIUI Win7+OVMF+SMP is incompatible with hyper-v extensions, so you'll need to disable them even if you're not using Nvidia.
EDIT: I believe virt-manager has support for this upstream, but it was just fixed last week. So, if you're pulling from the devel tree, hyper-v should automatically be disabled once you select OVMF and have specified a Windows 7 guest type.
Sounds interesting! I'd like to give it a try via virt-manager; however, I can't select OVMF/UEFI as the BIOS. virt-manager/libvirt doesn't detect it: it is greyed out and says that the firmware image is not installed.
I have qemu 2.3.0-2, virt-manager 1.2.0-2, libvirt 1.2.15-1 and ovmf 16229-1.
Any idea what the cause would be, please? Also, what does AIUI stand for, please?
Last edited by Nesousx (2015-05-27 17:39:16)
aw wrote:It's generally a good idea to get the VM installed first, then add an assigned GPU. I've also heard to use either qxl or vga devices for the install. Also, AIUI Win7+OVMF+SMP is incompatible with hyper-v extensions, so you'll need to disable them even if you're not using Nvidia.
EDIT: I believe virt-manager has support for this upstream, but it was just fixed last week. So, if you're pulling from the devel tree, hyper-v should automatically be disabled once you select OVMF and have specified a Windows 7 guest type.
Sounds interesting! I'd like to give it a try via virt-manager; however, I can't select OVMF/UEFI as the BIOS. virt-manager/libvirt doesn't detect it: it is greyed out and says that the firmware image is not installed.
I have virt-manager 1.2.0-2, libvirt 1.2.15-1 and ovmf 16229-1.
Any idea what the cause would be, please? Also, what does AIUI stand for, please?
Maybe your ovmf package isn't installing the files where libvirt/virt-manager looks for them? On my Fedora 21 system, the binaries are installed in /usr/share/edk2.git/ovmf-x64/. Your virt-manager would need to be pulled from the devel tree since last Wednesday to include the fix. AIUI = As I Understand It.
Nesousx wrote:aw wrote:It's generally a good idea to get the VM installed first, then add an assigned GPU. I've also heard to use either qxl or vga devices for the install. Also, AIUI Win7+OVMF+SMP is incompatible with hyper-v extensions, so you'll need to disable them even if you're not using Nvidia.
EDIT: I believe virt-manager has support for this upstream, but it was just fixed last week. So, if you're pulling from the devel tree, hyper-v should automatically be disabled once you select OVMF and have specified a Windows 7 guest type.
Sounds interesting! I'd like to give it a try via virt-manager; however, I can't select OVMF/UEFI as the BIOS. virt-manager/libvirt doesn't detect it: it is greyed out and says that the firmware image is not installed.
I have virt-manager 1.2.0-2, libvirt 1.2.15-1 and ovmf 16229-1.
Any idea what the cause would be, please? Also, what does AIUI stand for, please?
Maybe your ovmf package isn't installing the files where libvirt/virt-manager looks for them? On my Fedora 21 system, the binaries are installed in /usr/share/edk2.git/ovmf-x64/. Your virt-manager would need to be pulled from the devel tree since last Wednesday to include the fix. AIUI = As I Understand It.
Ok, thanks, I'll look into it.
Every time I saw you say AIUI, it was next to Windows. Hence, I believed it was some kind of "special" version of Windows, since I had never heard that acronym. Thanks for teaching me.
Nesousx wrote:aw wrote:It's generally a good idea to get the VM installed first, then add an assigned GPU. I've also heard to use either qxl or vga devices for the install. Also, AIUI Win7+OVMF+SMP is incompatible with hyper-v extensions, so you'll need to disable them even if you're not using Nvidia.
EDIT: I believe virt-manager has support for this upstream, but it was just fixed last week. So, if you're pulling from the devel tree, hyper-v should automatically be disabled once you select OVMF and have specified a Windows 7 guest type.
Sounds interesting! I'd like to give it a try via virt-manager; however, I can't select OVMF/UEFI as the BIOS. virt-manager/libvirt doesn't detect it: it is greyed out and says that the firmware image is not installed.
I have virt-manager 1.2.0-2, libvirt 1.2.15-1 and ovmf 16229-1.
Any idea what the cause would be, please? Also, what does AIUI stand for, please?
Maybe your ovmf package isn't installing the files where libvirt/virt-manager looks for them? On my Fedora 21 system, the binaries are installed in /usr/share/edk2.git/ovmf-x64/. Your virt-manager would need to be pulled from the devel tree since last Wednesday to include the fix. AIUI = As I Understand It.
I'm stuck at this step too, since I started following your guide.
I found out that ovmf from the official Arch repository installs its files to
/usr/share/ovmf/ovmf_x64.bin
vs ovmf-svn (installed today)
/usr/share/ovmf/64/ovmf_x64.bin
I made softlinks to point to the "correct" place. Didn't work.
Last edited by jack_boss (2015-05-27 19:07:46)
aw wrote:Nesousx wrote:Sounds interesting! I'd like to give it a try via virt-manager; however, I can't select OVMF/UEFI as the BIOS. virt-manager/libvirt doesn't detect it: it is greyed out and says that the firmware image is not installed.
I have virt-manager 1.2.0-2, libvirt 1.2.15-1 and ovmf 16229-1.
Any idea what the cause would be, please? Also, what does AIUI stand for, please?
Maybe your ovmf package isn't installing the files where libvirt/virt-manager looks for them? On my Fedora 21 system, the binaries are installed in /usr/share/edk2.git/ovmf-x64/. Your virt-manager would need to be pulled from the devel tree since last Wednesday to include the fix. AIUI = As I Understand It.
I'm stuck at this step too, since I started following your guide.
I found out that ovmf from the official Arch repository installs its files to /usr/share/ovmf/ovmf_x64.bin
vs ovmf-svn (installed today)
/usr/share/ovmf/64/ovmf_x64.bin
I made softlinks to point to the "correct" place. Didn't work.
You need to follow the instructions here:
https://wiki.archlinux.org/index.php/PC … stallation
The packages on the AUR won't work.
Edit: And to make it appear in virt-manager, add this to qemu.conf:
nvram = [
"/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd:/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd",
"/usr/share/edk2.git/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2.git/aarch64/vars-template-pflash.raw",
]
Now this is where I am unclear: I only included the second aarch64 line because libvirt fails to start if it isn't in there, but there isn't anything at that location. I guess qemu.conf needs it defined even if you don't use it?
Last edited by Punkbob (2015-05-27 19:15:34)
jack_boss wrote:aw wrote:Maybe your ovmf package isn't installing the files where libvirt/virt-manager looks for them? On my Fedora 21 system, the binaries are installed in /usr/share/edk2.git/ovmf-x64/. Your virt-manager would need to be pulled from the devel tree since last Wednesday to include the fix. AIUI = As I Understand It.
I'm stuck at this step too, since I started following your guide.
I found out that ovmf from the official Arch repository installs its files to /usr/share/ovmf/ovmf_x64.bin
vs ovmf-svn (installed today)
/usr/share/ovmf/64/ovmf_x64.bin
I made softlinks to point to the "correct" place. Didn't work.
You need to follow the instructions here:
https://wiki.archlinux.org/index.php/PC … stallation
The packages on the AUR won't work.
Edit: And to make it appear in virt-manager, add this to qemu.conf:
nvram = [
"/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd:/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd",
"/usr/share/edk2.git/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2.git/aarch64/vars-template-pflash.raw",
]
Now this is where I am unclear: I only included the second aarch64 line because libvirt fails to start if it isn't in there, but there isn't anything at that location. I guess qemu.conf needs it defined even if you don't use it?
How did you figure this part out?
nvram = [
"/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd:/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd",
"/usr/share/edk2.git/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2.git/aarch64/vars-template-pflash.raw",
]
I assumed it would appear in virt-manager without that part; I mean, ovmf-svn would probably work too if I were to edit qemu.conf.