vCPUs are threads and by default are handled the same as any other thread on the host, scheduled onto whatever physical CPUs are available. You can pin them to help out the scheduler and improve cache locality. You also need to realize that there are additional threads running in the host for I/O, so if you have a 1:1 mapping of vCPUs to pCPUs and don't leave extra resources for those additional threads, that time will get stolen from the vCPUs. It's therefore often advisable not to fully consume the available host CPUs. cgroups can be used to attempt to further isolate where processes can run and what can run on the same pCPUs as your guest. There's also always some overhead to virtualization, so you're never going to get 100% of native performance.
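For example, pinning with libvirt looks roughly like this (the domain name "win8" and the core numbers are just placeholders for your own setup):
# pin vCPU 0 and 1 of the guest to host cores 2 and 3
virsh vcpupin win8 0 2
virsh vcpupin win8 1 3
# keep the emulator/I/O threads away from the pinned cores
virsh emulatorpin win8 0-1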
Thank you. I'm now wondering what would be the best approach for e.g. Windows guest games that can effectively use more than 2 cores - give the VM 2-3 dedicated cores, or give it 4 and run nothing else on the host. I know there's always some overhead, but with this combination I'd like Windows and the games to work at their best, since those games could perhaps work better with more cores.
Last edited by dRaiser (2014-11-26 20:34:10)
Offline
I just tried the command: /sbin/lsmod | grep virtio to check whether the virtio_net driver is being used, but it's empty? Network is working though ...
then I had a look into /var/log/libvirt/qemu/vm1.log and I found this:
2014-11-26 20:53:04.784+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/sbin/qemu-system-x86_64 -name vm1 -S -machine pc-i440fx-2.2,accel=k$
Domain id=2 is tainted: host-cpu
I have to say I haven't been happy with CPU performance. Whenever I do anything - like just opening the File Explorer - the CPU usage jumps to 40 %.
So basically there are two issues here:
1. It looks like I am NOT using the virtio_net driver
2. Something weird is going on with my CPU
@aw or anyone who can help me, do you know why I get that log output?
Offline
I just tried the command: /sbin/lsmod | grep virtio to check whether the virtio_net driver is being used, but it's empty? Network is working though ...
You ran this in the host? The guest is the one that needs this driver, the host doesn't load it.
then I had a look into /var/log/libvirt/qemu/vm1.log and I found this:
2014-11-26 20:53:04.784+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/sbin/qemu-system-x86_64 -name vm1 -S -machine pc-i440fx-2.2,accel=k$
Domain id=2 is tainted: host-cpu
I don't see what problem you're pointing out here. libvirt taints configurations using -cpu host because it can't migrate them. Do you care?
I have to say I haven't been happy with CPU performance. Whenever I do anything - like just opening the file Explorer it CPU usage jumps to 40 %.
So basically there are two issues here:
1. It looks like I am NOT using the virtio_net driver
2. Something weird is going on with my CPU
@aw or anyone who can help me, do you know why I get that log output?
I don't see that you're actually reporting any problems. What should the CPU usage go to when you open the file explorer? Are you using virtio for both the network and the disk?
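(In the libvirt XML that corresponds to something like the following - the model/bus attributes are the part to check, the surrounding elements are whatever your config already has:
<interface type='network'>
  <model type='virtio'/>
</interface>
<disk type='block' device='disk'>
  <target dev='vda' bus='virtio'/>
</disk>
)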
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
ah, I see. So that output is expected behaviour.
Yes I am using virtio for disk and as of today also for network. According to http://wiki.libvirt.org/page/Virtio it should produce an output once the guest has loaded. That's why I was confused. ...
About the high CPU usage: whenever I load a game (even something just like Fifa 15), I have 100 % CPU usage once the game starts. That seems a tad too high?
Offline
ah, I see. So that output is expected behaviour.
Yes I am using virtio for disk and as of today also for network. According to http://wiki.libvirt.org/page/Virtio it should produce an output once the guest has loaded. That's why I was confused. ...
From that source...
When you boot the guest (virsh start guestname), if it worked you should still have a working network, and you should see (from inside the guest) that you are using the virtio_net driver:
# /sbin/lsmod | grep virtio
[shows virtio_pci, virtio_net and others loaded]
# cat /sys/devices/virtio-pci/0/net/eth0/statistics/rx_bytes
...
About the high CPU usage: whenever I load a game (even something just like Fifa 15), I have 100 % CPU usage once the game starts. That seems a tad too high?
Why? Moving data costs CPU. Games typically move a lot of data on startup. What are you using to back the disk image? If you want performance you want either a non-sparse raw file, LVM volume, or physical block device behind it. If you're using something like qcow2, you have to realize that there's a speed vs space trade-off in doing that.
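A rough sketch of the LVM option (the volume group, LV name and size here are made up - adjust to your storage):
# create a logical volume to back the guest disk
lvcreate -L 180G -n win81 vg0
# hand it to qemu as a raw block device
qemu-system-x86_64 ... -drive file=/dev/vg0/win81,format=raw,cache=none,if=virtio ...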
Last edited by aw (2014-11-26 20:55:08)
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
yes, that's why I am using LVM. I have created a 180 GB partition from my 240 GB SSD for Windows 8.1. The other 40 GB are for arch.
I definitely prefer performance. Isn't LVM the best option? It used to be with Xen.
Last edited by 4kGamer (2014-11-26 21:01:49)
Offline
yes, that's why I am using LVM. I have created a 180 GB partition from my 240 GB SSD for Windows 8.1. The other 40 GB are for arch.
I definitely prefer performance. Isn't LVM the best option? It used to be with Xen.
A direct block device, like a partition, is probably best, but LVM is nearly as good.
You can try some of the block tuning techniques from the Red Hat Virtualization Tuning and Optimization Guide
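(The usual starting point there is the cache and I/O mode on the disk's driver element - shown here only as an illustration, not a recommendation verbatim from the guide:
<driver name='qemu' type='raw' cache='none' io='native'/>
)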
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
ok, thank you very much. I will try those suggestions. I will report once I get some significant improvements.
Offline
Reboot is now working. I extracted the vga bios with GPU-Z and passed it to qemu with romfile=
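For reference, the romfile option is appended to the vfio-pci device; the bus/address and the path below are only examples:
-device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,romfile=/path/to/extracted-vbios.rom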
I tried to get audio working, but I failed. -soundhw hda seems to be the only one recognized by Windows 8 64-bit, but I get scratching/stuttering.
Somebody said he got -soundhw ac97 working; he used the Win7 driver for ac97 and installed it under Win8 in Win7 compatibility mode. That did not work for me either, but maybe somebody else gets lucky with it.
Offline
Denso wrote:
aw wrote:
Most of these seem completely unnecessary to me, especially the vfio options. APICv is already enabled by default when available, so that's a no-op. nested=1, why? You're not doing nesting. ignore_msrs=1 is the only thing I have set on my box.
I know. There was a time when I completely commented out every line of those options and the VMs booted just fine. But they weren't stable after hours of operation. Plus, with them on, both of my VMs can boot fine at the same time (2 separate GPUs, one for each VM).
I do this too, one AMD + one Nvidia. AFAIK, I only need ignore_msrs so that passmark performance test works
The masking option is needed in my case, otherwise the whole VM goes into SloMo mode.
That's specific to a device you're assigning, not likely related to the chipset. The option overrides our detection of whether the device supports INTxDisable masking and masks the interrupt at the APIC instead of at the device. The huge downside of that is that it requires the device to have an exclusive interrupt, because of the difference in where we mask the interrupt. Also note:
$ modinfo vfio-pci
...
parm: nointxmask:Disable support for PCI 2.3 style INTx masking. If this resolves problems for specific devices, report lspci -vvvxxx to linux-pci@vger.kernel.org so the device can be fixed automatically via the broken_intx_masking flag. (bool)
Have you done that? This option would apply automatically to just that device if you did...
I'll comment out the APICv + nested + shadow_vmcs + ept from now on though. I already commented out the disable_hugepages one for a month or so, no problem.
I didn't need any of these options when I was using Z77. X99 is more troublesome to set up. A LOT. And there is no way I can reboot any VM with a GPU assigned to it without crashing the host. It is the ONLY remaining issue I have now.
They reboot fine, until I install Nvidia's drivers, and then they crash the host whenever I reboot them.
Aside from the two Radeon reset issues (R7790 and similar can only be used once per boot and HD8570 and similar mistakenly report NoSoftRst-) I don't know of any other GPU reset issues, nor do I see why X99 would be more or less problematic than Z77 in that respect.
EDIT: Nvidia cards have pretty terrible DisableINTx support on the audio device (esp. Quadro and even fake Quadro), but the better option there is typically to make the guest driver use MSI, not to run with nointxmask
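(A sketch of what switching to MSI usually means on the Windows side - the exact device key depends on your card, so take this path from memory with a grain of salt:
HKLM\SYSTEM\CurrentControlSet\Enum\PCI\<your GPU audio function>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties
  MSISupported (DWORD) = 1
then reboot the guest.)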
I have a Linux VM (XBMCbuntu) with a passed-through GT620 which works well, except that reboots cause it to fail completely. I believe it is something to do with the Nvidia card, as I have other VMs passing through a RAID controller and a SAT card with VFIO that can reboot fine.
Interrupts within the VM look a bit odd (the passed-through Intel VF uses MSI but the GeForce does not) and the config is shown below. Can anyone help with what might be wrong, or with how I can collect the needed debug information? (The qemu log for the VM shows nothing other than the initial startup of the VM.)
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>XBMC</name>
<uuid>f702c5eb-83d4-4b50-b909-8da878e08d36</uuid>
<memory unit='KiB'>1036288</memory>
<currentMemory unit='KiB'>1036288</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='q35'>hvm</type>
<loader>/usr/share/qemu/bios.bin</loader>
<boot dev='hd'/>
<bootmenu enable='yes'/>
</os>
<features>
<acpi/>
</features>
<clock offset='localtime'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/home/VM/XBMC.img'/>
<target dev='sda' bus='sata'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/ISO/XBMC13.iso'/>
<target dev='sdb' bus='sata'/>
<readonly/>
</disk>
<controller type='sata' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='dmi-to-pci-bridge'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
</controller>
<controller type='pci' index='2' model='pci-bridge'>
<address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
</controller>
<controller type='usb' index='0' />
<hostdev mode='subsystem' type='usb'>
<source>
<vendor id='0x413c'/>
<product id='0x2106'/>
</source>
</hostdev>
</devices>
<qemu:commandline>
<qemu:arg value='-device'/>
<qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
<qemu:arg value='-device'/>
<qemu:arg value='vfio-pci,host=09:10.2,bus=pcie.0'/>
<qemu:arg value='-device'/>
<qemu:arg value='vfio-pci,host=0a:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
<qemu:arg value='-device'/>
<qemu:arg value='vfio-pci,host=0a:00.1,bus=root.1,addr=00.1'/>
</qemu:commandline>
</domain>
Offline
Just got an NVIDIA Geforce GTX 970 working using vfio-pci.
Initially, when I was using SeaBIOS (1.7.5 from Debian), it didn't seem to have any output. After I switched to EFI boot (following the tutorial here), my 970 works perfectly fine.
Also, kvm=off still works with 970 and the latest driver 344.75 (for resolving code 43 issue with NVIDIA's driver).
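(That's the kvm=off flag on qemu's -cpu option, e.g. - host CPU model assumed:
-cpu host,kvm=off
)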
Last edited by kevinxucs (2014-11-27 08:33:23)
Offline
Regarding OVMF/UEFI: I heard rumors that this also works with Windows7. Can anybody here confirm that a Windows 7 VM is working with OVMF?
Offline
Hi again. Does anyone have a solution for the suspend/hibernation problem (both fail - suspend sometimes, hibernation always) with the kernel from the OP and the nvidia-340xx-dkms drivers?
Offline
Hi again. Does anyone have a solution for the suspend/hibernation problem (both fail - suspend sometimes, hibernation always) with the kernel from the OP and the nvidia-340xx-dkms drivers?
I think this is an overall problem with NVIDIA cards, not this kernel in particular. At least I had such problems.
Offline
Well, I haven't had this problem for a long time when using the standard Arch kernel.
Offline
Regarding OVMF/UEFI: I heard rumors that this also works with Windows7. Can anybody here confirm that a Windows 7 VM is working with OVMF?
Well, I can confirm that. Windows 7 supports UEFI, but you need a somewhat special way to install it (obviously, the CD should have something like an .EFI loader file). My installation ISO supported that. But I won't provide it to you because I'm a filthy pirate and should be punished for using Windows.
I got it to install and boot, but I couldn't get vfio/PCI passthrough working because, well, my ASUS GPUs needed to be physically flashed - I couldn't boot with romfile= pointing to my card's UEFI-compatible VBIOS update. So YMMV. Also, there is GPT support if you're curious.
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
Gotta love redhat for that document. Pure awesome.
Last edited by Duelist (2014-11-27 14:02:17)
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
Hi all,
First of all, thank you a lot for the post. It is awesome and really well documented. 10/10.
Even with my rudimentary Linux skills I was able to make the passthrough work with KVM, after facing all kinds of issues. Yes, I had all of them!
- My box is a:
intel i7
gigabyte mobo
nvidia gtx780
and the damn:
06:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 10)
- I am running:
Linux nexus2 3.17.2-1-mainline #1 SMP PREEMPT Mon Nov 24 17:50:40 CET 2014 x86_64 GNU/Linux
And when I activate intel_iommu=on, the raid6, which is partly held by the PCIe SATA Marvell controller, stops working. So I have to decide whether I want vfio or my raid storage.
I dug in a little, and I found that the hardware-monster (compliment here!) known as Alex Williamson (AKA aw) also has a patch for my problem here:
https://github.com/awilliam/linux-vfio/ … a-alias-v4
I cloned the git repo, but I also downloaded a new kernel again. With my poor debugging skills, I can't tell if that fork includes the patches from nbhs, and the fear of starting over again is killing me. Step by step, checking everything.
Yeah, I have read about the help-vampire. But, after my due apologies, could anyone be so kind as to tell me how to apply the dma-alias patch to nbhs's modified kernel? For dummies, if I may abuse.
Thanks in advance,
EDIT: if this can be interesting to anyone, below you can see the steps that I took to make it work. It has been a pain!
https://bbs.archlinux.org/viewtopic.php?id=162768
http://www.firewing1.com/howtos/fedora-20/create-gaming-virtual-machine-using-vfio-pci-passthrough-kvm
assume a fresh Arch install
copy linux-mainline
tar xzf linux-mainline-3.17.2.tar.gz
cd linux-mainline*
makepkg -s --asroot
(option 15 haswell)
(esc esc to exit)
(2hours)
pacman -U linux-mainline-*
lspci:
00:1b.0 Audio device: Intel Corporation 9 Series Chipset Family HD Audio Controller
01:00.0 VGA compatible controller: NVIDIA Corporation GK110B [GeForce GTX 780 Ti] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GK110 HDMI Audio (rev a1)
lspci -n
00:1b.0 0403: 8086:8ca0 <--- AUDIO mobo to be pci_stub
01:00.0 0300: 10de:100a (rev a1) <-- NVIDIA graphics to be pci_stub
01:00.1 0403: 10de:0e1a (rev a1) <--- NVIDIA audio to be pci_stub
nano /etc/default/grub
GRUB_CMDLINE_LINUX="pci-stub.ids=8086:8ca0,10de:100a,10de:0e1a i915.enable_hd_vgaarb=1 intel_iommu=on pcie_acs_override=downstream"
grub-mkconfig -o /boot/grub/grub.cfg
nano /etc/mkinitcpio.conf
MODULES="i915 pci-stub vfio-pci"
mkinitcpio -p linux-mainline
lsusb
Bus 001 Device 005: ID 0738:1713 Mad Catz, Inc.
Bus 001 Device 002: ID 03f0:0324 Hewlett-Packard SK-2885 keyboard
Bus 001 Device 007: ID 0781:5571 SanDisk Corp. Cruzer Fit
pacman -S mesa-libgl qemu seabios
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/vfio_iommu_type1.conf
echo "options kvm ignore_msrs=1" >> /etc/modprobe.d/kvm.conf
nano /usr/bin/vfio-bind
#!/bin/bash
modprobe vfio-pci
for dev in "$@"; do
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
fi
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
done
chmod +x /usr/bin/vfio-bind
nano /etc/systemd/system/vfio-bind.service
[Unit]
Description=Binds devices to vfio-pci
After=syslog.target
[Service]
EnvironmentFile=-/etc/vfio-pci.cfg
Type=oneshot
RemainAfterExit=yes
ExecStart=-/usr/bin/vfio-bind $DEVICES
[Install]
WantedBy=multi-user.target
nano /etc/vfio-pci.cfg
DEVICES="0000:00:1b.0 0000:01:00.0 0000:01:00.1"
debug
vfio-bind 0000:00:1b.0 0000:01:00.0 0000:01:00.1
ls -l /sys/bus/pci/drivers/vfio-pci/
find /sys/kernel/iommu_groups/ -type l
dmesg | grep -e DMAR -e IOMMU
launch VM vifo
#detach usb
echo 0 > /sys/bus/usb/devices/1-2/authorized
echo 0 > /sys/bus/usb/devices/1-5/authorized
echo 0 > /sys/bus/usb/devices/1-7/authorized
# bind the graphics, sound and mobo sound to vfio
vfio-bind 0000:00:1b.0 0000:01:00.0 0000:01:00.1
#allow unsafe remapping
modprobe vfio_iommu_type1
modprobe kvm
#-M q35 is the emulated chipset
# smp how many cpus will be shown
#load emulated bios and set no emulated cirrus card
#create a pcie root port to attach the gpu called root.1
#attach the nvidia card to root.1
#attach the mobo soundcard to pcie.0
#use a physical partition attached to q35 (ide.0)
#pass through USB devices
qemu-system-x86_64 -enable-kvm -M q35 -m 8024 -cpu host \
-smp 6,sockets=1,cores=3,threads=2 \
-bios /usr/share/qemu/bios.bin -vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=01:00.1,bus=root.1,addr=00.1 \
-device vfio-pci,host=00:1b.0,bus=pcie.0,addr=1b.0 \
-drive file=/dev/sda4,id=disk,format=raw -device ide-hd,bus=ide.0,drive=disk \
-drive file=/home/test/windowsos.iso,id=isocd -device ide-cd,bus=ide.1,drive=isocd \
-usb -usbdevice host:03f0:0324 -usbdevice host:0738:1713 \
-nographic
EDIT 2,
I found that by appending .patch at the end of the URL I can download the branch as a patch. Then I edited the PKGBUILD and updated the checksums with updpkgsums. Sadly it doesn't work... it doesn't compile.
I also tried v4.1, which seems to be the new way to do it, using devfn. It also explicitly uses the device ID of my SATA controller (9230), but I can't find it on GitHub. And when I try to copy/paste it from the mailing list, it doesn't work:
patching file drivers/pci/quirks.c
patch: **** malformed patch at line 16: DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_RICOH, 0xe832, quirk_dma_func0_alias);
==> ERROR: A failure occurred in prepare().
Aborting...
EDIT 3,
I used this as a patch
http://www.jarzebski.pl/files/patch/dma … ices.patch
and after many trials and errors I ended up putting it as the last patch in the PKGBUILD, like this:
# Extra GCC optimizations
patch -p1 -i "${srcdir}/enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v3.15+.patch"
# patches Marvell sata
patch -p1 -i "${srcdir}/dma-alias-devfn.patch"
but after a good-looking beginning it ended up like always:
CC drivers/pci/quirks.o
drivers/pci/quirks.c:3535:13: error: redefinition of 'quirk_dma_func1_alias'
static void quirk_dma_func1_alias(struct pci_dev *dev)
^
drivers/pci/quirks.c:3470:13: note: previous definition of 'quirk_dma_func1_alias' was here
static void quirk_dma_func1_alias(struct pci_dev *dev)
^
scripts/Makefile.build:257: recipe for target 'drivers/pci/quirks.o' failed
make[2]: *** [drivers/pci/quirks.o] Error 1
scripts/Makefile.build:404: recipe for target 'drivers/pci' failed
make[1]: *** [drivers/pci] Error 2
Makefile:929: recipe for target 'drivers' failed
make: *** [drivers] Error 2
==> ERROR: A failure occurred in build().
Aborting...
So, my hacking skills aren't enough to create a proper patch.
EDIT 4,
Ok, after lots of trial and error, I found out that I tried to patch something that was already included....
And my raid card is not compatible.... great! Googling for a new one...
Any suggestion for a cheap, Linux-friendly 4-port SATA card? For a storage raid.
Any good Samaritan?
Last edited by wikavalier (2014-11-28 20:33:45)
Offline
This was an extremely valuable find for me,
so I'm leaving it here, in case someone relied purely on "static" USB passthrough but was not very happy with it.
Offline
Hi all.
About my audio issue (stuttering audio using a USB Xonar U7 sound card): it is more or less fixed. I bought a new USB sound card, a Hercules LT3, and it now works as expected. I pass through the whole USB controller (including the sound card and input devices), and it works fine!
So far it seems it was only a hardware-related issue. I had this issue for months using Xen (iirc), and KVM under Ubuntu and Arch. If I can give people a piece of advice: don't buy this Xonar U7 for this usage.
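(For reference, passing the whole USB controller is just regular PCI assignment; the host address below is only an example and has to match your controller:
-device vfio-pci,host=00:14.0
with the controller bound to vfio-pci beforehand, so everything plugged into it follows into the guest.)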
Last edited by Nesousx (2014-11-28 15:37:43)
Offline
TripleSpeeder wrote:Regarding OVMF/UEFI: I heard rumors that this also works with Windows7. Can anybody here confirm that a Windows 7 VM is working with OVMF?
Well, I can confirm that. Windows 7 supports UEFI, but you need a somewhat special way to install it (obviously, the CD should have something like an .EFI loader file). My installation ISO supported that. But I won't provide it to you because I'm a filthy pirate and should be punished for using Windows.
I got it to install and boot, but I couldn't get vfio/PCI passthrough working because, well, my ASUS GPUs needed to be physically flashed - I couldn't boot with romfile= pointing to my card's UEFI-compatible VBIOS update. So YMMV. Also, there is GPT support if you're curious.
Hmm, I do have the installation ISO containing the UEFI loader etc. But even with a minimal setup (no pass-through devices, just the ISO loaded as the only CDROM, no additional HDs) I don't have much success. I can launch the Windows bootloader from the UEFI shell, it starts with the black/white "loading Files" screen, followed by the graphical "Starting Windows" screen. 1-2 seconds later I get a black screen again with some kind of black/white progress bar (?) on top, which is not moving. There it stops, with one CPU core running at 100%. I left it for 10+ minutes, with nothing happening.
Just for testing I removed the emulated graphics and passed through my GTX970, which was working flawlessly, except for the installation progress stopping at the exact same place :-(
Could you share your qemu command line / libvirt XML config?
Thanks!
Edit: I used ovmf-svn from AUR, compiled yesterday.
Last edited by TripleSpeeder (2014-11-28 17:24:49)
Offline
Hmm, I do have the installation ISO containing the UEFI loader etc. But even with a minimal setup (no pass-through devices, just the ISO loaded as the only CDROM, no additional HDs) I don't have much success. I can launch the Windows bootloader from the UEFI shell, it starts with the black/white "loading Files" screen, followed by the graphical "Starting Windows" screen. 1-2 seconds later I get a black screen again with some kind of black/white progress bar (?) on top, which is not moving. There it stops, with one CPU core running at 100%. I left it for 10+ minutes, with nothing happening.
Just for testing I removed the emulated graphics and passed through my GTX970, which was working flawlessly, except for the installation progress stopping at the exact same place :-(
Could you share your qemu command line / libvirt XML config?
Thanks!
Edit: I used ovmf-svn from AUR, compiled yesterday.
OVMF git repo build (I use Fedora): 20141128.b817.gb04a63a
Seems like the Windows installation ISO loses its boot drive, but I got you a pair of screenshots getting past your point of breakage.
Here, have an imgur slideshow.
qemu-system-x86_64 \
-boot menu=on \
-enable-kvm \
-monitor stdio \
-M q35 \
-m 1024 \
-cpu host \
-net none \
-rtc base=localtime \
-smp 1,sockets=1,cores=1,threads=1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/mnt/hdd/qemu/OVMF_VARS.fd \
-drive file='/mnt/hdd/qemu/qemu-uefi-win7.img',id=disk,format=raw,aio=native,cache=none,if=none \
-drive file='/mnt/hdd/qemu/virtio.iso',id=cdrom,format=raw,readonly=on,if=none \
-drive file='/mnt/hdd/qemu/windows7.iso',id=cdrom2,format=raw,readonly=on,if=none \
-device ioh3420,addr=04.0,multifunction=on,port=1,chassis=2,id=root.1 \
-device virtio-blk-pci,bus=root.1,addr=03.0,drive=disk \
-device virtio-scsi-pci,bus=root.1,addr=05.0 \
-device ide-cd,bus=ide.1,drive=cdrom \
-device scsi-cd,drive=cdrom2 \
-vga std
And here's a loading script. It's just a config based on my vfio setup, so it shouldn't work as-is because I'm too lazy to fix it, but it boots and tries to install.
Notice the pure-efi image of OVMF used.
EDIT:
seems like I've figured out why it fails to find its boot drive. There are no virtio-scsi drivers in the Windows installation system, and the cdrom2 drive, which is windows7.iso, is connected to it. Changing that virtio-scsi-pci to something more generic should help the issue here.
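A possible fix (a sketch; the bus index is assumed to be free on the emulated AHCI controller) would be to put the install ISO on the SATA bus instead, i.e. replace the -device scsi-cd,drive=cdrom2 line with:
-device ide-cd,bus=ide.2,drive=cdrom2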
Last edited by Duelist (2014-11-28 20:42:54)
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
Thanks a lot Duelist! I found the root cause that stopped windows setup from continuing: It is the -cpu parameter.
This is the qemu command-line as provided by libvirt:
-cpu qemu64,hv_time
The moment I remove parameter hv_time, windows setup continues. And this problem only happens with smp=2 or higher. With only one core assigned to the VM it boots also with hv_time.
Any ideas out there what might be the root cause?
Edit: Note that this is still without any device pass-through, just using the default emulated VGA. Trying to get setup running completely with a passed-through NVIDIA card is the next step.
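(For what it's worth, hv_time comes from the hypervclock timer in the libvirt domain XML, so a sketch of dropping it without hand-editing the command line - element shown in isolation, the rest of the clock settings as in your config:
<clock offset='localtime'>
  <timer name='hypervclock' present='no'/>
</clock>
)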
Last edited by TripleSpeeder (2014-11-29 15:15:19)
Offline
Now this is REALLY odd ...
Using "pci-assign" instead of "vfio-pci" to assign my devices to the VM seems to solve the reboot issue with the GT610 GPU. This is using OVMF + 440FX, by the way.
VFIO still claims those devices at boot from pci-stub. I don't know how long this success will last, but I rebooted the VM twice and it reboots fine. I thought VFIO was supposed to fix device reset issues?
dmesg :
[Sat Nov 29 18:17:31 2014] vfio-pci 0000:05:00.0: kvm assign device
[Sat Nov 29 18:17:31 2014] vfio-pci 0000:05:00.1: kvm assign device
[Sat Nov 29 18:17:32 2014] vfio-pci 0000:0f:00.0: kvm assign device
[Sat Nov 29 18:17:32 2014] vfio-pci 0000:00:1b.0: kvm assign device
CMDLine :
qemu-system-x86_64 -name main -nographic -mem-path /dev/hugepages \
-enable-kvm -m 8192 -cpu host,kvm=off -smp 2,sockets=1,cores=1,threads=2 \
-vga none \
-drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/ovmf_code_x64.bin \
-drive if=pflash,format=raw,file=/VMs/ovmf_main.bin \
-device pci-assign,host=05:00.0,multifunction=on \
-device pci-assign,host=05:00.1 \
-device pci-assign,host=0f:00.0 \
-device pci-assign,host=00:1b.0 \
-drive file=/VMs/Win_Main.qcow2,cache=writeback,if=none,id=drive0,aio=native \
-device virtio-blk-pci,drive=drive0,ioeventfd=on,bootindex=1 \
-device virtio-scsi-pci,id=scsi \
-drive file=/VMs/Win8.iso,id=iso_install,if=none \
-device scsi-cd,drive=iso_install \
-cdrom /VMs/virtio.iso \
-net nic,model=virtio,macaddr=64:C5:63:3C:1B:22 -net bridge,br=br0 \
-usb -usbdevice host:045e:0745 \
-localtime \
-monitor unix:/tmp/vm_main,server,nowait &
Let's see if this is going to be the solution for my ongoing issues with the GT610.
EDIT:
Installed Nvidia's 344.75 drivers (the latest, I think), rebooted twice, and it works perfectly. I'm happy, yet confused.
Last edited by Denso (2014-11-29 15:43:19)
Offline
Thanks a lot Duelist! I found the root cause that stopped windows setup from continuing: It is the -cpu parameter.
This is the qemu command-line as provided by libvirt:
-cpu qemu64,hv_time
The moment I remove parameter hv_time, windows setup continues. And this problem only happens with smp=2 or higher. With only one core assigned to the VM it boots also with hv_time.
Any ideas out there what might be the root cause?
Edit: Note that this is still without any device pass-through, just using the default emulated VGA. Trying to get setup running completely with a passed-through NVIDIA card is the next step.
I experienced crashes during the Windows installation in a non-vfio setup as well. The root cause was missing kernel config options for libvirt, but I guess you are using the Arch Linux kernel, right?
Offline