Hey guys, I finally managed to pass through my GPU. I've tried before, had lots of errors, and eventually gave up. Later on I found out my motherboard just didn't support IOMMU -.-
Anyway, I've bought a new motherboard that does support IOMMU, so now I have the following setup:
MOBO: Asrock Extreme 6/ac
CPU: i7 4790k @ 4GHz
GPU Host: GTX 980
GPU Guest: GTX 770 (I can't turn these around sadly, it will not give any video output otherwise)
Linux mainline 3.18.5 kernel with the ACS patch enabled (from the first post).
Using the following command line I managed to set up a Windows 8 VM running on my SSDs, with the GTX 770 passed through:
QEMU_PA_SAMPLES=128 QEMU_AUDIO_DRV=pa qemu-system-x86_64 -enable-kvm -M q35 -m 8096 -cpu host -soundhw hda \
    -smp 6,sockets=1,cores=6,threads=1 \
    -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
    -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
    -device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \
    -device virtio-scsi-pci,id=scsi \
    -drive file=/dev/sdb,id=ssddisk,format=raw,if=none -device scsi-hd,drive=ssddisk \
    -drive file=/dev/sdd,id=hdddisk,format=raw,if=none -device scsi-hd,drive=hdddisk
Problem 1:
Everything appeared to be successful: Windows runs without problems (just the mouse is a bit twitchy), and the NVIDIA driver installer was able to detect the GPU and install the driver.
I rebooted the VM and installed GPU-Z, and it shows my GTX 770 (with the 347.52 driver). However, it also shows the GPU clock at 0 MHz, as are the memory and default clocks.
I also can't open the NVIDIA settings screen, first it gave me some runtime errors, now it just doesn't show up at all.
Whenever I run GeForce Experience it seems to not find any installed driver, as it automatically starts downloading the driver I already have installed.
Problem 2:
Sound lags badly. I can hear it, but if it were measured in frames per second, it would be around 3. As you can see in my command line, I'm trying to use PulseAudio.
What am I doing wrong here?
Well..
...
Missing -vga none
Don't use q35
For the sound you could try using ALSA, and/or playing with the parameters:
qemu-system-x86_64 -audio-help
to see all available drivers and params. Also, I experienced some crackling and lag until I started pinning the vCPUs.
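To illustrate the pinning suggestion: without libvirt, one way is to bind each QEMU thread to a host core with taskset. A minimal sketch, assuming a single running qemu-system-x86_64 process and that host cores 2 and up are free for the guest (both assumptions; adjust for your machine):

```shell
#!/bin/sh
# Sketch: pin the threads of a running QEMU process to dedicated host cores.
# Core numbering (2 and up) is an assumption; adjust for your CPU layout.
QEMU_PID=$(pgrep -f qemu-system-x86_64 | head -n1)
core=2
if [ -z "$QEMU_PID" ]; then
    echo "no running qemu-system-x86_64 found; nothing to pin"
else
    # /proc/<pid>/task lists every thread (vCPUs, I/O threads, ...)
    for tid in /proc/"$QEMU_PID"/task/*; do
        taskset -cp "$core" "${tid##*/}"   # bind this thread to one host core
        core=$((core + 1))
    done
fi
```

Note this pins every QEMU thread, not only the vCPU threads; with libvirt you would use `<vcpupin>` in the domain XML instead.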
Offline
Well..
aw wrote:...
Missing -vga none
Don't use q35
Well, -vga none explains something. However, if I change my command line to use -vga none, it only gives me the QEMU monitor:
compat_monitor0 console
QEMU 2.2.0 monitor - type 'help' for more information
(qemu)
This happens with both Q35 and i440fx.
For the sound you could try using ALSA, and/or playing with the parameters:
qemu-system-x86_64 -audio-help
to see all available drivers and params. Also, I experienced some crackling and lag until I started pinning the vCPUs.
I'll try some things and report back.
Last edited by PureTryOut (2015-03-01 12:39:09)
Offline
Well, -vga none explains something. However, if I change my command line to use -vga none, it only gives me the QEMU monitor:
compat_monitor0 console
QEMU 2.2.0 monitor - type 'help' for more information
(qemu)
Yeah, that's what's supposed to happen; you'll get output on your GPU.
Offline
Sorry, I guess I didn't make myself clear.
Without -vga none it boots up Windows, although with a non-working GPU.
With -vga none it only shows that monitor prompt. It doesn't boot up the VM.
Offline
Sorry, I guess I didn't make myself clear.
Without -vga none it boots up Windows, although with a non-working GPU.
With -vga none it only shows that monitor prompt. It doesn't boot up the VM.
You also need:
-cpu host,kvm=off
Offline
Sorry, I guess I didn't make myself clear.
Without -vga none it boots up Windows, although with a non-working GPU.
With -vga none it only shows that monitor prompt. It doesn't boot up the VM.
Then either VGA arbitration is not working or QEMU can't read the ROM on the GPU. VGA arbitration not working was a problem we used to see using the proprietary nvidia driver in the host (driver would lock and never release the VGA arbitration lock). Nvidia might have fixed this, dunno. The latter ROM issue would produce an invalid ROM contents message in dmesg. GeForce cards do not work as secondary GPUs in the guest.
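The two checks aw describes can be sketched from the shell (the PCI address 0000:02:00.0 is the one from this thread and an assumption for anyone else; dumping the ROM needs root):

```shell
#!/bin/sh
# Check 1: look for the invalid-ROM message in the kernel log
dmesg 2>/dev/null | grep -i 'invalid rom' || echo "no invalid-ROM message found (or dmesg not readable)"

# Check 2: try reading the GPU's ROM through sysfs (address is an example)
DEV=/sys/bus/pci/devices/0000:02:00.0
if [ -w "$DEV/rom" ]; then
    echo 1 > "$DEV/rom"                  # enable ROM reads
    cat "$DEV/rom" > /tmp/gpu-vbios.rom  # a dump here can later be fed to qemu via romfile=
    echo 0 > "$DEV/rom"                  # disable again
    echo "ROM dumped to /tmp/gpu-vbios.rom"
else
    echo "cannot access $DEV/rom (device absent or not root)"
fi
```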
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
You also need:
-cpu host,kvm=off
Sadly that did not work either. Still showing the console...
Current commandline:
sudo qemu-system-x86_64 -enable-kvm -m 8096 -cpu host,kvm=off \
-smp 6,sockets=1,cores=6,threads=1 \
-vga none -device vfio-pci,host=02:00.0,x-vga=on -device vfio-pci,host=02:00.1 \
-device virtio-scsi-pci,id=scsi \
-drive file=/dev/sdb,id=ssddisk,format=raw,if=none -device scsi-hd,drive=ssddisk \
-drive file=/dev/sdd,id=hdddisk,format=raw,if=none -device scsi-hd,drive=hdddisk
Then either VGA arbitration is not working or QEMU can't read the ROM on the GPU. VGA arbitration not working was a problem we used to see using the proprietary nvidia driver in the host (driver would lock and never release the VGA arbitration lock). Nvidia might have fixed this, dunno. The latter ROM issue would produce an invalid ROM contents message in dmesg. GeForce cards do not work as secondary GPUs in the guest.
So is there any workaround or any way I can fix this?
Offline
Has anyone else with an AMD system done this with nbhs' suggested option of disabling npt? I found that with it disabled my benchmark scores improved, doubled in some cases, and games hit higher framerates, but it's not consistent. With npt=0 set, CPU load is all over the damn place and framerates in games vary anywhere from 5 fps to 150 fps.
I have an AMD 8320, Asus M5A99X EVO PRO r2.0 with the latest bios, 32 GB of ram, Geforce 660 ti for host, and a Geforce 970 on the guest.
I am using npt=0. I used npt=1 for an extended period and performance was worse. The effect was that in some games the framerate was lower, but GPU and CPU loads were lower too, with a bit more kernel time on the host. npt=0 is generally much better. In both cases the machines are stable.
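For reference, npt here is a kvm_amd module parameter; a sketch of how it is typically set persistently (AMD hosts only):

```
# /etc/modprobe.d/kvm_amd.conf  (sketch, AMD hosts only)
options kvm_amd npt=0
```

The current value can be checked at runtime with `cat /sys/module/kvm_amd/parameters/npt`; the module has to be reloaded for the option to take effect.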
Offline
Is CONFIG_VFIO_PCI_VGA also set?
hv_time is incompatible with nvidia
Missing -vga none
Don't use q35
Thanks for the fast reply.
1. CONFIG_VFIO_PCI_VGA=y is set by default. I double checked with my /boot/config-...
2. Removed hv_time.
3. Changed q35 to pc (as I understand this is the default).
4. I had to remove "bus=pcie.0" from ioh3420 line because of an error.
5. I had to use "rombar=0" or "romfile=..." when assigning my VGA.
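As a quick way to double-check point 1 on any kernel (a sketch; which config file exists varies by distro):

```shell
#!/bin/sh
# Check whether the running kernel was built with CONFIG_VFIO_PCI_VGA (sketch)
if [ -r /proc/config.gz ]; then
    zgrep CONFIG_VFIO_PCI_VGA /proc/config.gz || echo "option not found in /proc/config.gz"
elif [ -r "/boot/config-$(uname -r)" ]; then
    grep CONFIG_VFIO_PCI_VGA "/boot/config-$(uname -r)" || echo "option not found"
else
    echo "no kernel config file found to inspect"
fi
```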
So, my QEMU start line became:
sudo qemu-system-x86_64 -enable-kvm -M pc -m 4096 \
-cpu host,kvm=off -smp 2,sockets=1,cores=2,threads=1 \
-device ioh3420,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,romfile=/media/.../GF114.rom \
-device vfio-pci,host=01:00.1,bus=root.1,addr=00.1 -device virtio-scsi-pci,id=scsi \
-drive file=/storage/vm/win7.img,id=disk,format=raw,if=none -device scsi-hd,drive=disk \
-drive file=/media/.../win7.iso,id=isocd,if=none -device scsi-cd,drive=isocd \
-drive file=/media/.../virtio.iso,id=virtiocd,if=none -device ide-cd,bus=ide.1,drive=virtiocd \
-vga none
The results were the same for rombar/romfile:
1) VGA appears dead WITHOUT "-vga none".
2) No output on the nVidia DVI ports WITH "-vga none"
PS: Tried a few restarts. Also reinstalled the win7 vm.
Last edited by terusus (2015-03-01 21:20:30)
Offline
Jodaco wrote: Has anyone else with an AMD system done this with nbhs' suggested option of disabling npt? I found that with it disabled my benchmark scores improved, doubled in some cases, and games hit higher framerates, but it's not consistent. With npt=0 set, CPU load is all over the damn place and framerates in games vary anywhere from 5 fps to 150 fps.
I have an AMD 8320, Asus M5A99X EVO PRO r2.0 with the latest bios, 32 GB of ram, Geforce 660 ti for host, and a Geforce 970 on the guest.
I am using npt=0. Have used npt=1 for an extended period and performance was worse. The effect was that in some games the framerate was lower, but GPU and CPU loads were lower too and a bit more kernel time on host. npt=0 is generally much better. In both cases the machines are stable.
http://magazine.redhat.com/2007/11/20/r … ed-guests/
That's ridiculous. Simply crazy. I don't get it.
How BROKEN must NPT be for it to slow the VM down? And I've been worrying about the too-big errata list on my CPU family.
EDIT:
http://www.cse.iitd.ernet.in/~sbansal/c … nal-TM.pdf
Oooh, a nice document to read. I'll try reading it sometime in the future.
Last edited by Duelist (2015-03-01 21:24:13)
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
It's generally recommended that the graphics card for assignment is a secondary device on the host because it's often difficult to detach all the host drivers from the primary display. Even if you use pci-stub.ids to prevent PCI drivers from attaching, the low level VGA/VESA drivers can still make use of it. It's not necessarily impossible, but it's not a heavily used or documented path.
Passing a secondary device (from the host's perspective) is indeed easier.
However, after some fiddling, I managed to pass through the primary card on my system, but it does involve some extra work.
Apart from the usual driver blacklisting (radeon in my case) I had to do the following as well:
- Switch grub to text-only mode by adding "GRUB_GFXPAYLOAD_LINUX=text" to /etc/default/grub
- Switch the Linux boot to text-only mode by adding "nomodeset nofb" to GRUB_CMDLINE_LINUX_DEFAULT=... in /etc/default/grub
After adding these parameters, I was able to pass the host's primary graphics through to the qemu guest in the same way as I pass through my secondary graphics.
Note that you will completely lose your host console when you do this, so make sure you can ssh in, and perhaps have a serial terminal ready as well.
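The two grub changes above, gathered as a sketch (on Arch the file is /etc/default/grub):

```
# /etc/default/grub  (sketch of the two changes described above)
GRUB_GFXPAYLOAD_LINUX=text
GRUB_CMDLINE_LINUX_DEFAULT="... nomodeset nofb"   # append to your existing options
```

After editing, regenerate the config with `grub-mkconfig -o /boot/grub/grub.cfg`.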
Offline
Note that you will completely lose your host console when you do this, so make sure you can ssh in, and perhaps have a serial terminal ready as well.
You've got a headless host? So you're doing GPU passthrough with only one GPU? Cool.
Offline
hurenkam wrote: Note that you will completely lose your host console when you do this, so make sure you can ssh in, and perhaps have a serial terminal ready as well.
You've got a headless host? So you're doing GPU passthrough with only one GPU? Cool.
Actually, I'm doing GPU passthrough with 3 GPUs, but indeed leaving the host headless:
- GPU1: Radeon HD6450; Passthrough to Windows 8 Guest
- GPU2: Radeon HD4350; Passthrough to OSX Mavericks Guest
- GPU3: Radeon HD4350; Passthrough to Arch Linux Guest
All this is on rather dated hardware (but since all is working fine, why bother to upgrade?):
- ASUS P7P55D EVO Mainboard
- Intel i7 860 CPU
This machine has been running Xen for several years; I've only recently switched to qemu because I wanted to see how stable it runs with PCI & VGA passthrough.
So far I have no regrets.
Last edited by hurenkam (2015-03-01 23:05:57)
Offline
JohnyPea wrote: Jodaco wrote: Has anyone else with an AMD system done this with nbhs' suggested option of disabling npt? I found that with it disabled my benchmark scores improved, doubled in some cases, and games hit higher framerates, but it's not consistent. With npt=0 set, CPU load is all over the damn place and framerates in games vary anywhere from 5 fps to 150 fps.
I have an AMD 8320, Asus M5A99X EVO PRO r2.0 with the latest bios, 32 GB of ram, Geforce 660 ti for host, and a Geforce 970 on the guest.
I am using npt=0. Have used npt=1 for an extended period and performance was worse. The effect was that in some games the framerate was lower, but GPU and CPU loads were lower too and a bit more kernel time on host. npt=0 is generally much better. In both cases the machines are stable.
http://magazine.redhat.com/2007/11/20/r … ed-guests/
That's ridiculous. Simply crazy. I don't get it.
How BROKEN must NPT be for it to slow the VM down? And I've been worrying about the too-big errata list on my CPU family.
EDIT:
http://www.cse.iitd.ernet.in/~sbansal/c … nal-TM.pdf
Oooh, nice document to read. I'll try reading this somewhere in future time.
I gave up and built a new Intel machine. A pretty costly decision, but in the end it was well worth it. I have not installed a bare-metal version of Windows on it to test, but performance in my games seems at least as good as the AMD 8320 was on bare metal. I had an issue with the stock Intel cooler running a little warm with Turbo Boost turned on in the UEFI; it started throttling the CPU back, which destroyed VM performance. Once I turned that option off, everything runs great. I'm using Synergy to switch mouse and keyboard between host and guest and it works really well; I have played several hours of Battlefield 4.
CPU: Core i7-4790k
Mobo: Gigabyte Z97X-U3H
Memory: Corsair Vengeance 32GB DDR3-1600
GPU: EVGA GeForce 970, using Intel graphics for the host
Booting Arch Linux in a UEFI setup, and also using OVMF from the extra repository in Arch; it worked perfectly out of the box, just using QEMU and passing command line options.
Offline
sudo qemu-system-x86_64 -enable-kvm -M pc -m 4096 \
pc is not 440fx... in my XML I got "pc-i440fx-2.1". Not sure if it's the same for the qemu command line, but I'm pretty sure it's not pc.
Offline
terusus wrote:sudo qemu-system-x86_64 -enable-kvm -M pc -m 4096 \
PC is not 440fx... in my xml i got "pc-i440fx-2.1", not sure if it's same for qemu command line, but pretty sure it's not PC.
Pc is i440fx
[nbhs@virtbox ~]$ qemu-system-x86_64 -M ?
Supported machines are:
pc Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-2.2)
pc-i440fx-2.2 Standard PC (i440FX + PIIX, 1996) (default)
pc-i440fx-2.1 Standard PC (i440FX + PIIX, 1996)
pc-i440fx-2.0 Standard PC (i440FX + PIIX, 1996)
pc-i440fx-1.7 Standard PC (i440FX + PIIX, 1996)
pc-i440fx-1.6 Standard PC (i440FX + PIIX, 1996)
pc-i440fx-1.5 Standard PC (i440FX + PIIX, 1996)
pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996)
pc-1.3 Standard PC (i440FX + PIIX, 1996)
pc-1.2 Standard PC (i440FX + PIIX, 1996)
pc-1.1 Standard PC (i440FX + PIIX, 1996)
pc-1.0 Standard PC (i440FX + PIIX, 1996)
pc-0.15 Standard PC (i440FX + PIIX, 1996)
pc-0.14 Standard PC (i440FX + PIIX, 1996)
pc-0.13 Standard PC (i440FX + PIIX, 1996)
pc-0.12 Standard PC (i440FX + PIIX, 1996)
pc-0.11 Standard PC (i440FX + PIIX, 1996)
pc-0.10 Standard PC (i440FX + PIIX, 1996)
q35 Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-2.2)
pc-q35-2.2 Standard PC (Q35 + ICH9, 2009)
pc-q35-2.1 Standard PC (Q35 + ICH9, 2009)
pc-q35-2.0 Standard PC (Q35 + ICH9, 2009)
pc-q35-1.7 Standard PC (Q35 + ICH9, 2009)
pc-q35-1.6 Standard PC (Q35 + ICH9, 2009)
pc-q35-1.5 Standard PC (Q35 + ICH9, 2009)
pc-q35-1.4 Standard PC (Q35 + ICH9, 2009)
isapc ISA-only PC
none empty machine
Offline
aw wrote:Is CONFIG_VFIO_PCI_VGA also set?
hv_time is incompatible with nvidia
Missing -vga none
Don't use q35
Thanks for the fast reply.
1. CONFIG_VFIO_PCI_VGA=y is set by default. I double checked with my /boot/config-...
2. Removed hv_time.
3. Changed q35 to pc (as I understand this is the default).
4. I had to remove "bus=pcie.0" from ioh3420 line because of an error.
5. I had to use "rombar=0" or "romfile=..." when assigning my VGA.
So, my QEMU start line became:
sudo qemu-system-x86_64 -enable-kvm -M pc -m 4096 \
    -cpu host,kvm=off -smp 2,sockets=1,cores=2,threads=1 \
    -device ioh3420,multifunction=on,port=1,chassis=1,id=root.1 \
    -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,romfile=/media/.../GF114.rom \
    -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1 -device virtio-scsi-pci,id=scsi \
    -drive file=/storage/vm/win7.img,id=disk,format=raw,if=none -device scsi-hd,drive=disk \
    -drive file=/media/.../win7.iso,id=isocd,if=none -device scsi-cd,drive=isocd \
    -drive file=/media/.../virtio.iso,id=virtiocd,if=none -device ide-cd,bus=ide.1,drive=virtiocd \
    -vga none
The results were the same for rombar/romfile:
1) VGA appears dead WITHOUT "-vga none".
2) No output on the nVidia DVI ports WITH "-vga none"
PS: Tried a few restarts. Also reinstalled the win7 vm.
Why are you using -device ioh3420 with -M pc???
Offline
aw wrote:Is CONFIG_VFIO_PCI_VGA also set?
hv_time is incompatible with nvidia
Missing -vga none
Don't use q35
Thanks for the fast reply.
1. CONFIG_VFIO_PCI_VGA=y is set by default. I double checked with my /boot/config-...
2. Removed hv_time.
3. Changed q35 to pc (as I understand this is the default).
4. I had to remove "bus=pcie.0" from ioh3420 line because of an error.
5. I had to use "rombar=0" or "romfile=..." when assigning my VGA.
So, my QEMU start line became:
sudo qemu-system-x86_64 -enable-kvm -M pc -m 4096 \
    -cpu host,kvm=off -smp 2,sockets=1,cores=2,threads=1 \
    -device ioh3420,multifunction=on,port=1,chassis=1,id=root.1 \
    -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,romfile=/media/.../GF114.rom \
    -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1 -device virtio-scsi-pci,id=scsi \
    -drive file=/storage/vm/win7.img,id=disk,format=raw,if=none -device scsi-hd,drive=disk \
    -drive file=/media/.../win7.iso,id=isocd,if=none -device scsi-cd,drive=isocd \
    -drive file=/media/.../virtio.iso,id=virtiocd,if=none -device ide-cd,bus=ide.1,drive=virtiocd \
    -vga none
The results were the same for rombar/romfile:
1) VGA appears dead WITHOUT "-vga none".
2) No output on the nVidia DVI ports WITH "-vga none"
PS: Tried a few restarts. Also reinstalled the win7 vm.
You are supposed to use "-vga none", it's in the first post. That's the whole point of this thread: so you can use real GPU hardware as the primary (and quite possibly the only) VGA output in the guest VM. Also, if you do not use OVMF then you need the x-vga=on parameter, so vfio knows to pass VGA through correctly. This "x-vga=on" is not needed if you do use OVMF, because that standard (only possible for UEFI guests) has discarded the whole notion of a "primary VGA output". But in any case you do need "-vga none".
If you start qemu without "-vga none" then emulated Cirrus becomes your primary VGA in guest and hardware GPU becomes secondary VGA, which you could do a long time ago in Xen and which is not what this thread is about.
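A minimal sketch of the OVMF variant described above; the firmware path and PCI addresses are assumptions (on Arch the image comes from the ovmf package):

```shell
#!/bin/sh
# Sketch: UEFI guest booted from OVMF firmware, so x-vga=on is not needed.
# Firmware path and PCI addresses below are assumptions; adjust for your system.
OVMF=/usr/share/ovmf/x64/OVMF.fd
if command -v qemu-system-x86_64 >/dev/null && [ -r "$OVMF" ]; then
    qemu-system-x86_64 -enable-kvm -m 4096 -cpu host,kvm=off \
        -bios "$OVMF" \
        -vga none \
        -device vfio-pci,host=01:00.0 \
        -device vfio-pci,host=01:00.1
else
    echo "qemu or OVMF firmware image not found; command shown for reference only"
fi
```

With `-bios` the UEFI variable store is not persistent; a pflash drive pair is the alternative if you need saved boot entries.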
Offline
Bronek wrote: Tyrewt wrote: Trying to do some USB passthrough in my config with:
The camera (host:2672:000d) is seen by both the host (Linux) and guest (Windows 7) when plugged in. Also, Windows fails to load an "MTP Device Driver". I believe it has something to do with USB passthrough, usb-mtp. Can't find much information on a fix. Please advise?
I found that the best way for USB passthrough is to pass whole USB controller. You can do it also for USB controllers integrated into the south-bridge since they are attached to PCIe root like normal extension cards. This is useful for all types of USB devices because it basically removes USB root complex from host OS and also removes latency of USB serialisation through qemu. Also perfect for attaching things like USB-DAC for great quality sound.
So I followed your advice with no luck. Maybe I should focus my efforts on USB device passthrough, as it seems to get further with Ubuntu.
For the record, the three USB controllers were:
00:14.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI [8086:8c31] (rev 04)
00:1a.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 [8086:8c2d] (rev 04)
00:1d.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 [8086:8c26] (rev 04)
Added them to my /etc/vfio-pci.cfg
0000:00:14.0
0000:00:1a.0
0000:00:1d.0
Then added the following lines to my qemu config
-usb \
-device vfio-pci,host=00:14.0 \
-device vfio-pci,host=00:1a.0 \
-device vfio-pci,host=00:1d.0 \
Starting the Virtual Machine fails at the first controller with the following error.
qemu-system-x86_64: -device vfio-pci,host=00:14.0: vfio: error opening /dev/vfio/4: No such file or directory
qemu-system-x86_64: -device vfio-pci,host=00:14.0: vfio: failed to get group 4
qemu-system-x86_64: -device vfio-pci,host=00:14.0: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=00:14.0: Device 'vfio-pci' could not be initialized
Any suggestions?
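Before passing whole controllers, it can help to see which controller the camera actually sits behind. A sketch (lsusb comes from the usbutils package; 2672:000d is the camera ID from the post above):

```shell
#!/bin/sh
# Show the USB topology so you can tell which controller owns a device (sketch)
if command -v lsusb >/dev/null; then
    lsusb -t                              # tree view: devices grouped per controller
    lsusb -d 2672:000d || echo "device 2672:000d not currently attached"
else
    echo "lsusb not installed (usbutils package)"
fi
```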
OK, so you are trying to add three USB controller devices (1x USB 3.0 at 00:14.0 and 2x USB 2.0 at 00:1a.0, 00:1d.0). Apparently adding the USB 3.0 controller fails, although we do not see anything about the other two USB 2.0 controllers. Did you ensure you are passing the whole IOMMU group? Did you remember to add vendorid:deviceid to pci-stub.ids=... in the kernel command line? Did you try passing only the USB 2.0 devices? I assume your host was restarted to allow vfio to claim these USB controllers?
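The IOMMU-group questions above can be checked from sysfs. A sketch (0000:00:14.0 is the xHCI controller from the post above and an assumption for anyone else):

```shell
#!/bin/sh
# Inspect the IOMMU group of the xHCI controller and the vfio group nodes (sketch)
DEV=/sys/bus/pci/devices/0000:00:14.0
if [ -e "$DEV/iommu_group" ]; then
    GROUP=$(basename "$(readlink "$DEV/iommu_group")")
    echo "0000:00:14.0 is in IOMMU group $GROUP"
    ls "$DEV/iommu_group/devices"   # every device listed must be bound to vfio-pci or pci-stub
    ls /dev/vfio/                   # the matching group node must exist for qemu to open it
else
    echo "no iommu_group link (IOMMU disabled or device absent)"
fi
```

The "error opening /dev/vfio/4" message above means exactly that group node was missing, typically because some device in the group was still bound to its host driver.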
Offline
This topic has been one of the guides I've followed to build my new rig:
Core i7 4790
ASRock Z97 Extreme 4
32GB DDR3-1600 Kingston HyperX Savage RAM
Crucial MX 256GB SSD
NIC: Intel Gigabit CT Desktop Adapter PCIe (EXPI9301CT 1000)
PSU: Corsair Gold CS650M
+ 4TB iSCSI target -> ZFS RAID-Z1 storage in a dedicated NAS4Free box (which I don't know if I should virtualize)
I've also got an old ASUS GeForce GTS 250 and an older ATI Radeon X1500. I will use one of them for the host if I choose the Xen or KVM option.
I'm also looking for a cheap HBA SATA card for storage passthrough,
and, of course, a new GPU. I will choose the GPU (probably an NVIDIA one, I don't like AMD GPUs) when I decide which hypervisor is best for my needs.
I'm still trying to decide between VMware vSphere, Xen and KVM.
My idea is to use a hypervisor for something like a multi-headed gaming setup, plus a few more VMs:
- Linux "main" VM -> (office, video, web surfing, development...)
- Windows Server 2012 R2 VM -> Active Directory (perhaps vCenter Server if I choose the VMware way)
- Windows 8.1 gaming VM
- NAS4Free file server (a BSD distro used for file sharing) -> I already use it as an iSCSI target for VMware (or another hypervisor) in a dedicated old box; I'd like to implement it as a VM as well in the same rig
- Windows 7 VM (for web surfing, software testing...)
Obviously not all of them will be used at the same time; only the host, Windows Server 2012 R2, and the BSD NAS appliance (if I decide to virtualize my dedicated NAS box) will be up 24/7.
Only the Windows 8.1 gaming VM (one GPU) and perhaps the NAS appliance, if I decide to use it, will need PCI passthrough.
I've read that the performance of all three is nowadays quite good for VGA passthrough, but I still don't know which of them (VMware vSphere, Xen or KVM) will fit my needs, or is best for my hardware/network.
I've got a GbE network at home (controlled by an ASUS DSL-N16U modem-router), but many VMs will be used over WiFi n300 or even n150. The most demanding VMs will be used over the wired gigabit network (the main Linux VM and the Windows 8.1 gaming VM).
KVM pros: I can use GeForce cards for VGA passthrough, and I can use a local Linux host as my main OS and as the hypervisor, all in one.
KVM cons: I've read it has the worst VGA passthrough support of the 3 hypervisors. It seems the most difficult to admin (no good GUIs). Constant changes of kernel, drivers, configs...
Xen pros: it seems to have good performance in VGA passthrough, and I can use a local Linux host as my main OS and as the hypervisor, all in one.
Xen cons: no VGA passthrough support for GeForce cards; it seems easier to configure & admin than KVM (but more difficult than VMware vSphere, for me).
VMware vSphere pros: people report it has very good performance in VGA passthrough, and it's the hypervisor I've used the most.
VMware vSphere cons: no support for GeForce GPUs on passthrough, only AMD or NVIDIA Quadro cards (or modified GeForces). And all the operating systems act as VMs, with no "usable" local host like KVM or Xen (dom0) -> so one more VM that needs to be used over the home network from another computer.
Any suggestion about which hypervisor I should use?
Any suggestion about a good + cheap NVIDIA GPU for passthrough?
Or, if I choose VMware vSphere or Xen, I will not spend my money on a multi-OS Quadro card, so perhaps I will choose an AMD GPU... any suggestion? It should be a cheap one, passthrough capable, and not power-hungry (my PSU is "only" 650W, and if I have to choose an AMD card for the Windows 8.1 gaming VM, that card will have to share the motherboard with my power-hungry old ASUS GeForce GTS 250 used by the Linux host).
Any suggestion for a good + cheap RAID SATA card (I don't need SAS, and I don't need more than 4 SATA3 ports), passthrough capable, to use with a BSD-based NAS distro?
Thanks for any help.
PS: sorry for my poor English
Offline
http://magazine.redhat.com/2007/11/20/r … ed-guests/
That's ridiculous. Simply crazy. I don't get it.
How BROKEN must NPT be for it to slow the VM down? And I've been worrying about the too-big errata list on my CPU family.
EDIT:
http://www.cse.iitd.ernet.in/~sbansal/c … nal-TM.pdf
Oooh, nice document to read. I'll try reading this somewhere in future time.
At first I thought it would lower host virtualization overhead, so I tried using it. Something probably isn't working as expected. Thanks for the document, I'll take a look later.
Offline
You are supposed to use "-vga none", it's in the first post. That's the whole point of this thread: so you can use real GPU hardware as the primary (and quite possibly the only) VGA output in the guest VM. Also, if you do not use OVMF then you need the x-vga=on parameter, so vfio knows to pass VGA through correctly. This "x-vga=on" is not needed if you do use OVMF, because that standard (only possible for UEFI guests) has discarded the whole notion of a "primary VGA output". But in any case you do need "-vga none".
If you start qemu without "-vga none" then emulated Cirrus becomes your primary VGA in guest and hardware GPU becomes secondary VGA, which you could do a long time ago in Xen and which is not what this thread is about.
I understood that. But, the fact is that I do not get any graphical output from the VM with "-vga none". Nothing on the DVI ports or when connecting with VNC.
(I am not using OVMF - I'm testing with a win7 machine. I did run an OVMF win8 machine and did have "x-vga=on" + "-vga none" but the results were the same.)
And this is the strange thing for me:
- I get no errors when starting the VM.
- With standard VGA (without -vga none) I see my nVidia card in the VM. I even see the option to unplug the hardware from windows. Standard drivers run their checks and get installed successfully. And still nothing.
And obviously I got something wrong. However, I am really determined to get this working and I really appreciate your input. So, as I understand it, if everything worked well I should have video output from the VM on the nVidia DVI port, and a monitor connected there should work normally? Do you see any obvious mistakes that I have made?
Offline
I understood that. But, the fact is that I do not get any graphical output from the VM with "-vga none". Nothing on the DVI ports or when connecting with VNC.
(I am not using OVMF - I'm testing with a win7 machine. I did run an OVMF win8 machine and did have "x-vga=on" + "-vga none" but the results were the same.)
And this is the strange thing for me:
- I get no errors when starting the VM.
- With standard VGA (without -vga none) I see my nVidia card in the VM. I even see the option to unplug the hardware from Windows. Standard drivers run their checks and get installed successfully. And still nothing.
And obviously I got something wrong. However, I am really determined to get this working and I really appreciate your input. So, as I understand it, if everything worked well I should have video output from the VM on the nVidia DVI port, and a monitor connected there should work normally? Do you see any obvious mistakes that I have made?
Are you able to verify that this card works at all, in a bare-metal configuration, when placed in the slot you're trying to use it in?
Offline
@Bronek
Yes. When I installed Ubuntu I was using my nVidia card as a primary VGA. It was working with nouveau and with the proprietary nVidia drivers.
It worked fine on win7 too.
PS: I had to set the Intel IGP as a primary VGA in the bios in order to blacklist nouveau.
Last edited by terusus (2015-03-02 16:14:41)
Offline
[ 2.529105] vboxdrv: module verification failed: signature and/or required key missing - tainting kernel
[ 2.531134] vboxdrv: Found 4 processor cores.
[ 2.531464] vboxdrv: fAsync=0 offMin=0x1df offMax=0xe1d
[ 2.531510] vboxdrv: TSC mode is 'synchronous', kernel timer mode is 'normal'.
[ 2.531511] vboxdrv: Successfully loaded version 4.3.18_Ubuntu (interface 0x001a0008).
[ 2.539538] vboxpci: IOMMU found
[ 2.692404] ------------[ cut here ]------------
[ 2.692446] WARNING: CPU: 3 PID: 599 at /home/terusus/ubuntu-vivid/source/drivers/gpu/drm/i915/intel_display.c:9705 intel_check_page_flip+0xa2/0xf0 [i915]()
[ 2.692447] WARN_ON(!in_irq())
[ 2.692448] Modules linked in:
[ 2.692449] pci_stub vboxpci(OE) vboxnetadp(OE) vboxnetflt(OE) vboxdrv(OE) hid_generic snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi snd_usb_audio(+) snd_usbmidi_lib uvcvideo dm_multipath videobuf2_vmalloc scsi_dh intel_rapl videobuf2_memops videobuf2_core v4l2_common iosf_mbi videodev x86_pkg_temp_thermal media usbhid intel_powerclamp snd_hda_intel kvm_intel snd_hda_controller hid snd_hda_codec snd_hwdep kvm snd_pcm crct10dif_pclmul crc32_pclmul snd_seq_midi ghash_clmulni_intel snd_seq_midi_event aesni_intel snd_rawmidi aes_x86_64 lrw gf128mul glue_helper snd_seq ablk_helper cryptd snd_seq_device snd_timer serio_raw i915 snd drm_kms_helper drm lpc_ich mei_me shpchp soundcore mei i2c_algo_bit 8250_fintek nuvoton_cir rc_core bnep rfcomm video soc_button_array mac_hid bluetooth binfmt_misc
[ 2.692469] parport_pc ppdev lp parport nct6775 hwmon_vid coretemp btrfs xor raid6_pq e1000e ahci psmouse ptp libahci pps_core
[ 2.692476] CPU: 3 PID: 599 Comm: irq/31-i915 Tainted: G OE 3.19.0-6-kvm #6
[ 2.692476] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H87 Pro4, BIOS P2.10 07/09/2014
[ 2.692477] ffffffffc07661b8 ffff880419b93c98 ffffffff817c6613 0000000000000007
[ 2.692479] ffff880419b93ce8 ffff880419b93cd8 ffffffff8107732a ffff880419b93cd8
[ 2.692480] ffff8804186e3000 ffff880419b88800 ffff880419b88800 0000000000000000
[ 2.692481] Call Trace:
[ 2.692496] [<ffffffff817c6613>] dump_stack+0x4c/0x6e
[ 2.692499] [<ffffffff8107732a>] warn_slowpath_common+0x8a/0xc0
[ 2.692501] [<ffffffff810773a6>] warn_slowpath_fmt+0x46/0x50
[ 2.692510] [<ffffffffc07111a2>] intel_check_page_flip+0xa2/0xf0 [i915]
[ 2.692518] [<ffffffffc06de277>] ironlake_irq_handler+0x417/0x1000 [i915]
[ 2.692521] [<ffffffff8109bc4d>] ? finish_task_switch+0x5d/0x100
[ 2.692523] [<ffffffff810d0340>] ? irq_thread_fn+0x50/0x50
[ 2.692525] [<ffffffff810d036d>] irq_forced_thread_fn+0x2d/0x70
[ 2.692526] [<ffffffff810d08b7>] irq_thread+0x137/0x160
[ 2.692527] [<ffffffff810d03e0>] ? wake_threads_waitq+0x30/0x30
[ 2.692529] [<ffffffff810d0780>] ? irq_thread_check_affinity+0x90/0x90
[ 2.692531] [<ffffffff810961eb>] kthread+0xdb/0x100
[ 2.692541] [<ffffffff81096110>] ? kthread_create_on_node+0x1c0/0x1c0
[ 2.692543] [<ffffffff817cd5bc>] ret_from_fork+0x7c/0xb0
[ 2.692545] [<ffffffff81096110>] ? kthread_create_on_node+0x1c0/0x1c0
[ 2.692546] ---[ end trace 3bc57ae31fe4d29b ]---
This doesn't look very healthy. Get rid of those vbox drivers, we have no idea what they're doing and competing hypervisors often don't get along so well with each other.
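A sketch of clearing the VirtualBox modules out of the way before starting the VFIO guest (unloading needs root; the blacklist comment makes it persistent):

```shell
#!/bin/sh
# Unload the VirtualBox modules seen in the log above (needs root)
if command -v lsmod >/dev/null; then
    for m in vboxpci vboxnetadp vboxnetflt vboxdrv; do
        if lsmod | grep -q "^$m "; then
            rmmod "$m" || echo "could not unload $m (is a VirtualBox VM running?)"
        fi
    done
else
    echo "lsmod not available"
fi
# To keep them from loading at boot:
#   printf 'blacklist vboxdrv\nblacklist vboxpci\n' > /etc/modprobe.d/blacklist-vbox.conf
```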
Offline