setup:
Distro: Ubuntu 14.10
Kernel: 3.18.0, patched ACS+VGA
qemu: 2.1.0
libvirt: 1.2.8
Platform: laptop, Clevo P370SM
Proc: i7-4810MQ (VT-x, VT-d)
GPU1 (host): NVIDIA GeForce 870M
GPU2 (guest): NVIDIA GeForce 980M - aftermarket from Clevo
Guest OSes: Win 8.1 | Server 2012R2, fresh installs, can reinstall if necessary.
QEMU startup params of test VM currently loaded with 8.1:
qemu-system-x86_64 -enable-kvm -name Windows81DevWorkstation -S -machine pc-i440fx-utopic,accel=kvm,usb=off -m 8192 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 074a3dbe-51d7-4a55-85f9-9330516fba1b -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/Windows81DevWorkstation.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/var/CG4REPO/VirtualMachines/WindowsWork_New.qcow2,if=none,id=drive-ide0-0-0,format=qcow2 -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive file=/var/CG4REPO/ISO/Microsoft/win81.iso,if=none,id=drive-ide0-0-1,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 -netdev tap,fd=24,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:73:07:3c,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -device usb-tablet,id=input0 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1 -chardev spicevmc,id=charredir2,name=usbredir -device usb-redir,chardev=charredir2,id=redir2 -chardev spicevmc,id=charredir3,name=usbredir -device usb-redir,chardev=charredir3,id=redir3 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -device vfio-pci,host=02:00.0,x-vga=on -cpu Haswell,kvm=off -msg timestamp=on
(virsh generated, will post if necessary)
-side note-
Been reading this thread for a while, working through the various issues in my setup (it's been fun getting this working without direct access to a head on the secondary GPU). Thanks for the awesome tutorials / troubleshooting - they've gotten me almost to the end! I do apologize if this was covered in the past; I've been reading and searching and haven't found a direct reference to it yet, though my brain is pretty squishy right now. While this is an Arch forum [and I do apologize for posting regarding Ubuntu], this is the most active thread, so I was hoping someone could help me out.
Issue: An incorrect device ID passed from host to guest prevents the secondary GPU's driver from installing.
WITIR:
The device is passed through; however, the hardware ID inside the guest is short compared to current 980Ms [if you compare it to what a Quadro reports in its drivers, it is more along those lines in terms of PC hardware IDs]. I believe this stems from the host [host hardware ID: 10de:13d7], so driver installation fails.
hardware ID:
PCI\VEN_10DE&DEV_13D7&SUBSYS_110B10DE&REV_A1
PCI\VEN_10DE&DEV_13D7&SUBSYS_110B10DE
PCI\VEN_10DE&DEV_13D7&CC_030000
PCI\VEN_10DE&DEV_13D7&CC_0300
Device Instance Path:
PCI\VEN_10DE&DEV_13D7&SUBSYS_110B10DE&REV_A1\3&13C0B0C5&0&10
The matching INF entry would look like:
%NVIDIA_DEV.13D7% = Section232, PCI\VEN_10DE&DEV_13D7&SUBSYS_110B10DE
compared to NVIDIA's actual INF output (one example of a 980M):
%NVIDIA_DEV.13D7.19FD.1043% = Section228, PCI\VEN_10DE&DEV_13D7&SUBSYS_19FD1043
If you mod the driver's INF with the first line mentioned above, you can get the installer to work [and the driver to load]; however, with 8.1 there is no permanent way to disable driver signature enforcement, and modding an INF (I used nvami.inf) breaks the hash check on a normal boot and fails out the driver. Without direct access to the head on the GPU, this can be... problematic. With a stripped-down version of the libvirt XML converted to qemu args (adding only disk and network adapter), the same issue occurs.
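The workaround people commonly cite for modified INFs on 8.1 (untested here) is test-signing mode combined with re-signing the package with a self-signed certificate; a hedged sketch of the guest-side switch:

:: In the guest, from an elevated prompt. Test mode alone does not bypass the
:: broken catalog hash - the modified package must also be re-signed with a
:: (self-signed) test certificate. Reboot afterwards.
bcdedit /set testsigning on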
So I guess the question is: does anyone have a suggestion for a quick workaround [a way to manipulate the hardware ID as it is passed from host to guest]? Or is there something obvious I'm missing in the setup that is causing this issue to begin with? I imagine it's the latter, hopefully.
Any thoughts or suggestions would be welcome - if you need more data, just let me know what you'd like. I am probably overthinking this way too much at this point.
Offline
SEPIPES wrote:So I guess the question is: does anyone have a suggestion for a quick workaround [a way to manipulate the hardware ID as it is passed from host to guest]? Or is there something obvious I'm missing in the setup that is causing this issue to begin with? I imagine it's the latter, hopefully.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
Denso wrote:@aw:
Started receiving these after I upgraded to 3.19-rc1, and am still receiving them on 3.19-rc5:
Running 3.19-rc5 here, no issues. There were relatively few VT-d changes between 3.18 and 3.19 and effectively no vfio changes.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
KKF wrote:Hey guys,
I am new to all this and have wanted to try it out for a while as a learning experience for myself. I have the budget to build a PC just for this. For starters I would like to have one Windows 8 VM and Elementary OS as the host, if possible. Eventually I would like to have an OSX VM and a Windows VM, though I understand how difficult that can be.
With anything of this magnitude I want to make sure I buy the right components for the job. The only components I own at this point are a 660 Ti GPU and an R4 case. I will probably buy a 970 GPU at some point in the future, but there's no rush at this point. Any advice on what to buy for mobo, CPU, and RAM? Should I buy two separate SSDs, one for each VM? I have a server that hosts all my documents and files, so I don't need too much space on those SSDs.
The OSX VM would mainly be used for Photoshop, as my girlfriend prefers the layout of Photoshop in OSX to Windows. The Windows VM would be used for gaming. I play Heroes of the Storm and SC2, so nothing too intensive.
While I really do not have a hard budget for this, I don't want to spend 2000 dollars either, as it really is for learning and the challenge of putting this together. So what would be my best options for a mobo, CPU, and RAM?
After I buy everything I plan to do an extensive how to on cost and time it took me, a novice, to do this.
Thanks!
Use a Xeon E5 or better and non-Intel graphics for the host if you don't want to deal with patching your kernel for ACS or VGA arbitration (BTW, the X99 PCH root port patch was just posted). Gigabyte motherboards seem to be the most configurable when it comes to selecting a primary graphics device, which may be important if you end up with 3 graphics cards. If you blow all your money on an over-spec'd GPU for your needs and skimp on a "client" processor, you'll pay for it patching your kernel forever. Also, make sure you scale your core count and memory for the intended usage. If you want to game at the same time as your gf is Photoshopping, then you need to be able to dedicate some cores to each VM. Also, OSX w/ GPU assignment is not something a lot of people are doing, so be sure to do your research on whether it's really an option. There are parts of getting OSX to work that cannot be discussed on forums like this.
Hi, just wanted to give a short overview of my system, which I am really content with. Essentially I started with a typical workstation build and dual-booted for 2 years, but when building the workstation I was already looking for functional VT-d and enough PCIe slots. So after a lot of success stories in this thread I finally bought a used second GPU, tried KVM, and was more than pleased with the result.
My build is as follows
MB ASUS P9X79WS
CPU Xeon E5 1650 (A Xeon E5 16xx is a much better choice than a similar i7: for the same money you get ECC support, and v1 and v2 still have an open multiplier, so they're totally easy to OC. Essentially this is the only Xeon line that you can OC - just make sure your MB supports it. I don't know whether v3 (Haswell) still has an open multiplier, but I don't plan to upgrade soon. My CPU is running at 4GHz (didn't bother to go higher; stability is king), which is still plenty.)
32GB ECC Ram
Geforce GTX 750 Ti Primary
Geforce GTX Titan Passthrough
2 SSD (1 Boot, 1 VM)
Mirrored HDs for data (ZFS, with a third SSD for L2ARC)
USB audio card
Aten switch
Everything is running on Ubuntu 12.04 with a mainline 3.17.1 kernel and no patches.
Self-compiled qemu 2.1.2
libvirt 1.2 from the Ubuntu cloud PPA
I know the Ubuntu base is not the latest, but I am too lazy to update, and the important stuff (kernel, driver, qemu) is up to date.
pci-stub is done by module; this way I can disable blacklisting the Titan during boot if I want. (The key is to add "softdep drm pre: pci-stub" somewhere in modprobe.d - see the sketch below.)
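A sketch of what that modprobe.d file might look like (the IDs are examples for a GTX Titan and its HDMI audio function; substitute your own card's IDs from lspci -nn):

# /etc/modprobe.d/pci-stub.conf
# Load pci-stub before any DRM driver so it claims the passthrough card first
softdep drm pre: pci-stub
options pci-stub ids=10de:1005,10de:0e1a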
I patched the nvidia dkms module with the patch linked here somewhere.
Windows runs on 4 HT cores and is pinned to them. The rest of the system is moved to the remaining 2 cores when the VM is spun up, by cset called from a script in the libvirt hooks folder (a sketch follows below). Windows gets 12GB in hugepages.
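A minimal sketch of such a hook, assuming the cpuset (cset) tool; the guest name and core numbers are illustrative:

#!/bin/sh
# /etc/libvirt/hooks/qemu - libvirt passes: $1 = guest name, $2 = operation
if [ "$1" = "win7" ]; then
    case "$2" in
        prepare)
            # fence host tasks onto cores 0-1, reserving 2-5 for the guest
            cset shield --cpu=2-5 --kthread=on
            ;;
        release)
            # tear the shield down again when the guest stops
            cset shield --reset
            ;;
    esac
fi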
For the CPU I have kvm=off so I can run the 340 driver in the VM. Everything works, even game streaming to a Raspberry Pi through GeForce Experience.
Performance is near bare metal (Firestrike 9100 and Firestrike Extreme 4800).
The Windows VM also gets a passed-through NIC (the MB has 2) and the ASMedia USB 3.0 onboard controller.
Audio is done by a USB sound card hooked to the passed-through USB 3.0 controller.
The Windows VM is actually the image of the bare-metal installation; I didn't want to reinstall.
Input and output are switched by the Aten switch (got it used).
At the moment the VM is running on the q35 machine type and SeaBIOS. I also tried 440fx (with a qemu wrapper script); it worked, but for some reason some things in GeForce Experience were disabled - I didn't look too closely into that.
The only issue is that 1 out of 20 times the whole system freezes on VM start - and really only on start, even before any output appears on the second card. I can't really point to why, but it's so seldom, and only occurs during VM boot, that I don't know how to debug it.
Now I am sorting out what gives me the best IO performance and whether I can get TRIM working for sparse raw files on ZFS datasets. (virtio-blk now; TRIM not working.)
Also, Windows 7 virtio-net seems to be bottlenecked at 500MByte/s for incoming TCP connections. Outgoing, and both ways with a Linux VM, is at 2GByte/s and more. That is an important metric for internal shares on SSD, but even 500MByte/s is not bad.
End result:
I don't need to dual-boot to play games or use Lightroom with color calibration on the monitor.
All important stuff (documents, pictures) is finally on ZFS storage (zfs send/receive, checksums, snapshots, clones - way better than NTFS or ext4 for important data).
I can go into configuration detail (XMLs, scripts) if someone is interested.
I want to say many thanks to all who write here for the invaluable information and hard programming work.
And finally, I can only vouch for the Xeon route.
Last edited by lordleto (2015-01-23 13:09:34)
Offline
aw wrote:Running 3.19-rc5 here, no issues. There were relatively few VT-d changes between 3.18 and 3.19 and effectively no vfio changes.
That's weird. I just downgraded to 3.18.2 and I no longer have any issues; the VM works correctly. No Code 43 and no DMAR errors in the host's dmesg output.
It surely is related to 3.19 + X99.
Offline
Denso wrote:aw wrote:Running 3.19-rc5 here, no issues. There were relatively few VT-d changes between 3.18 and 3.19 and effectively no vfio changes.
That's weird. I just downgraded to 3.18.2 and I no longer have any issues; the VM works correctly. No Code 43 and no DMAR errors in the host's dmesg output.
It surely is related to 3.19 + X99.
$ git log --oneline --no-merges v3.18..v3.19-rc1 drivers/iommu | grep -v omap | grep -v amd | grep -v smmu | grep -v rockchip | grep -v msm | grep -v vmsa | grep -v exynos
91411da iommu/vt-d: Use helpers to access irq_cfg data structure associated with IRQ
b71a3b2 x86: irq_remapping: Use helpers to access irq_cfg data structure associated with IRQ
a42a7a1 iommu: store DT-probed IOMMU data privately
8918465 memory: Add NVIDIA Tegra memory controller support
18f2340 iommu: Decouple iommu_map_sg from CPU page size
cc4f14a iommu/vt-d: Fix an off-by-one bug in __domain_mapping()
461bfb3f iommu: fix initialization without 'add_device' callback
7eba1d5 iommu: provide helper function to configure an IOMMU for an of master
1cd076bf iommu: provide early initialisation hook for IOMMU drivers
63a7b17 PCI/MSI: Simplify PCI MSI code by initializing msi_desc.nvec_used earlier
ffebeb4 iommu/vt-d: Enhance intel-iommu driver to support DMAR unit hotplug
51acce3 iommu/vt-d: Enhance error recovery in function intel_enable_irq_remapping()
a7a3dad iommu/vt-d: Enhance intel_irq_remapping driver to support DMAR unit hotplug
d35165a iommu/vt-d: Search for ACPI _DSM method for DMAR hotplug
6b19724 iommu/vt-d: Implement DMAR unit hotplug framework
78d8e70 iommu/vt-d: Dynamically allocate and free seq_id for DMAR units
c2a0b53 iommu/vt-d: Introduce helper function dmar_walk_resources()
1a2262f x86/vt-d: Fix incorrect bit operations in setting values
d7da6bd iommu: Improve error handling when setting bus iommu
38ec010 iommu: Do more input validation in iommu_map_sg()
315786e iommu: Add iommu_map_sg() function
98b773c iommu: drop owner assignment from platform_drivers
$ git log --oneline --no-merges v3.18..v3.19-rc1 drivers/vfio
83a1891 PCI/MSI: Rename write_msi_msg() to pci_write_msi_msg()
5e9f36c drivers/vfio: allow type-1 IOMMU instantiation on top of an ARM SMMU
1d53a3a vfio: make vfio run on s390
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
SEPIPES wrote:So I guess the question is: does anyone have a suggestion for a quick workaround [a way to manipulate the hardware ID as it is passed from host to guest]? Or is there something obvious I'm missing in the setup that is causing this issue to begin with? I imagine it's the latter, hopefully.
Thanks for the link.
It reminded me that I could get a signed driver by manipulating the compat list rather than the hardware dev_ID itself [inside the guest]. I was kind of surprised that worked (well, it worked enough to get past the hardware ID check). This gave me the latest driver, which matches against a Windows-booted host, but I get the notorious Code 43 from the 980m after the driver installs.
Ah well, if it was easy it wouldn't be fun. Time to go reread some posts; thanks again.
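For reference, the same manipulation can also be done from the host side: recent QEMU builds expose experimental vfio-pci options that override the IDs the guest sees. A sketch (availability of these options depends on your QEMU version; the subsystem IDs are taken from the ASUS 980M INF entry quoted earlier):

-device vfio-pci,host=02:00.0,x-vga=on,x-pci-sub-vendor-id=0x1043,x-pci-sub-device-id=0x19fd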
Offline
@aw:
I get that the 3.19 changes are only in the IOMMU driver and not in VFIO. What I'm certain of is that these issues do not exist with 3.18.2; they only happen with 3.19-rc1 through rc5.
Thank you for posting these commit logs. I can see that some changes mention DMAR, and the errors I had were DMAR-related.
Offline
@aw:
Started receiving these after I upgraded to 3.19-rc1, and am still receiving them on 3.19-rc5:
[Thu Jan 22 23:13:54 2015] dmar: DRHD: handling fault status reg 2
[Thu Jan 22 23:13:54 2015] dmar: DMAR:[DMA Read] Request device [02:00.1] fault addr 10af25000
[Thu Jan 22 23:13:54 2015] dmar: DMAR:[fault reason 06] PTE Read access is not set
...
I've been having the same issues after the update.
The errors first appeared when booting, complaining about the integrated graphics card. I tried intel_iommu=on,igfx_off, which resolved the boot-time errors but caused qemu to crash.
I should point out that I've been running both Windows 7 and Mac OS X Mavericks for months without a single problem on previous kernels.
Specs:
Motherboard: Asus Maximus VII Ranger
CPU: Intel i7-4790K
GPU: Radeon R9 270X
Offline
Denso wrote:@aw:
Started receiving these after I upgraded to 3.19-rc1, and am still receiving them on 3.19-rc5:
[Thu Jan 22 23:13:54 2015] dmar: DRHD: handling fault status reg 2
[Thu Jan 22 23:13:54 2015] dmar: DMAR:[DMA Read] Request device [02:00.1] fault addr 10af25000
[Thu Jan 22 23:13:54 2015] dmar: DMAR:[fault reason 06] PTE Read access is not set
...
I've been having the same issues after the update.
Does anything change adding sp_off to the intel_iommu options?
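That is, extending the existing intel_iommu option on the kernel command line, along the lines of:

intel_iommu=on,sp_off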
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
aw wrote:Does anything change adding sp_off to the intel_iommu options?
I've tried all the available options (igfx_off, forcedac, strict, and sp_off) but the results are the same.
I've been religiously checking the log for the past week hoping for a magical fix to appear. No such luck unfortunately.
Offline
aw wrote:Does anything change adding sp_off to the intel_iommu options?
I've tried all the available options (igfx_off, forcedac, strict, and sp_off) but the results are the same.
I've been religiously checking the log for the past week hoping for a magical fix to appear. No such luck unfortunately.
It would probably be more useful to hunt down the change that broke it rather than blindly hope for a fix.
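For reference, hunting it down would look something like this with git bisect over the suspect range:

git bisect start
git bisect bad v3.19-rc1      # first kernel showing the DMAR faults
git bisect good v3.18         # last known-good kernel
# build and boot the kernel git checks out, test it, then mark it:
git bisect good               # or: git bisect bad
# repeat until git names the first bad commit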
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
aw wrote:It would probably be more useful to hunt down the change that broke it rather than blindly hope for a fix.
I have been tempted to compile the kernel for each commit, but at the moment my system needs to stay on, so I'd have no way of testing them. Hence the waiting for a magical fix to appear.
I should also point out that, unlike Denso, my problems started when I upgraded from 3.17.6 to 3.18.2. I've looked at all the changes to the iommu code between those releases, but nothing stands out.
Offline
I tried a new workstation card, an AMD W7100 (Tonga family), and it's causing me problems, although it's also possible that my setup (see below) or, less likely, the kernel upgrade from 3.17.7 to 3.17.8 is to blame. Everything works fine until I shut down the VM win171 and start it again. Then I receive the following dmesg output on the console (it fills the screen very quickly - the text below was retyped from a screen photo; I have yet to attach a serial console cable):
dmar: DRHD: handling fault status reg 40
dmar: DRHD: handling fault status reg 40
dmar: DRHD: handling fault status reg 40
. . .
which is then followed by
NMI watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [qemu:win171:4568]
NMI watchdog: BUG: soft lockup - CPU#9 stuck for 23s! [qemu:win249:4259]
NMI watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [qemu:win171:4568]
NMI watchdog: BUG: soft lockup - CPU#9 stuck for 23s! [qemu:win249:4259]
INFO: rcu_preempt detected stalls on CPUs/tasks: { 2} (detected by 9, t=18002 jiffies, g=56726, c=56725, q=11821)
mpt2sas0: mpt2sas_scsih_issue_tm: timeout
NMI watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [qemu:win171:4568]
NMI watchdog: BUG: soft lockup - CPU#9 stuck for 23s! [qemu:win249:4259]
. . .
When this happens the computer becomes totally unresponsive and has to be powered down. A normal reset is not enough; I get dmar errors as soon as the kernel is loaded again.
I'm running 2 different VMs, win171 and win249. Both are Windows 7, each has its own GPU (and a small number of PCI passed-through devices), and both perform well and are stable - that is, until I restart win171 (which is attached to the W7100).
Another factor which might be causing problems here is that the W7100 is used by the system as primary VGA, i.e. it first shows the BIOS startup screen, then SYSLINUX, and then kernel messages on startup. Also, the kernel is actually writing to it (as a VGA framebuffer) after win171 has been started, as I see some low-priority dmesg output (warnings related to my USB device setup) on the screen overwriting the SeaBIOS messages. One way I could try to prevent this is to disable the card's ROM in the BIOS, thus rendering my machine entirely headless, but that is a bit drastic.
Ideally I'd rather find the right combination of kernel parameters to prevent the kernel from using VGA after it has been loaded into memory (while still allowing both the BIOS and SYSLINUX to use it). I will try "console=ttyS0 nomodeset" - any better ideas? The hypervisor is meant to run headless, but I'd rather see the initial startup sequence up to the point where the kernel has loaded.
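A minimal SYSLINUX entry along those lines might look like this (a sketch; the kernel paths and root= are placeholders - nomodeset keeps KMS drivers off the card, and console=ttyS0 routes kernel messages to serial):

LABEL hypervisor
    LINUX ../vmlinuz-linux
    APPEND root=/dev/sda2 intel_iommu=on console=ttyS0,115200 nomodeset
    INITRD ../initramfs-linux.img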
Last edited by Bronek (2015-01-26 10:01:40)
Offline
10Gbps pipe:
Hi, I just got around to setting up effective 10Gbps networking between the host and a Windows 8.1 guest (for Samba shares), and performance is very good. Here is a result for an SMB share on the host's RAM disk:
http://i.imgur.com/Ro6Jj5y.png
Execute this script BEFORE booting your VM to set up the host's interface :
ip tuntap add tap_s mode tap multi_queue
ip link set tap_s mtu 65500
ip link set tap_s txqueuelen 5000
ip link set dev tap_s up
ip addr add 195.165.1.2/24 broadcast 195.165.1.255 dev tap_s
Of course, change the IP address to whatever you like.
Then add this interface as a device to your QEMU command line:
-netdev type=tap,ifname=tap_s,id=net1,vhost=on,vhostforce=on,queues=4,script= \
-device virtio-net-pci,netdev=net1,mq=on,vectors=9 \
Now, in your Windows VM, set up the new interface: give it an IP address in the same space as the host's interface, e.g. 195.165.1.3 with a netmask of 255.255.255.0, and no gateway or DNS.
Done!
You can now ping your host and connect to your SMB shares at 10Gbps.
You can also raise the MTU value in the interface's configuration dialog in Windows Device Manager from 1500 to 65500. I got only 5Gbps with a 1500 MTU, so you should change it to 65500.
Please note that this is for host-to-guest traffic (and vice versa) only; you still need another interface for internet / bridged networking.
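The same guest-side settings can also be applied from an elevated command prompt, if you prefer (a sketch; the interface name is an assumption - check yours with "netsh interface show interface"):

:: static IP in the host interface's subnet, no gateway
netsh interface ipv4 set address name="Ethernet 2" static 195.165.1.3 255.255.255.0
:: raise the MTU to match the tap device
netsh interface ipv4 set subinterface "Ethernet 2" mtu=65500 store=persistent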
Hope this helps
Last edited by Denso (2015-01-27 08:01:05)
Offline
Hello all,
My system normally runs perfectly, thanks to all of you, but sometimes, after shutting down my virtual guest and turning it on again, I get a kernel general protection fault:
Jan 27 07:26:26 mycolonialone kernel: general protection fault: 0000 [#1] PREEMPT SMP
Jan 27 07:26:26 mycolonialone kernel: Modules linked in: vfio_pci vfio_iommu_type1 vfio ecb joydev mousedev btusb uvcvideo videobuf2_vmalloc bluetooth videobuf2_memops s
Jan 27 07:26:26 mycolonialone kernel: 8250_dw dw_dmac_core i2c_designware_platform i2c_designware_core gpio_lynxpoint spi_pxa2xx_platform acpi_pad nouveau mxm_wmi wmi t
Jan 27 07:26:26 mycolonialone kernel: CPU: 2 PID: 2176 Comm: vfio-bind Not tainted 3.18.2-2-ARCH #1
Jan 27 07:26:26 mycolonialone kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z97 Extreme6, BIOS P1.60 12/09/2014
Jan 27 07:26:26 mycolonialone kernel: task: ffff88028c42da90 ti: ffff88002c97c000 task.ti: ffff88002c97c000
Jan 27 07:26:26 mycolonialone kernel: RIP: 0010:[<ffffffff813c3bf3>] [<ffffffff813c3bf3>] __rpm_callback+0x33/0x90
Jan 27 07:26:26 mycolonialone kernel: RSP: 0018:ffff88002c97fd38 EFLAGS: 00010286
Jan 27 07:26:26 mycolonialone kernel: RAX: 0000000000000008 RBX: ffff880458dad098 RCX: 0000000000000000
Jan 27 07:26:26 mycolonialone kernel: RDX: ffff8804541e8b88 RSI: ffff880458dad098 RDI: ffff880458dad098
Jan 27 07:26:26 mycolonialone kernel: RBP: ffff88002c97fd58 R08: 0000000000000246 R09: ffffea0001a8b800
Jan 27 07:26:26 mycolonialone kernel: R10: ffffffffa096a1c8 R11: ffffea001138da40 R12: ffff880458dad146
Jan 27 07:26:26 mycolonialone kernel: R13: 1b48313b30315b1b R14: 0000000000000000 R15: fffffffffffffff2
Jan 27 07:26:26 mycolonialone kernel: FS: 00007f2964cef700(0000) GS:ffff88046fa80000(0000) knlGS:0000000000000000
Jan 27 07:26:26 mycolonialone kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 27 07:26:26 mycolonialone kernel: CR2: 00007f2964d1b000 CR3: 00000000109ff000 CR4: 00000000001407e0
Jan 27 07:26:26 mycolonialone kernel: Stack:
Jan 27 07:26:26 mycolonialone kernel: 00000000ffffffff ffff880458dad098 ffff880458dad098 0000000000000004
Jan 27 07:26:26 mycolonialone kernel: ffff88002c97fd88 ffffffff813c4a89 ffff880458dad098 0000000000000004
Jan 27 07:26:26 mycolonialone kernel: ffff880458dad146 0000000000000246 ffff88002c97fdb8 ffffffff813c4bc3
Jan 27 07:26:26 mycolonialone kernel: Call Trace:
Jan 27 07:26:26 mycolonialone kernel: [<ffffffff813c4a89>] rpm_idle+0x259/0x340
Jan 27 07:26:26 mycolonialone kernel: [<ffffffff813c4bc3>] __pm_runtime_idle+0x53/0x70
Jan 27 07:26:26 mycolonialone kernel: [<ffffffff812f3208>] pci_device_remove+0x78/0xc0
Jan 27 07:26:26 mycolonialone kernel: [<ffffffff813bab7f>] __device_release_driver+0x7f/0xf0
Jan 27 07:26:26 mycolonialone kernel: [<ffffffff813bac13>] device_release_driver+0x23/0x30
Jan 27 07:26:26 mycolonialone kernel: [<ffffffff813b98ed>] unbind_store+0xed/0x150
Jan 27 07:26:26 mycolonialone kernel: [<ffffffff813b8b45>] drv_attr_store+0x25/0x40
Jan 27 07:26:26 mycolonialone kernel: [<ffffffff8124747a>] sysfs_kf_write+0x3a/0x50
Jan 27 07:26:26 mycolonialone kernel: [<ffffffff812469be>] kernfs_fop_write+0xee/0x180
Jan 27 07:26:26 mycolonialone kernel: [<ffffffff811cf1d7>] vfs_write+0xb7/0x200
Jan 27 07:26:26 mycolonialone kernel: [<ffffffff811cfd29>] SyS_write+0x59/0xd0
Jan 27 07:26:26 mycolonialone kernel: [<ffffffff81554ca9>] system_call_fastpath+0x12/0x17
Jan 27 07:26:26 mycolonialone kernel: Code: e5 41 55 41 54 53 4c 8d a6 ae 00 00 00 49 89 fd 48 89 f3 48 83 ec 08 f6 86 89 01 00 00 02 4c 89 e7 74 35 e8 50 06 19 00 48 89
Jan 27 07:26:26 mycolonialone kernel: RIP [<ffffffff813c3bf3>] __rpm_callback+0x33/0x90
Jan 27 07:26:26 mycolonialone kernel: RSP <ffff88002c97fd38>
Jan 27 07:26:26 mycolonialone kernel: ---[ end trace 449fe5da686b88a6 ]---
Please let me know whatever files or logs you need and I will provide them with pleasure.
It's not urgent, because it only happens sometimes, but it is annoying.
Thanks a lot in advance.
Regards,
TheArcher
Offline
aw wrote:Use a Xeon E5 or better and non-Intel graphics for the host if you don't want to deal with patching your kernel for ACS or VGA arbitration... If you want to game at the same time as your gf is Photoshopping, then you need to be able to dedicate some cores to each VM.
So I take it that the cores are not dynamically shared as in Hyper-V? Wouldn't an AMD chip be a better fit then? (More cores, cheaper... an AMD FX-9590?) Thank you for your insight; I am sure I will have more questions.
Last edited by KKF (2015-01-27 21:51:55)
Offline
lordleto wrote:Hi, just wanted to give a short overview of my system, which I am really content with... I can go into configuration detail (XMLs, scripts) if someone is interested.
Wow, thanks this was very helpful! I will post more questions as time progresses.
Offline
KKF wrote:So I take it that the cores are not dynamically shared as in Hyper-V? Wouldn't an AMD chip be a better fit then? (More cores, cheaper...) Thank you for your insight; I am sure I will have more questions.
Cores are shareable; what I'm suggesting, and what many people here have confirmed, is that if you intend to use one of the VMs for gaming, then you don't want the variable performance associated with sharing cores. Do you really want to get fragged in your game because your girlfriend applied a complex transform to an image and your frame rate bottomed out? If you were only running desktop applications in each VM, then by all means oversubscribe the cores between VMs. As for AMD, I just don't see them being all that competitive, and they certainly don't have the enterprise-related testing that Intel does.
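For readers wondering what dedicated cores look like in practice, libvirt can pin each vCPU to a host core in the domain XML; a minimal sketch (the core numbers 2-5 are illustrative - match them to your topology):

<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
</cputune>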
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
Dear friends,
After one of the kernel updates, I think, I have developed a strange problem.
Everything works fine until the drivers are installed (the latest version I tried was Catalyst 14.11.2 Beta); after installing the display driver I get a reboot loop.
Windows 8.1 Enterprise
Radeon 7970
supermicro ~ # uname -a
Linux supermicro 3.19.0-1-mainline #1 SMP PREEMPT Mon Jan 26 00:18:49 MSK 2015 x86_64 GNU/Linux
supermicro ~ # lspci |grep -i vga
03:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/8970 OEM / R9 280X]
82:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 620] (rev a1)
supermicro ~ # cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-linux-mainline root=UUID=89c5cccc-bc84-4e48-aa7a-4c2da2c3c0ff rw rootflags=subvol=root_subvolume intel_iommu=on pci-stub.ids=1002:6798,1002:aaa0
supermicro ~ # cat /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1
options vfio_iommu_type1 allow_unsafe_interrupts=1
supermicro ~ # zcat /proc/config | grep VFIO
CONFIG_VFIO_IOMMU_TYPE1=m
CONFIG_VFIO=m
CONFIG_VFIO_PCI=m
CONFIG_VFIO_PCI_VGA=y
CONFIG_VFIO_PCI_MMAP=y
CONFIG_VFIO_PCI_INTX=y
CONFIG_KVM_VFIO=y
supermicro ~ # lsmod | grep vfio
vfio_iommu_type1 17118 1
vfio_pci 35525 2
vfio 18477 6 vfio_iommu_type1,vfio_pci
supermicro ~ # cat /home/nikitos/vm_sources/w8q35/w8.sh
/usr/sbin/qemu-system-x86_64 \
--enable-kvm -M q35 -cpu host -balloon none \
-monitor none -display none -vga none -nographic \
-m 4096 -smp 6,sockets=1,cores=6,threads=1 \
-bios /usr/share/qemu/bios.bin \
-boot menu=on \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=03:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,rombar=0,romfile=MSI.HD7970.3072.130104.rom \
-device vfio-pci,host=03:00.1,bus=root.1,addr=00.1 \
-device vfio-pci,host=09:00.0,bus=pcie.0 \
-device vfio-pci,host=06:00.1,bus=pcie.0 \
-drive file=/images/vdi/w8system0.qcow2,if=virtio,format=qcow2 \
-rtc base=localtime
P.S. Sorry for my English
Offline
I've seen this more than once now: rombar=0,romfile=/path/to/rom is a bad idea. What this does is disable the PCI ROM BAR, but specify via the device a ROM to load. Sounds strange? It is. QEMU will then place the ROM into the "genroms" section, which disassociates it from the device. The ROM then has to go find the device itself, which may be a pretty common thing for it to do, but it is really just a pointless exercise imposed by the user. The only time to use rombar=0 with vfio-pci is to disable the device from having a ROM BAR at all. If you simply pass romfile=/path/to/rom it will override the physical ROM. There is absolutely no need to both disable the ROM BAR and still provide a ROM. </psa>
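Applied to the command line from the previous post, the two legitimate forms would be (a sketch based on the poster's device; pick one, never both):

# override the physical ROM while keeping it attached to the device's ROM BAR:
-device vfio-pci,host=03:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,romfile=MSI.HD7970.3072.130104.rom
# or, only when the device should expose no ROM at all:
-device vfio-pci,host=03:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,rombar=0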
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
Just wanted to report an OVMF success story. For a while I had been running a standard VGA passthrough setup, but this weekend I decided to experiment with OVMF, following the instructions.
Works perfectly. Windows 8.1 runs flawlessly in EFI mode with TianoCore. I can now use the stock Arch kernel, and I have DRI on the host iGPU.
Kernel: 3.18.2-2-ARCH
CPU: Intel Core i5-4690
Host GPU: Intel HD Graphics 4600
Guest GPU: AMD Radeon R7 265 (rebranded HD 7850)
I will say that this bit,
If you remove the "Graphics" (ie. VNC/Spice) and "Video" (ie. VGA/QXL/Cirrus) devices from the VM, the assigned GPU will be the primary display.
didn't work for me; with emulated graphics disabled, the VM ran headless and did not use the attached GPU.
Laptop: Lenovo L440, Intel Core i3-4000M, HD Graphics 4600
Desktop: Intel Core i5-4690, HD Graphics 4600, AMD Radeon R7 265 (KVM VGA passthrough)
Offline
I will say that this bit,
http://vfio.blogspot.com/2014/08/primary-graphics-assignment-without-vga.html wrote:If you remove the "Graphics" (ie. VNC/Spice) and "Video" (ie. VGA/QXL/Cirrus) devices from the VM, the assigned GPU will be the primary display.
didn't work for me; with emulated graphics disabled, the VM ran headless and did not use the attached GPU.
How sure are you that your GPU has a UEFI ROM? AIUI, AMD GPUs often work as a secondary device regardless of VGA support. If it's not running as primary, it's possible you might simply have been able to drop x-vga=on and let the driver initialize the card without VGA.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
aw wrote:How sure are you that your GPU has a UEFI ROM?
I just ran your rom-parser program, and indeed the card seems to lack a UEFI ROM, despite being relatively recent.
Valid ROM signature found @0h, PCIR offset 22ch
PCIR: type 0, vendor: 1002, device: 6819, class: 030000
PCIR: revision 0, vendor revision: f2c
Last image
So I'm guessing that the setup works because the card doesn't need VGA initialization.
EDIT: The Radeon card is definitely being used as a secondary graphics card.
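For anyone wanting to repeat the check, the ROM can be dumped through sysfs and fed to rom-parser (the PCI address is an example; adjust it to your card):

# enable ROM reads, dump the image, then disable reads again
cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom
cat rom > /tmp/image.rom
echo 0 > rom
rom-parser /tmp/image.rom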
Last edited by rman (2015-01-28 03:06:57)
Laptop: Lenovo L440, Intel Core i3-4000M, HD Graphics 4600
Desktop: Intel Core i5-4690, HD Graphics 4600, AMD Radeon R7 265 (KVM VGA passthrough)
Offline
@aw:
It seems that the DMAR issues made it into 3.18.3 too.
3.18.3 contains 2 IOMMU commits:
iommu/vt-d: Fix dmar_domain leak in iommu_attach_device
iommu/vt-d: Fix an off-by-one bug in __domain_mapping()
Which one do you think is to blame?
Offline