@aw:
Thanks for taking the time to provide such information. You rock!
And yes, only the K4s are single-slot.
Maybe Pascal would allow them to cram more power into the single-slot spec.
Offline
@Duelist:
I wish there were high-end single-slot GPUs. I want to virtualize all of the house's PCs, but I have no PCI-E slots remaining, because both of my GPUs are dual-slot.
Oh, and I wish there were PCI-E lane splitters or something. That would allow you to split a 16x into two 8x, and an 8x into two 4x.
Is that a Fractal Design Core 1000? That case used to house my gaming rig before I virtualized it.
http://www.cirrascale.com/blog/wp-conte … _Riser.jpg
If I only had PCI-E 3.0, I'd stuff my PC with cards like a real madman.
The case is... wait for it... a GMC High Five, why-the-hell-did-I-buy-it-in-the-first-place edition.
I'm dreaming of some horizontal-mount case, but all I see are 4U server cases, the CM HAF XB and one other company that makes horizontal or "flipped" (IO ports facing down or up) motherboard mounts. Since all my GPUs are passively cooled, they would really run cooler if air convection worked the right way.
Well... risers will help you with your PCI-E slot problem.
Buy something that has an insane amount of PCI-E 16x slots, a bunch of regular risers and a bunch of those Cirrascale pieces of awesome, and... push the CPU's limit on PCI-E lanes. My CPU supports up to 24 lanes, but there's no such motherboard.
And remember the power supply - I'm pushing my not-so-fresh 550 W FSP to its limit (if I overclock my CPU and try to burn something on the DVD writer (which I occasionally do) while mining/bruteforcing something on the Radeons and playing Quake on the Linux host, it racks up a total of 600 W). Gladly, I have a soldering iron, an oscilloscope and a bunch of stuff to repair most of the electrical damage that may happen. I hope.
BTW, the "splitter", AFAIK, is called a "bridge". And yeah, it really does split. Examine that Cirrascale thingie, it's interesting.
http://vr-zone.com/uploads/14361/P1020933.jpg?4285f0
That (or something else with a less hilarious design), with those Cirrascale things. 7x4 = 28 GPUs total. That would be insanely awesome.
But what if we go deeper and put that "riser" into its clone, making 7 ports out of one?
I guess ACS and bridges and stuff would go batshit insane, but heck, it'd be awesome. The PC would look like some nuclear power plant, especially if the GPUs were something Fiji-based with those huge water coolers.
Oh well, sweet insane dreams and fantasies.
...whoa, that PLX PEX 8780 is 80-Lane, 20-Port PCI Express Gen 3 (8 GT/s) Switch. 80-lane. 20 port. Just wire it out right. I guess aw will shatter my dreams by saying that this switch doesn't do ACS or something else.
-- mod edit: read the Forum Etiquette and only post thumbnails http://wiki.archlinux.org/index.php/For … s_and_Code [jwr] --
Last edited by Duelist (2015-06-28 19:46:55)
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
...whoa, that PLX PEX 8780 is 80-Lane, 20-Port PCI Express Gen 3 (8 GT/s) Switch. 80-lane. 20 port. Just wire it out right. I guess aw will shatter my dreams by saying that this switch doesn't do ACS or something else.
From the product brief:
PEX8780 Key Features
o Standards Compliant
- PCI Express Base Specification, r3.0
(compatible w/ PCIe r1.0a/1.1 & 2.0)
- PCI Power Management Spec, r1.2
- Microsoft Windows Logo Compliant
- Supports Access Control Services
- Dynamic link-width control
- Dynamic SerDes speed control
Kudos to PLX
Edit: GRID K1 & K2 use PLX PEX 8747 and have proper ACS isolation.
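(For the curious: whether a given bridge or switch port actually exposes ACS can be checked from the host with lspci - a generic sketch, the slot address below is made up.)
# show the Access Control Services capability (if any) of a downstream port
sudo lspci -vvv -s 03:00.0 | grep -iA2 'Access Control Services'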
Last edited by aw (2015-06-28 20:13:01)
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
@Duelist:
That's one nice post, man!! My ASUS X99-E WS has 2 PLX 8747 bridges (7 slots) and I'm filling them up already.
Why don't you get a case like mine?
http://i.imgur.com/imkMJy9.jpg
The SAS backplanes are good quality (mine are, anyway), but the overall plastic parts are cheap and prone to cracking. You can see in the image that the power button's cover is gone and one of the HDD trays is cracked.
@aw: PLX bridges are good for VFIO and passthrough, but some motherboard makers screw it up somehow, like my Z77 from ASRock. It used to crash hard whenever I launched the VM. Now with my current ASUS X99-E WS, everything behind the bridges works like a charm, as if it were tied directly to the CPU's root complex.
Anyway, I've finally purchased an SSD to store my raw VM images (created using truncate). What is the best filesystem to use for that SSD?
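(Side note: either of the following creates the kind of sparse raw image Denso describes - a minimal sketch, path and size made up.)
# a 120 GiB raw image that only consumes space as the guest writes to it
truncate -s 120G /mnt/ssd/win81.img
# qemu-img does the equivalent for format=raw, and "info" later shows virtual vs. actual size
qemu-img create -f raw /mnt/ssd/win81.img 120G
qemu-img info /mnt/ssd/win81.img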
-- mod edit: read the Forum Etiquette and only post thumbnails http://wiki.archlinux.org/index.php/For … s_and_Code [jwr] --
Offline
That was actually something that I was curious about. I have 2 Samsung 840 Pros in an mdadm RAID 0, but I swear it hasn't really improved performance; if anything, I notice lag in games, such as loading textures or jumping around to view different players on a map - it takes a split second for things to load.
I've had a stable setup for some time now, but need to work on performance tuning.
Offline
@The_Moves:
I am certain that putting my VM files on this new SSD (an 850 EVO with XFS on top of it) didn't improve performance over my old 5x 7200 rpm MD RAID 0. If anything, there is a little improvement in Windows boot times, but launching applications (ESPECIALLY FIREFOX!!!) remains without a noticeable improvement.
Congrats on reaching the stable milestone!
Last edited by Denso (2015-06-29 04:46:29)
Offline
That was actually something that I was curious about. I have 2 Samsung 840 Pros in an mdadm RAID 0, but I swear it hasn't really improved performance; if anything, I notice lag in games, such as loading textures or jumping around to view different players on a map - it takes a split second for things to load.
I've had a stable setup for some time now, but need to work on performance tuning.
Damn! I notice those effects in most games, no matter whether they're running from a raw image on the system's ext4 OCZ Agility 3 SSD, or from a raw block device, the latest (FZEX) WD Black HDD.
ATTO Disk Benchmark shows no problems - 200 MB/s read/write on the SSD and specification-correct (180?.. I don't remember) speeds on the HDD - but the lag is still there.
I've tried every combination of virtio disk setups: virtio-blk-pci, virtio-scsi, native and writethrough and whatever other cache modes, multiqueue... Nothing helped much. Maybe CPU pinning helps to address those issues?
The memory shouldn't be a problem - aw says it's pinned in place and isn't swappable or moved in any way while the devices are passed through.
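(For reference, one concrete combination of the knobs Duelist lists - a sketch only, with made-up paths and IDs, and explicitly not a known fix.)
# raw image, host page cache bypassed (cache=none), native Linux AIO,
# and the virtio-blk device serviced by its own iothread
qemu-system-x86_64 -enable-kvm -m 4096 \
    -object iothread,id=io1 \
    -drive file=/mnt/ssd/guest.img,format=raw,if=none,id=vd0,cache=none,aio=native \
    -device virtio-blk-pci,drive=vd0,iothread=io1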
Last edited by Duelist (2015-06-29 10:44:59)
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
iommu=pt might help improve the latency of the host disk, and therefore of the guest disk image stored on that disk.
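(For anyone who wants to try it, a sketch of where the option goes on a GRUB-based host - file paths and the regeneration command differ per distro.)
# /etc/default/grub - append iommu=pt to the existing kernel command line
GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"   # keep whatever options you already have
# then regenerate the config and reboot, e.g.
grub-mkconfig -o /boot/grub/grub.cfg       # Arch
grub2-mkconfig -o /boot/grub2/grub.cfg     # Fedora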
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
@aw, I read on your blog that you would need a wrapper script to do Q35 on libvirt with Windows. How does that work out?
Offline
@noctlos
Something like this:
#!/bin/sh
exec qemu-kvm \
    `echo "$@" | \
    sed 's|i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e|i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=pcie.1|g' | \
    sed 's|bus=pci.2,addr=0x4|bus=pcie.1,addr=0x0.0|g'`
It could use some work; this is just something I crudely hacked together. The idea is to key on the i82801b11-bridge that libvirt adds in order to also add a root port, then move the GPU, which happens to be pci.2/0x4 in my setup, to pcie.1/0x0.0. Improved libvirt support for Q35 is coming soon.
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
Nothing helped much. Maybe CPU pinning helps to address those issues?
I have pinned CPUs now, two cores w/ two threads on my X5660, and I still experience the issue. I was thinking of adding another core/thread, but from reading your response that may not help. Plus, while watching NMON (with options L and c enabled) it didn't appear that I was CPU-bound anyway. I'm sure there are better tests for this though. I was also considering updating the kernel, from 3.18.7 to 4. Still running Fedora 21.
I will try AW's suggestion.
Last edited by The_Moves (2015-06-29 16:22:31)
Offline
@aw
I see. Since qemu-kvm is not part of my path, I would rectify that as `qemu-system-x86_64 -enable-kvm`, right? Also, how do I get libvirt to use that wrapper script?
Offline
@aw
I see. Since qemu-kvm is not part of my path, I would rectify that as `qemu-system-x86_64 -enable-kvm`, right? Also, how do I get libvirt to use that wrapper script?
Change qemu-kvm to qemu-system-x86_64, but do not start adding random options. You're modifying how libvirt calls QEMU, not creating a full command line. virsh edit the domain and update the <emulator> tag to point to the wrapper script.
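(A sketch of that wiring, assuming the wrapper was saved as /usr/local/bin/qemu-wrapper.sh and the domain is called win7 - both names made up.)
chmod +x /usr/local/bin/qemu-wrapper.sh
virsh edit win7
# inside the domain XML, change
#   <emulator>/usr/bin/qemu-system-x86_64</emulator>
# to
#   <emulator>/usr/local/bin/qemu-wrapper.sh</emulator>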
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
Duelist wrote:Nothing helped much. Maybe CPU pinning helps to address those issues?
I have pinned CPUs now, two cores w/ two threads on my X5660, and I still experience the issue. I was thinking of adding another core/thread, but from reading your response that may not help. Plus, while watching NMON (with options L and c enabled) it didn't appear that I was CPU-bound anyway. I'm sure there are better tests for this though. I was also considering updating the kernel, from 3.18.7 to 4. Still running Fedora 21.
I will try AW's suggestion.
Seems like you have enough cores to spare: try isolcpus to fully isolate a CPU from everything except the VM. Maybe THAT will help.
Because, @aw, I have iommu=pt and it doesn't help much.
Oh, and btw...
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde PRO [Radeon HD 7750/8740 / R7 250E]
01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series]
02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde PRO [Radeon HD 7750/8740 / R7 250E]
02:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series]
03:05.0 Multimedia audio controller: Ensoniq 5880B [AudioPCI] (rev 02)
04:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 610] (rev a1)
04:00.1 Audio device: NVIDIA Corporation GF119 HDMI Audio Controller (rev a1)
05:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 09)
07:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde PRO [Radeon HD 7750/8740 / R7 250E]
07:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series]
You know what this means;)
...
/sys/kernel/iommu_groups/8/devices:
0000:00:15.0 0000:00:15.2 0000:04:00.0 0000:05:00.0 0000:07:00.0
0000:00:15.1 0000:00:15.3 0000:04:00.1 0000:06:00.0 0000:07:00.1
D'oh! How could I forget...
Alright, port change, port change!
Oh no, we're going down! 04:00.0 is in that group too! Noooooo!
00:15.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Hudson PCI to PCI bridge (PCIE port 0)
00:15.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Hudson PCI to PCI bridge (PCIE port 1)
00:15.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Hudson PCI to PCI bridge (PCIE port 2)
00:15.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Hudson PCI to PCI bridge (PCIE port 3)
-[0000:00]-+-00.0
+-00.2
+-02.0-[01]--+-00.0
| \-00.1
+-04.0-[02]--+-00.0
| \-00.1
+-11.0
+-12.0
+-12.2
+-13.0
+-13.2
+-14.0
+-14.2
+-14.3
+-14.4-[03]----05.0
+-15.0-[04]--+-00.0
| \-00.1
+-15.1-[05]----00.0
+-15.2-[06]----00.0
+-15.3-[07]--+-00.0
| \-00.1
...
Yeaaaaah, those are not processor root ports.
Seems like I must make the host headless (detach the GT610) and networkless (use a PCI or USB network card) to be able to pass through the full group...
I guess there's no real way of shuffling the IOMMU groups around without changing the motherboard, right?
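(For anyone following along, listings like the one above can be produced with a small loop over sysfs - a generic sketch, nothing system-specific.)
#!/bin/sh
# print every IOMMU group and the devices it contains
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        printf '    '
        lspci -nns "${d##*/}"
    done
done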
Alright, multiply the weirdness:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='kvm'/>
  <source>
    <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
  </source>
  <rom bar='on' file='/mnt/hdd/qemu/hybridmagic.rom'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='kvm'/>
  <source>
    <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
</hostdev>
Note the driver name: sorry, aw, but I'll use pci-assign for select devices.
The VM booted, but I observe some weird things happening; the main card restarts its driver. Seems like they're doing their handshake mating dance...
After a dozen screen blinks, they've settled down.
http://i.imgur.com/Mhr3izX.png
Sorry for the Russian locale, but it's a habit...
(I don't know wtf that Mirage driver is)
The offline device is QXL.
Well, now the fun part: GPU-Z, some benchmarks and maybe a CrossFire.
http://i.imgur.com/i4JfRdt.png
CCC is overwhelmed by awesomeness (it already was with only two cards) and refuses to launch :D
CrossFireX is online. It is enabled, and GPU-Z 0.8.2 shows that it has 2 GPUs. Out of three.
It turned on much faster than usual, though...
The thing is - I don't do bridges. My cards are the first with GCN and XDMA CrossFire, that's why it works with two cards in the first place.
I remember that AMD Support told me that I would need an even number of cards.
But there's tri-way CrossFire on some 7XXX cards, and there are configs even without the bridges. Something is amiss.
And I can't really determine which two cards are being used.
After I turned off CrossFire, GPU-Z crashed the driver, but it was reset by the system, so I was able to see three GPUs with some missing info. After a second launch of GPU-Z, the driver hung and the system hung too. After a forced power-off, the VM no longer boots, eating 100% CPU. Time to reboot the host.
After a host reboot... the second card went offline, the third card works. CrossFire doesn't work, but I'm able to use its video outputs. So it works as a separate device, but CrossFire is unavailable... Hmm, what could be the reason... Maybe the bus width?
PCI-E 2.0 x1 = 5 Gbps = 500 MB/s. A CrossFire bridge is capable of 900 MB/s.
So that's why XDMA CrossFire was introduced - PCI Express bandwidth was big enough to pump all the CrossFire traffic through itself, and all AMD needed was to enable some way of DMA communication between the devices.
If I had PCI-E 3.0, even on 1x I'd have ~984 MB/s, as stated here.
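(For reference, the per-lane arithmetic behind those numbers, assuming the usual encoding overheads:)
PCI-E 2.0: 5 GT/s x 8/10 (8b/10b encoding) = 4 Gbit/s = ~500 MB/s per lane
PCI-E 3.0: 8 GT/s x 128/130 encoding = ~7.88 Gbit/s = ~985 MB/s per lane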
Oh well. Is there a way of simulating higher bus throughput? I honestly don't believe that it really fills all of that 900 MB/s, and it may be a simple software block.
So I guess the final result is a semi-win: I can run three VMs simultaneously, if I have three spare screens. Or run a double-CrossFire VM and a single-card VM, needing only two spare screens.
CrossFire doesn't work with the third card if the second is offline. I wonder what will happen if I remove the second card from the VM and try CrossFire, but something hints to me that the result will be a fail.
Surprisingly though, that lone third card is self-contained - I can run a VM from that card alone, BUT with some buts:
1. One VM launch per host boot. Legacy pci-assign doesn't do any resets right.
2. No VGA at all! Since I have it broken anyway - I use OVMF, so it works.
3. It doesn't fit in the case.
All those problems may be fixed by PCI-E 3.0 with more processor root ports and just more slots - I have no free PCI-E slots left on my F2A55. Also, PCI-E 3.0 alone would make CrossFire through 1x possible, I think.
I'll photograph that madness tomorrow, for the lulz. That was one hell of a day.
-- mod edit: read the Forum Etiquette and only post thumbnails http://wiki.archlinux.org/index.php/For … s_and_Code [jwr] --
Last edited by Duelist (2015-06-29 21:02:53)
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline
Hi,
Debian user here, but I decided to drop by and share my experience with Qemu + KVM and passthrough, since all the relevant information is here. Thanks to everyone involved in creating the tools and documentation for doing this; it has been immensely useful.
I've set up my system with Debian as the host for work and general desktop usage and a Windows 7 guest as my toy OS for games. The system:
- Custom kernel 4.0.5 with all the necessary KVM and VFIO features enabled
- Qemu 2.1.2 (I later compiled a build of 2.3.0)
- Intel DP67BG. Needs the ACS patch and allow_unsafe_interrupts.
- i7 2600
I actually got those back in 2011 with the intention of using Xen but didn't, partly because of laziness and partly because I had Nvidia cards which meant trouble without Quadro hacks. Better late than never.
- Titan X for Windows, a GT430 for Linux
I initially passed through the GT430 to make sure everything works. It did, though the VM would only start once per boot. Suspending and resuming between VM starts sufficed. The Titan X, however, does not have that problem. I can stop and start it as many times as I want without fail!
- Delock KVM switch for USB devices. It functions as a USB hub as well so I need just one connection for each OS. There are three separate USB controllers on the DP67BG so I pass through one of them to Windows.
- Audioengine D1, a USB DAC for audio. Emulated sound through Qemu was completely distorted and stuttery no matter what settings I tried so I needed to pass through audio. I connected the D1 to the KVM switch as well. That did NOT work well. In addition to short dropouts the audio would occasionally completely cut off for about two seconds before catching up again. Very annoying. I connected it directly to a passed through USB port and the total cutoffs have not occurred since (for the past day, knock on wood). I suppose putting a time-sensitive USB device behind a hub on a VM is asking for trouble. I'll have to figure out a hassle-free way to play audio from either OS to one set of headphones. Manually plugging/unplugging cables is the last resort.
- A raw format disk image on an ext4 formatted SSD partition used as a VirtIO drive. As I debugged my skipping audio, I noticed disk IO causes rather high amounts of DPC latency in the guest. Games produce noticeable hiccups whenever they are loading stuff. I'm planning on passing through the second SATA controller (Marvell) on the motherboard so I can forget about optimizing IO. Its only connector is an eSata in the back panel so I'll have to route the cable back in the case.
Speaking of DPC latency in the guest, I'm getting oddly inconsistent readings. Sometimes the baseline is ~1000 µs, sometimes ~2000 µs. I can't figure out why. Setting the cpu governor to performance on the host halves this to either 500 or 1000 µs. Any ideas to what's causing this or if the values are higher than they should be?
Another question regarding vCPU = pCPU pinning which I don't quite understand: I have so far run into CPU bottlenecks in some games that didn't exist on a bare-metal OS. Let's say I want to provide all the CPU performance to the guest should it need it. The i7 2600 is a hyper-threaded quad-core, so I use -smp 8,sockets=1,cores=4,threads=2. Since I won't use both systems at the very same time, I don't care if either one of them can hog all the resources. In fact, it would be desirable. Do I benefit anything from, say, using taskset 0xFFFFFFFF, which according to the man page means "all processors"? I have noticed that the CPU usage seems to lack affinity on the guest, with all 8 vCPUs often showing some usage. On bare-metal Windows, the CPU usage was much more concentrated, with one or two cores typically maxed out and others partially loaded. I do not know if this is a symptom of anything or related to the performance issues at all. I'd appreciate any insight on this.
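(Not an answer, but for experimenting with affinity on a libvirt-less setup like the one above, something along these lines can be used. A rough sketch: the process-matching pattern is made up, and it pins every QEMU thread round-robin rather than just the vCPU threads, which would need QMP's query-cpus to identify precisely.)
#!/bin/sh
# find the QEMU process for the Windows guest (the pattern is hypothetical)
QPID=$(pgrep -f 'qemu-system-x86_64.*win7' | head -n 1)
CORE=0
# bind each thread of that process to one host CPU in turn (8 host CPUs assumed)
for TID in /proc/"$QPID"/task/*; do
    taskset -cp "$CORE" "${TID##*/}"
    CORE=$(( (CORE + 1) % 8 ))
done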
Offline
Hello geeky KVM users!
I am stuck while trying to set up passthrough for my NVIDIA compute GPU cards (Tesla K20c). The cards do not have ports to plug monitor cables into; they are only meant for computation. My host is "uname -a: Linux server 3.13.0-55-generic #94-Ubuntu SMP Thu Jun 18 00:27:10 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux" (I know Ubuntu is not Arch Linux, but it seems if you know Arch Linux you know everything). My guest is Windows 7, 64-bit. Below is my startup script. Without the line that enables GPU passthrough (containing vfio-pci), I can boot and run the Windows guest successfully. But when I enable the passthrough, QEMU crashes with the error shown at the end of this post. Any help is much appreciated.
#!/bin/bash
configfile=/etc/vfio-pci1.cfg
vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo "Binding vendor: $vendor and device: $device"
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}
echo Loading vfio...
sudo modprobe vfio-pci
echo Start binding devices...
cat $configfile | while read line;do
echo $line | grep ^# >/dev/null 2>&1 && continue
vfiobind $line
done
echo Done binding devices
echo Starting virtual machine ...
sudo qemu-system-x86_64 -enable-kvm -M q35 -m $((1024*32)) -cpu host \
-smp cpus=2,sockets=2,cores=4,threads=1 \
-bios /usr/share/qemu/bios.bin -vga std \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,multifunction=off,x-vga=off,host=02:00.0,bus=root.1,addr=00.0 \
-drive file=/data/win7kvm.img,id=disk,format=raw -device ide-hd,bus=ide.0,drive=disk \
-drive file=/data/Windows7.iso,id=isocd -device ide-cd,bus=ide.1,drive=isocd \
-boot menu=on \
-runas kvmuser
echo Done!
exit 0
QEMU crash message when enabling vfio-pci passthrough for the GPU:
qemu-system-x86_64: vfio_dma_map(0x7ff529f9ecb0, 0xc0000, 0x8000, 0x7fed080c0000) = -12 (Cannot allocate memory)
qemu: hardware error: vfio: DMA mapping failed, unable to continue
CPU #0:
EAX=80000033 EBX=80000090 ECX=00000033 EDX=00000cfd
ESI=00000001 EDI=00000090 EBP=00000097 ESP=00006f94
EIP=ffff1e1c EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA]
CS =0008 00000000 ffffffff 00c09b00 DPL=0 CS32 [-RA]
SS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA]
DS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA]
FS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA]
GS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA]
LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT
TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS32-busy
GDT= 000f6f98 00000037
IDT= 000f6fd6 00000000
CR0=60000011 CR2=00000000 CR3=00000000 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000 XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000 XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000 XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000 XMM07=00000000000000000000000000000000
CPU #1:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=000206d7
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT= 00000000 0000ffff
IDT= 00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000 XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000 XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000 XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000 XMM07=00000000000000000000000000000000
Last edited by biocyberman (2015-06-29 22:33:55)
Offline
So I have a question for everybody in the house:
Using rsync to back up the VMs' HDD images while the VMs are running causes the host to crash hard.
Is there any way to back up while the VMs are running? It has to be from within the host, not the guest, and it has to be rsync, for its --inplace feature.
Offline
So I have a question for everybody in the house:
Using rsync to back up the VMs' HDD images while the VMs are running causes the host to crash hard.
Is there any way to back up while the VMs are running? It has to be from within the host, not the guest, and it has to be rsync, for its --inplace feature.
Surely there are many ways of doing this, but I wonder how you could possibly be using rsync for this. Does this mean that you have mounted a single block device both in the guest and in the host? If so, then it's no surprise that one of them is crashing. FWIW, I am using the snapshot feature of ZFS, with zvols as the block devices attached to the virtual machines. I have also installed qemu-ga (from the stable virtio-win) inside the guest and use it to freeze the guest filesystem right before taking a snapshot.
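(A sketch of that workflow, assuming a zvol named tank/vm/win7 backing a libvirt domain called win7 with the qemu-ga channel configured - all names made up.)
#!/bin/sh
# quiesce the guest filesystems through qemu-ga, snapshot the zvol, then thaw
virsh domfsfreeze win7
zfs snapshot tank/vm/win7@backup-$(date +%F)
virsh domfsthaw win7
# the snapshot can be replicated later without touching the live zvol
zfs send tank/vm/win7@backup-$(date +%F) | ssh backuphost zfs recv -F backup/win7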
Last edited by Bronek (2015-06-30 09:44:05)
Offline
Denso, please don't hijack threads: https://wiki.archlinux.org/index.php/Fo … _hijacking
Offline
I'm using OVMF and passing through the onboard P9X79 PRO "ASMedia USB 3.0 controller":
When a keyboard is attached to the controller, the VM hangs for 2-3 minutes before booting Windows 8.1 (on POST) and the USB controller no longer works in the VM. When I physically disconnect the keyboard BEFORE starting the VM it works normally, and I can attach it later once the OS boots (but I need to disconnect it again if I reboot the OS).
In other words, it seems like OVMF doesn't like having the controller + keyboard attached to it while booting?
Can I somehow disable USB or keyboard support before booting?
What could be the problem? Is there a known fix I can't seem to find? I've tried a lot of combinations already... Please help!
EDIT:
Using UEFI minimal Fedora 22 with everything latest and greatest (virt-preview)
Last edited by devianceluka (2015-06-30 10:46:14)
Offline
I'm using OVMF and passing through the onboard P9X79 PRO "ASMedia USB 3.0 controller":
When a keyboard is attached to the controller, the VM hangs for 2-3 minutes before booting Windows 8.1 (on POST) and the USB controller no longer works in the VM. When I physically disconnect the keyboard BEFORE starting the VM it works normally, and I can attach it later once the OS boots (but I need to disconnect it again if I reboot the OS).
I had the same issue with my ASMedia USB 3.0 controllers. I would be able to boot the VM with keyboard and mouse connected, but they would randomly just stop working, and only a reboot of the host would fix them. Instead, I have started passing through my onboard USB 2.0 controllers and have had no such stability issues. I would stay away from the ASMedia USB 3.0 controllers.
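(If it helps: which PCI USB controller a given port belongs to - and therefore which one to pass through - can be checked from the host. A generic sketch; the bus number and resulting address are just examples.)
# show the USB topology: which root hub (bus) each device hangs off
lsusb -t
# map a root hub back to its PCI function, e.g. for bus 3:
readlink -f /sys/bus/usb/devices/usb3
# -> .../pci0000:00/0000:00:1a.0/usb3  => the controller to pass through is 00:1a.0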
Last edited by The_Moves (2015-06-30 19:53:57)
Offline
nbhs wrote:Yes deveject is used to automate the ejecting, but you can eject the card from windows just like you would eject a pendrive.
EDIT: make sure ejecting is done on the vm first run.
deveject works perfectly, the VM switches from the Cirrus card to the Radeon on bootup and back on shutdown.
I'm still a bit unclear on the virtio stuff though. I downloaded the drivers, but how do I install them (srsly, Windows is confusing…)?
I'm using OVMF to pass through an EVGA GTX 960 card, and I'm experiencing the reset issue too (the VM cannot be restarted without a host restart).
I searched for deveject on Google, but there are multiple projects called deveject, and it seems they are all for USB devices.
Could you give the link to the deveject you used?
Also, is there any way to reset the card from the host side? E.g. power the card off/on with some Linux command?
Thanks!
Offline
The_Moves wrote:Duelist wrote:Nothing helped much. Maybe CPU pinning helps to address those issues?
I have pinned CPUs now, two cores w/ two threads on my X5660, and I still experience the issue. I was thinking of adding another core/thread, but from reading your response that may not help. Plus, while watching NMON (with options L and c enabled) it didn't appear that I was CPU-bound anyway. I'm sure there are better tests for this though. I was also considering updating the kernel, from 3.18.7 to 4. Still running Fedora 21.
I will try AW's suggestion.
Seems like you have enough cores to spare: try isolcpus to fully isolate a CPU from everything except the VM. Maybe THAT will help.
Because, @aw, I have iommu=pt and it doesn't help much.
I tried the CPU pinning; it may have helped a little, but I do not think I am fully set up correctly. Here is a snippet from my VM config as well as my /etc/default/grub:
# GamingMachine.xml:
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='8'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='9'/>
</cputune>
# Using the above, I set up the CPUs below for isolation
# Snippet from the GRUB_CMDLINE line in /etc/default/grub:
isolcpus=2,3,8,9
I re-read the OP and it brought me here: http://www.linux-kvm.com/content/tip-ru … cific-cpus
Using the commands there, I noticed that my VM was not in fact running on the cores I specified:
# Original Setup
[root@kvmhost1 ~]# taskset -p 1214
pid 1214's current affinity mask: fff
[root@kvmhost1 ~]# taskset -c -p 1214
pid 1214's current affinity list: 0-11
# Correcting the affinity
[root@kvmhost1 ~]# taskset -p -c 2,3,8,9 1214
pid 1214's current affinity list: 0-11
pid 1214's new affinity list: 2,3,8,9
# Final
[root@kvmhost1 ~]# taskset -c -p 1214
pid 1214's current affinity list: 2,3,8,9
[root@kvmhost1 ~]# taskset -p 1214
pid 1214's current affinity mask: 30c
I have yet to test the above tweak, as I found it while on my lunch break. Why isn't the <cputune> section in my VM config setting the affinity for the PID that represents my VM?
As for iommu=pt, I started getting strange jerkiness in games, so I have removed that flag. Maybe some other kernel flags are conflicting with it, or maybe I'm missing something. It makes sense that iommu=pt should help performance, since it eliminates overhead - which I read here: https://bugzilla.redhat.com/show_bug.cgi?id=1201503
Here is my full GRUB_CMDLINE line from /etc/default/grub:
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora-server/root rd.lvm.lv=fedora-server/swap rd.driver.blacklist=nouveau kvm-intel.nested=1 intel_iommu=on isolcpus=2,3,8,9 pci-stub.ids=8086:3a34,8086:3a35,8086:3a36,8086:3a3a,1102:0012,1103:0611,1b4b:91a4,1b21:1042,1000:0072,10de:1187,10de:0e0a,8086:10d3,8086:105e"
Offline
I tried the CPU pinning; it may have helped a little, but I do not think I am fully set up correctly. Here is a snippet from my VM config as well as my /etc/default/grub:
# GamingMachine.xml:
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='8'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='9'/>
</cputune>
# Using the above, I set up the CPUs below for isolation
# Snippet from the GRUB_CMDLINE line in /etc/default/grub:
isolcpus=2,3,8,9
Without knowing the host cpu or guest cpu topology, this is pretty meaningless.
I re-read the OP and it brought me here: http://www.linux-kvm.com/content/tip-ru … cific-cpus
Using the commands there, I noticed that my VM was not in fact running on the cores I specified:
# Original Setup
[root@kvmhost1 ~]# taskset -p 1214
pid 1214's current affinity mask: fff
[root@kvmhost1 ~]# taskset -c -p 1214
pid 1214's current affinity list: 0-11
# Correcting the affinity
[root@kvmhost1 ~]# taskset -p -c 2,3,8,9 1214
pid 1214's current affinity list: 0-11
pid 1214's new affinity list: 2,3,8,9
# Final
[root@kvmhost1 ~]# taskset -c -p 1214
pid 1214's current affinity list: 2,3,8,9
[root@kvmhost1 ~]# taskset -p 1214
pid 1214's current affinity mask: 30c
I have yet to test the above tweak, as I found it while on my lunch break. Why isn't the <cputune> section in my VM config setting the affinity for the PID that represents my VM?
You're messing with the CPU affinity of the QEMU process, not the vCPUs. There's an <emulatorpin> control for this: https://libvirt.org/formatdomain.html#elementsCPUTuning
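(A sketch with virsh, assuming the domain is named GamingMachine after the XML above - the emulator cpulist is just an example.)
# pin the four vCPUs to the host CPUs from the <cputune> section
virsh vcpupin GamingMachine 0 2
virsh vcpupin GamingMachine 1 8
virsh vcpupin GamingMachine 2 3
virsh vcpupin GamingMachine 3 9
# keep the emulator/IO threads off those cores
virsh emulatorpin GamingMachine 0-1,6-7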
http://vfio.blogspot.com
Looking for a more open forum to discuss vfio related uses? Try https://www.redhat.com/mailman/listinfo/vfio-users
Offline
I think isolcpus should really help by fully isolating host CPUs and dedicating them to the guest, so no IRQs are able to steal the guest's CPU time.
Maybe I misunderstand it.
Last edited by Duelist (2015-06-30 20:09:06)
The forum rules prohibit requesting support for distributions other than arch.
I gave up. It was too late.
What I was trying to do.
The reference about VFIO and KVM VGA passthrough.
Offline