OK, so I think I installed the right package, because I did go the makepkg route. Are these Alex Williamson's branches?
yes
I can't win. I got QEMU, SeaBIOS, and the kernel built with reset support. Still the same issue: the Radeon never initializes unless it's the primary. Is this a motherboard issue? Would there be a way to fake it by having Linux send an image to it beforehand?
Most AMD board + CPU combinations work fine. It seems people with Intel are having some problems, though some have been successful both with the IGP as primary and without it (using another discrete card instead).
Could we have a separate thread to focus on building vfio-reset? This opening post is AMAZINGLY comprehensive for getting things up and running, and it almost feels like the building process would be better served by a separate discussion.
nbhs : Found your 3.9 building package: http://www.filesend.net/download.php?f= … 468af53600 Thanks!
You should try the latest packages I posted; I might have forgotten a commit from Alex's tree.
The building process is on the Arch wiki. Like you mentioned, unpack the file, then run
makepkg -s
then:
pacman -U something_packagever_arch.pkg.tar.xz
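For anyone who hasn't used makepkg before, the whole sequence looks roughly like this (the archive and package names below are placeholders, use whatever the downloaded file is actually called):
tar -xf qemu-vfio.tar.gz        # unpack the PKGBUILD tarball
cd qemu-vfio
makepkg -s                      # -s pulls in missing build dependencies via pacman
pacman -U qemu-vfio-*-x86_64.pkg.tar.xz   # install the built package (run as root)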
Last edited by nbhs (2013-07-29 19:12:40)
I'm not sure if I should ask my question here, but since the "problem" is somewhat related and others may have had the same doubts, I hope this is the right place.
I've got 3 video cards:
- Club3D Radeon HD 4770 750MHz (Memory: 512MB GDDR5 3200MHz)
- XFX Radeon HD 6570 650MHz (Memory: 1GB DDR3 1300MHz)
- Sapphire Radeon HD 5450 650MHz (Memory: 1GB DDR3 1300MHz)
I want to mount them on the same motherboard (ASRock Z87 Extreme6) and pass them through to different VMs: the 4770 to my main Windows guest, the 6570 to my main Linux guest, and I'll be leaving the 5450 to the host.
The thing is, the only configuration the 3 PCIe slots (3.0, though all the cards are 2.0) in my motherboard can assume when all of them are used is x8/x4/x4. Should I even care and put the most powerful card (that would be the 4770) in the x8 slot, or can even that card not hope to saturate an x8 slot, so that I should just place them in the most aesthetically pleasing/cooling-efficient order?
Also, I read that dual/tri-monitor configurations use much more PCIe bandwidth, and I would connect at least two displays to the 5450, even though the cards that would be rendering things for those same displays would be the other two; the 5450 would only be the one to which they are directly connected (since I understand the monitors must be connected to the host's graphics card, or am I wrong?). So should I put the 5450 in the x8 slot after all?
Yeah I'm pretty confused here. Hope you can clarify all this mess for me.
Evonat wrote: I'm not sure if I should ask my question here, but since the "problem" is somewhat related and others may have had the same doubts, I hope this is the right place.
I've got 3 video cards:
- Club3D Radeon HD 4770 750MHz (Memory: 512MB GDDR5 3200MHz)
- XFX Radeon HD 6570 650MHz (Memory: 1GB DDR3 1300MHz)
- Sapphire Radeon HD 5450 650MHz (Memory: 1GB DDR3 1300MHz)
I want to mount them on the same motherboard (ASRock Z87 Extreme6) and pass them through to different VMs: the 4770 to my main Windows guest, the 6570 to my main Linux guest, and I'll be leaving the 5450 to the host.
The thing is, the only configuration the 3 PCIe slots (3.0, though all the cards are 2.0) in my motherboard can assume when all of them are used is x8/x4/x4. Should I even care and put the most powerful card (that would be the 4770) in the x8 slot, or can even that card not hope to saturate an x8 slot, so that I should just place them in the most aesthetically pleasing/cooling-efficient order?
Also, I read that dual/tri-monitor configurations use much more PCIe bandwidth, and I would connect at least two displays to the 5450, even though the cards that would be rendering things for those same displays would be the other two; the 5450 would only be the one to which they are directly connected (since I understand the monitors must be connected to the host's graphics card, or am I wrong?). So should I put the 5450 in the x8 slot after all?
Yeah I'm pretty confused here. Hope you can clarify all this mess for me.
You should probably start with 2 cards and see if it works
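Rough back-of-the-envelope numbers while you decide (my own estimates, not benchmarks): PCIe 2.0 moves roughly 500 MB/s per lane per direction, so
x8 slot: 8 x 500 MB/s ≈ 4 GB/s
x4 slot: 4 x 500 MB/s ≈ 2 GB/s
Cards of that class are unlikely to come close to saturating even the x4 slots in normal use, so ordering them for airflow/cooling is probably fine.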
Last edited by nbhs (2013-07-29 19:23:16)
Well yeah, I'm creating problems when I don't even know if it'll work in the first place. But what can I do, I still haven't got hold of the necessary equipment, aside from the video cards that I already own. Sorry.
But disregarding for a moment whether it'll work or not, just to satisfy my curiosity, would it be more ideal to put the most powerful card in the fastest slot, or reserve it for the host's card, to which the displays are connected?
Evonat wrote: Well yeah, I'm creating problems when I don't even know if it'll work in the first place. But what can I do, I still haven't got hold of the necessary equipment, aside from the video cards that I already own. Sorry.
But disregarding for a moment whether it'll work or not, just to satisfy my curiosity, would it be more ideal to put the most powerful card in the fastest slot, or reserve it for the host's card, to which the displays are connected?
Well, you'll need the first slot for the host GPU, so unless you can get it to work with the IGP, you should put the 5450 in the first one. Also, why would you want another Linux guest?
Last edited by nbhs (2013-07-29 19:35:12)
It seems that if I set the primary card in the host's BIOS to PCI instead of Integrated, then Arch Linux boots on the host using my Radeon card! :s Then when I boot the VM, the display goes blank. Since I've blacklisted the radeon module and used the pci-stub driver, why is Arch Linux still using the Radeon card?
[root@localhost ~]# cat /etc/modprobe.d/blacklist.conf
blacklist radeon
lspci -v...
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti PRO [Radeon HD 7950] (prog-if 00 [VGA controller])
Subsystem: Micro-Star International Co., Ltd. Device 2761
Flags: fast devsel, IRQ 16
Memory at e0000000 (64-bit, prefetchable) [size=256M]
Memory at f7e00000 (64-bit, non-prefetchable) [size=256K]
I/O ports at e000 [size=256]
Expansion ROM at f7e40000 [disabled] [size=128K]
Capabilities: [48] Vendor Specific Information: Len=08 <?>
Capabilities: [50] Power Management version 3
Capabilities: [58] Express Legacy Endpoint, MSI 00
Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
Capabilities: [150] Advanced Error Reporting
Capabilities: [270] #19
Capabilities: [2b0] Address Translation Service (ATS)
Capabilities: [2c0] #13
Capabilities: [2d0] #1b
Kernel driver in use: vfio-pci
Kernel modules: radeon
01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT HDMI Audio [Radeon HD 7970 Series]
Subsystem: Micro-Star International Co., Ltd. Device aaa0
Flags: bus master, fast devsel, latency 0, IRQ 17
Memory at f7e60000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [48] Vendor Specific Information: Len=08 <?>
Capabilities: [50] Power Management version 3
Capabilities: [58] Express Legacy Endpoint, MSI 00
Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
Capabilities: [150] Advanced Error Reporting
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
It seems to still be using the Kernel module... is it possible that this is causing the problem?
[root@localhost ~]# dmesg | grep Kernel
[ 0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-linux-mainline root=/dev/md1 ro quiet pci-stub.ids=1002:679a,1002:aaa0 intel_iommu=on
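(As far as I understand it, the "Kernel driver in use" line is what actually matters, while "Kernel modules" just lists every module that could claim the device. For reference, this is how I'm double-checking what owns the card, using the 01:00.0 address from the lspci output above:)
lspci -nnk -s 01:00.0
ls -l /sys/bus/pci/devices/0000:01:00.0/driver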
Last edited by BulliteShot (2013-07-29 21:13:10)
Evonat wrote: Well yeah, I'm creating problems when I don't even know if it'll work in the first place. But what can I do, I still haven't got hold of the necessary equipment, aside from the video cards that I already own. Sorry.
But disregarding for a moment whether it'll work or not, just to satisfy my curiosity, would it be more ideal to put the most powerful card in the fastest slot, or reserve it for the host's card, to which the displays are connected?
nbhs wrote: Well, you'll need the first slot for the host GPU, so unless you can get it to work with the IGP, you should put the 5450 in the first one.
If I remember correctly, some Gigabyte motherboards allow for selecting which PCIe slot boots first, or at least they did at one time. So with a board that allows for such a selection, the host gpu could be placed in a slower slot.
Also, I once inadvertently added the card in my first PCIe slot to the pci-stub list in grub cmdline. To my surprise, the card in the second PCIe slot displayed my host on boot. Although such an approach certainly is not optimal, it might be worth a try.
Sounds like you're having the opposite problem I'm having. I can only get my Radeon card to work as a secondary adapter using pci-attach (but first you have to enable Cirrus VGA and then install the Catalyst drivers... after rebooting, the Cirrus VGA will stop responding at the Windows boot logo and then the graphics card will take over).
Can you do pci-attach with the q35 motherboard? I'll gladly take a workaround.
BulliteShot wrote: Sounds like you're having the opposite problem I'm having. I can only get my Radeon card to work as a secondary adapter using pci-attach (but first you have to enable Cirrus VGA and then install the Catalyst drivers... after rebooting, the Cirrus VGA will stop responding at the Windows boot logo and then the graphics card will take over).
Can you do pci-attach with the q35 motherboard? I'll gladly take a workaround.
No, sorry, just the default motherboard. If I set it to use q35, then the Windows Device Manager complains that the device doesn't have enough resources to start.
No, sorry, just the default motherboard. If I set it to use q35, then the Windows Device Manager complains that the device doesn't have enough resources to start.
Yeah, that's where I'm at.
The most aggravating part is that QEMU provides ZERO logs to figure out why it's not starting up.
Another bit I noticed is that when Intel is primary and the Radeon card is secondary, not only does the image not show up on-screen, NOTHING LOADS. There is zero hard drive activity. If I could find out why, that'd be great, but I get NOTHING for error messages. Heck, I can't find ANY QEMU logs, ANYWHERE. dmesg gives me this:
[ 2956.122793] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x270
[ 2956.122800] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x1b@0x2d0
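Next thing I'm going to try (assuming my QEMU build supports these flags): redirecting QEMU's stderr to a file and turning on its built-in log, e.g.
qemu-system-x86_64 [usual options] -d guest_errors -D /tmp/qemu.log 2> /tmp/qemu-stderr.log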
Last edited by mukiex (2013-07-30 15:04:08)
Well, the damn thing just randomly blue-screened while I was running Steam chat on the VM and watching YouTube on the host (not demanding in the slightest)... so I'm gonna try plugging in a 2nd card and disabling the Intel integrated graphics to see what happens.
I couldn't see the error because I couldn't open the VNC fast enough... but it was an ati----.sys driver that caused the BSOD.
And it WORKS! Here's what I did...
1) Went into the BIOS and Disabled Integrated Graphics
2) Still in the BIOS, changed the primary graphics from Integrated to PCI
3) Saved changes and powered off my system.
4) Moved my 7950 GPU to my 2nd slot on the mobo.
5) Inserted a 7790 GPU from my girlfriend's PC into the 1st slot on my mobo.
6) Booted, changed my kernel pci-stub params to match the PCI ID of my 7950 then rebooted.
7) Host archlinux booted on the 7790 in my first slot. Booted the VM using the q35 method and the BIOS loaded on the screen attached to the 7950.
So I can confirm that using the Intel integrated graphics breaks the VFIO passthrough, but if you use a 2nd card and disable the integrated graphics, then it should work!
I'm getting a BSOD in Windows now, but I bet that's because it doesn't like the change to the q35 motherboard. I'm currently reinstalling it.
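For reference, step 6 just means my kernel command line now carries the 7950 + HDMI audio ID pair (same pair as in my earlier post); if you use GRUB, edit /etc/default/grub and regenerate with grub-mkconfig -o /boot/grub/grub.cfg:
pci-stub.ids=1002:679a,1002:aaa0 intel_iommu=on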
Last edited by BulliteShot (2013-07-30 17:48:36)
I GOT IT!!!
Okay, so this is the most bum-backwards solution you will ever see, but:
- Boot with Intel GFX as primary
- Bind Intel graphics to PCI-STUB but NOT VFIO (the latter will lock your system)
- SUCCESS (graphics card turns on without issue)
And now to fix my USB binding... can I bind USB 3.0?
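(In case it's not obvious, the pci-stub part just means adding the IGP's vendor:device pair to the kernel command line next to the guest card's IDs, something like the line below; 8086:0162 is only an example ID, check lspci -nn for your own IGP:)
pci-stub.ids=8086:0162,<guest GPU id>,<guest GPU HDMI audio id>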
Awesome, guys, nice to see it works. Yes, you can bind your USB 3 controller; that's what I do with my VM.
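It's the same mechanism as the GPU (the address below is only an example, check your own lspci -nn output): find the controller and its ID with
lspci -nn | grep -i usb
then add its vendor:device pair to pci-stub.ids and pass the whole controller to qemu with something like
-device vfio-pci,host=00:14.0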
mukiex wrote: I GOT IT!!!
Okay, so this is the most bum-backwards solution you will ever see, but:
- Boot with Intel GFX as primary
- Bind Intel graphics to PCI-STUB but NOT VFIO (the latter will lock your system)
- SUCCESS (graphics card turns on without issue)
And now to fix my USB binding... can I bind USB 3.0?
Wait, so you're binding the primary card (the IGP) to pci-stub and that allowed it to work? It might be that the Intel driver is the one causing problems, then. Take into consideration that the host won't use it and you'll lose 3D acceleration.
Last edited by nbhs (2013-07-30 19:20:15)
If you got it working, please post how and your hardware specs so we can all know (I might even get an Intel setup).
mukiex wrote: I GOT IT!!!
Okay, so this is the most bum-backwards solution you will ever see, but:
- Boot with Intel GFX as primary
- Bind Intel graphics to PCI-STUB but NOT VFIO (the latter will lock your system)
- SUCCESS (graphics card turns on without issue)
And now to fix my USB binding... can I bind USB 3.0?
mukiex, you sweet sweet genius! That fixed it for me too!
@nbhs: My board is B75-D3V
mukiex wrote: I GOT IT!!!
Okay, so this is the most bum-backwards solution you will ever see, but:
- Boot with Intel GFX as primary
- Bind Intel graphics to PCI-STUB but NOT VFIO (the latter will lock your system)
- SUCCESS (graphics card turns on without issue)
And now to fix my USB binding... can I bind USB 3.0?
mukiex, you sweet sweet genius! That fixed it for me too!
@nbhs: My board is B75-D3V
So it looks like the Intel driver is the one causing problems.
It's not the Intel driver causing issues: I found this fix originally with two separate discrete cards.
The Intel card isn't being loaded on the guest, but if it's not bound to pci-stub, the guest's actual card won't ever turn on.
It's not the Intel driver causing issues: I found this fix originally with two separate discrete cards.
The Intel card isn't being loaded on the guest, but if it's not bound to pci-stub, the guest's actual card won't ever turn on.
Have you tried blacklisting the intel driver instead of binding it to pci-stub?
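For Intel graphics that would be the i915 module, so something like this in /etc/modprobe.d/blacklist.conf (same file as the radeon line earlier in the thread):
blacklist i915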
Last edited by nbhs (2013-07-30 20:32:13)
When I install the ATI drivers I get a BSOD at boot with atikmpag.sys
When I install the ATI drivers I get a BSOD at boot with atikmpag.sys
Are you using the latest version? I'm using Catalyst 12.10 with Win8, no BSODs or anything. Try installing the drivers manually from the Device Manager.
Driver installed is 12.104.0.0 along with Catalyst 13.4, and I'm using Windows 7 in the VM. I'll mess around with the drivers and see if it likes the beta or 12.10 versions.
Windows boots fine with the standard driver, but that's not much better than the standard VGA in QEMU, to be honest.
Driver installed is 12.104.0.0 along with Catalyst 13.4, and I'm using Windows 7 in the VM. I'll mess around with the drivers and see if it likes the beta or 12.10 versions.
Windows boots fine with the standard driver, but that's not much better than the standard VGA in QEMU, to be honest.
Try Catalyst 12.10, it works for me.
* set the controller to AHCI mode in UEFI/BIOS (actually no UEFI here, the Gigabyte has a legacy BIOS mode)
* added its vendor:id to pci-stub.ids boot parameter (as said, it's completely useless for the host anyways, so make the AHCI driver ignore it completely)
* passed it through to the qemu VM using "-device vfio-pci,host=${SATA_DEVICE},bus=pcie.0" (actually it shows up with two entries in lspci, one of which seems to be for the E-SATA ports (they also do not work in the Linux host, same issues))
* Win7 automatically installed some ATA drivers for it (I didn't download anything anywhere)
* it works, without issues
Sorry for the late reply and thanks for your attempt to help. I tried to replicate your setup already a few days ago but kind of gave up (it works anyways…). The Windows 7 install disk doesn't recognize the device, and (an already installed version of) Windows 8 BSODs after bootup; I can write down the exact error message in case anyone knows what these mean.
What I didn't try is a fresh, "untouched" install in a qcow file, attaching the controller there. And for some reason I think I didn't try Linux guests.
I'll probably try those options if I find the motivation, or I might replace 2 of the 1TB disks in my PC with one 2 or 3TB disk (I should do that anyways), attach the SSD to the AMD controller and use virtio.
Maybe someday someone will work around this in the amd-iommu driver, or something else will happen. If I look at my graphics driver problems on my notebook with an E-350 APU (AMD graphics), the problems I had with it in the last ~1.5 years are mostly solved by switching to the open source driver, which wasn't ready back then. What I want to say is that waiting or doing nothing is sometimes the best solution to non-critical issues.
I'm sorry for my poor English writing skills…