
#1 2020-05-07 19:45:54

adrusi
Member
Registered: 2014-08-18
Posts: 4

eGPU - PRIME configuration issue

I'm running an eGPU configuration with an Intel iGPU and a recent AMD eGPU connected over Thunderbolt 3, using the amdgpu driver. For outputs I have the laptop's internal display and two external displays, which are connected to the eGPU's outputs over DisplayPort. I have tested this configuration on Windows to ensure there are no hardware issues.

My desired configuration is to have the eGPU render everything on the outputs connected to it, and the iGPU render everything on the internal display. If that's not achievable, next best is to disable the internal display and have just the eGPU driving the external displays connected directly to it.

I think I'm running into a roadblock similar to the one Bassface hit in this old unresolved thread.

I'm not sure what the default configuration is when I start X, but there's some noticeable video latency on (only) the external displays.

$ xrandr --listproviders
Providers: number : 2
Provider 0: id: 0x47 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 5 associated providers: 1 name:modesetting
Provider 1: id: 0xd6 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 6 outputs: 4 associated providers: 1 name:AMD Radeon RX 5700 XT @ pci:0000:05:00.0

Running applications by specifying the DRI_PRIME environment variable appears to work, with a caveat. Results of running glxgears on the internal display without touching the PRIME configuration:

$ DRI_PRIME=0 vblank_mode=0 glxgears
ATTENTION: default value of option vblank_mode overridden by environment.
6812 frames in 5.0 seconds = 1362.335 FPS
$ DRI_PRIME=1 vblank_mode=0 glxgears
ATTENTION: default value of option vblank_mode overridden by environment.
485 frames in 5.0 seconds = 96.858 FPS

In the test above, the eGPU is giving significantly worse performance than the iGPU. Conceivably that's a bus bandwidth issue; I haven't tested this kind of loopback rendering on Windows, so it could be, but I have my doubts. This isn't the issue I'm trying to resolve at the moment, but I figured I'd mention it in case it serves as a clue. When I run glxgears on an external display without fiddling with PRIME configuration at all (no DRI_PRIME variable), I get around 450 FPS.
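In case it's relevant, I believe which GPU a given DRI_PRIME value actually selects can be checked with glxinfo (from mesa-utils); something like the following should print the active renderer. This is just a sanity check, not something I've dug into:

$ DRI_PRIME=0 glxinfo | grep "OpenGL renderer"
$ DRI_PRIME=1 glxinfo | grep "OpenGL renderer"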

I don't fully understand what the PRIME related options to xrandr do, but here are the results of various invocations:

$ xrandr --setprovideroffloadsink 1 0

I think this is supposed to instruct the eGPU to render everything on all displays. I don't immediately notice any difference, but when I run glxgears on an external display after running this, I only get 425 FPS on the internal display and 190 FPS on the external one.

$ xrandr --setprovideroffloadsink 0 1
X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  140 (RANDR)
  Minor opcode of failed request:  34 (RRSetProviderOffloadSink)
  Value in failed request:  0x47
  Serial number of failed request:  16
  Current serial number in output stream:  17

I think this is supposed to instruct the iGPU to render everything on all displays, but it fails. Not sure why. I can imagine that the issue is that the eGPU doesn't support simply copying in-memory renders from the iGPU, but if that were the case, I would expect a different error message.

$ xrandr --setprovideroffloadsink 1 1

I expected this to instruct the eGPU to render only outputs connected to it directly (i.e. the external monitors but NOT the internal display). Instead it crashes X.

$ xrandr --setprovideroffloadsink 0 0

I initially hadn't run this because I was worried it would also crash X, but I went ahead and ran it, and it produced the same error message as the 0 1 permutation.

I expect I have at least one misunderstanding about the systems I'm trying to use here. I also do not understand the relationship between the --setprovideroffloadsink and --setprovideroutputsource options. Based on the manpage, my best guess is that they do the same thing with reversed parameter order, but I'm fairly sure that's wrong.

Thanks to anyone who can correct my misunderstandings or provide help in any form.

Last edited by adrusi (2020-05-07 19:48:28)


#2 2020-05-08 13:16:31

Lone_Wolf
Member
From: Netherlands, Europe
Registered: 2005-10-04
Posts: 11,911

Re: eGPU - PRIME configuration issue

[nitpick]
Glxgears is NOT a benchmark

glmark2 is NOT a good benchmark for modern hardware / OSes.

glmark2 uses OpenGL 2.0 and ES 2.0 techniques; modern (post-2012) applications and hardware should not use them anymore.

https://wiki.archlinux.org/index.php/Be … g#Graphics mentions several useful graphics benchmarks.
[end-of-nitpick]


VC1 = video card 1, VC2 = video card 2.
Note that 1 and 2 are arbitrary numbers and say nothing about which video card they represent.

xrandr --setprovideroffloadsink VC2 VC1

VC2 acts as the offload provider, while VC1 acts as the offload sink.
Basically VC2 renders the content and then sends it to VC1, which displays it (following the same rules it would if VC1 had rendered it itself).

Mostly useful if VC2 is more powerful than VC1 but the display is connected to VC1.
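With the providers from your xrandr --listproviders output, that render-offload flow would look roughly like this (untested here; the numeric indices you already used should behave the same as the names):

$ xrandr --setprovideroffloadsink "AMD Radeon RX 5700 XT @ pci:0000:05:00.0" modesetting
$ DRI_PRIME=1 glxgears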


xrandr --setprovideroutputsource VC2 VC1

VC1 acts as the output source (Source Output), VC2 as the output sink (Sink Output).
This lets VC1 render the images that are then shown on the outputs connected to VC2, i.e. VC1 gets to use the displays connected to VC2 as if they were its own.

Mainly useful if you want to use displays connected to the other card.
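In your case that direction would be letting the eGPU render while the iGPU's internal panel shows the result, roughly like this (untested; provider names from your listing, and an xrandr --auto afterwards may be needed to enable the outputs):

$ xrandr --setprovideroutputsource modesetting "AMD Radeon RX 5700 XT @ pci:0000:05:00.0"
$ xrandr --auto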



My desired configuration is to have the eGPU render everything on the outputs connected to it, and the iGPU render everything on the internal display.
If that's not achievable, next best is to disable the internal display and have just the eGPU driving the external displays connected directly to it.

Your desired configuration would be equivalent to having 2 separate discrete cards. Unfortunately, systems with an iGPU are designed to work differently and don't allow that*.


Your best choice appears to be to set the eGPU as the primary GPU.
Usually that setting can be made through the BIOS/UEFI firmware, but it can also be done through X config files.
The config example on the PRIME wiki page will need to be adjusted for your setup.
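As a rough sketch only (the file name is just an example, the BusID is taken from the pci:0000:05:00.0 address in your provider listing, and the wiki remains the authoritative reference):

# /etc/X11/xorg.conf.d/10-egpu.conf  -- example path
Section "Device"
    Identifier "AMD eGPU"
    Driver     "amdgpu"
    BusID      "PCI:5:0:0"
EndSection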

No idea how Thunderbolt 3 devices are seen by the kernel, but please post the full lspci and lsusb -tv outputs and the Xorg log with the eGPU connected.
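Something along these lines should gather it; note that the Xorg log is usually /var/log/Xorg.0.log when X runs as root, or under ~/.local/share/xorg/ for a rootless session:

$ lspci
$ lsusb -tv
$ cat /var/log/Xorg.0.log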



*
You may want to choose a processor without an iGPU and add a separate discrete video card for your next system if this is important to you.


Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.


(A works at time B)  && (time C > time B ) ≠  (A works at time C)

