Hey everyone,
I recently posted about getting my optimus setup to respect using the iGPU with vulkan and solved most of my problems by making a bash script that mirrors prime-run, but forces vulkan onto the iGPU.
KDE menuedit reformats any .desktop environment variables you pass to it in a way that makes the particular env variable string you need invalid, so the wrapper script lets you get around that by passing it as the primary command and the actual app as an argument.
I'm currently using this wrapper script, subprime-run, placed in /usr/local/bin:

#!/bin/bash
MESA_VK_DEVICE_SELECT=1002:1638! \
VK_DRIVER_FILES=/usr/share/vulkan/icd.d/radeon_icd.json \
__GLX_VENDOR_LIBRARY_NAME=mesa \
DXVK_FILTER_DEVICE_NAME="AMD Radeon Graphics (RADV RENOIR)" \
"$@"
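For what it's worth, a slightly more defensive variant of the same wrapper (a sketch using the same variables; the quoting and exec are my additions, not something the original needs to work) exports everything explicitly and execs the target, so no extra shell process lingers and every subprocess inherits the environment:

```shell
#!/bin/bash
# subprime-run (sketch): force Vulkan onto the iGPU, then replace this
# shell with the target command so all children inherit the environment.
export MESA_VK_DEVICE_SELECT='1002:1638!'
export VK_DRIVER_FILES=/usr/share/vulkan/icd.d/radeon_icd.json
export __GLX_VENDOR_LIBRARY_NAME=mesa
export DXVK_FILTER_DEVICE_NAME='AMD Radeon Graphics (RADV RENOIR)'
exec "$@"
```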
It does seem to put the GPU load on the iGPU rather than the discrete GPU for the most part. My understanding is that GLX and video decode processes should default to the iGPU anyway, and that vulkan/dxvk are the primary offenders. However, for some apps, in particular jellyfin-desktop, it doesn't completely stop processes from being passed to the discrete GPU.
When a video is playing, the output of lsof +c0 /dev/nvidia0 goes from this:
jellyfin-deskto 21102 tyson mem CHR 195,0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson mem CHR 195,255 953 /dev/nvidiactl
jellyfin-deskto 21102 tyson mem CHR 237,0 775 /dev/nvidia-uvm
jellyfin-deskto 21102 tyson 124u CHR 195,255 0t0 953 /dev/nvidiactl
jellyfin-deskto 21102 tyson 125u CHR 237,0 0t0 775 /dev/nvidia-uvm
jellyfin-deskto 21102 tyson 126u CHR 237,0 0t0 775 /dev/nvidia-uvm
jellyfin-deskto 21102 tyson 127u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 128u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 129u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 130u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 189u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 190u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 191u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 192u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 193u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 194u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 196u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 197u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 198u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 199u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 200u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 201u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 202u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 203u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 220u CHR 195,255 0t0 953 /dev/nvidiactl
jellyfin-deskto 21102 tyson 222u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 228u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 229u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 21102 tyson 232u CHR 195,0 0t0 955 /dev/nvidia0

to this:
jellyfin-deskto 24325 tyson mem CHR 195,0 955 /dev/nvidia0
jellyfin-deskto 24325 tyson mem CHR 195,255 953 /dev/nvidiactl
jellyfin-deskto 24325 tyson 124u CHR 195,255 0t0 953 /dev/nvidiactl
jellyfin-deskto 24325 tyson 125u CHR 237,0 0t0 775 /dev/nvidia-uvm
jellyfin-deskto 24325 tyson 126u CHR 237,0 0t0 775 /dev/nvidia-uvm
jellyfin-deskto 24325 tyson 127u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 24325 tyson 128u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 24325 tyson 129u CHR 195,0 0t0 955 /dev/nvidia0
jellyfin-deskto 24325 tyson 130u CHR 195,0 0t0 955 /dev/nvidia0

So it is reducing the number of open handles, but not entirely stopping jellyfin-desktop from picking up the discrete GPU.
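For comparing before/after snapshots like these, a small helper makes it quicker (a sketch using standard lsof/awk; the function name count_by_process is mine). It reads lsof output on stdin and prints one line per process name with the number of handles it holds open:

```shell
#!/bin/bash
# count_by_process: read `lsof` output on stdin, skip the header line,
# and print "<handle count> <process name>" for each process.
count_by_process() {
    awk 'NR > 1 { count[$1]++ } END { for (p in count) print count[p], p }'
}

# usage: lsof +c0 /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm | count_by_process
```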
My suspicion is that jellyfin-desktop is polling nvidia-smi, initializing ffmpeg/ffprobe in a particular way for automatic hardware detection, or something similar, or that nvenc/nvdec gets picked up automatically instead of VAAPI as the primary backend.
Ideally I don't want any app to be able to poll sensors or otherwise wake up the discrete gpu when the wrapper script is in use, because it still drains battery even if it's not doing anything but keeping the GPU awake.
I can't find anything on the wiki about what to pass to prevent this, or if it's even possible.
Any help here would be appreciated.
UPDATE:
I tried adding __EGL_VENDOR_LIBRARY_FILENAMES=/usr/share/glvnd/egl_vendor.d/50_mesa.json and get exactly the same results. Overriding the mesa driver seems to fail, but that probably has to do with the fact that I'm using hybrid graphics.
The jellyfin-desktop app launches mpv and chrome as subprocesses, but those should inherit the environment of the main process, no?
Last edited by tkoham (2026-03-10 00:36:16)
Offline
I know nothing about jellyfishes, but if it's run as a service you could abuse https://wiki.archlinux.org/title/Jellyfin#Hardening to block access to /dev/nvidia*
https://man.archlinux.org/man/systemd.exec.5#SANDBOXING ("space-separated list of paths relative to the host's root directory", afaiu no globbing)
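For the server-as-a-service case, the drop-in this suggests might look something like the following (a sketch; the filename is arbitrary, the exact node list depends on your driver, and per the man page the paths are space-separated with no globbing):

```
# /etc/systemd/system/jellyfin.service.d/no-nvidia.conf
[Service]
InaccessiblePaths=/dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm /dev/nvidia-uvm-tools /dev/nvidia-modeset
```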
Online
I know nothing about jellyfishes, but if it's run as a service you could abuse https://wiki.archlinux.org/title/Jellyfin#Hardening to block access to /dev/nvidia*
https://man.archlinux.org/man/systemd.exec.5#SANDBOXING ("space-separated list of paths relative to the host's root directory", afaiu no globbing)
Thanks for the info!
Unfortunately, I'm running the desktop media player client, not the server, so at least by default, this isn't the case. I've got a lot more experience writing sysrc scripts than I do with systemd units, but my guess is there's not a lot of room to integrate service start/stop in .desktop launchers.
That's an interesting workaround, though. I guess in theory I could also containerize it and block access that way too.
I was hoping there was a more straightforward solution but if that's what needs to happen I guess I'll give it a go.
Just seems like a lot of effort to go to just to stop chrome/mpv/ffmpeg from misbehaving.
Last edited by tkoham (2026-03-10 11:48:37)
Offline
Just seems like a lot of effort to go to just to stop chrome/mpv/ffmpeg from misbehaving.
The process accessing the devices is jellyfin-desktop, do you actually get similar behavior w/ ffmpeg or mpv?
Configuring https://wiki.archlinux.org/title/Hardwa … figuration explicitly away from nvidia (vaapi and vdpau) might make it ignore the GPU, but it could still access it for any number of useless reasons.
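One way to pin the whole session away from the nvidia VA-API/VDPAU paths (a sketch; LIBVA_DRIVER_NAME and VDPAU_DRIVER are the standard libva/libvdpau selection variables, and the filename is arbitrary) would be an environment.d fragment:

```
# ~/.config/environment.d/90-amd-video.conf (sketch)
LIBVA_DRIVER_NAME=radeonsi
VDPAU_DRIVER=radeonsi
```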
Online
The process accessing the devices is jellyfin-desktop, do you actually get similar behavior w/ ffmpeg or mpv?
Well that's the thing, I'm pretty sure the jellyfin people roll their own ffmpeg, and the regular arch packaged version doesn't behave this way, but theirs might.
Restricting the mpv configuration to vulkan-video keeps the decode processes off the dGPU when I have the vulkan ICD/device variables set; I did already try that. But there's still other stuff running on the card, and I have no way of telling which of the three subprocesses is responsible, or whether it's the application itself doing something annoying like polling GPU utilization on every available device.
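For reference, the vulkan-video restriction described here can be expressed in mpv.conf roughly like this (a sketch; option names as in recent mpv releases, so check against your version's manual):

```
# mpv.conf: keep decode and rendering on the Vulkan path
vo=gpu-next
gpu-api=vulkan
hwdec=vulkan
```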
EDIT:
So mpv does behave like this by default (in fact it completely swamps the output with a bunch of dGPU processes when you lsof nvidia0), so this might be a matter of digging through a bunch of badly documented mpv.conf options. Maybe encode/decode is off the card, but not filters that can run directly with cuda, or something.
Don't have time to mess with it tonight, just got off a double shift, but I at least have something to go on. I'll report back if I get anything to work.
Passing __EGL_VENDOR_LIBRARY_FILENAMES="/usr/share/glvnd/egl_vendor.d/50_mesa.json" fixes this on raw mpv; my conf file is blank and jellyfin is set to copy the default config, so it isn't mpv.
My chromium/chromecdp paths don't start any processes on the GPU either.
So that leaves the app itself or the jellyfin version of ffmpeg.
God, I'm gonna have to containerize it or edit the makefile to not compile nvidia support, aren't I?
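If it does come to sandboxing, a lighter-weight option than a full container might be bubblewrap: bind-mount /dev/null over each nvidia node so the driver can never be opened. This is only a sketch under my own assumptions: build_mask_args is a name I made up, the node list may need adjusting for your driver, and bwrap must be installed.

```shell
#!/bin/bash
# nonvidia-run (sketch): run "$@" inside a bubblewrap sandbox with
# /dev/null bind-mounted over every nvidia device node that exists.
build_mask_args() {
    # emit one "--bind /dev/null <node>" triple per existing node
    for node in "$@"; do
        [ -e "$node" ] && printf -- '--bind /dev/null %s ' "$node"
    done
    return 0
}

if [ "$#" -gt 0 ]; then
    mask=$(build_mask_args /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm \
                           /dev/nvidia-uvm-tools /dev/nvidia-modeset)
    # shellcheck disable=SC2086  # word-splitting of $mask is intentional
    exec bwrap --dev-bind / / $mask "$@"
fi
```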
Last edited by tkoham (2026-03-11 00:17:52)
Offline
Are you sure that mpv --hwdec=vaapi* --vaapi-device=/dev/dri/… uses the GPU at all? (Notably, if you don't have a vaapi shim for nvidia installed, those are all bad options.)
Online
Are you sure that mpv --hwdec=vaapi* --vaapi-device=/dev/dri/… uses the GPU at all? (Notably, if you don't have a vaapi shim for nvidia installed, those are all bad options.)
Yeah, I've got mpv set up to use vulkan video for this reason, which respects the subprime-run script because it's vulkan. Can't remember how to see which dri device corresponds to which gpu (I think this shuffles around if you don't block simpledrm at a certain stage in the boot process), but both report vaapi as being unsupported.
Offline
both report vaapi as being unsupported
Wut?
lspci
https://wiki.archlinux.org/title/Hardwa … stallation
You'd ideally want to use the GPU's/APU's/IGP's dedicated video decoder, which is typically far more efficient than vulkan implementations.
Online