Hey all, I've got a ThinkPad W540 with an NVIDIA Quadro K2100M and an Intel i7-4800MQ with HD Graphics 4600. It's currently running the latest Arch 6.1.12-arch1-1 kernel. My WM is i3 on X11 in a fairly minimal setup I'm gradually building up from scratch as I try to better understand X11 and its lower-level workings. I'm not using a display manager, and I start every session with startx after logging in (I want it this way).
To note, the K2100M is using nvidia-390xx-dkms as it's Maxwell-based.
That said, while I'm content running on the integrated graphics for everyday use, I'd like to move as much as I can away from Windows and into Arch, and to do that I need the K2100M working for the few games I play (Minecraft, lol) and for GPU-accelerated workloads (Discord streaming and video decode). On top of that, this model has a 2880x1620 display that's very taxing on a decade-old integrated GPU, even one that was high-spec for its era.
I've followed the official wiki guide for NVIDIA Optimus X11 configuration, the wiki page on drivers and kernel parameters (and, for the sake of sanity, early loading in the initramfs, even though I start my WM manually on each boot with startx), and this forum post. I can't seem to get X11 to pick up the card as the primary renderer, and I only get the following output from xrandr every time:
$ xrandr --listproviders
Providers: number : 1
Provider 0: id: 0x49 cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 4 outputs: 7 associated providers: 0 name:Intel
I suppose the main issue is that I can't run xrandr --setprovideroutputsource, because NVIDIA-0 isn't listed as a provider, so adding it to my xinitrc doesn't do anything for me. I'm not sure what I'm missing here, or whether my hardware configuration allows this to work at all.
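For reference, the relevant part of my ~/.xinitrc would look roughly like the PRIME output-source example from the wiki; the provider names are the wiki's (modesetting / NVIDIA-0), and this is exactly the step that currently does nothing since no NVIDIA provider ever shows up:
# sketch of the wiki's PRIME output-source setup, provider names assumed
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
exec i3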
Here's the info I can provide, let me know if any more is needed!
Specifications:
ThinkPad W540
NVIDIA Quadro K2100M
Intel i7-4800MQ + Intel HD Graphics 4600
i3 + X11 with no display manager
Arch 6.1.12-arch1-1
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.157 Driver Version: 390.157 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro K2100M Off | 00000000:01:00.0 Off | N/A |
| N/A 40C P8 N/A / N/A | 11MiB / 1999MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 958 G /usr/lib/Xorg 8MiB |
+-----------------------------------------------------------------------------+
$ lspci
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 04)
00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04)
00:16.3 Serial controller: Intel Corporation 8 Series/C220 Series Chipset Family KT Controller (rev 04)
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-LM (rev 04)
00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 04)
00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d4)
00:1c.1 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #2 (rev d4)
00:1c.2 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 (rev d4)
00:1c.4 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #5 (rev d4)
00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation QM87 Express LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 04)
01:00.0 VGA compatible controller: NVIDIA Corporation GK106GLM [Quadro K2100M] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GK106 HDMI Audio Controller (rev a1)
02:00.0 SD Host controller: O2 Micro, Inc. SD/MMC Card Reader Controller (rev 01)
03:00.0 Network controller: Intel Corporation Wireless 7260 (rev 83)
$ ls /etc/X11/xorg.conf.d
10-nvidia-drm-outputclass.conf
$ cat /etc/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf
Section "OutputClass"
Identifier "intel"
MatchDriver "i915"
Driver "modesetting"
EndSection
Section "OutputClass"
Identifier "nvidia"
MatchDriver "nvidia-drm"
Driver "nvidia"
Option "AllowEmptyInitialConfiguration"
Option "PrimaryGPU" "yes"
ModulePath "/usr/lib/nvidia/xorg"
ModulePath "/usr/lib/xorg/modules"
EndSection
$ cat /etc/mkinitcpio.conf
...
# MODULES
# The following modules are loaded before any boot hooks are
# run. Advanced users may wish to specify all system modules
# in this array. For instance:
# MODULES=(usbhid xhci_hcd)
MODULES=(nvidia nvidia_modeset nvidia_uvm nvidia_drm)
...
$ cat /etc/default/grub
...
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 nvidia_drm.modeset=1"
...
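For anyone following along: edits to those two files only take effect after regenerating the initramfs and the GRUB config. The usual Arch commands for that are roughly:
$ sudo mkinitcpio -P
$ sudo grub-mkconfig -o /boot/grub/grub.cfg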
Logs:
cat /var/log/Xorg.0.log
dmesg
Last edited by mr_cheese (2023-02-27 05:53:08)
Offline
https://wiki.archlinux.org/title/PRIME# … er_offload
NVIDIA driver since version 435.17 supports this method
If you don't care about the IGP and battery and can: disable the IGP in the BIOS.
Otherwise you'll have to use eg. bumblebee.
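For reference, the render offload method from that wiki section boils down to a pair of environment variables; a rough test (glxinfo from mesa-utils assumed installed):
$ __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"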
Offline
https://wiki.archlinux.org/title/PRIME# … er_offload
NVIDIA driver since version 435.17 supports this method
If you don't care about the IGP and battery and can: disable the IGP in the BIOS.
Otherwise you'll have to use eg. bumblebee.
Sorry if I got my wording wrong here and in the title; I'm not trying to use PRIME, I'm trying to use my K2100M exclusively for my X server. If you look at the forum post I linked, that user has a GeForce 840M, which is built on the same Maxwell architecture and therefore uses the same drivers as mine, so if it worked for them, this should work for me by association.
Also, the iGPU cannot be disabled in the BIOS on my machine, and while I could use Bumblebee, from what I've read it takes a massive performance hit compared to any other method.
I did try nvidia-xrun..., but neglected to get the logs before spinning up the working X11 configuration. I'll update this post with those logs after I try it again.
aaaaaaaand here are the logs from that
Last edited by mr_cheese (2023-02-26 22:04:02)
Offline
I'm not trying to use Prime
Ftr:
I suppose the main info is that I can't specify xrandr --provideroutputsource due to the fact NVIDIA-0 isn't listed as a provider, and as such adding it to my xinitrc won't do anything for me
That'd be prime offloading.
I'm trying to strictly use my K2100M for my X server
If you don't care about the IGP and battery and can: disable the IGP in the BIOS.
There's only one output (eDP-1-1) and it's attached to the IGP, so you cannot not use the IGP, as you'll require it to make pixels glow.
If the device allows you to deactivate the IGP, the NVIDIA CRTCs will be re-wired to the output.
nvidia-xrun hasn't seen any development in 4 years and you're hitting the same segfault as https://bbs.archlinux.org/viewtopic.php?id=278132 - that's probably not a config issue.
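If you want to verify the output wiring yourself, the DRM sysfs nodes show which card owns which connector; roughly (card numbering and connector names will differ per system):
$ for s in /sys/class/drm/card*-*/status; do echo "$s: $(cat "$s")"; done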
Offline
I see, I guess I assumed our system configurations were similar enough/fundamentally compatible to make this work. I'll see if I can get any kind of mileage out of Bumblebee, but for now the Windows dual-boot stays.
Thank you for your help and clarification!
Offline
I should have paid more attention to your setup description…
The main difference is that the linked post was on 440.82 with a Maxwell GM108M.
You're using 390xx on a GK106GLM, which, contrary to your assertion, is a Kepler chip, not Maxwell.
BUT:
Kepler is still supported by the 470xx drivers and thus supports prime offloading fine.
=> Swap the 390xx drivers for the 470xx drivers, remove any xorg config files, the system will start on the intel chip and you should™ be able to "prime-run minecraft" OOTB.
(prime-run from https://archlinux.org/packages/extra/any/nvidia-prime/ )
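Roughly, the swap looks like the lines below; the 470xx packages live in the AUR, so an AUR helper like yay is assumed here:
$ sudo pacman -Rns nvidia-390xx-dkms nvidia-390xx-utils
$ yay -S nvidia-470xx-dkms nvidia-470xx-utils
$ sudo pacman -S nvidia-prime
$ sudo rm /etc/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf
$ prime-run glxinfo | grep "OpenGL renderer"    # after a reboot, should name the Quadro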
Offline
Kepler is still supported by the 470xx drivers and thus supports prime offloading fine.
=> Swap the 390xx drivers for the 470xx drivers, remove any xorg config files, the system will start on the intel chip and you should™ be able to "prime-run minecraft" OOTB.
(prime-run from https://archlinux.org/packages/extra/any/nvidia-prime/ )
Well, this.... worked? There's zero performance benefit over the integrated graphics in Minecraft, and the K2100M only shows 40-60% usage while doing so, which is a big difference from how my Windows install behaves with similar technology. But no sweat, at least on paper it's working. It'll probably come in handy for Wine programs (Lightroom, here I come).
Thanks again for the help I really appreciate it!
Offline
Sounds like either vsync or CPU limited?
https://archlinux.org/packages/communit … 4/glmark2/
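A rough way to rule out vsync, assuming the usual sync-to-vblank variables for the NVIDIA and Mesa drivers:
$ __GL_SYNC_TO_VBLANK=0 prime-run glmark2
$ vblank_mode=0 glmark2    # same run on the Intel chip, for comparison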
Offline
I'm running into intermittent errors with OpenGL and nvidia right now. It worked once (in my Minecraft test), but now I'm getting the following from glmark2 and Minecraft (somewhat similar errors). I don't really know what I did to make it work earlier, it just kinda Worked™
$ prime-run glmark2
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 152 (GLX)
Minor opcode of failed request: 24 (X_GLXCreateNewContext)
Value in failed request: 0x0
Serial number of failed request: 32
Current serial number in output stream: 33
$ prime-run prismlauncher
...
[in the Minecraft java instance logs]
[15:29:02] [Render thread/WARN]: Failed to create window:
net.minecraft.class_1041$class_4716: GLFW error 65543: GLX: Failed to create context: GLXBadFBConfig
...
When it works, it looks like the Quadro is limited by how many frames the iGPU can serve, since performance with the Quadro and the iGPU is the same in Minecraft. I'm limited in testing until I figure out what I did wrong here though; doing some more research.
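In case anyone hits the same GLX errors, a rough sanity check is to confirm which vendor library prime-run actually picks up, and whether anything else (a leftover Xorg config or a GPU-switching tool) is interfering; something like:
$ prime-run glxinfo -B    # the OpenGL renderer line should name the Quadro
$ grep -ril nvidia /etc/X11/xorg.conf /etc/X11/xorg.conf.d/ 2>/dev/null
$ pacman -Qs optimus    # any GPU switcher installed?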
Last edited by mr_cheese (2023-02-27 21:34:09)
Offline
Update: it was optimus manager, everything works fine now:
K2100M glmark2: 1256
HD 4600 glmark2: 985
I would've expected a larger performance gap between the two, but maybe this is expected performance for this configuration. I'll have to do some comparisons on Windows to see whether this is expected or within-margin performance in Minecraft (a stupid benchmark, but functional). I'll also have to edit my .xinitrc to change my screen resolution.
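The resolution change should just be an xrandr line in ~/.xinitrc before exec'ing i3, roughly like this (the output name matches my panel, the mode is only an example):
xrandr --output eDP-1-1 --mode 1920x1080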
Offline
Random google result: https://gpu.userbenchmark.com/Compare/N … 811vsm7676
Matches your glmark results.
Offline
Yup, I concur, on Windows things are working as usual. Turns out the K2100M and HD 4600 equally do not enjoy rendering graphics at 2880x1620. Oh well, that's the price of "legacy" tech.
Thanks again [part 3] for your help!
Offline