Integrated GPUs, however, use a different technology called dynamic switching.
https://wiki.archlinux.org/index.php/Hy … hing_Model gives a basic description.
#!/bin/bash
__NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only __GLX_VENDOR_LIBRARY_NAME=nvidia "$@"
That's the content of prime-run; you'll have to ask NVIDIA what it does.
Most Arch Linux users with Intel + NVIDIA systems appear to have switched to optimus-manager.
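The mechanism itself is plain shell: variables written in front of a command are exported only into that command's environment, and "$@" expands to the wrapped command plus its arguments. A minimal sketch of the same pattern, using a made-up FOO variable in place of the __NV_PRIME_* ones:

```shell
#!/bin/bash
# Sketch of the prime-run wrapper pattern. FOO stands in for the
# __NV_PRIME_* variables; it is set only for the wrapped command,
# not for the calling shell.
run_with_env() {
  FOO=bar "$@"
}

run_with_env sh -c 'echo "inside wrapper: FOO=$FOO"'   # FOO is set here
echo "outside wrapper: FOO=${FOO:-unset}"              # but not here
```

So `prime-run glxinfo -B` is just `glxinfo -B` launched with those three variables set, which the NVIDIA GLX/Vulkan stack reads at startup.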
Check https://wiki.archlinux.org/index.php/NVIDIA_Optimus for your options.
It makes sense that I get lower frame rates when it's actually the Intel GPU flipping the frame buffer based on a texture copied from NVIDIA video memory to system memory. The copy operation alone will slow the frame rate down enormously. Of course, you only notice that when you're talking about frame rates in the 10000 range.
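A back-of-the-envelope calculation supports this: a 4K RGBA frame is roughly 32 MiB, so copying one per frame at 1000 fps needs on the order of 30 GiB/s, which already exceeds what a typical PCIe 3.0 x16 link (roughly 16 GB/s) can move. The numbers below are illustrative assumptions, not measurements:

```shell
#!/bin/bash
# Rough cost of copying a 4K RGBA framebuffer from dedicated video
# memory to system memory. All numbers are illustrative assumptions.
W=3840 H=2160 BPP=4                  # 4K resolution, 4 bytes/pixel (RGBA)
FRAME_BYTES=$((W * H * BPP))
echo "Frame size: $((FRAME_BYTES / 1024 / 1024)) MiB"

FPS=1000
echo "Copy bandwidth at ${FPS} fps: $((FRAME_BYTES * FPS / 1024 / 1024 / 1024)) GiB/s"
```

That is why the copy is barely noticeable at 60 fps but dominates completely once the renderer would otherwise hit thousands of frames per second.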
It's a laptop with a 4K screen, and the Intel GPU clearly has a hard job at that resolution. Even the simplest games have laggy frame rates at 4K but run very smoothly when I lower the screen resolution to Full HD.
But that still doesn't explain the multisampling problem. If PRIME really just renders on the NVIDIA GPU and then copies the contents to an Intel GPU texture, then multisampling should just work.
Is PRIME actually a wrapper around the NVIDIA driver? Because then it could be that PRIME doesn't pass on the GLX extensions as it should...
No idea about multisampling, but keep in mind that with PRIME render offload both cards are involved.
- the NVIDIA card renders and stores the result in its own dedicated memory
- the Intel card has no dedicated memory; it uses main memory (which is much slower)
- the system transfers the data from NVIDIA dedicated video memory to Intel shared video memory
- the Intel GPU displays the data
How well this works depends on a lot of factors, some of which are: CPU, chipset, PCI Express version/speed, and the type and speed of main memory.
Is this a laptop or a desktop?
$ glxinfo -B
name of display: :0
display: :0 screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
Vendor: Intel (0x8086)
Device: Mesa Intel(R) UHD Graphics 630 (CFL GT2) (0x3e9b)
Version: 20.2.3
Accelerated: yes
Video memory: 3072MB
Unified memory: yes
Preferred profile: core (0x1)
Max core profile version: 4.6
Max compat profile version: 4.6
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 3.2
OpenGL vendor string: Intel
OpenGL renderer string: Mesa Intel(R) UHD Graphics 630 (CFL GT2)
OpenGL core profile version string: 4.6 (Core Profile) Mesa 20.2.3
OpenGL core profile shading language version string: 4.60
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL version string: 4.6 (Compatibility Profile) Mesa 20.2.3
OpenGL shading language version string: 4.60
OpenGL context flags: (none)
OpenGL profile mask: compatibility profile
OpenGL ES profile version string: OpenGL ES 3.2 Mesa 20.2.3
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
$ prime-run glxinfo -B
name of display: :0
display: :0 screen: 0
direct rendering: Yes
Memory info (GL_NVX_gpu_memory_info):
Dedicated video memory: 8192 MB
Total available memory: 8192 MB
Currently available dedicated video memory: 7973 MB
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce RTX 2070/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 455.45.01
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL version string: 4.6.0 NVIDIA 455.45.01
OpenGL shading language version string: 4.60 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 455.45.01
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
Providers: number : 2
Provider 0: id: 0x43 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 1 associated providers: 1 name:modesetting
Provider 1: id: 0x255 cap: 0x2, Sink Output crtcs: 4 outputs: 8 associated providers: 1 name:NVIDIA-G0
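For reference, the `cap` value in that output is a bitmask defined by the RandR 1.4 provider capabilities (1 = Source Output, 2 = Sink Output, 4 = Source Offload, 8 = Sink Offload), so 0xf means the modesetting provider can do all four roles while 0x2 means the NVIDIA provider only acts as a sink. A small sketch to decode it:

```shell
#!/bin/bash
# Decode the provider "cap" bitmask printed by `xrandr --listproviders`.
# Bit values follow the RandR 1.4 provider capabilities.
decode_caps() {
  local cap=$(( $1 ))   # arithmetic expansion accepts hex like 0xf
  local names=("Source Output" "Sink Output" "Source Offload" "Sink Offload")
  local i out=""
  for i in 0 1 2 3; do
    if (( cap & (1 << i) )); then
      out+="${out:+, }${names[i]}"
    fi
  done
  echo "$out"
}

decode_caps 0xf   # Source Output, Sink Output, Source Offload, Sink Offload
decode_caps 0x2   # Sink Output
```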
Please post the output of:
$ glxinfo -B
$ prime-run glxinfo -B
$ xrandr --listproviders
When I configure my system to only use the NVIDIA GPU, it performs perfectly: high frame rates, and all functionality is available. Power usage sucks though, and the noisy fan spins up for almost everything.
So I tried nvidia-prime and it works fine. I can select the GPU with the prime-run command.
But the NVIDIA performance (I can't tell about the Intel performance) is much lower. There is a simple 3D render that reaches 9000 frames per second on the pure NVIDIA installation, but only around 1000 frames per second on the PRIME-installed NVIDIA (around 200 on the Intel, but that's not important). Of course, 1000 is still more than enough, but the reduction is there too in games with more normal frame rates.
Also, for some reason, multisampling won't work on the PRIME-installed NVIDIA, while it works perfectly on the pure NVIDIA installation.
So I wonder, did I do something wrong? Did I screw up the configuration somehow? What can I check?
I searched the forum and internet and I don't see anyone else with similar complaints, so I hope it's something I did so it can be fixed.