Okay, I know this question is very silly, but I saw something like this happening on some older Windows machines and now I'm curious.
I have a 4 GB 3050. Right now, if I reach the limit and fill its VRAM, my FPS in ANY game drops to single digits. But the thing is, before this I saw Windows machines with a GTX 950 using main RAM as a swap file for VRAM to avoid such performance drops. And even when that wasn't possible, they would just blur the textures before it got critical.
Is there any way to do such stuff on Linux? How can I use my VRAM more efficiently?
Born to lose
Offline
I saw Windows machines with a GTX 950 using main RAM as a swap file for VRAM to avoid such performance drops. And even when that wasn't possible, they would just blur the textures before it got critical.
"Saw"? How did you "see" the internal procedures of the GPU driver?
Any link that describes that behavior?
If you're running out of VRAM, the textures have to be re-read from disk, and the disk is cached in your RAM (unless you explicitly disabled that). On top of that, the GPU driver can map RAM into its own address space, though that's more common for iGPUs/APUs (and results in "lots of RAM used up by apparently nothing" posts).
Using alternative textures/resolutions/compressions is the game's job - I'm not aware of any official feature like that in the driver, and it typically causes outrage if a GPU driver silently does this to "improve" game performance (faking better GPU performance by altering the metric, aka "cheating").
If the Windows driver provides that as an official global feature, you'd have to ask NVIDIA to port it, but I couldn't find anything like that (as a global setting).
Are you looking for DLDSR & DSR? (upscaling a framebuffer rendered at a lower resolution)
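If you want to check whether you're actually hitting the VRAM ceiling while playing, something like this should do (a rough sketch, assuming the proprietary NVIDIA driver, which ships the nvidia-smi tool):
# print used/total VRAM once per second while the game runs
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 1
If memory.used sits pinned at the 4 GiB total whenever the FPS tanks, the slowdown really is VRAM pressure and not something else.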
Offline
Yes, it's enabled by default:
'cat /sys/class/drm/card*/device/mem_info_gtt_total'
Are you sure you don't have a 6 GB card? The 3050 only comes in 8 GB and 6 GB models.
And as already said, it's only fast for iGPUs, which live off system RAM anyway.
Your chances are best with a fast connector. Got a free PCIe 4.0 x8 slot?
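For reference, you can compare the GTT size with what's actually in use (a sketch; these mem_info_* sysfs nodes are exposed by the amdgpu driver, so they won't exist with the proprietary NVIDIA driver):
# GTT (system RAM usable by the GPU): size and current usage, in bytes
cat /sys/class/drm/card*/device/mem_info_gtt_total
cat /sys/class/drm/card*/device/mem_info_gtt_used
# same counters for the dedicated VRAM
cat /sys/class/drm/card*/device/mem_info_vram_total
cat /sys/class/drm/card*/device/mem_info_vram_used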
Last edited by jl2 (2024-04-07 18:23:07)
Why I run Arch? To "BTW I run Arch" the guy one grade younger.
And to let my siblings and cousins laugh at Arsch Linux...
Offline
A game running with DXVK (including DXVK on Windows) will generally use more VRAM than DirectX on Windows. I do not recall if VKD3D-Proton (DirectX 12) has similar VRAM consumption.
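If DXVK's extra VRAM appetite is part of the problem, you can experiment with capping what it reports to the game (a sketch using DXVK's dxvk.conf mechanism; the 3072 MiB value is just an illustration, not a recommendation):
# dxvk.conf, placed next to the game executable (or pointed at via DXVK_CONFIG_FILE)
# cap the device memory DXVK reports to D3D10/11 games, in MiB,
# so the game budgets its textures more conservatively
dxgi.maxDeviceMemory = 3072
# the equivalent knob for D3D9 titles
d3d9.maxAvailableMemory = 3072
Some games will then stream or downgrade textures on their own instead of blindly filling the card.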
Offline
A game running with DXVK (including DXVK on Windows) will generally use more VRAM than DirectX on Windows. I do not recall if VKD3D-Proton (DirectX 12) has similar VRAM consumption.
Now THIS may be the answer here...
Born to lose
Offline
Yes, it's enabled by default:
'cat /sys/class/drm/card*/device/mem_info_gtt_total'
Are you sure you don't have a 6 GB card? The 3050 only comes in 8 GB and 6 GB models.
And as already said, it's only fast for iGPUs, which live off system RAM anyway.
Your chances are best with a fast connector. Got a free PCIe 4.0 x8 slot?
It's a laptop. And it's a 3050 with 4 GB of VRAM.
Born to lose
Offline
The max VRAM setting in my BIOS is 2 GB. Is the OS able to bypass this limit? Or is it possible to enable zram compression for VRAM?
Offline
The max VRAM setting in my BIOS is 2 GB. Is the OS able to bypass this limit? Or is it possible to enable zram compression for VRAM?
That's not how VRAM works, and you would get VERY big performance penalties from compressing VRAM.
The OS bypasses this limit by "swapping" to normal memory. This is called GTT or GART and is usually half the system memory per GPU. On iGPUs this causes no performance penalty at all (as the "normal" VRAM already lives in system memory), but for dGPUs/normal GPUs it decreases performance, as every access has to go over the PCIe bus to system memory. While for dGPUs this is suboptimal, it's better than running out of VRAM, and it has surely been optimized by the GPU driver folks, so that in normal operation it's not noticeable as long as the GPU doesn't need that data to render every frame.
However, if you have games that need more VRAM than your dGPU has, it's time to lower the graphics settings or buy a new card.
For iGPUs, you ideally want to carve out as little VRAM as possible, so that if the CPU needs more RAM, it has some to spare.
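If you want to tune how much system RAM the driver may use as GTT, amdgpu at least exposes a module parameter for it (a sketch; amdgpu-specific and mostly meant for testing, since the automatic default is already sensible):
# /etc/modprobe.d/amdgpu.conf - restrict the GTT domain to 8 GiB (value in MiB)
options amdgpu gttsize=8192
Then rebuild the initramfs (mkinitcpio -P on Arch) and reboot. As far as I know, the NVIDIA driver sizes its equivalent automatically and doesn't offer a comparable knob.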
Last edited by jl2 (2024-11-02 15:19:05)
Why I run Arch? To "BTW I run Arch" the guy one grade younger.
And to let my siblings and cousins laugh at Arsch Linux...
Offline
surely been optimized by the GPU driver folks
Ahhh… the optimism of the youthful
Offline
Ahhh… the optimism of the youthful
Better than being a grumpy old man (or even worse, German). I live next to a retirement home ...
Anyway, I don't think the point is that wrong, though.
Last edited by jl2 (2024-11-02 15:56:56)
Why I run Arch? To "BTW I run Arch" the guy one grade younger.
And to let my siblings and cousins laugh at Arsch Linux...
Offline
I live next to a retirement home ...
So hanging around more old people is like a weird kink?
The problem with optimization is that you optimize processes, not stages.
Here's a scenario:
The game thinks "Ohhhh… soo much VRAM. Nnnoice! Immagonna load aaalll these textures, even for areas the player won't reach in a month. Because I can."
It doesn't (necessarily) understand that part of that VRAM has an extra price tag.
Yes, the driver can then optimize placement at runtime by keeping the hot pages in actual VRAM, but you'd very likely get better overall performance if the game only loaded the textures for the current level into actual VRAM. The driver cannot predict which data is actually relevant.
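You can actually watch that over-allocation happen: DXVK has a built-in HUD that overlays its memory statistics (a sketch; DXVK_HUD is a DXVK environment variable, and the game path here is hypothetical):
# overlay DXVK's allocated device memory (and FPS) while the game runs
DXVK_HUD=memory,fps wine ./game.exe
Compare that number against the 4 GiB of VRAM and you can see exactly when the game starts spilling into GTT.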
Offline
So hanging around more old people is like a weird kink?
Nooo, grumpy old Swiss people are even grumpier.
Yeah, makes sense, but I meant another point: spilling over into system RAM on iGPUs won't give that big a penalty, whereas on dGPUs it will definitely be noticeable.
Why I run Arch? To "BTW I run Arch" the guy one grade younger.
And to let my siblings and cousins laugh at Arsch Linux...
Offline