I am observing lag and frame drops when playing games.
The strange thing about this issue is that I am unable to replicate it: it happens with different games at seemingly random times. I can have a game open for hours without it lagging, while at other times it starts to lag right away.
Also worth noting: sometimes it's enough to close and re-open the game to stop the lag, sometimes switching to a less intense menu helps, and other times I need to restart my PC entirely. I have seen this happen with multiple different games, from both Steam and GOG.
Sometimes the entire DE locks up so that I can't even switch to a terminal (Ctrl+Alt+F2...F4), because nothing responds except for sound.
I have tried keeping a system monitor open while the game is running, but nothing seems out of the ordinary: the CPU sits virtually idle the whole time and RAM never exceeds ~60%, so I suspect the graphics card or the drivers.
I am using the latest proprietary drivers (nvidia 440.44-9); the same was happening on older driver versions, as well as on older kernel versions (currently 5.4).
My DE is GNOME 3.34.
Is there a way to better diagnose or possibly resolve this issue?
I had a similar issue, except in my case only the process using the GPU locked up.
Add `Option "HardDPMS" "false"` to `/etc/bumblebee/xorg.conf.nvidia` as a child of `Section "Device"`. (source: https://devtalk.nvidia.com/default/topi … -minutes/)
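For reference, the resulting file would look something like this sketch. The `Identifier` line and anything else in your `Device` section will differ; only the `Option` line is the actual change:

```
Section "Device"
    Identifier  "DiscreteNvidia"
    Driver      "nvidia"
    # ...your existing options stay as they are...
    Option      "HardDPMS" "false"
EndSection
```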
I am on a laptop with a "discrete" GPU, i.e. an Intel CPU/GPU plus a separate NVIDIA card, so to use the NVIDIA card I need to explicitly say "use the GPU, please"; things like my DE were still running on the CPU's integrated GPU. I'd imagine on a desktop the whole system would have frozen, so my bet is that it's the same issue.
For me it happened _precisely_ 10 minutes after starting the GPU, tested by running these commands in 3 different terminal prompts: `glxspheres64`, `primusrun glxspheres64`, `optirun -b primus glxspheres64`. The first used my CPU; the latter two effectively say "use my GPU". Precisely 10 minutes in, it would drop to 1 FPS.
Best of luck!
I had a similar issue, except in my case only the process using the GPU locked up.
I am not using Bumblebee at all, and I did not manually configure any Xorg files. This happens on a machine that never uses integrated graphics (not a laptop).
Last edited by tofiffe (2020-01-06 09:15:43)
You need to provide more information here: what exactly are you running, which exact GPU is in use, and do you get any errors in your journal/dmesg?
this happens on a machine that never uses integrated graphics (not a laptop)
Hybrid graphics depend on the CPU & chipset, not on the form factor.
Please post `lspci -k` and the full `dmesg`.
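If it helps, something like this will collect both into files that can be pasted in `[ code ]` tags (a sketch; the output file names are arbitrary):

```shell
# Save the PCI device listing (with the kernel driver in use per device)
# and the full kernel ring buffer. dmesg typically needs root.
collect_gpu_info() {
    lspci -k > lspci-k.txt
    sudo dmesg > dmesg.txt
}
```

Then run `collect_gpu_info` and post the contents of both files.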
Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.
(A works at time B) && (time C > time B ) ≠ (A works at time C)
Here's the lspci output:
Here's dmesg:
If it's not obvious from these, my card is an NVIDIA GeForce GTX 650 Ti. I'm not sure how much journalctl would help, since this happens sporadically and is not easy to reproduce.
[ 0.193747] smpboot: CPU0: Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz (family: 0x6, model: 0x3c, stepping: 0x3)
https://ark.intel.com/content/www/us/en … 0-ghz.html shows that processor does come with an integrated Intel GPU.
There's no sign of an Intel GPU in lspci or dmesg though, so it's probably disabled in firmware.
Nothing in dmesg stands out to me. Could you try another WM/DE like Openbox to see if that also gives the same problems?
I have tried this previously (used Deepin for a few days), during which time it didn't happen. But as I said, it does not occur in any pattern; sometimes on GNOME I go 2+ weeks without it happening, so I am unsure whether that was the reason. I will try this out again.
If it's that "systemic", you will probably want to change your approach more generally. Have you tried setting the performance governor on your CPU? Read through https://wiki.archlinux.org/index.php/Im … erformance, in particular the I/O and CPU sections.
Sorry, I cannot find anything on the performance governor at the link you shared. Did you mean I should use a different scheduler?
You're right; no, I actually meant: https://wiki.archlinux.org/index.php/CP … _governors
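Switching the governor via sysfs looks roughly like this sketch. The paths assume the standard cpufreq sysfs layout (exposed by acpi-cpufreq or intel_pstate); writing the governor requires root:

```shell
# Show the governors the driver offers, then switch every core to
# "performance". The glob expands to one scaling_governor file per core,
# and tee writes the same value to all of them.
set_performance_governor() {
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
    echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
}
```

Note this does not persist across reboots; the wiki page describes ways to make it permanent.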
OK, so I assume I need to set the governor to performance and monitor using the sensors utility?
for example.
Looks like this alone won't do: with these settings I experienced the same problem. Luckily I was able to close the game and capture the sensors output, and by the looks of it the CPU doesn't seem to be to blame here:
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +59.0°C (high = +80.0°C, crit = +100.0°C)
Core 0: +59.0°C (high = +80.0°C, crit = +100.0°C)
Core 1: +51.0°C (high = +80.0°C, crit = +100.0°C)
Core 2: +47.0°C (high = +80.0°C, crit = +100.0°C)
Core 3: +46.0°C (high = +80.0°C, crit = +100.0°C)
nct6776-isa-0290
Adapter: ISA adapter
Vcore: 872.00 mV (min = +0.00 V, max = +1.74 V)
in1: 1.85 V (min = +0.00 V, max = +0.00 V) ALARM
AVCC: 3.41 V (min = +2.98 V, max = +3.63 V)
+3.3V: 3.41 V (min = +2.98 V, max = +3.63 V)
in4: 840.00 mV (min = +0.00 V, max = +0.00 V) ALARM
in5: 1.69 V (min = +0.00 V, max = +0.00 V) ALARM
in6: 800.00 mV (min = +0.00 V, max = +0.00 V) ALARM
3VSB: 3.47 V (min = +2.98 V, max = +3.63 V)
Vbat: 3.30 V (min = +2.70 V, max = +3.63 V)
fan1: 0 RPM (min = 0 RPM)
fan2: 1445 RPM (min = 0 RPM)
fan3: 0 RPM (min = 0 RPM)
fan4: 0 RPM (min = 0 RPM)
fan5: 0 RPM (min = 0 RPM)
SYSTIN: +38.0°C (high = +0.0°C, hyst = +0.0°C) ALARM sensor = thermistor
CPUTIN: +42.5°C (high = +80.0°C, hyst = +75.0°C) sensor = thermistor
AUXTIN: +45.0°C (high = +80.0°C, hyst = +75.0°C) sensor = thermistor
PECI Agent 0: +59.0°C (high = +80.0°C, hyst = +75.0°C)
(crit = +100.0°C)
PCH_CHIP_TEMP: +0.0°C
PCH_CPU_TEMP: +0.0°C
PCH_MCH_TEMP: +0.0°C
intrusion0: ALARM
intrusion1: ALARM
beep_enable: disabled
Last edited by tofiffe (2020-01-11 21:04:49)
Post a
sudo journalctl -b
after reproducing the issue, and please use [ code ] [ /code ] tags for posting the output (and edit your earlier post to put its output in code tags). But really, this could also just be a leak or so in GNOME, if it doesn't happen in other environments.
I don't know why I didn't try this sooner, but I checked the GPU usage via nvidia-smi and got the following result:
Sun Jan 12 09:14:51 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44 Driver Version: 440.44 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 650 Ti Off | 00000000:05:00.0 N/A | N/A |
| 32% 30C P0 N/A / N/A | 733MiB / 1996MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+
This is on idle, just after login: the card already uses almost half its memory, and if I open a browser it quickly jumps to ~1000 MiB.
It looks like the lag starts after the card's memory is maxed out. I tried LXDE, which had a much lower idle footprint (~300 MiB), but I managed to reproduce the problem just the same.
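Since per-process usage shows "Not Supported" on this card, one way to confirm the theory is to log total memory use over time and match the moment VRAM fills up against the onset of lag. A sketch using nvidia-smi's CSV query mode (the query fields are standard; the 5-second interval and log file name are arbitrary choices):

```shell
# Print a timestamped "used, total" memory line every 5 seconds.
# Requires the NVIDIA driver's nvidia-smi tool.
log_gpu_mem() {
    while true; do
        printf '%s %s\n' "$(date +%T)" \
            "$(nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader)"
        sleep 5
    done
}
# e.g. run in a spare terminal:  log_gpu_mem | tee gpu-mem.log
```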
I guess I'll need to upgrade my GPU; not sure why the lag doesn't appear more often, though.
Last edited by tofiffe (2020-01-12 09:04:52)