
#51 2018-02-12 06:12:52

elvisvinicius
Member
From: Londrina
Registered: 2015-06-05
Posts: 6
Website

Re: Terrible performance regression with Nvidia 390.25 driver

Could it be a problem specific to bumblebee?


In KDE I have no problem, especially in this latest version of Plasma, 5.12.
Firefox is my main browser, but I also use Chromium and Opera, and I have no performance issues with them.

Here are my configs, in case they help you.


KDE 5.12.0
KERNEL 4.15.2-2-ARCH
NVIDIA 390.25


/etc/X11/xorg.conf.d/20-nvidia.conf

Section "Device"
       Identifier "Device0"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce GTX 1060 6GB"
        Driver "nvidia"
        [...]
        Option "metamodes" "nvidia-auto-select +0+0 {ForceFullCompositionPipeline=On}"
        Option "AllowIndirectGLXProtocol" "off"
        Option "TripleBuffer" "On"
        Option "NoFlip"
        [...]
EndSection

.config/plasma-workspace/env/kwin_env.sh

#!/bin/sh
export KWIN_TRIPLE_BUFFER=1

KDE COMPOSITOR SETTINGS
No changes (apparently changing the OpenGL version causes small problems).

Last edited by elvisvinicius (2018-02-12 06:14:38)


“Simplicity is the Ultimate Sophistication”
- Leonardo da Vinci

Offline

#52 2018-02-12 06:19:42

Pryka
Member
Registered: 2018-02-07
Posts: 85

Re: Terrible performance regression with Nvidia 390.25 driver

elvisvinicius wrote:

Could it be a problem specific to bumblebee?

Probably not. I'm affected, and do not use bumblebee.

Offline

#53 2018-02-12 09:56:08

Enverex
Member
From: UK
Registered: 2007-06-13
Posts: 159
Website

Re: Terrible performance regression with Nvidia 390.25 driver

I'm getting some stuttering and big black tearing down the screen in some situations now as of 390.25 (387.34 was fine). I'm using ForceCompositionPipeline for sync/tearing prevention - removing that results in crazy screen tearing so obviously that needs to stay.

Standard PC setup with a 1060, so no Bumblebee or anything.
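
If it helps anyone testing, the same ForceCompositionPipeline toggle can be flipped at runtime without editing xorg.conf. A minimal sketch, assuming a single auto-selected display (adjust the MetaMode string to your actual outputs):

# Enable the composition pipeline for the current X session only
nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceCompositionPipeline = On }"
# Revert to the plain mode
nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0"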

Offline

#54 2018-02-12 15:06:10

hrkristian
Member
Registered: 2013-06-28
Posts: 34

Re: Terrible performance regression with Nvidia 390.25 driver

Omar007 wrote:
hrkristian wrote:

How did this even make it out of testing repos when the regression is discussed there?

Not everyone is affected by this. I've been on 390.25 since it hit the testing repos and haven't had or noticed any problem other than the missing /sys/class/drm/card0-* entries.
And that only prevented me from running/selecting the Gnome on Wayland session, which I wasn't using anyway, since even if I can select it, it still falls back to llvmpipe (which is a whole other problem that existed long before 390.25; https://bugs.archlinux.org/task/53284).

Well that's all well and good, but I don't see how that's relevant.

Okay. It's not a regression for you, but it is for many others and the regression was well known. Why was a driver with a well known regression pushed to extra?

That is just not okay, not by any standard. If it had been a novel bug which slipped through testing that's okay, but this is a g-damn Nvidia blob with a severe, known, regression. Used by I'm sure the majority of users, no less.

Offline

#55 2018-02-12 15:17:27

Omar007
Member
Registered: 2015-04-09
Posts: 368

Re: Terrible performance regression with Nvidia 390.25 driver

hrkristian wrote:

Well that's all well and good, but I don't see how that's relevant.

It is very relevant. If the people using testing find no issues and enough of them sign off on the package, it goes to stable.

hrkristian wrote:

Okay. It's not a regression for you, but it is for many others and the regression was well known. Why was a driver with a well known regression pushed to extra?

It is only known if it is reported. Apparently nobody bothered to do so, and most likely the people running the testing repo had no issues (as in my case) and signed off on it.
390.25 was pushed to extra on 2018-02-06 around mid-day; this topic wasn't created until at least 8 hours later, and I'm not aware of any other reports here on this.
Unless you have one for me, there would have been no reason at all to keep it in testing.

hrkristian wrote:

That is just not okay, not by any standard. If it had been a novel bug which slipped through testing that's okay, but this is a g-damn Nvidia blob with a severe, known, regression. Used by I'm sure the majority of users, no less.

Feel free to help out and catch these problems before they hit the stable repos by running the testing repositories. :)

Last edited by Omar007 (2018-02-12 15:25:26)

Offline

#56 2018-02-12 17:06:24

loqs
Member
Registered: 2014-03-06
Posts: 18,039

Re: Terrible performance regression with Nvidia 390.25 driver

hrkristian wrote:

Well that's all well and good, but I don't see how that's relevant.

Okay. It's not a regression for you, but it is for many others and the regression was well known. Why was a driver with a well known regression pushed to extra?

You are aware that package maintainers rarely visit the forums, so you are unlikely to receive any response from them.

hrkristian wrote:

That is just not okay, not by any standard. If it had been a novel bug which slipped through testing that's okay, but this is a g-damn Nvidia blob with a severe, known, regression. Used by I'm sure the majority of users, no less.

How do you know that the majority of users have nvidia cards and use the nvidia driver? Instead of complaining, I would suggest you supply nvidia with the output of nvidia-bug-report.sh on their forums so that they can resolve the issue.

Offline

#57 2018-02-12 19:05:42

Tom B
Member
Registered: 2014-01-15
Posts: 187
Website

Re: Terrible performance regression with Nvidia 390.25 driver

I've been trying various options to narrow down the issue. Unfortunately, no luck so far.

ForceFullCompositionPipeline - On/Off
export __GL_YIELD="USLEEP" - On/Off
export KWIN_TRIPLE_BUFFER=1 - On/Off

None of these made any difference for me. It's possible that enabling two or more at the same time has some effect, since I only tried them in isolation.
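
For anyone who wants to repeat the last two toggles, a sketch of one way to apply them (the env-script path simply follows elvisvinicius' earlier post; adjust to your setup):

#!/bin/sh
# ~/.config/plasma-workspace/env/kwin_env.sh - sourced before the Plasma session starts
export __GL_YIELD="USLEEP"
export KWIN_TRIPLE_BUFFER=1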

Offline

#58 2018-02-12 19:26:37

Nekroman
Member
Registered: 2011-10-02
Posts: 51

Re: Terrible performance regression with Nvidia 390.25 driver

As a workaround I've turned off "Use hardware acceleration when available" in Chromium settings and it's now working perfectly.
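
For a quick one-off test of the same workaround without changing the setting, Chromium can also be started with GPU compositing disabled (standard Chromium flag; only takes effect if no instance is already running):

chromium --disable-gpu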

Offline

#59 2018-02-13 01:09:36

blispx
Member
Registered: 2017-11-29
Posts: 53

Re: Terrible performance regression with Nvidia 390.25 driver

Problem when starting DRM:

[drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:32:crtc-0] flip_done timed out

kernel 4.15.2-2, nvidia 390.25-9, KMS enabled

There is no problem with 390.25-8, because it does not include 4.15-FS57305.patch.
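
For context on the "KMS enabled" part, this is the usual way nvidia kernel mode setting is turned on; just a sketch, not necessarily how it is configured here:

# Either pass nvidia-drm.modeset=1 on the kernel command line,
# or (as root) use a modprobe drop-in:
echo "options nvidia-drm modeset=1" > /etc/modprobe.d/nvidia-drm.conf

# After a reboot this should print Y:
cat /sys/module/nvidia_drm/parameters/modeset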

Last edited by blispx (2018-02-13 01:34:43)

Offline

#60 2018-02-13 03:31:55

take-
Member
Registered: 2015-08-26
Posts: 2

Re: Terrible performance regression with Nvidia 390.25 driver

Just going to hop on and say this happened to me too after the upgrade, and the (patched) older version fixed it.

Offline

#61 2018-02-13 04:52:15

blispx
Member
Registered: 2017-11-29
Posts: 53

Re: Terrible performance regression with Nvidia 390.25 driver

It was different for me: on linux 4.14.x & nvidia 387.x, KMS did not work.
It only worked on 4.15 & nvidia 390.25.1-8; on nvidia 390.25-9 it's the same story as above.

Offline

#62 2018-02-13 08:53:55

kokoko3k
Member
Registered: 2008-11-14
Posts: 2,420

Re: Terrible performance regression with Nvidia 390.25 driver

Reading various comments, I think the problem comes from the new "memory allocation bug workaround" present in the 390.xx series.
Older drivers had a bug that allocated a huge amount of memory for a particular type of texture; this is fixed (tried with Feral's Mad Max using the OpenGL renderer).

That said, since the issue affects multiple users but not everyone, and the nvidia devs are silent about it, maybe it would be helpful to post detailed system information, even things one might consider irrelevant, like the video card brand or the video connector used.
Also, starting the offending application (chromium?) in a clean and isolated session (plain Xorg without even a window manager) could help narrow down the issue.
If even firefox is giving problems, one could try downloading the 32-bit version and starting it to see if anything changes; since it uses a completely different set of libraries and CPU architecture, it might give you some interesting hints.
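
A minimal way to do that isolated test, as a sketch (assuming display :1 is free and chromium is the suspect application):

# Start a bare second X server with chromium as its only client
# (no window manager, no compositor); quitting chromium ends the session.
xinit /usr/bin/chromium -- :1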

My system, UNaffected:
Cpu: Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz
Motherboard: Asus Z97-K (chipset Intel Z97)
RAM: 16GB DDR3
Display configuration: Dual monitor setup, both 1280x1024@75hz, one connected via HDMI, other via DVI-D
GPU: Asus GTX 750Ti OC edition 2GB
Boot configuration: syslinux/Legacy/CSM (no UEFI)
VT configuration: 1280x1024@60hz
DE: plasma5
xorg.conf: none used
(Force)fullcompositionpipeline: used or not: no change.
Compositor: Used or not: no change
custom nvidia-settings "settings": none
system fully updated (nvidia driver 390.25,xorg-server 1.19.6+13+gd0d1a694f-1)
Kernel: linux 4.15.2-2 booted with the following parameters: "vga=775 intel_iommu=on nopti"
nvidia-smi output when opening https://edition.cnn.com/ with chromium and scrolling all the way down and back up:

koko@Gozer# nvidia-smi
Tue Feb 13 10:11:51 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.25                 Driver Version: 390.25                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 750 Ti  Off  | 00000000:01:00.0  On |                  N/A |
| 29%   33C    P0     2W /  38W |    348MiB /  2000MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       740      G   /usr/lib/xorg-server/Xorg                    169MiB |
|    0      3308      G   /bin/krunner                                  11MiB |
|    0      5122      G   ...-token=7E7E7A9D0C5CC5063E94B0451F75D375    46MiB |
|    0     21499      G   /usr/lib/firefox/firefox                      49MiB |
|    0     23572      G   /bin/plasmashell                              42MiB |
|    0     24138      G   /usr/lib/firefox/firefox                       1MiB |
|    0     30736      G   kwin_x11                                      18MiB |
+-----------------------------------------------------------------------------+

Last edited by kokoko3k (2018-02-13 09:12:15)


Help me to improve ssh-rdp !
Retroarch User? Try my koko-aio shader !

Offline

#63 2018-02-13 09:30:10

jaergenoth
Member
Registered: 2015-01-16
Posts: 85

Re: Terrible performance regression with Nvidia 390.25 driver

Just to chime in on this, I have a Reverse PRIME setup using the discrete nvidia card as the primary gpu (GTX960M).

The nvidia releases between 390.25-1 and 390.25-8 all had broken vsync, glxgears was running at 12000 fps. The 390.25-9 release fixed this.

I never had any of the other issues people are mentioning on this thread, for some reason.
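
In case it helps others on PRIME setups: whether vsync works there depends on PRIME Synchronization, which in turn needs nvidia KMS. A quick check, as a sketch (the xrandr property name may vary by setup):

# Y means nvidia-drm KMS is enabled
cat /sys/module/nvidia_drm/parameters/modeset
# Look for a "PRIME Synchronization" property on the active output
xrandr --prop | grep -A1 "PRIME Synchronization"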

Last edited by jaergenoth (2018-02-13 09:34:17)

Offline

#64 2018-02-13 09:43:51

Tom B
Member
Registered: 2014-01-15
Posts: 187
Website

Re: Terrible performance regression with Nvidia 390.25 driver

@kokoko3k

Good idea.

My system, Affected

CPU: AMD Threadripper 1950x
Motherboard: Gigabyte Aorus 7 X399
RAM: 32GB DDR4 3466mhz
GPU: Inno3d 980Ti Black [no other GPUs installed]
Boot Configuration: GRUB, UEFI.
Boot drive: m2. NVMe Samsung 960 evo
Video configuration: HDMI-0: 3840x2160 60hz, DP-4: 3840x2160 60hz
DE: Plasma 5
xorg.conf: none used
forcefullcompositionpipeline: On or off makes no difference
Compositor: on/off makes no difference
Nvidia control panel settings: Sync to Vblank [on], Allow flipping [on], Use Conformant Texture Clamping [on], Antialiasing "Use application settings"
Kernel Parameters: No custom options set
nvidia-smi output when opening https://edition.cnn.com/ with chromium and scrolling all the way down and back up:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.25                 Driver Version: 390.25                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980 Ti  Off  | 00000000:42:00.0  On |                  N/A |
| 36%   24C    P0    75W / 290W |   1053MiB /  6080MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       877      G   /usr/lib/xorg-server/Xorg                    521MiB |
|    0      1033      G   /usr/bin/kwin_x11                             91MiB |
|    0      1042      G   /usr/bin/krunner                               3MiB |
|    0      1045      G   /usr/bin/plasmashell                         229MiB |
|    0      1488      G   ...-token=BFDFADF9AFD0768206F966CC826AA289   198MiB |
+-----------------------------------------------------------------------------+

Affected applications are Chromium and 3D applications. Gaming under WINE is immediately noticeably worse, both in FPS and in stutter when things load.

Offline

#65 2018-02-13 10:13:51

loqs
Member
Registered: 2014-03-06
Posts: 18,039

Re: Terrible performance regression with Nvidia 390.25 driver

blispx wrote:

Problem when starting DRM:

[drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:32:crtc-0] flip_done timed out

kernel 4.15.2-2, nvidia 390.25-9, KMS enabled

there is no problem with it in 390.25-8, because there is no 4.15-FS57305.patch

Is this under Plasma 5?  Do you have an improved patch that fixes 57305 and 57401 without triggering the above?

Offline

#66 2018-02-13 10:28:50

kokoko3k
Member
Registered: 2008-11-14
Posts: 2,420

Re: Terrible performance regression with Nvidia 390.25 driver

Tom B:
Look at the nvidia-smi chromium memory consumption:
Yours:

|    0      1488      G   ...-token=BFDFADF9AFD0768206F966CC826AA289   198MiB

Mine:

|    0      5122      G   ...-token=7E7E7A9D0C5CC5063E94B0451F75D375    46MiB 

Is the CNN page the only tab open in chromium?
Could you please repeat with a lower screen resolution, one closer to mine (1280x1024; 1.3 Mpixels)?
If it remains the same, that could support my theory of some memory allocation weirdness.
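
An easy way to do that comparison without changing the desktop mode is to constrain the Chromium window size (standard Chromium flag; it only takes effect if no instance is already running):

chromium --window-size=1280,1024 https://edition.cnn.com/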

Last edited by kokoko3k (2018-02-13 10:31:10)


Help me to improve ssh-rdp !
Retroarch User? Try my koko-aio shader !

Offline

#67 2018-02-13 10:39:07

Tom B
Member
Registered: 2014-01-15
Posts: 187
Website

Re: Terrible performance regression with Nvidia 390.25 driver

As you suspected, the screen resolution I was using causes higher memory usage (I also had this thread open in another tab). I was running it maximised at 4K. Running at 1/4 of the screen (roughly 1080p, minus some height to account for my taskbar) it shows

|    0      1488      G   ...-token=BFDFADF9AFD0768206F966CC826AA289    88MiB |

I do have a couple of addons loaded as well (adblock, markdown here) which, along with the slightly higher resolution, likely accounts for the 46 MiB -> 88 MiB usage.

Offline

#68 2018-02-13 14:01:18

blispx
Member
Registered: 2017-11-29
Posts: 53

Re: Terrible performance regression with Nvidia 390.25 driver

loqs wrote:
blispx wrote:

Problem when starting DRM:

[drm:drm_atomic_helper_wait_for_dependencies [drm_kms_helper]] *ERROR* [CRTC:32:crtc-0] flip_done timed out

kernel 4.15.2-2, nvidia 390.25-9, KMS enabled

there is no problem with it in 390.25-8, because there is no 4.15-FS57305.patch

Is this under Plasma 5?  Do you have an improved patch that fixes 57305 and 57401 without triggering the above?


Gnome, but the desktop environment does not matter, because this error comes from DRM KMS and the kernel.

Isn't this the same case as 4.15-FS57305.patch?

Last edited by blispx (2018-02-13 14:02:58)

Offline

#69 2018-02-13 14:22:25

kokoko3k
Member
Registered: 2008-11-14
Posts: 2,420

Re: Terrible performance regression with Nvidia 390.25 driver

Tom B wrote:

As you suspected, the screen resolution I was using causes higher memory usage (I also had this thread open in another tab). I was running it maximised at 4K. Running at 1/4 of the screen (roughly 1080p, minus some height to account for my taskbar) it shows

|    0      1488      G   ...-token=BFDFADF9AFD0768206F966CC826AA289    88MiB |

I do have a couple of addons loaded as well (adblock, markdown here) which, along with the slightly higher resolution, likely accounts for the 46 MiB -> 88 MiB usage.

OK, another shot in the dark: could you please try compiling the following?
(I slightly modified the source found here:
https://www.khronos.org/opengl/wiki/Pro … _a_Pixmap)

#include<stdio.h>
#include<stdlib.h>
#include<string.h>
#include<X11/Xlib.h>
#include<GL/gl.h>
#include<GL/glx.h>
#include<GL/glu.h>
#include <unistd.h>

Display                 *dpy;
Window                  root;
GLint                   att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
XVisualInfo             *vi;
XSetWindowAttributes    swa;
Window                  win;
GLXContext              glc;
Pixmap                  pixmap;
int                     pixmap_width = 8192, pixmap_height = 8192;
GC                      gc;
XImage                  *xim;
GLuint                  texture_id;

void Redraw() {
 XWindowAttributes      gwa;

 XGetWindowAttributes(dpy, win, &gwa);
 glViewport(0, 0, gwa.width, gwa.height);
 glClearColor(0.3, 0.3, 0.3, 1.0);
 glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

 glMatrixMode(GL_PROJECTION);
 glLoadIdentity();
 glOrtho(-1.25, 1.25, -1.25, 1.25, 1., 20.);

 glMatrixMode(GL_MODELVIEW);
 glLoadIdentity();
 gluLookAt(0., 0., 10., 0., 0., 0., 0., 1., 0.);

 glColor3f(1.0, 1.0, 1.0);

 glBegin(GL_QUADS);
  glTexCoord2f(0.0, 0.0); glVertex3f(-1.0,  1.0, 0.0);
  glTexCoord2f(1.0, 0.0); glVertex3f( 1.0,  1.0, 0.0);
  glTexCoord2f(1.0, 1.0); glVertex3f( 1.0, -1.0, 0.0);
  glTexCoord2f(0.0, 1.0); glVertex3f(-1.0, -1.0, 0.0);
 glEnd(); 

 glXSwapBuffers(dpy, win); }

/*                */
/*  MAIN PROGRAM  */
/*                */
int main(int argc, char *argv[]) {
 XEvent         xev;

 dpy = XOpenDisplay(NULL);
 
 if(dpy == NULL) {
        printf("\n\tcannot open display\n\n");
        exit(0); }
        
 root = DefaultRootWindow(dpy);
 
 vi = glXChooseVisual(dpy, 0, att);

 if(vi == NULL) {
        printf("\n\tno appropriate visual found\n\n");
        exit(0); }
        
 swa.event_mask = ExposureMask | KeyPressMask;
 swa.colormap   = XCreateColormap(dpy, root, vi->visual, AllocNone);

 win = XCreateWindow(dpy, root, 0, 0, 600, 600, 0, vi->depth, InputOutput, vi->visual, CWEventMask  | CWColormap, &swa);
 XMapWindow(dpy, win);
 XStoreName(dpy, win, "PIXMAP TO TEXTURE");

 glc = glXCreateContext(dpy, vi, NULL, GL_TRUE);

 if(glc == NULL) {
        printf("\n\tcannot create gl context\n\n");
        exit(0); }

 glXMakeCurrent(dpy, win, glc);
 glEnable(GL_DEPTH_TEST);
 
 /* CREATE A PIXMAP AND DRAW SOMETHING */

 pixmap = XCreatePixmap(dpy, root, pixmap_width, pixmap_height, vi->depth);
 gc = DefaultGC(dpy, 0);

 XSetForeground(dpy, gc, 0x00c0c0);
 XFillRectangle(dpy, pixmap, gc, 0, 0, pixmap_width, pixmap_height);

 XSetForeground(dpy, gc, 0x000000);
 XFillArc(dpy, pixmap, gc, 15, 25, 50, 50, 0, 360*64);

 XSetForeground(dpy, gc, 0x0000ff);
 XDrawString(dpy, pixmap, gc, 10, 15, "PIXMAP TO TEXTURE", strlen("PIXMAP TO TEXTURE"));

 XSetForeground(dpy, gc, 0xff0000);
 XFillRectangle(dpy, pixmap, gc, 75, 75, 45, 35);

 XFlush(dpy);
 xim = XGetImage(dpy, pixmap, 0, 0, pixmap_width, pixmap_height, AllPlanes, ZPixmap);

 if(xim == NULL) {
        printf("\n\tximage could not be created.\n\n"); }

 /*     CREATE TEXTURE FROM PIXMAP */

 glEnable(GL_TEXTURE_2D);
 glGenTextures(1, &texture_id);
 glBindTexture(GL_TEXTURE_2D, texture_id);
 glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
 glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pixmap_width, pixmap_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, (void*)(&(xim->data[0])));

 XDestroyImage(xim);

 /* Wait for the first event (the initial Expose), draw once, then clean up and exit. */
 XNextEvent(dpy, &xev);
 Redraw();

 glXMakeCurrent(dpy, None, NULL);
 glXDestroyContext(dpy, glc);
 XDestroyWindow(dpy, win);
 XCloseDisplay(dpy);
 exit(0);
}

Compile it this way:

gcc -o pixmap pixmap.c -lX11 -lGL -lGLU

...then see how much time it takes to run the following:

time for i in $(seq 1 10) ; do ./pixmap ; done

My results:

real    0m4,216s
user    0m1,265s
sys     0m2,629s

Help me to improve ssh-rdp !
Retroarch User? Try my koko-aio shader !

Offline

#70 2018-02-13 15:06:59

exaos
Member
Registered: 2012-03-18
Posts: 17

Re: Terrible performance regression with Nvidia 390.25 driver

Too many problems with the NVIDIA drivers! It drives me crazy. :-(

I don't know how to describe all of them, so I'll just start with a maybe-obvious one: how do I solve these error messages in the Xorg logs?

$ grep EE /var/log/Xorg.?.log
/var/log/Xorg.0.log:	(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
/var/log/Xorg.0.log:[    95.471] (EE) NVIDIA(0): Failed to initialize the GLX module; please check in your X
/var/log/Xorg.0.log:[    95.471] (EE) NVIDIA(0):     log file that the GLX module has been loaded in your X
/var/log/Xorg.0.log:[    95.471] (EE) NVIDIA(0):     server, and that the module is the NVIDIA GLX module.  If
/var/log/Xorg.0.log:[    95.471] (EE) NVIDIA(0):     you continue to encounter problems, Please try
/var/log/Xorg.0.log:[    95.471] (EE) NVIDIA(0):     reinstalling the NVIDIA driver.
/var/log/Xorg.0.log:[    97.085] (EE) AIGLX: reverting to software rendering
/var/log/Xorg.0.log:[    97.371] (EE) Wacom Bamboo Pen Pen stylus: Invalid type 'cursor' for this device.
/var/log/Xorg.0.log:[    97.371] (EE) Wacom Bamboo Pen Pen stylus: Invalid type 'touch' for this device.
/var/log/Xorg.0.log:[    97.371] (EE) Wacom Bamboo Pen Pen stylus: Invalid type 'pad' for this device.
/var/log/Xorg.1.log:	(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
/var/log/Xorg.1.log:[   143.697] (EE) NVIDIA(0): Failed to initialize the GLX module; please check in your X
/var/log/Xorg.1.log:[   143.697] (EE) NVIDIA(0):     log file that the GLX module has been loaded in your X
/var/log/Xorg.1.log:[   143.697] (EE) NVIDIA(0):     server, and that the module is the NVIDIA GLX module.  If
/var/log/Xorg.1.log:[   143.697] (EE) NVIDIA(0):     you continue to encounter problems, Please try
/var/log/Xorg.1.log:[   143.697] (EE) NVIDIA(0):     reinstalling the NVIDIA driver.
/var/log/Xorg.1.log:[   144.719] (EE) AIGLX: reverting to software rendering

  • Kernel: 4.15.2
  • nvidia-dkms: 390.25-9

Second problem:
The screen goes black when the machine wakes from suspend, etc.

Offline

#71 2018-02-13 15:11:15

V1del
Forum Moderator
Registered: 2012-10-16
Posts: 23,196

Re: Terrible performance regression with Nvidia 390.25 driver

Post complete logs; grepping for EE loses all the context that might be needed to help fix this. In addition to a complete log, post:

pacman -Qs nvidia
lspci -k

Last edited by V1del (2018-02-13 15:11:56)

Offline

#72 2018-02-13 15:22:26

Tom B
Member
Registered: 2014-01-15
Posts: 187
Website

Re: Terrible performance regression with Nvidia 390.25 driver

Here's what I get

real    0m5.512s
user    0m0.888s
sys     0m3.000s

Interestingly, I have a higher `real` but a lower `user`, though the numbers aren't vastly different.

If it matters, watching nvidia-smi while this was running, pixmap used 3 MiB of VRAM and peaked at 30% GPU utilization.


edit: I'm getting

19083 frames in 5.0 seconds = 3816.577 FPS

In GLXGears. Unfortunately I didn't try it before the 390 driver, but it seems low from memory; I seem to remember getting that years ago. According to this thread https://bbs.archlinux.org/viewtopic.php?id=35726, someone using a 10-year-old mid-range card got 6000.
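
As a side note, glxgears numbers depend heavily on vsync and the compositor state, so raw comparisons between machines say little. To take the driver's vsync out of the picture for a single run (NVIDIA driver environment variable):

__GL_SYNC_TO_VBLANK=0 glxgears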

Out of interest, what do you get on your 750ti?

Last edited by Tom B (2018-02-13 15:35:12)

Offline

#73 2018-02-13 15:36:12

loqs
Member
Registered: 2014-03-06
Posts: 18,039

Re: Terrible performance regression with Nvidia 390.25 driver

blispx wrote:

Gnome, but the desktop environment does not matter, because this error comes from DRM KMS and the kernel.

Isn't this the same case as 4.15-FS57305.patch?

I asked about the desktop environment as I did not know when the error was generated in relation to other components starting up.
On this system I can use the patch with nvidia-drm.modeset=1 without any error. Does your system also suffer from the performance regression with 390.25?

Offline

#74 2018-02-13 15:58:35

blispx
Member
Registered: 2017-11-29
Posts: 53

Re: Terrible performance regression with Nvidia 390.25 driver

It suffers mainly from bad vsync; for performance-related reasons I cannot use CompositionPipeline in xorg.conf.

Last edited by blispx (2018-02-13 15:59:06)

Offline

#75 2018-02-13 16:04:33

kokoko3k
Member
Registered: 2008-11-14
Posts: 2,420

Re: Terrible performance regression with Nvidia 390.25 driver

Tom B wrote:

[...]

Out of interest, what do you get on your 750ti?

About 35k without the compositor, 20k with the compositor, and 5k with the compositor and a maximized window.
I don't think it's relevant at all...


Help me to improve ssh-rdp !
Retroarch User? Try my koko-aio shader !

Offline
