
#101 2018-02-14 15:47:16

loqs
Member
Registered: 2014-03-06
Posts: 17,196

Re: Terrible performance regression with Nvidia 390.25 driver

@Batou that patch became https://git.archlinux.org/svntogit/pack … ges/nvidia. nvidia/nvidia-dkms 390.25-9 or later includes it.
@deafeningsylence do you have the same issue if you use the linux-lts kernel with the nouveau driver?

Offline

#102 2018-02-14 15:47:35

Guiluge
Member
Registered: 2016-04-12
Posts: 9

Re: Terrible performance regression with Nvidia 390.25 driver

deafeningsylence wrote:

Quick update: changing to the nouveau driver did not solve the issue; the same lag/buffering every 2 seconds is present as before. I checked via lspci -v that the driver is in use, and it is.

Strange...
The nouveau driver is in use, but did you blacklist/uninstall the nvidia driver?
If not, that could indeed cause real problems.
Be sure to disable any Xorg configuration files related to nvidia, too.

Offline

#103 2018-02-14 15:50:30

deafeningsylence
Member
Registered: 2016-09-23
Posts: 52

Re: Terrible performance regression with Nvidia 390.25 driver

Guiluge wrote:
deafeningsylence wrote:

Quick update: changing to the nouveau driver did not solve the issue; the same lag/buffering every 2 seconds is present as before. I checked via lspci -v that the driver is in use, and it is.

Strange...
The nouveau driver is in use, but did you blacklist/uninstall the nvidia driver?
If not, that could indeed cause real problems.
Be sure to disable any Xorg configuration files related to nvidia, too.

I removed it with pacman -Rns and then removed the config from /etc/X11/xorg.conf.d/20-nvidia... . However, I did not blacklist the driver. I've reverted to nvidia for now.

Offline

#104 2018-02-14 15:52:46

deafeningsylence
Member
Registered: 2016-09-23
Posts: 52

Re: Terrible performance regression with Nvidia 390.25 driver

loqs wrote:

@Batou that patch became https://git.archlinux.org/svntogit/pack … ges/nvidia. nvidia/nvidia-dkms 390.25-9 or later includes it.
@deafeningsylence do you have the same issue if you use the linux-lts kernel with the nouveau driver?

I will try that later; currently I do not have the time to severely fuck up my system with a kernel change and its pitfalls big_smile

Offline

#105 2018-02-14 15:53:35

Guiluge
Member
Registered: 2016-04-12
Posts: 9

Re: Terrible performance regression with Nvidia 390.25 driver

deafeningsylence wrote:
Guiluge wrote:
deafeningsylence wrote:

Quick update, changing to nouveau drivers did not solve the issue, it is the same lag/buffering every 2 seconds present as before for me. I checked via lspci -v if the driver is in use and it is.

Strange...
Nouveau driver is in use, but did you blacklist / uninstall nvidia driver ?
If not, you may have some great issues, indeed.
Be sure to disable Xorg configuration files related to nvidia, too.

I removed it with pacman -Rns and then removed the config from /etc/X11/xorg.conf.d/20-nvidia... . However, I did not blacklist the driver. I've reverted to nvidia for now.

If the nvidia driver has been removed, there's no need to blacklist it then.

Offline

#106 2018-02-14 15:56:45

loqs
Member
Registered: 2014-03-06
Posts: 17,196

Re: Terrible performance regression with Nvidia 390.25 driver

deafeningsylence wrote:

I will try that later; currently I do not have the time to severely fuck up my system with a kernel change and its pitfalls big_smile

You do not need to remove the existing kernel; you can have multiple kernels installed. You would, however, need to remove the nvidia package, as its blacklist for nouveau affects all kernels.
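The suggestion above can be sketched as follows (a hedged sketch, assuming the stock Arch package names; run as root, and substitute nvidia-dkms if that is what is installed):

```shell
# Install linux-lts alongside the current kernel; both can coexist,
# and the bootloader gains a second entry to pick from.
pacman -S linux-lts linux-lts-headers

# Remove the nvidia package so its nouveau blacklist, which applies to
# every installed kernel, goes away with it.
pacman -Rns nvidia

# Verify no stray nouveau blacklist remains.
grep -r nouveau /etc/modprobe.d/ /usr/lib/modprobe.d/
```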

Offline

#107 2018-02-14 15:56:50

blispx
Member
Registered: 2017-11-29
Posts: 53

Re: Terrible performance regression with Nvidia 390.25 driver

loqs wrote:
blispx wrote:

I have built nvidia 390.25-10 without 4.15-FS57305.patch and KMS works

When using nvidia 390.25-10 without 4.15-FS57305.patch what is the output of

$ lsmod | grep drm
nvidia_drm             24576  4
drm                   466944  6 nvidia_drm
agpgart                49152  1 drm
nvidia_modeset       1097728  7 nvidia_drm

I have the nvidia-drm.modeset=1 kernel parameter set in GRUB, and nvidia, nvidia_modeset, nvidia_uvm, nvidia_drm in the mkinitcpio modules.
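For reference, that setup corresponds to config fragments along these lines (paths per a standard Arch install; illustrative, not a drop-in):

```shell
# /etc/default/grub -- add the parameter, then regenerate grub.cfg with:
#   grub-mkconfig -o /boot/grub/grub.cfg
GRUB_CMDLINE_LINUX_DEFAULT="quiet nvidia-drm.modeset=1"

# /etc/mkinitcpio.conf -- early-load the nvidia modules, then rebuild
# the initramfs with: mkinitcpio -p linux
MODULES="nvidia nvidia_modeset nvidia_uvm nvidia_drm"
```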

Last edited by blispx (2018-02-14 15:57:16)

Offline

#108 2018-02-14 15:57:21

deafeningsylence
Member
Registered: 2016-09-23
Posts: 52

Re: Terrible performance regression with Nvidia 390.25 driver

Maybe I'll set up another system on a second partition with Arch and nouveau + GNOME instead of nvidia + KDE. Also, I've been thinking: I use a Ryzen processor, so maybe it is not nvidia after all, but the fact that Intel support in Linux is much better than Ryzen support.

Last edited by deafeningsylence (2018-02-14 15:59:57)

Offline

#109 2018-02-14 16:02:53

Batou
Member
Registered: 2017-01-03
Posts: 259

Re: Terrible performance regression with Nvidia 390.25 driver

loqs wrote:

@Batou that patch became https://git.archlinux.org/svntogit/pack … ges/nvidia. nvidia/nvidia-dkms 390.25-9 or later includes it.

Thanks for the info and thanks for the patch!!! I remember when 4.15 landed in the repo, it was completely unusable with the nvidia binary driver. At least now it's usable.


Please vote for all the AUR packages you're using. You can mass-vote for all of them by doing: "pacman -Qqm | xargs aurvote -v" (make sure to run "aurvote --configure"  first)

Offline

#110 2018-02-14 16:43:12

loqs
Member
Registered: 2014-03-06
Posts: 17,196

Re: Terrible performance regression with Nvidia 390.25 driver

@blispx with the patch

nvidia_drm             45056  1
drm_kms_helper        163840  1 nvidia_drm
drm                   397312  4 nvidia_drm,drm_kms_helper
syscopyarea            16384  1 drm_kms_helper
sysfillrect            16384  1 drm_kms_helper
sysimgblt              16384  1 drm_kms_helper
fb_sys_fops            16384  1 drm_kms_helper
nvidia_modeset       1085440  4 nvidia_drm

Offline

#111 2018-02-14 17:11:50

Batou
Member
Registered: 2017-01-03
Posts: 259

Re: Terrible performance regression with Nvidia 390.25 driver

This is on mine:

$ lsmod | grep drm
nvidia_drm             45056  1
drm_kms_helper        200704  1 nvidia_drm
syscopyarea            16384  1 drm_kms_helper
sysfillrect            16384  1 drm_kms_helper
sysimgblt              16384  1 drm_kms_helper
fb_sys_fops            16384  1 drm_kms_helper
drm                   466944  4 nvidia_drm,drm_kms_helper
agpgart                49152  1 drm
nvidia_modeset       1097728  7 nvidia_drm

@loqs, one thing I don't understand... is it possible, and is there any benefit, to use an older Nvidia driver with 4.15? Say, 387.x with 4.15? Is the root cause in the kernel or in the driver?



Offline

#112 2018-02-14 17:21:06

loqs
Member
Registered: 2014-03-06
Posts: 17,196

Re: Terrible performance regression with Nvidia 390.25 driver

Issues have been reported with 390.25 on older kernels, so I would suspect the cause is 390.25.
Edit:
a version-guarded patch against 387.34 for kernel 4.15:

diff --git a/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/conftest.sh b/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/conftest.sh
index 6bf8f5e..be10812 100755
--- a/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/conftest.sh
+++ b/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/conftest.sh
@@ -2107,6 +2107,7 @@ compile_test() {
             #endif
             #include <drm/drm_atomic.h>
             #include <drm/drm_atomic_helper.h>
+            #include <linux/version.h>
             #if !defined(CONFIG_DRM) && !defined(CONFIG_DRM_MODULE)
             #error DRM not enabled
             #endif
@@ -2129,8 +2130,12 @@ compile_test() {
                 /* 2014-12-18 88a48e297b3a3bac6022c03babfb038f1a886cea */
                 i = DRIVER_ATOMIC;
 
+                #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
                 /* 2015-04-10 df63b9994eaf942afcdb946d27a28661d7dfbf2a */
                 for_each_crtc_in_state(s, c, cs, i) { }
+                #else
+                for_each_new_crtc_in_state(s, c, cs, i) {}
+                #endif
             }"
 
             compile_check_conftest "$CODE" "NV_DRM_ATOMIC_MODESET_AVAILABLE" "" "generic"
diff --git a/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-drm/nvidia-drm-connector.c b/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-drm/nvidia-drm-connector.c
index b834021..dec0245 100644
--- a/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-drm/nvidia-drm-connector.c
+++ b/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-drm/nvidia-drm-connector.c
@@ -33,6 +33,7 @@
 
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
+#include <linux/version.h>
 
 static void nvidia_connector_destroy(struct drm_connector *connector)
 {
@@ -107,7 +108,11 @@ nvidia_connector_detect(struct drm_connector *connector, bool force)
             break;
         }
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
         encoder = drm_encoder_find(dev, id);
+#else
+        encoder = drm_encoder_find(dev, NULL, id);
+#endif
 
         if (encoder == NULL)
         {
diff --git a/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-drm/nvidia-drm-crtc.c b/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-drm/nvidia-drm-crtc.c
index 33af2c7..2bd45ea 100644
--- a/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-drm/nvidia-drm-crtc.c
+++ b/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-drm/nvidia-drm-crtc.c
@@ -34,6 +34,7 @@
 
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
+#include <linux/version.h>
 
 static const u32 nv_default_supported_plane_drm_formats[] = {
     DRM_FORMAT_ARGB1555,
@@ -434,7 +435,11 @@ int nvidia_drm_get_crtc_crc32(struct drm_device *dev,
         goto done;
     }
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
     crtc = drm_crtc_find(dev, params->crtc_id);
+#else
+    crtc = drm_crtc_find(dev, NULL, params->crtc_id);
+#endif
     if (!crtc) {
         NV_DRM_DEV_LOG_DEBUG(nv_dev, "Unknown CRTC ID %d\n", params->crtc_id);
         ret = -ENOENT;
diff --git a/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-drm/nvidia-drm-modeset.c b/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-drm/nvidia-drm-modeset.c
index 116b14a..dc615d6 100644
--- a/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-drm/nvidia-drm-modeset.c
+++ b/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-drm/nvidia-drm-modeset.c
@@ -37,6 +37,7 @@
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
+#include <linux/version.h>
 
 #if defined(NV_DRM_MODE_CONFIG_FUNCS_HAS_ATOMIC_STATE_ALLOC)
 struct nvidia_drm_atomic_state {
@@ -252,7 +253,11 @@ static int drm_atomic_state_to_nvkms_requested_config(
 
     /* Loops over all crtcs and fill head configuration for changes */
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
     for_each_crtc_in_state(state, crtc, crtc_state, i)
+#else
+    for_each_new_crtc_in_state(state, crtc, crtc_state, i)
+#endif
     {
         struct nvidia_drm_crtc *nv_crtc;
         struct NvKmsKapiHeadRequestedConfig *head_requested_config;
@@ -303,7 +308,11 @@ static int drm_atomic_state_to_nvkms_requested_config(
 
             head_requested_config->flags.displaysChanged = NV_TRUE;
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
             for_each_connector_in_state(state, connector, connector_state, j) {
+#else
+            for_each_new_connector_in_state(state, connector, connector_state, j) {
+#endif
                 if (connector_state->crtc != crtc) {
                     continue;
                 }
@@ -324,7 +333,11 @@ static int drm_atomic_state_to_nvkms_requested_config(
 
     /* Loops over all planes and fill plane configuration for changes */
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
     for_each_plane_in_state(state, plane, plane_state, i)
+#else
+    for_each_new_plane_in_state(state, plane, plane_state, i)
+#endif
     {
         struct NvKmsKapiHeadRequestedConfig *head_requested_config;
 
@@ -634,7 +647,11 @@ void nvidia_drm_atomic_helper_commit_tail(struct drm_atomic_state *state)
          nvidia_drm_write_combine_flush();
     }
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
     for_each_crtc_in_state(state, crtc, crtc_state, i) {
+#else
+    for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
+#endif
         struct nvidia_drm_crtc *nv_crtc = DRM_CRTC_TO_NV_CRTC(crtc);
         struct nv_drm_crtc_state *nv_crtc_state = to_nv_crtc_state(crtc->state);
         struct nv_drm_flip *nv_flip = nv_crtc_state->nv_flip;
@@ -755,7 +772,11 @@ static void nvidia_drm_atomic_commit_task_callback(struct work_struct *work)
             "Failed to commit NvKmsKapiModeSetConfig");
     }
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
     for_each_crtc_in_state(state, crtc, crtc_state, i) {
+#else
+    for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
+#endif
         struct nvidia_drm_crtc *nv_crtc = DRM_CRTC_TO_NV_CRTC(crtc);
 
         if (wait_event_timeout(
diff --git a/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-modeset/nvidia-modeset-linux.c b/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-modeset/nvidia-modeset-linux.c
index edeb152..cd0ce2b 100644
--- a/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-modeset/nvidia-modeset-linux.c
+++ b/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia-modeset/nvidia-modeset-linux.c
@@ -21,6 +21,7 @@
 #include <linux/random.h>
 #include <linux/file.h>
 #include <linux/list.h>
+#include <linux/version.h>
 
 #include "nvstatus.h"
 
@@ -566,9 +567,17 @@ static void nvkms_queue_work(nv_kthread_q_t *q, nv_kthread_q_item_t *q_item)
     WARN_ON(!ret);
 }
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 static void nvkms_timer_callback(unsigned long arg)
+#else
+static void nvkms_timer_callback(struct timer_list * t)
+#endif
 {
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
     struct nvkms_timer_t *timer = (struct nvkms_timer_t *) arg;
+#else
+    struct nvkms_timer_t *timer = from_timer(timer, t, kernel_timer);
+#endif
 
     /* In softirq context, so schedule nvkms_kthread_q_callback(). */
     nvkms_queue_work(&nvkms_kthread_q, &timer->nv_kthread_q_item);
@@ -606,10 +615,16 @@ nvkms_init_timer(struct nvkms_timer_t *timer, nvkms_timer_proc_t *proc,
         timer->kernel_timer_created = NV_FALSE;
         nvkms_queue_work(&nvkms_kthread_q, &timer->nv_kthread_q_item);
     } else {
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
         init_timer(&timer->kernel_timer);
+#else
+        timer_setup(&timer->kernel_timer,nvkms_timer_callback,0);
+#endif
         timer->kernel_timer_created = NV_TRUE;
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
         timer->kernel_timer.function = nvkms_timer_callback;
         timer->kernel_timer.data = (unsigned long) timer;
+#endif
         mod_timer(&timer->kernel_timer, jiffies + NVKMS_USECS_TO_JIFFIES(usec));
     }
     spin_unlock_irqrestore(&nvkms_timers.lock, flags);
diff --git a/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia/nv.c b/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia/nv.c
index ad5091b..a469bf9 100644
--- a/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia/nv.c
+++ b/NVIDIA-Linux-x86_64-387.34-no-compat32/kernel/nvidia/nv.c
@@ -320,7 +320,11 @@ static irqreturn_t   nvidia_isr             (int, void *, struct pt_regs *);
 #else
 static irqreturn_t   nvidia_isr             (int, void *);
 #endif
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 static void          nvidia_rc_timer        (unsigned long);
+#else
+static void          nvidia_rc_timer        (struct timer_list *);
+#endif
 
 static int           nvidia_ctl_open        (struct inode *, struct file *);
 static int           nvidia_ctl_close       (struct inode *, struct file *);
@@ -2472,10 +2476,18 @@ nvidia_isr_bh_unlocked(
 
 static void
 nvidia_rc_timer(
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
     unsigned long data
+#else
+    struct timer_list * t
+#endif
 )
 {
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
     nv_linux_state_t *nvl = (nv_linux_state_t *) data;
+#else
+    nv_linux_state_t *nvl = from_timer(nvl, t, rc_timer);
+#endif
     nv_state_t *nv = NV_STATE_PTR(nvl);
     nvidia_stack_t *sp = nvl->sp[NV_DEV_STACK_TIMER];
 
@@ -3386,9 +3398,13 @@ int NV_API_CALL nv_start_rc_timer(
         return -1;
 
     nv_printf(NV_DBG_INFO, "NVRM: initializing rc timer\n");
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
     init_timer(&nvl->rc_timer);
     nvl->rc_timer.function = nvidia_rc_timer;
     nvl->rc_timer.data = (unsigned long) nvl;
+#else
+    timer_setup(&nvl->rc_timer,nvidia_rc_timer,0);
+#endif
     nv->rc_timer_enabled = 1;
     mod_timer(&nvl->rc_timer, jiffies + HZ); /* set our timeout for 1 second */
     nv_printf(NV_DBG_INFO, "NVRM: rc timer initialized\n");

Last edited by loqs (2018-02-14 18:02:41)
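The #if guards in the patch above all compare LINUX_VERSION_CODE against KERNEL_VERSION(4, 15, 0). The kernel macro simply packs major/minor/patch into one integer, (major << 16) + (minor << 8) + patch, so the comparison can be mimicked in plain shell to see which branch a given kernel would compile:

```shell
#!/bin/sh
# Mimic the kernel's KERNEL_VERSION(a,b,c) macro: (a << 16) + (b << 8) + c
kernel_version() {
    echo $(( ($1 << 16) + ($2 << 8) + $3 ))
}

threshold=$(kernel_version 4 15 0)

for v in "4 14 15" "4 15 0" "4 15 3"; do
    set -- $v
    code=$(kernel_version "$1" "$2" "$3")
    if [ "$code" -lt "$threshold" ]; then
        echo "$1.$2.$3 -> old API branch (e.g. init_timer)"
    else
        echo "$1.$2.$3 -> new API branch (e.g. timer_setup)"
    fi
done
```

Here a 4.14.x kernel takes the old init_timer/for_each_crtc_in_state paths, while 4.15.0 and later take the timer_setup/for_each_new_crtc_in_state paths.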

Offline

#113 2018-02-14 18:13:04

Batou
Member
Registered: 2017-01-03
Posts: 259

Re: Terrible performance regression with Nvidia 390.25 driver

@loqs thanks.

One thing I've noticed... 390.25 runs a lot cooler. Check this out:

[screenshot: GPU temperature readout]

I'm pretty sure my temps were in the 40s before this driver. Maybe all these performance issues are due to a change in how PowerMizer works? Mine's set to auto. Hmmm....



Offline

#114 2018-02-14 20:05:48

Tom B
Member
Registered: 2014-01-15
Posts: 187
Website

Re: Terrible performance regression with Nvidia 390.25 driver

For me it's hotter. I used to get 25-28; now it sits at 35 almost consistently.
It still maxes out at 41°C at full load, but it definitely idles hotter. I'm probably a special case, though, as I have a water-cooled build with overkill rad space.

Offline

#115 2018-02-14 20:23:17

blispx
Member
Registered: 2017-11-29
Posts: 53

Re: Terrible performance regression with Nvidia 390.25 driver

loqs wrote:

@blispx with the patch

nvidia_drm             45056  1
drm_kms_helper        163840  1 nvidia_drm
drm                   397312  4 nvidia_drm,drm_kms_helper
syscopyarea            16384  1 drm_kms_helper
sysfillrect            16384  1 drm_kms_helper
sysimgblt              16384  1 drm_kms_helper
fb_sys_fops            16384  1 drm_kms_helper
nvidia_modeset       1085440  4 nvidia_drm

That drm_kms_helper line indicates nvidia-drm.modeset=0; set nvidia-drm.modeset=1 on the kernel command line in your GRUB config, regenerate grub.cfg, and check again.

Or check directly that this reports Y:

sudo cat /sys/module/nvidia_drm/parameters/modeset
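A small wrapper around that check can make the result explicit; a sketch (the path argument is only there so the function can be pointed at any file, the real location being /sys/module/nvidia_drm/parameters/modeset):

```shell
#!/bin/sh
# Report whether nvidia-drm KMS is enabled, based on the sysfs parameter
# file. The path is a parameter so this can be aimed at the real file or
# at a test fixture.
kms_enabled() {
    param_file="${1:-/sys/module/nvidia_drm/parameters/modeset}"
    [ -r "$param_file" ] || { echo "unknown (module not loaded?)"; return 2; }
    case "$(cat "$param_file")" in
        Y) echo "enabled" ;;
        N) echo "disabled" ;;
        *) echo "unexpected value" ;;
    esac
}
```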

Last edited by blispx (2018-02-14 20:27:44)

Offline

#116 2018-02-14 20:27:19

loqs
Member
Registered: 2014-03-06
Posts: 17,196

Re: Terrible performance regression with Nvidia 390.25 driver

blispx wrote:

That drm_kms_helper line indicates nvidia-drm.modeset=0; set nvidia-drm.modeset=1 on the kernel command line in your GRUB config, regenerate grub.cfg, and check again.

$ sudo cat /sys/module/nvidia_drm/parameters/modeset 
Y

Will do a reboot and switch the option to 0 and see which modules are loaded then.
Edit:

$ lsmod | grep drm
nvidia_drm             45056  1
drm_kms_helper        163840  1 nvidia_drm
drm                   397312  4 nvidia_drm,drm_kms_helper
syscopyarea            16384  1 drm_kms_helper
sysfillrect            16384  1 drm_kms_helper
sysimgblt              16384  1 drm_kms_helper
fb_sys_fops            16384  1 drm_kms_helper
nvidia_modeset       1085440  4 nvidia_drm
 sudo cat /sys/module/nvidia_drm/parameters/modeset 
N

Edit2:
unpatched

$ lsmod | grep drm
nvidia_drm             24576  1
drm                   397312  3 nvidia_drm
nvidia_modeset       1085440  4 nvidia_drm
$ sudo cat /sys/module/nvidia_drm/parameters/modeset 
Y
$ lsmod | grep drm
nvidia_drm             24576  1
drm                   397312  3 nvidia_drm
nvidia_modeset       1085440  4 nvidia_drm
$ sudo cat /sys/module/nvidia_drm/parameters/modeset 
N

unpatched, modules not in initrd, nvidia-drm.modeset=1: drm_kms_helper is not loaded
unpatched, modules not in initrd, nvidia-drm.modeset=0: drm_kms_helper is not loaded
patched, modules not in initrd, nvidia-drm.modeset=1: drm_kms_helper is not loaded
patched, modules not in initrd, nvidia-drm.modeset=0: drm_kms_helper is not loaded

So the only time drm_kms_helper is loaded is when the modules are in the initrd and nvidia-drm.modeset=1, which indicates to me that KMS is in use in that combination but not in the others.
Anecdotally, that combination is also the only one that gives a slight glitch on the screen when a mode switch occurs.
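The module check used throughout these experiments boils down to looking for drm_kms_helper in lsmod output; a minimal sketch, reading the listing from stdin so sample output can be piped in:

```shell
#!/bin/sh
# Decide from `lsmod` output (on stdin) whether drm_kms_helper is loaded,
# which per the experiments above is the marker for KMS actually being used.
kms_marker() {
    if grep -q '^drm_kms_helper'; then
        echo "drm_kms_helper loaded: KMS likely in use"
    else
        echo "drm_kms_helper absent: KMS not in use"
    fi
}
# Example: lsmod | kms_marker
```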

Last edited by loqs (2018-02-14 20:48:04)

Offline

#117 2018-02-15 01:20:28

Batou
Member
Registered: 2017-01-03
Posts: 259

Re: Terrible performance regression with Nvidia 390.25 driver

Tom B wrote:

For me it's hotter. I used to get 25-28; now it sits at 35 almost consistently.
It still maxes out at 41°C at full load, but it definitely idles hotter. I'm probably a special case, though, as I have a water-cooled build with overkill rad space.

Well, I'm out of ideas, then. I was thinking that maybe the new driver underclocks heavily in automatic mode, but who knows what's really going on.



Offline

#118 2018-02-15 09:14:42

kokoko3k
Member
Registered: 2008-11-14
Posts: 2,390

Re: Terrible performance regression with Nvidia 390.25 driver

On another system I run an outdated Arch installation: same motherboard, but instead of a 750 Ti it has an EVGA GTX 1060/3GB.
So, instead of updating the whole system, I decided to make a basic archiso installation (no UEFI), fully updated with just LXDE, NetworkManager, Firefox, Chromium and the latest nvidia drivers (nouveau blacklisted on the kernel command line), and tried it on that system.

Guess what? No problems here either.
Is there a chance that it is a software issue that does not depend "entirely" on the drivers?

For you to test it, I made an image of the live USB I made.
It is quite handy, too (it works with archiso snapshots and allows you to save your customizations).
Everything runs as the root account (sorry), so to test Chromium you have to start it from lxterminal via "chromium --no-sandbox".

Decompress it and use dd to write it to a USB stick, 16GB at least.
When booting it, select the second entry (snapshot).

-> Download Live usb torrent file <-
Magnet link: magnet:?xt=urn:btih:LXIA6L5CPBUYSCH2O4OZPL2S7H5BXHSO


-EDIT-
Whoops, I left /etc/vconsole.conf with the Italian layout, sorry tongue

Last edited by kokoko3k (2018-02-15 09:34:12)


Help me to improve ssh-rdp !
Retroarch User? Try my koko-aio shader !

Offline

#119 2018-02-15 11:40:26

Roken
Member
From: South Wales, UK
Registered: 2012-01-16
Posts: 1,251

Re: Terrible performance regression with Nvidia 390.25 driver

Well, I've noticed another improvement. One of the main GPU tasks I use is Blender rendering, which would kill mouse updates as well as any GL stuff. With the latest driver, my mouse is a lot more responsive during rendering.

However, with the latest updates from testing, I still get less-than-optimal results on vsynctester, and I get the black desktop with pointer if I disable compton and enable ForceFullCompositingPipeline.

Back to compton with driver-level compositing disabled.

GTX 680, 4GB.

Last edited by Roken (2018-02-15 11:41:05)


Ryzen 5900X 12 core/24 thread - RTX 3090 FE 24 Gb, Asus Prime B450 Plus, 32Gb Corsair DDR4, Cooler Master N300 chassis, 5 HD (1 NvME PCI, 4SSD) + 1 x optical.
Linux user #545703

Offline

#120 2018-02-15 12:41:37

cirrus9
Member
Registered: 2016-04-15
Posts: 49

Re: Terrible performance regression with Nvidia 390.25 driver

The latest kernel and nvidia (from core, not testing) are better, but I still have some issues; the worst I notice mainly with Chromium, and that has been the case for me all along. I have a Gigabyte Z77-DH3 motherboard, 32GB RAM, an i7-3770 CPU, an EVGA GTX-1050, boot drive: Samsung 850 Pro, display: Samsung CF791 running at 3440x1440 at 100Hz, Cinnamon desktop, ForceFullCompositingPipeline off. I'm using systemd-boot.

Offline

#121 2018-02-15 14:24:15

krogen
Member
Registered: 2008-02-11
Posts: 13

Re: Terrible performance regression with Nvidia 390.25 driver

Nekroman wrote:

As a workaround I've turned off "Use hardware acceleration when available" in Chromium settings and it's now working perfect.

"Fixes" the Chromium problem for me also.

One thing that completely breaks with hardware acceleration on (with these newest drivers) is casting to a Chromecast: sound works but there is no video, and Chromium becomes extremely choppy.

Hardware:
Intel BOXDX79TO Motherboard
Intel Xeon E5-2670
GeForce GTX 1060 3GB
16GB RAM

Software:
nvidia 390.25-10
linux 4.15.3-1

Offline

#122 2018-02-15 15:50:42

Kabir
Member
From: India
Registered: 2016-12-06
Posts: 59

Re: Terrible performance regression with Nvidia 390.25 driver

cirrus9 wrote:

The latest kernel and nvidia (from core, not testing), are better, but I still have some issues.

Here too there is an improvement in the most recent NV 390 driver. Scrolling in Chromium has improved; it's not as fluid as it was with the NV 387 series, but better than earlier NV 390 versions. The only thing that annoys me is not being able to resume from suspend or hibernate without getting insane amounts of "suspend swap group failed / resume swap group failed" warnings in the Xorg logs. Also, I thought the Xid errors were resolved, as I wasn't getting them in the earlier versions of NV 390, but right now I just tried hibernate and on resume I got the following:

Feb 15 20:38:33 aries kernel: Suspending console(s) (use no_console_suspend to debug)
Feb 15 20:38:33 aries kernel: ACPI: Preparing to enter system sleep state S4
Feb 15 20:38:33 aries kernel: ACPI: EC: event blocked
Feb 15 20:38:33 aries kernel: ACPI: EC: EC stopped
Feb 15 20:38:33 aries kernel: PM: Saving platform NVS memory
Feb 15 20:38:33 aries kernel: Disabling non-boot CPUs ...
Feb 15 20:38:33 aries kernel: smpboot: CPU 1 is now offline
Feb 15 20:38:33 aries kernel: smpboot: CPU 2 is now offline
Feb 15 20:38:33 aries kernel: smpboot: CPU 3 is now offline
Feb 15 20:38:33 aries kernel: smpboot: CPU 4 is now offline
Feb 15 20:38:33 aries kernel: smpboot: CPU 5 is now offline
Feb 15 20:38:33 aries kernel: smpboot: CPU 6 is now offline
Feb 15 20:38:33 aries kernel: smpboot: CPU 7 is now offline
Feb 15 20:38:33 aries kernel: PM: Creating hibernation image:
Feb 15 20:38:33 aries kernel: PM: Need to copy 333295 pages
Feb 15 20:38:33 aries kernel: PM: Normal pages needed: 333295 + 1024, available pages: 1743362
Feb 15 20:38:33 aries kernel: PM: Restoring platform NVS memory
Feb 15 20:38:33 aries kernel: ACPI: EC: EC started
Feb 15 20:38:33 aries kernel: Enabling non-boot CPUs ...
Feb 15 20:38:33 aries kernel: x86: Booting SMP configuration:
Feb 15 20:38:33 aries kernel: smpboot: Booting Node 0 Processor 1 APIC 0x2
Feb 15 20:38:33 aries kernel:  cache: parent cpu1 should not be sleeping
Feb 15 20:38:33 aries kernel: CPU1 is up
Feb 15 20:38:33 aries kernel: smpboot: Booting Node 0 Processor 2 APIC 0x4
Feb 15 20:38:33 aries kernel:  cache: parent cpu2 should not be sleeping
Feb 15 20:38:33 aries kernel: CPU2 is up
Feb 15 20:38:33 aries kernel: smpboot: Booting Node 0 Processor 3 APIC 0x6
Feb 15 20:38:33 aries kernel:  cache: parent cpu3 should not be sleeping
Feb 15 20:38:33 aries kernel: CPU3 is up
Feb 15 20:38:33 aries kernel: smpboot: Booting Node 0 Processor 4 APIC 0x1
Feb 15 20:38:33 aries kernel:  cache: parent cpu4 should not be sleeping
Feb 15 20:38:33 aries kernel: CPU4 is up
Feb 15 20:38:33 aries kernel: smpboot: Booting Node 0 Processor 5 APIC 0x3
Feb 15 20:38:33 aries kernel:  cache: parent cpu5 should not be sleeping
Feb 15 20:38:33 aries kernel: CPU5 is up
Feb 15 20:38:33 aries kernel: smpboot: Booting Node 0 Processor 6 APIC 0x5
Feb 15 20:38:33 aries kernel:  cache: parent cpu6 should not be sleeping
Feb 15 20:38:33 aries kernel: CPU6 is up
Feb 15 20:38:33 aries kernel: smpboot: Booting Node 0 Processor 7 APIC 0x7
Feb 15 20:38:33 aries kernel:  cache: parent cpu7 should not be sleeping
Feb 15 20:38:33 aries kernel: CPU7 is up
Feb 15 20:38:33 aries kernel: ACPI: Waking up from system sleep state S4
Feb 15 20:38:33 aries kernel: usb usb1: root hub lost power or was reset
Feb 15 20:38:33 aries kernel: usb usb2: root hub lost power or was reset
Feb 15 20:38:33 aries kernel: ACPI: EC: event unblocked
Feb 15 20:38:33 aries kernel: sd 0:0:0:0: [sda] Starting disk
Feb 15 20:38:33 aries kernel: NVRM: GPU at PCI:0000:01:00: GPU-3a05f18a-2eff-9960-daf4-8c7fb0ef0d0d
Feb 15 20:38:33 aries kernel: NVRM: GPU Board Serial Number: 0422916002151
Feb 15 20:38:33 aries kernel: NVRM: Xid (PCI:0000:01:00): 32, Channel ID 00000000 intr 00008000
Feb 15 20:38:33 aries kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 15 20:38:33 aries kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 15 20:38:33 aries kernel: ata2: SATA link down (SStatus 4 SControl 300)
Feb 15 20:38:33 aries kernel: ata4: SATA link down (SStatus 4 SControl 300)
Feb 15 20:38:33 aries kernel: ata1.00: ACPI cmd f5/00:00:00:00:00:e0 (SECURITY FREEZE LOCK) filtered out
Feb 15 20:38:33 aries kernel: ata1.00: ACPI cmd b1/c1:00:00:00:00:e0 (DEVICE CONFIGURATION OVERLAY) filtered out
Feb 15 20:38:33 aries kernel: ata1.00: ACPI cmd f5/00:00:00:00:00:e0 (SECURITY FREEZE LOCK) filtered out
Feb 15 20:38:33 aries kernel: ata1.00: ACPI cmd b1/c1:00:00:00:00:e0 (DEVICE CONFIGURATION OVERLAY) filtered out
Feb 15 20:38:33 aries kernel: ata1.00: configured for UDMA/100
Feb 15 20:38:33 aries kernel: ata3.00: configured for UDMA/100
Feb 15 20:38:33 aries kernel: usb 1-6: reset low-speed USB device number 2 using xhci_hcd
Feb 15 20:38:33 aries kernel: usb 1-14: reset low-speed USB device number 3 using xhci_hcd
Feb 15 20:38:33 aries kernel: PM: Basic memory bitmaps freed
Feb 15 20:38:33 aries kernel: OOM killer enabled.
Feb 15 20:38:33 aries kernel: Restarting tasks ... done.
Feb 15 20:38:33 aries kernel: PM: hibernation exit
Feb 15 20:38:33 aries systemd-networkd[471]: eno1: Lost carrier
Feb 15 20:38:33 aries systemd-networkd[471]: eno1: DHCP lease lost
Feb 15 20:38:33 aries systemd-timesyncd[394]: No network connectivity, watching for changes.
Feb 15 20:38:33 aries systemd-sleep[6718]: System resumed.
Feb 15 20:38:33 aries systemd[1]: Started Hibernate.
Feb 15 20:38:33 aries systemd[1]: sleep.target: Unit not needed anymore. Stopping.
Feb 15 20:38:33 aries systemd[1]: Stopped target Sleep.
Feb 15 20:38:33 aries systemd[1]: Reached target Hibernate.
Feb 15 20:38:33 aries systemd[1]: hibernate.target: Unit not needed anymore. Stopping.
Feb 15 20:38:33 aries systemd[1]: Stopped target Hibernate.
Feb 15 20:38:33 aries systemd-logind[400]: Operation 'sleep' finished.
Feb 15 20:38:34 aries kernel: e1000e: eno1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
Feb 15 20:38:34 aries kernel: e1000e 0000:00:1f.6 eno1: 10/100 speed: disabling TSO
Feb 15 20:38:34 aries systemd-networkd[471]: eno1: Gained carrier
Feb 15 20:38:34 aries systemd-timesyncd[394]: Network configuration changed, trying to establish connection.
Feb 15 20:38:35 aries geoclue[719]: Failed to query location: Error resolving “location.services.mozilla.com”: Name or service not known
Feb 15 20:38:39 aries kernel: NVRM: Xid (PCI:0000:01:00): 8, Channel 00000010
Feb 15 20:38:40 aries systemd-networkd[471]: eno1: DHCPv4 address 192.168.1.3/24 via 192.168.1.1
Feb 15 20:38:40 aries systemd-networkd[471]: eno1: Configured
Feb 15 20:38:47 aries kernel: NVRM: Xid (PCI:0000:01:00): 8, Channel 00000010
Feb 15 20:38:55 aries kernel: NVRM: Xid (PCI:0000:01:00): 8, Channel 00000010
Feb 15 20:39:04 aries kernel: NVRM: Xid (PCI:0000:01:00): 8, Channel 00000010
Feb 15 20:39:08 aries systemd-logind[400]: System is rebooting.

With the 387 series, the Xid errors were normally 31 and 8; now it is 32 and 8. I'll post this in the nvidia forums and hope they fix it.

hardware info:
HP 802F Motherboard
Intel i7 6700
Nvidia Quadro K420
8GB Ram

kabir@aries:~|⇒  inxi -G
Graphics:  Card: NVIDIA GK107GL [Quadro K420]
           Display Server: X.Org 1.19.6 driver: nvidia Resolution: 1920x1080@60.00hz
           OpenGL: renderer: Quadro K420/PCIe/SSE2 version: 4.5.0 NVIDIA 390.25

Offline

#123 2018-02-15 16:06:04

loqs
Member
Registered: 2014-03-06
Posts: 17,196

Re: Terrible performance regression with Nvidia 390.25 driver

390.25-10: rebuild against linux 4.15.3 (just a sync rebuild to keep the module compatible with 4.15.3).
390.25-11: kernel 4.15.3-2 (as well as a sync for kernel config changes; also an increased module blacklist, as nvidiafb and rivafb were enabled in the kernel).
It would be surprising if either of those releases affected performance.

Offline

#124 2018-02-15 17:22:04

Batou
Member
Registered: 2017-01-03
Posts: 259

Re: Terrible performance regression with Nvidia 390.25 driver

@loqs is it possible to build the 387 drivers against 4.15? How difficult is that, and how do I find out what other packages are impacted by it?

I'm basically going to downgrade to 4.14 later today because 4.15 and 390 break CUDA. I'm getting weird error messages when compiling against the CUDA libs. Something's severely messed up with 390... it's an absolute mess. Not having CUDA work properly makes my computer useless for work.

PS: I'm starting to think that 390 is so messed up perhaps because of Nvidia's Spectre mitigations...



Offline

#125 2018-02-15 18:07:51

loqs
Member
Registered: 2014-03-06
Posts: 17,196

Re: Terrible performance regression with Nvidia 390.25 driver

@Batou, to downgrade you will need to put in IgnorePkg in pacman.conf every 390.25-versioned package, such as nvidia-utils, opencl-nvidia and nvidia-settings, plus either nvidia or nvidia-dkms.
To make a patched 387.34, ensure the nvidia or nvidia-dkms package is removed and all other nvidia packages are downgraded to 387.34:

$ mkdir nvidia
$ cd nvidia
$ curl -o 387.34-4.15.patch https://ptpb.pw/OBXC
$ curl -o PKGBUILD https://ptpb.pw/gsus
$ makepkg # then install either pacman -U nvidia-387.34-11-x86_64.pkg.tar.xz or pacman -U nvidia-dkms-387.34-11-x86_64.pkg.tar.xz

If you are not using the DKMS package, remember to run makepkg -Cf followed by pacman -U nvidia-387.34-11-x86_64.pkg.tar.xz after a kernel upgrade, before rebooting.
Alternatively, you can skip the rebuild, check whether the module fails to load, and only rebuild then.
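That rebuild-after-upgrade step can be sketched as a small check; the helper below only compares two version strings, which in real use would come from modinfo -F vermagic nvidia and uname -r (a hypothetical wrapper, not part of any package):

```shell
#!/bin/sh
# Return success (0) when the module was built for a different kernel
# than the one given, i.e. a rebuild is needed before rebooting.
needs_rebuild() {
    module_kernel="$1"   # e.g. first field of: modinfo -F vermagic nvidia
    running_kernel="$2"  # e.g. output of: uname -r
    [ "$module_kernel" != "$running_kernel" ]
}

if needs_rebuild "4.15.3-1-ARCH" "4.15.3-2-ARCH"; then
    echo "rebuild needed"   # run makepkg -Cf and pacman -U again
else
    echo "module matches running kernel"
fi
```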

Offline
