
#176 2025-05-09 18:27:33

seth
Member
Registered: 2012-09-03
Posts: 65,061

Re: nvidia-390xx AUR package discussion thread

seth also wrote:

monitor the nvidia-390xx-dkms package for gcc15 patch

It's already slushing around, and you don't need a patch for the 6.15 kernel - if in doubt, you'd likely be better off holding on to the LTS kernel and the 390xx drivers (performance-wise)

Offline

#177 2025-05-09 19:14:16

aldolat
Member
Registered: 2022-07-15
Posts: 17

Re: nvidia-390xx AUR package discussion thread

An update regarding my post here: I successfully compiled and installed the nvidia-390xx-dkms package by adding bufferunderrun's gcc-15.patch (linked here) to the PKGBUILD. Then I re-enabled updates for:

linux
linux-headers
gcc
gcc-libs

and updated the whole system without any error.

Offline

#178 2025-05-11 02:15:00

drankinatty
Member
From: Nacogdoches, Texas
Registered: 2009-04-24
Posts: 88
Website

Re: nvidia-390xx AUR package discussion thread

@Seth, the nvidia-390xx "fix build with kernel 6.3" patch looks exactly like what will be required. I wonder why the AUR 390xx driver never fully incorporated that patch?

In the current kernel-6.3.patch for the AUR 390xx driver, kernel/common/inc/nv-mm.h is never modified. It will take some significant testing to see whether we can now incorporate the missing pieces of the 6.3 patch and make them work with the changes needed for 6.15. If we can, then the backport of Joan's 470 patch with vm_flags_reset(vma, vma->vm_flags | flags) can be incorporated directly.

When 6.15 appears we'll see if it builds as is, but I suspect these inconsistencies will need to be ironed out. Has anyone tried incorporating the complete linked 6.3 kernel patch with the current or LTS kernel yet?

Last edited by drankinatty (2025-05-11 03:26:57)


David C. Rankin, J.D.,P.E.

Offline

#179 2025-05-13 02:03:49

canolucas
Member
Registered: 2010-05-23
Posts: 59

Re: nvidia-390xx AUR package discussion thread

I'm taking a look at the linux-6.3.patch that was uploaded to the AUR package. It looks like almost all the changes from the PLD Linux patch are already included in our Arch Linux patch.

The difference is that the PLD Linux patch adds the following functions:

++static inline void nv_vm_flags_set(struct vm_area_struct *vma, vm_flags_t flags)
++{
++#if defined(NV_VM_AREA_STRUCT_HAS_CONST_VM_FLAGS)
++    vm_flags_set(vma, flags);
++#else
++    vma->vm_flags |= flags;
++#endif
++}
++
++static inline void nv_vm_flags_clear(struct vm_area_struct *vma, vm_flags_t flags)
++{
++#if defined(NV_VM_AREA_STRUCT_HAS_CONST_VM_FLAGS)
++    vm_flags_clear(vma, flags);
++#else
++    vma->vm_flags &= ~flags;
++#endif
++}

But it looks like we added similar functions in nv-linux.h (the first file modified in the Arch Linux patch):

+#if LINUX_VERSION_CODE < KERNEL_VERSION(6, 3, 0)
+// Rel. commit "mm: introduce vma->vm_flags wrapper functions" (Suren Baghdasaryan, 26 Jan 2023)
+static inline void vm_flags_set(struct vm_area_struct *vma, vm_flags_t flags)
+{
+    vma->vm_flags |= flags;
+}
+
+static inline void vm_flags_clear(struct vm_area_struct *vma, vm_flags_t flags)
+{
+    vma->vm_flags &= ~flags;
+}
+#endif
+

As far as I can see in the Arch Linux patch, our approach is different: instead of using wrapper functions prefixed with nv_, we use the functions defined in the Linux kernel directly (we just define them ourselves if the kernel version is < 6.3).

I think we are functionally covered; our code doesn't really differ from Joan's. Yes, the wrapper function names differ (vm_flags_set vs nv_vm_flags_set, vm_flags_clear vs nv_vm_flags_clear), but both pairs of wrappers just add or remove the given flags (vma->vm_flags |= flags and vma->vm_flags &= ~flags, respectively).

So, what would be the problem with the upcoming Linux kernel release? I don't see much code breakage. Yes, Joan's 6.15 patch relies on the functions nv_vm_flags_set and nv_vm_flags_clear being present, but what if we keep using the functions already defined in the Linux kernel directly (vm_flags_set and vm_flags_clear)?

On second thought, I agree that we are better off avoiding this divergence from Joan's code. We would probably face more merge conflicts or breakage in the future, and since Joan is the main maintainer keeping this old driver compatible with new kernel versions, we are better off staying in sync with his codebase.

The nv_ wrapper functions also give us more flexibility: we can handle compatibility issues inside our package instead of relying on the Linux kernel functions and being affected by possible changes there.

On the other hand, Joan's changes to the nv_vm_flags_set and nv_vm_flags_clear functions (which he created earlier in the 6.3 patch) are just a licensing thing; maybe they are not really needed at all. We will see when the next kernel version is released.

static inline void nv_vm_flags_set(struct vm_area_struct *vma, vm_flags_t flags)
 {
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 15, 0)
+    // Rel. commit "mm: uninline the main body of vma_start_write()" (Suren Baghdasaryan, 13 Feb 2025)
+    // Since Linux 6.15, vm_flags_set and vm_flags_clear call a GPL-only symbol
+    // for locking (__vma_start_write), which can't be called from non-GPL code.
+    // However, it appears all uses on the driver are on VMAs being initially
+    // mapped / which are already locked, so we can use vm_flags_reset, which
+    // doesn't lock the VMA, but rather just asserts it is already write-locked.
+    vm_flags_reset(vma, vma->vm_flags | flags);
+#else
     vm_flags_set(vma, flags);
+#endif
 }
 
 static inline void nv_vm_flags_clear(struct vm_area_struct *vma, vm_flags_t flags)
 {
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 15, 0)
+    // Rel. commit "mm: uninline the main body of vma_start_write()" (Suren Baghdasaryan, 13 Feb 2025)
+    // See above
+    vm_flags_reset(vma, vma->vm_flags & ~flags);
+#else
     vm_flags_clear(vma, flags);
+#endif
 }

EDIT: yup, looks like we will need Joan's code after all. Symbols exported with EXPORT_SYMBOL_GPL are only available to kernel modules that declare MODULE_LICENSE("GPL").
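To illustrate (a minimal, hypothetical out-of-tree module, not driver code): a symbol exported with EXPORT_SYMBOL_GPL(), such as the __vma_start_write() that the inline vm_flags_set()/vm_flags_clear() end up calling on 6.15+, can only be resolved by a module that declares a GPL-compatible license:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>

// Hypothetical demo module, only to illustrate the licensing constraint.
// Without a GPL-compatible MODULE_LICENSE, modpost/insmod refuse to resolve
// symbols exported with EXPORT_SYMBOL_GPL(), and since 6.15 the inline
// vm_flags_set()/vm_flags_clear() reach one of those (__vma_start_write).
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("GPL-only symbol licensing illustration");

static int __init demo_init(void)
{
    pr_info("demo: loaded; GPL-only exports are visible to this module\n");
    return 0;
}

static void __exit demo_exit(void)
{
    pr_info("demo: unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);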

Last edited by canolucas (2025-05-13 03:14:46)

Offline

#180 2025-05-20 07:16:53

drankinatty
Member
From: Nacogdoches, Texas
Registered: 2009-04-24
Posts: 88
Website

Re: nvidia-390xx AUR package discussion thread

Thank YOU! I'm concerned that my own attempt to modify the code to use Joan's approach, incorporating the changes needed to bring the 390 driver in sync with the 470 driver, would not end well. I haven't taken a deep dive into the sources to sort it out yet, so it may not be that bad, but I'm a bit reluctant to just start tinkering with function renaming or adding new wrappers to the mix without a good baseline understanding of the complete sources.

If you have a good handle on what is needed, I'm happy to help test what you can put together. Your explanation above provides a good roadmap that I can use, if needed, to try and locate the current wrappers and see whether we just need to rename/modify the current functions in nv-mm.h or whether we need to add wrappers and leave the current content unchanged. Let me know either way.

I tend to agree that bringing our 6.3 patch in line with the complete kernel 6.3 patch, as it was originally, will likely make future maintenance and patch backports much easier. Backporting Joan's patches when the drivers' code diverges takes significantly more experience with the driver than many of those willing to help (me included) have. Thank you again for helping with these differences, and let me know how you want to proceed.


David C. Rankin, J.D.,P.E.

Offline

#181 2025-05-21 02:32:26

canolucas
Member
Registered: 2010-05-23
Posts: 59

Re: nvidia-390xx AUR package discussion thread

It seems there is a patch proposal in the comments of the AUR package, posted today by ventureo. Instead of relying on wrapper functions, it uses vm_flags_reset directly in place of the now-GPL-only vm_flags_set / vm_flags_clear when the Linux version is >= 6.15.

https://github.com/CachyOS/CachyOS-PKGB … c2d3dcd016
Yes, the approach is different (it doesn't use Joan's wrapper functions), but besides that it looks good to me. At least we have something to test when Linux 6.15 is released.
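To sketch the idea (the helper name and flag here are hypothetical, this is not the actual patch text), the direct approach amounts to something like this at each call site:

#include <linux/mm.h>
#include <linux/version.h>

// Hypothetical helper, only to sketch the direct-call approach from the proposal.
static inline void demo_mark_vma_io(struct vm_area_struct *vma)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 15, 0)
    // vm_flags_reset() only asserts that the VMA is already write-locked,
    // so it avoids the GPL-only locking path behind vm_flags_set().
    vm_flags_reset(vma, vma->vm_flags | VM_IO);
#else
    vm_flags_set(vma, VM_IO);
#endif
}

The trade-off, as noted, is that the call sites then diverge from Joan's wrapper-based tree.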

Last edited by canolucas (2025-05-21 02:36:58)

Offline

#182 2025-05-28 17:40:40

boomshalek
Member
Registered: 2007-10-12
Posts: 113

Re: nvidia-390xx AUR package discussion thread

Hi
First, thanks a lot for putting in all this work to keep our old cards running with the upstream kernel changes!
I have kernel-lts510 installed. There the build fails on dkms install:
(Error! Bad return status for module build on kernel: 5.10.236-1-lts510)
Unfortunately the make.log does not seem to be available.

On kernel-lts it installs fine.

Any ideas on how to get it to install on lts510?

TIA

Offline

#183 2025-05-28 18:25:46

seth
Member
Registered: 2012-09-03
Posts: 65,061

Re: nvidia-390xx AUR package discussion thread

It will be helpful to post the build log to see where and why it fails, but you might simply have to skip the patches for newer kernels?

Offline

#184 2025-05-28 20:43:14

boomshalek
Member
Registered: 2007-10-12
Posts: 113

Re: nvidia-390xx AUR package discussion thread

seth wrote:

It will be helpful to post the build log to see where and why it fails, but you might simply have to skip the patches for newer kernels?

Hi. Thank you for trying to help me.
I have had kernel-lts510 installed for years without related nvidia-390xx problems, so my question is which patch(es) might be the culprit(s).
I was thinking it was related to the recent gcc upgrades instead.

With kernel-lts and kernel-lts510 installed at the same time, I do find the make.log for the kernel-lts nvidia module, but no folder is created for the kernel-lts510 nvidia module where I could find the make.log for that build. Any hints on that?

Last edited by boomshalek (2025-05-28 20:44:32)

Offline

#185 2025-05-28 21:12:53

seth
Member
Registered: 2012-09-03
Posts: 65,061

Re: nvidia-390xx AUR package discussion thread

seth wrote:

It will be helpful to post the build log

Offline

#186 2025-05-30 17:07:07

boomshalek
Member
Registered: 2007-10-12
Posts: 113

Re: nvidia-390xx AUR package discussion thread

My bad; I had been using a user repository with a precompiled lts510 kernel, since it takes more than 4 hours to compile the kernel on my system.
That repo package had not been updated for about 30 days.

After I compiled lts510 myself today against all current system packages, nvidia-390xx-dkms built and installed fine.

Sorry seth, thank you anyway!

Offline

#187 2025-05-31 16:52:31

drankinatty
Member
From: Nacogdoches, Texas
Registered: 2009-04-24
Posts: 88
Website

Re: nvidia-390xx AUR package discussion thread

@Seth, yes, I just finished looking over https://github.com/CachyOS/CachyOS-PKGBUILDS/...2d3dcd016 and it left me scratching my head a bit. It would be great if we could avoid changing any of the wrappers in kernel/common/inc/nv-mm.h. I'm a bit concerned about ending up with a hybrid patch set: some of Joan's patches, some from CachyOS, all blended together. The only reason I say that is that future patches for the 470 driver would then not be a straightforward backport.

6.15 should hit testing shortly and we can see how it goes. Many thanks to @canolucas for getting the 6.15 patch. I look forward to testing it.


David C. Rankin, J.D.,P.E.

Offline

#188 2025-06-05 23:53:20

canolucas
Member
Registered: 2010-05-23
Posts: 59

Re: nvidia-390xx AUR package discussion thread

@drankinatty
Ok, so the patch has now been rewritten to match Joan's function nomenclature.
Here is the latest version of ventureo's patch: https://github.com/CachyOS/CachyOS-PKGB … 6.15.patch

By using this patch we will be calling our own wrapper functions, which in turn call vm_flags_reset or vm_flags_set/vm_flags_clear based on the kernel version. This has some advantages:
* we avoid breaking support for previous kernel versions
* we now match Joan's codebase, so it will be much easier to merge future patches as well

When 6.15 hits the core-testing repo, I think we are ready to test the patch in its current form. It looks good as far as I'm concerned.

Offline

#189 2025-06-06 00:43:43

canolucas
Member
Registered: 2010-05-23
Posts: 59

Re: nvidia-390xx AUR package discussion thread

===
Please ignore this post
===

I think the only portion of the original 6.3 patch that we are missing is the following check in conftest.sh:

+diff --color -ur NVIDIA-Linux-x86_64-390.157-no-compat32.orig/kernel/conftest.sh NVIDIA-Linux-x86_64-390.157-no-compat32/kernel/conftest.sh
+--- NVIDIA-Linux-x86_64-390.157-no-compat32.orig/kernel/conftest.sh	2022-10-11 18:00:50.000000000 +0200
++++ NVIDIA-Linux-x86_64-390.157-no-compat32/kernel/conftest.sh	2023-05-27 21:33:14.502405255 +0200
+@@ -4646,6 +4646,25 @@
+ 
+             compile_check_conftest "$CODE" "NV_ACPI_VIDEO_BACKLIGHT_USE_NATIVE" "" "functions"
+         ;;
++
++        vm_area_struct_has_const_vm_flags)
++            #
++            # Determine if the 'vm_area_struct' structure has
++            # const 'vm_flags'.
++            #
++            # A union of '__vm_flags' and 'const vm_flags' was added 
++            # by commit bc292ab00f6c ("mm: introduce vma->vm_flags
++            # wrapper functions") in mm-stable branch (2023-02-09)
++            # of the akpm/mm maintainer tree.
++            #
++            CODE="
++            #include <linux/mm_types.h>
++            int conftest_vm_area_struct_has_const_vm_flags(void) {
++                return offsetof(struct vm_area_struct, __vm_flags);
++            }"
++
++            compile_check_conftest "$CODE" "NV_VM_AREA_STRUCT_HAS_CONST_VM_FLAGS" "" "types"
++        ;;
+     esac
+ }
+

This would allow us to support even older kernel versions:

static inline void nv_vm_flags_set(struct vm_area_struct *vma, vm_flags_t flags)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 15, 0)
    // Rel. commit "mm: uninline the main body of vma_start_write()" (Suren Baghdasaryan, 13 Feb 2025)
    // Since Linux 6.15, vm_flags_set and vm_flags_clear call a GPL-only symbol
    // for locking (__vma_start_write), which can't be called from non-GPL code.
    // However, it appears all uses on the driver are on VMAs being initially
    // mapped / which are already locked, so we can use vm_flags_reset, which
    // doesn't lock the VMA, but rather just asserts it is already write-locked.
    vm_flags_reset(vma, vma->vm_flags | flags);
#else
    #if defined(NV_VM_AREA_STRUCT_HAS_CONST_VM_FLAGS)
        vm_flags_set(vma, flags);
    #else
        vma->vm_flags |= flags;
    #endif
#endif
}
static inline void nv_vm_flags_clear(struct vm_area_struct *vma, vm_flags_t flags)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 15, 0)
    // Rel. commit "mm: uninline the main body of vma_start_write()" (Suren Baghdasaryan, 13 Feb 2025)
    // See above
    vm_flags_reset(vma, vma->vm_flags & ~flags);
#else
    #if defined(NV_VM_AREA_STRUCT_HAS_CONST_VM_FLAGS)
        vm_flags_clear(vma, flags);
    #else
        vma->vm_flags &= ~flags;
    #endif
#endif
}

This would also support kernel versions < 6.3.
Linux 6.3 is the kernel version that included the following patch:
mm: introduce vma->vm_flags wrapper functions

So, Linux 6.3 was the first kernel version in which:

* the vm_flags field was changed to be read-only.
* the vm_flags_init, vm_flags_reset, vm_flags_set, vm_flags_clear and vm_flags_mod accessor functions were added.

On earlier kernel versions, the fallback branches above modify the flags directly (set / clear); that is allowed without the helper functions because prior to Linux 6.3 the vm_flags field is not read-only.

EDIT: Sorry, there is no real need to do this; we already define vm_flags_set / vm_flags_clear ourselves if needed:

+#if LINUX_VERSION_CODE < KERNEL_VERSION(6, 3, 0)
+// Rel. commit "mm: introduce vma->vm_flags wrapper functions" (Suren Baghdasaryan, 26 Jan 2023)
+static inline void vm_flags_set(struct vm_area_struct *vma, vm_flags_t flags)
+{
+    vma->vm_flags |= flags;
+}
+
+static inline void vm_flags_clear(struct vm_area_struct *vma, vm_flags_t flags)
+{
+    vma->vm_flags &= ~flags;
+}
+#endif
+

===
Please ignore this post
===

Last edited by canolucas (2025-06-12 23:14:40)

Offline

#190 2025-06-10 22:47:13

UpbeatBlacksmith
Member
Registered: 2025-05-04
Posts: 1

Re: nvidia-390xx AUR package discussion thread

Hi guys, while upgrading my system I broke my 390xx driver.

==> dkms install --no-depmod nvidia/390.157 -k 6.15.1-arch1-2

Error! Bad return status for module build on kernel: 6.15.1-arch1-2 (x86_64)
Consult /var/lib/dkms/nvidia/390.157/build/make.log for more information.
==> WARNING: `dkms install --no-depmod nvidia/390.157 -k 6.15.1-arch1-2' exited 10

This is the error I got. Last time this error appeared it was due to gcc-15, but I don't think that's the reason this time, as I already fixed that with the gcc-15.patch.
I tried upgrading the nvidia-390xx package, but the error was still there.

here is the make.log
https://justpaste.it/g4kcc

I think the error is due to

 #include "nv-misc.h" 

Here are the details of my system:
gcc : 15.1.1 20250425
linux : 6.15.1

Sorry if I posted this in the wrong format; it's the first time I'm asking a question here.
I hope somebody can help me :)

Edit: I found that the problem is with the latest kernel; I installed linux-lts 6.12.32.1-lts and the drivers built successfully.
Edit 2: Fixed after the kernel 6.15 patch.

Last edited by UpbeatBlacksmith (2025-06-11 19:07:16)

Offline

#191 2025-06-16 10:33:00

mess
Member
Registered: 2025-05-02
Posts: 5

Re: nvidia-390xx AUR package discussion thread

Upgraded my OS today to latest 6.15.2-arch1-1.
I also had to upgrade nvidia-390xx from the AUR; I did it with "git clone / makepkg", not yay, not sure if that matters.
I added the nvidia-drm.modeset=1 nvidia-drm.fbdev=1 args to the grub CMDLINE.
When I reboot, the nvidia modules are loaded OK, but "fbdev" is reported as an unknown parameter, so I guess that means I won't have nvidia on the console right now; it's not an issue for me.

$ sudo dmesg | grep fbdev
[    0.000000] Command line: BOOT_IMAGE=/vmlinuz-linux root=/dev/mapper/os-root rw loglevel=3 quiet nvidia-drm.modeset=1 nvidia-drm.fbdev=1
[    0.045651] Kernel command line: BOOT_IMAGE=/vmlinuz-linux root=/dev/mapper/os-root rw loglevel=3 quiet nvidia-drm.modeset=1 nvidia-drm.fbdev=1
[    3.632611] nvidia_drm: unknown parameter 'fbdev' ignored

Other than this, the driver now works fine for me, so I just wanted to thank everyone who was involved in investigating and fixing this. THANK YOU GUYS, Have a nice day all of you!

Last edited by mess (2025-06-16 10:39:55)

Offline

#192 2025-06-19 15:44:18

seth
Member
Registered: 2012-09-03
Posts: 65,061

Re: nvidia-390xx AUR package discussion thread

The fbdev parameter is not supported until, iirc, some 545 or 550 driver version; you can just remove it.

There's currently a discussion about removing the modeset hack that disables the simpledrm device from the kernel patches - do things still work on 390xx w/o "nvidia-drm.modeset=1"?

Offline

#193 2025-06-21 00:35:34

canolucas
Member
Registered: 2010-05-23
Posts: 59

Re: nvidia-390xx AUR package discussion thread

@seth, I just tested this. I removed the "nvidia-drm.modeset=1" kernel parameter and my system does NOT boot properly.
All I get is a black screen, so I guess it is necessary to boot with that kernel parameter for the driver to load properly.

I just read here that, besides enabling the modesetting APIs, all modeset=1 really does is initialize all GPUs immediately (instead of waiting for a client to open the /dev/nvidia* device files). This has pros and cons:

aplattner @ nvidia forums wrote:

Setting modeset=1 doesn’t actually install a framebuffer console. All it really does is enable the DRIVER_MODESET capability flag in the nvidia-drm devices so that DRM clients can use the various modesetting APIs. In addition to allowing clients that talk to the low-level DRM interface to work, it’s also necessary for some PRIME-related interoperability features.

The downside, if you want to call it that, is that loading nvidia-drm with modeset=1 causes it to configure and initialize all GPUs immediately rather than waiting for a client to open the /dev/nvidia* device files. This means that some options that require a userspace application to configure them before the GPUs are initialized won’t work if they were already configured by nvidia-drm. The big example at the moment is SLI Mosaic, which is enabled by the X driver if /etc/X11/xorg.conf says it should be.

Last edited by canolucas (2025-06-21 00:47:37)

Offline

#194 2025-06-21 12:25:52

seth
Member
Registered: 2012-09-03
Posts: 65,061

Re: nvidia-390xx AUR package discussion thread

I maybe should have been more elaborate.
You'll still absolutely need nvidia_drm.modeset to be enabled for kms, but the kernel parameter has a second function.
So the idea for a meaningful test would be to enable modesetting via a modprobe configlet, https://wiki.archlinux.org/title/Kernel … modprobe.d

options nvidia_drm modeset=1

(don't forget to regenerate the initramfs if you have the nvidia modules added there), but remove the kernel parameter (to allow the simpledrm device to step in)

Last edited by seth (2025-06-22 06:51:59)

Offline

#195 2025-06-21 22:25:00

mess
Member
Registered: 2025-05-02
Posts: 5

Re: nvidia-390xx AUR package discussion thread

@seth I think you probably meant

options nvidia-drm modeset=1

I removed the kernel option from grub, then tested with just options nvidia modeset=1 in /etc/modprobe.d/nvidia.conf (+ mkinitcpio, etc.); my card was not recognized by Xorg and the NVIDIA driver was not even loaded.
I changed nvidia to nvidia-drm, and then my card WAS recognized and the NVIDIA driver was loaded by Xorg, but:

Since I boot to a console, not to X, and start Xorg (XFCE) from the console with 'startxfce4', I did that; the NVIDIA driver kicked in, but Xorg.0.log said the driver could not get permission for modesetting, so Xorg exited.
Then I started Xorg with sudo, and Xorg started, but I got a black screen, and Xorg.0.log said the NVIDIA driver could not connect to the ACPI server.

I did not test starting my desktop with Xorg; if you want me to do that, I can.

All in all, when I have the kernel option in grub, I can start startxfce4 from userland, NVIDIA recognizes my card, and it can connect to ACPI.

Not sure if this helps much, let me know if you want me to test anything else.

Last edited by mess (2025-06-21 22:38:54)

Offline

#196 2025-06-22 06:55:03

seth
Member
Registered: 2012-09-03
Posts: 65,061

Re: nvidia-390xx AUR package discussion thread

Yes, sorry - the relevant module of course doesn't magically change ;)

Xorg.0.log said that the driver could not 'get a permission for modesetting', so Xorg exited.

Can you please post that log?

Then I started Xorg with sudo

"Wahhhh" - never do that! Make sure you're not left w/ root-owned files in your $HOME.

Offline

#197 2025-06-22 11:42:18

mess
Member
Registered: 2025-05-02
Posts: 5

Re: nvidia-390xx AUR package discussion thread

Don't worry, the sudo stunt was solely for the sake of the experiment. :-)

Here is my full log:


[    19.903] 
X.Org X Server 1.21.1.16
X Protocol Version 11, Revision 0
[    19.903] Current Operating System: Linux arcticmonkey 6.15.2-arch1-1 #1 SMP PREEMPT_DYNAMIC Tue, 10 Jun 2025 21:32:33 +0000 x86_64
[    19.903] Kernel command line: BOOT_IMAGE=/vmlinuz-linux root=/dev/mapper/os-root rw loglevel=3 quiet nvidia-drm.fbdev=1
[    19.903]  
[    19.903] Current version of pixman: 0.46.2
[    19.903]    Before reporting problems, check http://wiki.x.org
        to make sure that you have the latest version.
[    19.903] Markers: (--) probed, (**) from config file, (==) default setting,
        (++) from command line, (!!) notice, (II) informational,
        (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[    19.904] (==) Log file: "/althome/testuser/.local/share/xorg/Xorg.0.log", Time: Sun Jun 22 00:01:30 2025
[    19.907] (==) Using config directory: "/etc/X11/xorg.conf.d"
[    19.907] (==) Using system config directory "/usr/share/X11/xorg.conf.d"
[    19.908] (==) No Layout section.  Using the first Screen section.
[    19.909] (==) No screen section available. Using defaults.
[    19.909] (**) |-->Screen "Default Screen Section" (0)
[    19.909] (**) |   |-->Monitor "<default monitor>"
[    19.909] (==) No monitor specified for screen "Default Screen Section".
        Using a default monitor configuration.
[    19.909] (**) Allowing byte-swapped clients
[    19.909] (==) Automatically adding devices
[    19.909] (==) Automatically enabling devices
[    19.909] (==) Automatically adding GPU devices
[    19.909] (==) Automatically binding GPU devices
[    19.909] (==) Max clients allowed: 256, resource mask: 0x1fffff
[    19.911] (WW) The directory "/usr/share/fonts/misc" does not exist.
[    19.911]    Entry deleted from font path.
[    19.912] (WW) The directory "/usr/share/fonts/OTF" does not exist.
[    19.912]    Entry deleted from font path.
[    19.912] (WW) The directory "/usr/share/fonts/Type1" does not exist.
[    19.912]    Entry deleted from font path.
[    19.914] (WW) `fonts.dir' not found (or not valid) in "/usr/share/fonts/100dpi".
[    19.914]    Entry deleted from font path.
[    19.914]    (Run 'mkfontdir' on "/usr/share/fonts/100dpi").
[    19.915] (WW) `fonts.dir' not found (or not valid) in "/usr/share/fonts/75dpi".
[    19.915]    Entry deleted from font path.
[    19.915]    (Run 'mkfontdir' on "/usr/share/fonts/75dpi").
[    19.915] (==) FontPath set to:
        /usr/share/fonts/TTF
[    19.915] (==) ModulePath set to "/usr/lib/xorg/modules"
[    19.915] (II) The server relies on udev to provide the list of input devices.
        If no devices become available, reconfigure udev or disable AutoAddDevices.
[    19.915] (II) Module ABI versions:
[    19.915]    X.Org ANSI C Emulation: 0.4
[    19.915]    X.Org Video Driver: 25.2
[    19.915]    X.Org XInput driver : 24.4
[    19.915]    X.Org Server Extension : 10.0
[    19.917] (++) using VT number 1

[    19.917] (--) controlling tty is VT number 1, auto-enabling KeepTty
[    19.920] (II) systemd-logind: took control of session /org/freedesktop/login1/session/_31
[    19.925] (II) xfree86: Adding drm device (/dev/dri/card1)
[    19.925] (II) Platform probe for /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/drm/card1
[    19.926] (II) systemd-logind: got fd for /dev/dri/card1 226:1 fd 13 paused 0
[    19.928] (II) xfree86: Adding drm device (/dev/dri/card0)
[    19.928] (II) Platform probe for /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/simple-framebuffer.0/drm/card0
[    19.929] (II) systemd-logind: got fd for /dev/dri/card0 226:0 fd 14 paused 0
[    19.932] (**) OutputClass "nvidia" ModulePath extended to "/usr/lib/nvidia/xorg,/usr/lib/xorg/modules,/usr/lib/xorg/modules"
[    19.935] (--) PCI:*(1@0:0:0) 10de:107d:10de:094e rev 161, Mem @ 0xda000000/16777216, 0xd0000000/134217728, 0xd8000000/33554432, I/O @ 0x00003000/128, BIOS @ 0x????????/131072
[    19.935] (WW) Open ACPI failed (/var/run/acpid.socket) (No such file or directory)
[    19.935] (II) LoadModule: "glx"
[    19.936] (II) Loading /usr/lib/nvidia/xorg/libglx.so
[    20.001] (II) Module glx: vendor="NVIDIA Corporation"
[    20.001]    compiled for 4.0.2, module version = 1.0.0
[    20.002]    Module class: X.Org Server Extension
[    20.002] (II) NVIDIA GLX Module  390.157  Wed Oct 12 09:19:15 UTC 2022
[    20.003] (II) Applying OutputClass "nvidia" to /dev/dri/card1
[    20.003]    loading driver: nvidia
[    20.003] (==) Matched nouveau as autoconfigured driver 0
[    20.003] (==) Matched nv as autoconfigured driver 1
[    20.003] (==) Matched nvidia as autoconfigured driver 2
[    20.003] (==) Matched modesetting as autoconfigured driver 3
[    20.003] (==) Matched fbdev as autoconfigured driver 4
[    20.003] (==) Matched vesa as autoconfigured driver 5
[    20.003] (==) Assigned the driver to the xf86ConfigLayout
[    20.003] (II) LoadModule: "nouveau"
[    20.007] (WW) Warning, couldn't open module nouveau
[    20.007] (EE) Failed to load module "nouveau" (module does not exist, 0)
[    20.007] (II) LoadModule: "nv"
[    20.007] (WW) Warning, couldn't open module nv
[    20.007] (EE) Failed to load module "nv" (module does not exist, 0)
[    20.007] (II) LoadModule: "nvidia"
[    20.007] (II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so
[    20.014] (II) Module nvidia: vendor="NVIDIA Corporation"
[    20.014]    compiled for 4.0.2, module version = 1.0.0
[    20.014]    Module class: X.Org Video Driver
[    20.015] (II) LoadModule: "modesetting"
[    20.015] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so
[    20.017] (II) Module modesetting: vendor="X.Org Foundation"
[    20.017]    compiled for 1.21.1.16, module version = 1.21.1
[    20.017]    Module class: X.Org Video Driver
[    20.017]    ABI class: X.Org Video Driver, version 25.2
[    20.017] (II) LoadModule: "fbdev"
[    20.018] (WW) Warning, couldn't open module fbdev
[    20.018] (EE) Failed to load module "fbdev" (module does not exist, 0)
[    20.018] (II) LoadModule: "vesa"
[    20.018] (WW) Warning, couldn't open module vesa
[    20.018] (EE) Failed to load module "vesa" (module does not exist, 0)
[    20.019] (II) NVIDIA dlloader X Driver  390.157  Wed Oct 12 09:21:41 UTC 2022
[    20.019] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
[    20.019] (II) modesetting: Driver for Modesetting Kernel Drivers: kms
[    20.020] xf86EnableIO: failed to enable I/O ports 0000-03ff (Operation not permitted)
[    20.020] (II) systemd-logind: releasing fd for 226:0
[    20.021] (II) Loading sub module "fb"
[    20.021] (II) LoadModule: "fb"
[    20.021] (II) Module "fb" already built-in
[    20.021] (II) Loading sub module "wfb"
[    20.021] (II) LoadModule: "wfb"
[    20.021] (II) Loading /usr/lib/xorg/modules/libwfb.so
[    20.022] (II) Module wfb: vendor="X.Org Foundation"
[    20.022]    compiled for 1.21.1.16, module version = 1.0.0
[    20.022]    ABI class: X.Org ANSI C Emulation, version 0.4
[    20.022] (II) Loading sub module "ramdac"
[    20.022] (II) LoadModule: "ramdac"
[    20.023] (II) Module "ramdac" already built-in
[    20.025] (WW) Falling back to old probe method for modesetting
[    20.026] (WW) VGA arbiter: cannot open kernel arbiter, no multi-card support
[    20.026] (II) NVIDIA(0): Creating default Display subsection in Screen section
        "Default Screen Section" for depth/fbbpp 24/32
[    20.026] (==) NVIDIA(0): Depth 24, (==) framebuffer bpp 32
[    20.026] (==) NVIDIA(0): RGB weight 888
[    20.026] (==) NVIDIA(0): Default visual is TrueColor
[    20.026] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
[    20.027] (II) Applying OutputClass "nvidia" options to /dev/dri/card1
[    20.027] (**) NVIDIA(0): Option "AllowEmptyInitialConfiguration"
[    20.027] (**) NVIDIA(0): Enabling 2D acceleration
[    20.030] (--) NVIDIA(0): Valid display device(s) on GPU-0 at PCI:1:0:0
[    20.030] (--) NVIDIA(0):     DFP-0
[    20.030] (--) NVIDIA(0):     DFP-1
[    20.030] (--) NVIDIA(0):     DFP-2
[    20.030] (--) NVIDIA(0):     DFP-3 (boot)
[    20.033] (II) NVIDIA(0): NVIDIA GPU NVS 310 (GF119) at PCI:1:0:0 (GPU-0)
[    20.033] (--) NVIDIA(0): Memory: 524288 kBytes
[    20.033] (--) NVIDIA(0): VideoBIOS: 75.19.85.00.01
[    20.033] (II) NVIDIA(0): Detected PCI Express Link width: 16X
[    20.033] (EE) NVIDIA(GPU-0): Failed to acquire modesetting permission.
[    20.033] (EE) NVIDIA(0): Failing initialization of X screen 0
[    20.034] (II) UnloadModule: "nvidia"
[    20.034] (II) UnloadSubModule: "wfb"
[    20.034] (EE) Screen(s) found, but none have a usable configuration.
[    20.034] (EE) 
Fatal server error:
[    20.034] (EE) no screens found(EE) 
[    20.034] (EE) 
Please consult the The X.Org Foundation support 
         at http://wiki.x.org
 for help. 
[    20.034] (EE) Please also check the log file at "/althome/testuser/.local/share/xorg/Xorg.0.log" for additional information.
[    20.034] (EE) 
[    20.040] (EE) Server terminated with error (1). Closing log file.

Last edited by mess (2025-06-22 12:10:51)

Offline

#198 2025-06-22 12:13:53

seth
Member
Registered: 2012-09-03
Posts: 65,061

Re: nvidia-390xx AUR package discussion thread

Please use [code][/code] tags, not "quote" tags. Edit your post in this regard.
"nvidia-drm.fbdev=1" doesn't do anything on 390xx

If

cat /sys/module/nvidia_drm/parameters/modeset

says "Y" w/ only the modprobe.d config (but w/o the kernel parameter), then nvidia fails to take control from the simplydumb device (which is expected), resulting in this problem - and in that case removing the hack would effectively break kms (and thus GUI environments) for 390xx users.
Please make sure the parameter actually applied, and then I'm gonna issue another caveat reg. that effort.

Offline

#199 2025-06-22 14:55:10

mess
Member
Registered: 2025-05-02
Posts: 5

Re: nvidia-390xx AUR package discussion thread

Yes, I retested: when I remove the modeset parameter from the kernel command line, enable it in modprobe.d, and then reboot, that's when I get the NVIDIA modesetting permission problem.
I also checked at the same time:

cat /sys/module/nvidia_drm/parameters/modeset
Y

Last edited by mess (2025-06-22 14:55:28)

Offline

#200 2025-06-22 19:40:56

seth
Member
Registered: 2012-09-03
Posts: 65,061

Re: nvidia-390xx AUR package discussion thread

There doesn't seem to be much appetite to maintain the hack for the 390xx and 470xx drivers.
See whether you get away w/ "initcall_blacklist=simpledrm_platform_driver_init" instead of "nvidia_drm.modeset=1" (the modprobe.d config to actually enable modesetting of course needs to stay, we're just dancing around the simplydumb device)

Offline
