`systemctl hibernate` seems to work as expected (although it does log me out of my x session).
Means it doesn't - you don't have the resume hook in your initramfs.
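For reference, a minimal sketch of that setup, assuming a busybox-based mkinitcpio image and a swap partition (adjust to your own config):
# /etc/mkinitcpio.conf - "resume" goes after "udev" (and after lvm2/encrypt if you use them);
# it is not needed with the systemd hook, which handles resume itself
HOOKS=(base udev autodetect microcode modconf kms keyboard keymap consolefont block filesystems resume fsck)
# plus a matching kernel parameter, e.g. resume=UUID=<uuid-of-swap-partition>, then rebuild:
sudo mkinitcpio -P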
this issue only started occurring when I upgraded my cpu from a Ryzen 3 3100 to a Ryzen 5 5500
May 06 20:37:20 frostyarch kernel: DMI: To Be Filled By O.E.M. To Be Filled By O.E.M./B450M Pro4, BIOS P5.70 10/20/2022
nvidia-drm.modeset=l
"1", not "l" - get a better monospace font.
Can you sleep/wake fine when removing nvidia and running on nouveau?
Offline
Tried removing nvidia from modules. Still not working.
Going back to 535 (550 has problems).
Edit: 535 also has problems (nvidia-utils is at a different version).
550 problems: needs a new maintainer; requires a manual PKGBUILD edit for now.
Installed 525 instead.
Package list for your favorite AUR wrapper:
lib32-nvidia-525xx-utils lib32-opencl-nvidia-525xx libxnvctrl-525xx nvidia-525xx-dkms nvidia-525xx-utils opencl-nvidia-525xx
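For example, with yay as the wrapper (assuming yay; any AUR helper works the same way):
yay -S lib32-nvidia-525xx-utils lib32-opencl-nvidia-525xx libxnvctrl-525xx nvidia-525xx-dkms nvidia-525xx-utils opencl-nvidia-525xx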
Last edited by Tharbad (2025-05-09 01:09:16)
Offline
Haha point noted about the monospace font! That issue is resolved now. I thought I could edit my post saying this before anyone noticed haha.
I tried running on nouveau, which I installed via:
sudo pacman -R lib32-nvidia-utils lib32-opencl-nvidia nvidia-open nvidia-settings nvidia-utils opencl-nvidia steam python-py3nvml gwe
sudo pacman -S mesa lib32-mesa
sudo mv /etc/modprobe.d/nvidia.conf .
sudo mv /etc/X11/xorg.conf .
sudo pacman -S mesa lib32-mesa
And verified the installation via
> inxi -F
System:
Host: frostyarch Kernel: 6.14.4-arch1-2 arch: x86_64 bits: 64
Desktop: Qtile v: 0.31.1.dev0+g8666bfc8.d20250312 Distro: Arch Linux
Machine:
Type: Desktop Mobo: ASRock model: B450M Pro4 serial: <superuser required>
UEFI: American Megatrends v: P5.70 date: 10/20/2022
CPU:
Info: 6-core model: AMD Ryzen 5 5500 bits: 64 type: MT MCP cache: L2: 3 MiB
Speed (MHz): avg: 2918 min/max: 400/4268 cores: 1: 2918 2: 2918 3: 2918
4: 2918 5: 2918 6: 2918 7: 2918 8: 2918 9: 2918 10: 2918 11: 2918 12: 2918
Graphics:
Device-1: NVIDIA TU104 [GeForce RTX 2070 SUPER] driver: nouveau v: kernel
Display: x11 server: X.Org v: 21.1.16 with: Xwayland v: 24.1.6 driver: X:
loaded: modesetting unloaded: vesa dri: nouveau gpu: nouveau resolution:
1: 2560x1440~60Hz 2: N/A
API: EGL v: 1.5 drivers: nouveau,swrast
platforms: gbm,x11,surfaceless,device
API: OpenGL v: 4.5 compat-v: 4.3 vendor: mesa v: 25.0.5-arch1.1
renderer: NV164
Info: Tools: api: eglinfo,glxinfo x11: xdriinfo, xdpyinfo, xprop, xrandr
Many things seemed wonky with this install (for instance, only one of my monitors worked), but I was able to suspend and resume properly! At least kinda. Shortly after I resumed, my desktop environment became unresponsive (I could still move my mouse, but not click on stuff) and I could switch to TTYs, but they only showed a blinking underscore. All my graphical programs also crashed. Oddly, I could still change workspaces and click on tray icons in this state. In any case, this was very different from my black screens on nvidia. Here is the journalctl from that boot: https://0x0.st/8JyJ.txt. I tried it again with the same result, see https://0x0.st/8Jy3.txt.
As an aside, for all the people in this thread who have downgraded to an old nvidia driver version, what method did you use? I've been using https://gitlab.archlinux.org/archlinux/ … mmits/main as a guide to show me which nvidia drivers are compatible with which linux kernels, but when using the `downgrade` script (https://aur.archlinux.org/packages/downgrade) I find myself in a dependency mess where I need to match versions for a variety of packages. For example:
> sudo downgrade nvidia nvidia-utils lib32-nvidia-utils linux
:: Retrieving packages...
lib32-nvidia-utils-550.90.07-1-x86_64 39.4 MiB 10.7 MiB/s 00:04 [##############################################################] 100%
linux-6.9.3.arch1-1-x86_64 133.9 MiB 18.2 MiB/s 00:07 [##############################################################] 100%
nvidia-550.90.07-1-x86_64 40.8 MiB 16.3 MiB/s 00:02 [##############################################################] 100%
nvidia-utils-550.90.07-1-x86_64 220.9 MiB 17.5 MiB/s 00:13 [##############################################################] 100%
loading packages...
warning: downgrading package lib32-nvidia-utils (570.144-1 => 550.90.07-1)
warning: downgrading package linux (6.14.4.arch1-2 => 6.9.3.arch1-1)
warning: downgrading package nvidia-utils (570.144-3 => 550.90.07-1)
resolving dependencies...
looking for conflicting packages...
:: nvidia-550.90.07-1 and nvidia-open-570.144-3 are in conflict. Remove nvidia-open? [y/N] y
Packages (5) nvidia-open-570.144-3 [removal] lib32-nvidia-utils-550.90.07-1 linux-6.9.3.arch1-1 nvidia-550.90.07-1 nvidia-utils-550.90.07-1
Total Installed Size: 971.20 MiB
Net Upgrade Size: -211.23 MiB
:: Proceed with installation? [Y/n]
(4/4) checking keys in keyring [##############################################################] 100%
(4/4) checking package integrity [##############################################################] 100%
(4/4) loading package files [##############################################################] 100%
(4/4) checking for file conflicts [##############################################################] 100%
error: failed to commit transaction (conflicting files)
nvidia-utils: /usr/lib/libnvidia-egl-gbm.so exists in filesystem (owned by egl-gbm)
nvidia-utils: /usr/lib/libnvidia-egl-gbm.so.1 exists in filesystem (owned by egl-gbm)
nvidia-utils: /usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json exists in filesystem (owned by egl-gbm)
Errors occurred, no packages were upgraded.
I'm getting the sense there is a better way to do this. Of course, ideally I wouldn't downgrade at all, given that everything was working fine before I upgraded my CPU (and maybe did something else inadvertently at the same time to cause this)...
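Edit: for reference, one alternative sketch that avoids the version-matching dance would be to pull a matching set straight from the Arch Linux Archive with pacman -U and then pin it. The URLs below just reuse the versions from the attempt above, so treat them as an example, not a recommendation:
sudo pacman -U \
  https://archive.archlinux.org/packages/l/linux/linux-6.9.3.arch1-1-x86_64.pkg.tar.zst \
  https://archive.archlinux.org/packages/n/nvidia/nvidia-550.90.07-1-x86_64.pkg.tar.zst \
  https://archive.archlinux.org/packages/n/nvidia-utils/nvidia-utils-550.90.07-1-x86_64.pkg.tar.zst \
  https://archive.archlinux.org/packages/l/lib32-nvidia-utils/lib32-nvidia-utils-550.90.07-1-x86_64.pkg.tar.zst
# then keep pacman from pulling them forward on the next -Syu, in /etc/pacman.conf:
# IgnorePkg = linux nvidia nvidia-utils lib32-nvidia-utils
I'd guess the egl-gbm file conflict would still show up either way, since the older nvidia-utils apparently ships the libnvidia-egl-gbm files that the separate egl-gbm package now owns, so that would need sorting out carefully before forcing anything with --overwrite.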
Last edited by a-curious-crow (2025-05-08 00:25:02)
Offline
downgraded to an old nvidia driver version, what method did you use
Offline
(...) Very oddly, this issue only started occurring when I upgraded my cpu from a Ryzen 3 3100 to a Ryzen 5 5500 (...)
As another anec-data point, I am also on AM4, running a 5800X on a B550 chipset. I'm not convinced that's a factor, but I'll check when I changed over from an Intel CPU/board and compare dates just in case.
I still get some rare "clusters" of suspend issues. As has just happened, I can go for weeks without them and then get a couple within the space of a day or two. This was across (i.e. both before/after) a system update, but since the kernel and nvidia-related packages don't get upgraded I'm not sure that's critical.
$ pacman -Q nvidia; uname -r; nvidia-debugdump --list
nvidia-535xx-dkms 535.183.01-2
6.1.91-1-lts61
Found 1 NVIDIA devices
Device ID: 0
Device name: NVIDIA GeForce GTX 970 (*PrimaryCard)
GPU internal ID: GPU-bf73e6ad-0567-643c-5a9d-369b88e45323
Still responsive to ssh (as mentioned before).
Last boot: https://0x0.st/8JxW.txt
May 08 10:22:54 zeus kernel: ------------[ cut here ]------------
May 08 10:22:54 zeus kernel: WARNING: CPU: 14 PID: 272752 at /var/lib/dkms/nvidia/535.183.01/build/nvidia/nv.c:3947 nv_restore_user_channels+0x4e/0x1d0 [nvidia]
May 08 10:22:54 zeus kernel: Modules linked in: tls snd_seq_dummy snd_hrtimer snd_seq dm_snapshot dm_bufio nft_masq nft_ct nft_reject_ipv4 nf_reject_ipv4 nft_reject act_csum cls_u32 sch_htb nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables libcrc32c bridge stp llc nct6775 nct6775_core hwmon_vid hid_logitech_hidpp joydev mousedev snd_usb_audio snd_usbmidi_lib snd_rawmidi snd_seq_device hid_logitech_dj mc r8169 realtek mdio_devres libphy intel_rapl_msr intel_rapl_common edac_mce_amd kvm_amd snd_hda_codec_realtek ccp snd_hda_codec_generic snd_hda_codec_hdmi kvm snd_hda_intel snd_intel_dspcfg irqbypass snd_intel_sdw_acpi crct10dif_pclmul crc32_pclmul snd_hda_codec polyval_clmulni polyval_generic eeepc_wmi gf128mul asus_wmi snd_hda_core ghash_clmulni_intel ledtrig_audio sha512_ssse3 sparse_keymap snd_hwdep sha256_ssse3 platform_profile snd_pcm sha1_ssse3 i8042 aesni_intel snd_timer serio nvidia_drm(POE) crypto_simd cryptd usbhid rfkill wmi_bmof nvidia_modeset(POE) snd sp5100_tco rapl
May 08 10:22:54 zeus kernel: video soundcore k10temp pcspkr i2c_piix4 wmi gpio_amdpt acpi_cpufreq gpio_generic mac_hid vboxnetflt(OE) vboxnetadp(OE) vboxdrv(OE) nvidia_uvm(POE) nvidia(POE) sg crypto_user loop fuse nfnetlink bpf_preload ip_tables x_tables ext4 crc32c_generic crc16 mbcache jbd2 dm_mod nvme crc32c_intel nvme_core xhci_pci xhci_pci_renesas nvme_common
May 08 10:22:54 zeus kernel: CPU: 14 PID: 272752 Comm: nvidia-sleep.sh Tainted: P OE 6.1.91-1-lts61 #1 d05288a9a86238b04a93de064045849480ab030f
May 08 10:22:54 zeus kernel: Hardware name: ASUS System Product Name/PRIME B550-PLUS, BIOS 2006 03/19/2021
May 08 10:22:54 zeus kernel: RIP: 0010:nv_restore_user_channels+0x4e/0x1d0 [nvidia]
May 08 10:22:54 zeus kernel: Code: 24 c0 05 00 00 4c 89 ef e8 df 2a ba db f6 43 10 01 74 73 48 89 de 31 ff e8 ff 1d a9 00 41 89 c6 85 c0 0f 84 3a 01 00 00 31 ed <0f> 0b 49 81 c4 e8 06 00 00 4c 89 e7 e8 b1 2a ba db be 01 00 00 00
May 08 10:22:54 zeus kernel: RSP: 0018:ffffaa8604adf9d8 EFLAGS: 00010206
May 08 10:22:54 zeus kernel: RAX: 0000000000000003 RBX: ffff989286407800 RCX: ffffaa8604adf958
May 08 10:22:54 zeus kernel: RDX: ffffaa86019cfe60 RSI: 0000000000000246 RDI: ffffaa8604adf908
May 08 10:22:54 zeus kernel: RBP: ffff9895a0953000 R08: 0000000000000000 R09: ffff9895a0955f60
May 08 10:22:54 zeus kernel: R10: 000000700030f231 R11: 0000000000000000 R12: ffff989286407800
May 08 10:22:54 zeus kernel: R13: ffff989286407dc0 R14: 0000000000000003 R15: 0000000000000000
May 08 10:22:54 zeus kernel: FS: 00007fb275416b80(0000) GS:ffff98a16ed80000(0000) knlGS:0000000000000000
May 08 10:22:54 zeus kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 08 10:22:54 zeus kernel: CR2: 0000561449f23218 CR3: 00000008a2cea000 CR4: 0000000000750ee0
May 08 10:22:54 zeus kernel: PKRU: 55555554
May 08 10:22:54 zeus kernel: Call Trace:
May 08 10:22:54 zeus kernel: <TASK>
May 08 10:22:54 zeus kernel: ? nv_restore_user_channels+0x4e/0x1d0 [nvidia b2cf649bae6446ec4b5dfc55ba36538945a1757f]
May 08 10:22:54 zeus kernel: ? __warn+0x7d/0xd0
May 08 10:22:54 zeus kernel: ? nv_restore_user_channels+0x4e/0x1d0 [nvidia b2cf649bae6446ec4b5dfc55ba36538945a1757f]
May 08 10:22:54 zeus kernel: ? report_bug+0x108/0x150
May 08 10:22:54 zeus kernel: ? handle_bug+0x3c/0x80
May 08 10:22:54 zeus kernel: ? exc_invalid_op+0x17/0x70
May 08 10:22:54 zeus kernel: ? asm_exc_invalid_op+0x1a/0x20
May 08 10:22:54 zeus kernel: ? nv_restore_user_channels+0x4e/0x1d0 [nvidia b2cf649bae6446ec4b5dfc55ba36538945a1757f]
May 08 10:22:54 zeus kernel: ? nv_restore_user_channels+0x132/0x1d0 [nvidia b2cf649bae6446ec4b5dfc55ba36538945a1757f]
May 08 10:22:54 zeus kernel: nv_set_system_power_state+0xe9/0x470 [nvidia b2cf649bae6446ec4b5dfc55ba36538945a1757f]
May 08 10:22:54 zeus kernel: nv_procfs_write_suspend+0xef/0x170 [nvidia b2cf649bae6446ec4b5dfc55ba36538945a1757f]
May 08 10:22:54 zeus kernel: proc_reg_write+0x5a/0xa0
May 08 10:22:54 zeus kernel: ? srso_alias_return_thunk+0x5/0x7f
May 08 10:22:54 zeus kernel: vfs_write+0xe9/0x3e0
May 08 10:22:54 zeus kernel: ? srso_alias_return_thunk+0x5/0x7f
May 08 10:22:54 zeus kernel: ? notify_change+0x265/0x570
May 08 10:22:54 zeus kernel: ? __vfs_getxattr+0x2e/0x80
May 08 10:22:54 zeus kernel: ksys_write+0x6d/0xf0
May 08 10:22:54 zeus kernel: do_syscall_64+0x5a/0x80
May 08 10:22:54 zeus kernel: ? srso_alias_return_thunk+0x5/0x7f
May 08 10:22:54 zeus kernel: ? get_page_from_freelist+0x14ef/0x1660
May 08 10:22:54 zeus kernel: ? srso_alias_return_thunk+0x5/0x7f
May 08 10:22:54 zeus kernel: ? srso_alias_return_thunk+0x5/0x7f
May 08 10:22:54 zeus kernel: ? __mod_memcg_lruvec_state+0x45/0x90
May 08 10:22:54 zeus kernel: ? srso_alias_return_thunk+0x5/0x7f
May 08 10:22:54 zeus kernel: ? __mod_lruvec_page_state+0x99/0x140
May 08 10:22:54 zeus kernel: ? srso_alias_return_thunk+0x5/0x7f
May 08 10:22:54 zeus kernel: ? page_add_new_anon_rmap+0x74/0x130
May 08 10:22:54 zeus kernel: ? srso_alias_return_thunk+0x5/0x7f
May 08 10:22:54 zeus kernel: ? srso_alias_return_thunk+0x5/0x7f
May 08 10:22:54 zeus kernel: ? __handle_mm_fault+0xe38/0xf80
May 08 10:22:54 zeus kernel: ? srso_alias_return_thunk+0x5/0x7f
May 08 10:22:54 zeus kernel: ? handle_mm_fault+0xdd/0x2d0
May 08 10:22:54 zeus kernel: ? srso_alias_return_thunk+0x5/0x7f
May 08 10:22:54 zeus kernel: ? do_user_addr_fault+0x225/0x560
May 08 10:22:54 zeus kernel: ? srso_alias_return_thunk+0x5/0x7f
May 08 10:22:54 zeus kernel: ? exc_page_fault+0x7c/0x180
May 08 10:22:54 zeus kernel: entry_SYSCALL_64_after_hwframe+0x6e/0xd8
May 08 10:22:54 zeus kernel: RIP: 0033:0x7fb275519006
May 08 10:22:54 zeus kernel: Code: 5d e8 41 8b 93 08 03 00 00 59 5e 48 83 f8 fc 75 19 83 e2 39 83 fa 08 75 11 e8 26 ff ff ff 66 0f 1f 44 00 00 48 8b 45 10 0f 05 <48> 8b 5d f8 c9 c3 0f 1f 40 00 f3 0f 1e fa 55 48 89 e5 48 83 ec 08
May 08 10:22:54 zeus kernel: RSP: 002b:00007ffc66244830 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
May 08 10:22:54 zeus kernel: RAX: ffffffffffffffda RBX: 0000000000000007 RCX: 00007fb275519006
May 08 10:22:54 zeus kernel: RDX: 0000000000000007 RSI: 0000561449f22e10 RDI: 0000000000000001
May 08 10:22:54 zeus kernel: RBP: 00007ffc66244850 R08: 0000000000000000 R09: 0000000000000000
May 08 10:22:54 zeus kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000007
May 08 10:22:54 zeus kernel: R13: 0000561449f22e10 R14: 00007fb27566e5c0 R15: 0000000000000000
May 08 10:22:54 zeus kernel: </TASK>
May 08 10:22:54 zeus kernel: ---[ end trace 0000000000000000 ]---
...
May 08 10:22:54 zeus kernel: nvidia-modeset: ERROR: GPU:0: Failed to bind display engine notify context DMA: 0x1a (Ran out of a critical resource, other than memory [NV_ERR_INSUFFICIENT_RESOURCES])
May 08 10:22:54 zeus kernel: nvidia-modeset: ERROR: GPU:0: Failed to allocate display engine core DMA push buffer
May 08 10:22:54 zeus kernel: nvidia-modeset: ERROR: GPU:0: Failed to bind display engine notify context DMA: 0x1a (Ran out of a critical resource, other than memory [NV_ERR_INSUFFICIENT_RESOURCES])
May 08 10:22:54 zeus kernel: nvidia-modeset: ERROR: GPU:0: Failed to allocate display engine core DMA push buffer
Offline
a-curious-crow wrote: (...) Very oddly, this issue only started occurring when I upgraded my cpu from a Ryzen 3 3100 to a Ryzen 5 5500 (...)
As another anec-data point, I am also on AM4, running a 5800X on a B550 chipset. I'm not convinced that's a factor, but I'll check when I changed over from an Intel CPU/board and compare dates just in case.
I still get some rare "clusters" of suspend issues. As has just happened, I can go for weeks without them and then get a couple within the space of a day or two. This was across (i.e. both before/after) a system update, but since the kernel and nvidia-related packages don't get upgraded I'm not sure that's critical.
$ pacman -Q nvidia; uname -r; nvidia-debugdump --list
nvidia-535xx-dkms 535.183.01-2
6.1.91-1-lts61
Found 1 NVIDIA devices
Device ID: 0
Device name: NVIDIA GeForce GTX 970 (*PrimaryCard)
GPU internal ID: GPU-bf73e6ad-0567-643c-5a9d-369b88e45323
Still responsive to ssh (as mentioned before).
Interesting. I also have an AMD Ryzen 9 5950X. Microcode is up to date. X570 chipset.
The kernel is the latest 6.14.5-zen1-1-zen. I had weird problems when I stopped updating the kernel but kept updating core packages.
GPU is NVIDIA GeForce RTX 4070 Ti SUPER
Offline
On a different matter:
Anyone tried the open driver?
For my card the Arch wiki recommends the closed one.
But maybe it can help?
Offline
Thank you so much seth for the AUR link! That is so much better than what I was trying. Unfortunately, like 7000k I ran into some install issues building the 535 packages with the latest 6.14.5-arch1-1 kernel. I went to try the 550 packages, but it looks like they are also having issues. I don't want to mess with patching these packages at the moment, so I'll eagerly await 550 being fixed, then try with that driver version.
I tried both open and normal nvidia, and had the same issue with both. But it's worth a shot I'd say! Maybe our problems are subtly different.
Offline
What install issues?
There are none reported in the AUR comments, and the package got a version update in mid-April.
Offline
When installing the 535 packages, my computer wouldn't boot. When I looked at journalctl, I saw:
systemd-modules-load[344]: Failed to find module 'nvidia-uvm'
Then after recovering my system and trying to reinstall 535, I saw this in /var/log/pacman.log:
[2025-05-08T20:47:51-0700] [ALPM] reinstalled lib32-opencl-nvidia-535xx (535.247.01-1)
[2025-05-08T20:47:51-0700] [ALPM] installed lib32-nvidia-535xx-utils (535.247.01-1)
[2025-05-08T20:47:51-0700] [ALPM] installed nvidia-535xx-dkms (535.247.01-1)
[2025-05-08T20:47:51-0700] [ALPM] reinstalled opencl-nvidia-535xx (535.247.01-1)
[2025-05-08T20:47:51-0700] [ALPM] transaction completed
[2025-05-08T20:47:51-0700] [ALPM] running '20-systemd-sysusers.hook'...
[2025-05-08T20:47:51-0700] [ALPM] running '30-systemd-daemon-reload-system.hook'...
[2025-05-08T20:47:51-0700] [ALPM] running '30-systemd-restart-marked.hook'...
[2025-05-08T20:47:51-0700] [ALPM] running '30-systemd-udev-reload.hook'...
[2025-05-08T20:47:52-0700] [ALPM] running '30-systemd-update.hook'...
[2025-05-08T20:47:52-0700] [ALPM] running '60-depmod.hook'...
[2025-05-08T20:47:54-0700] [ALPM] running '70-dkms-install.hook'...
[2025-05-08T20:47:54-0700] [ALPM-SCRIPTLET] ==> dkms install --no-depmod nvidia/535.247.01 -k 6.14.5-arch1-1
[2025-05-08T20:48:09-0700] [ALPM-SCRIPTLET]
[2025-05-08T20:48:09-0700] [ALPM-SCRIPTLET] Error! Bad return status for module build on kernel: 6.14.5-arch1-1 (x86_64)
[2025-05-08T20:48:09-0700] [ALPM-SCRIPTLET] Consult /var/lib/dkms/nvidia/535.247.01/build/make.log for more information.
[2025-05-08T20:48:09-0700] [ALPM-SCRIPTLET] ==> WARNING: `dkms install --no-depmod nvidia/535.247.01 -k 6.14.5-arch1-1' exited 10
[2025-05-08T20:48:09-0700] [ALPM] running '90-mkinitcpio-install.hook'...
I assume this is what 7000k was referring to in his comment on this thread about needing to patch 535 to make it work.
As far as 550, the latest instructions as of a few days ago on https://aur.archlinux.org/pkgbase/nvidia-550xx-dkms say you need to do some manual patch steps to make it work. I'm hoping someone incorporates this into the package itself soon.
Offline
When installing the 535 packages, my computer wouldn't boot. When I looked at journalctl, I saw:
systemd-modules-load[344]: Failed to find module 'nvidia-uvm'
Then after recovering my system and trying to reinstall 535, I saw this in /var/log/pacman.log:
[2025-05-08T20:47:51-0700] [ALPM] reinstalled lib32-opencl-nvidia-535xx (535.247.01-1)
[2025-05-08T20:47:51-0700] [ALPM] installed lib32-nvidia-535xx-utils (535.247.01-1)
[2025-05-08T20:47:51-0700] [ALPM] installed nvidia-535xx-dkms (535.247.01-1)
[2025-05-08T20:47:51-0700] [ALPM] reinstalled opencl-nvidia-535xx (535.247.01-1)
[2025-05-08T20:47:51-0700] [ALPM] transaction completed
[2025-05-08T20:47:51-0700] [ALPM] running '20-systemd-sysusers.hook'...
[2025-05-08T20:47:51-0700] [ALPM] running '30-systemd-daemon-reload-system.hook'...
[2025-05-08T20:47:51-0700] [ALPM] running '30-systemd-restart-marked.hook'...
[2025-05-08T20:47:51-0700] [ALPM] running '30-systemd-udev-reload.hook'...
[2025-05-08T20:47:52-0700] [ALPM] running '30-systemd-update.hook'...
[2025-05-08T20:47:52-0700] [ALPM] running '60-depmod.hook'...
[2025-05-08T20:47:54-0700] [ALPM] running '70-dkms-install.hook'...
[2025-05-08T20:47:54-0700] [ALPM-SCRIPTLET] ==> dkms install --no-depmod nvidia/535.247.01 -k 6.14.5-arch1-1
[2025-05-08T20:48:09-0700] [ALPM-SCRIPTLET]
[2025-05-08T20:48:09-0700] [ALPM-SCRIPTLET] Error! Bad return status for module build on kernel: 6.14.5-arch1-1 (x86_64)
[2025-05-08T20:48:09-0700] [ALPM-SCRIPTLET] Consult /var/lib/dkms/nvidia/535.247.01/build/make.log for more information.
[2025-05-08T20:48:09-0700] [ALPM-SCRIPTLET] ==> WARNING: `dkms install --no-depmod nvidia/535.247.01 -k 6.14.5-arch1-1' exited 10
[2025-05-08T20:48:09-0700] [ALPM] running '90-mkinitcpio-install.hook'...
I assume this is what 7000k was referring to in his comment on this thread about needing to patch 535 to make it work.
As far as 550, the latest instructions as of a few days ago on https://aur.archlinux.org/pkgbase/nvidia-550xx-dkms say you need to do some manual patch steps to make it work. I'm hoping someone incorporates this into the package itself soon.
It seems you missed nvidia-utils...
Offline
I didn't post my entire log, just a snippet. `nvidia-535xx-utils` was also installed.
Offline
The curious bits will be in
[2025-05-08T20:48:09-0700] [ALPM-SCRIPTLET] Consult /var/lib/dkms/nvidia/535.247.01/build/make.log for more information.
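If you want to regenerate that log without going through pacman again, something along these lines should do it (assuming the 535.247.01 dkms sources are still present under /usr/src):
# rebuild just the module for that kernel, then read the build log it leaves behind
sudo dkms build nvidia/535.247.01 -k 6.14.5-arch1-1
less /var/lib/dkms/nvidia/535.247.01/build/make.log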
Offline
Ah, that log is empty right now, and I don't want to try to rebuild it and debug, given that 550 may work soon.
Offline
550 may work soon.
Is there something being worked on in 550 which will fix the suspend-related bug(s)? I haven't been following things in the NVIDIA fora lately.
I'd like to believe we'll get a resolution to an issue from 2023...
Offline
Nothing is being worked on for that driver version in terms of functionality. I was referring to the fact that the AUR package itself is currently broken and hopefully will be fixed soon.
Offline