I have 2 GPUs and I am dual-booting, but I can't use my NVIDIA graphics card. I have already disabled Fast Startup in Windows.
sudo dmesg | grep -i D3cold outputs:
[ 0.270606] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[ 0.271103] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
[ 0.272063] pci 0000:00:06.0: PME# supported from D0 D3hot D3cold
[ 0.272669] pci 0000:00:0d.0: PME# supported from D3hot D3cold
[ 0.274381] pci 0000:00:14.0: PME# supported from D3hot D3cold
[ 0.276291] pci 0000:00:14.3: PME# supported from D0 D3hot D3cold
[ 0.278619] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[ 0.279304] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[ 0.280361] pci 0000:00:1f.3: PME# supported from D3hot D3cold
[ 0.282722] pci 0000:04:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.283206] pci 0000:05:00.0: PME# supported from D1 D2 D3hot D3cold
[ 0.918287] nvidia 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
[ 1.051904] nvidia 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
[ 1.161077] nvidia 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
[ 1.300270] nvidia 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
[ 1.427690] nvme 0000:02:00.0: Unable to change power state from D3cold to D0, device inaccessible
[ 1.496780] nvidia 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
[ 1.631688] nvidia 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
[ 2.068341] nvidia 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
[ 2.357275] nvidia 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
[ 2.764112] snd_hda_intel 0000:01:00.1: Unable to change power state from D3cold to D0, device inaccessible
[ 2.864224] nvidia 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
[ 4.967488] snd_hda_intel 0000:01:00.1: Unable to change power state from D3cold to D0, device inaccessible
And here is my journalctl for the current boot:
https://envs.sh/din.txt
lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation TigerLake-H GT1 [UHD Graphics] (rev 01)
01:00.0 VGA compatible controller: NVIDIA Corporation GA106M [GeForce RTX 3060 Mobile / Max-Q] (rev a1)
Offline
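As an aside, the current PCI power state of both GPUs can be read straight from sysfs. A minimal sketch (the device addresses come from the lspci output above; the optional base-directory argument is only there so the helper can be tried outside of /sys, and `unknown` is printed when the node is missing):

```shell
# Print the PCI power state (D0, D3hot, D3cold, ...) of a device.
# $1 = PCI address, $2 = optional sysfs base (defaults to the real one).
pci_power_state() {
    base=${2:-/sys/bus/pci/devices}
    cat "$base/$1/power_state" 2>/dev/null || echo unknown
}

# The iGPU and the dGPU addresses from the lspci output:
for bdf in 0000:00:02.0 0000:01:00.0; do
    printf '%s: %s\n' "$bdf" "$(pci_power_state "$bdf")"
done
```

On an affected boot one would expect the 01:00.0 entry to report D3cold, matching the dmesg messages.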
Jun 24 21:28:57 user9823net kernel: nvidia: loading out-of-tree module taints kernel.
Jun 24 21:28:57 user9823net kernel: nvidia: module verification failed: signature and/or required key missing - tainting kernel
Jun 24 21:28:57 user9823net kernel: hid-generic 0003:046D:C092.0002: input,hiddev96,hidraw1: USB HID v1.11 Keyboard [Logitech G102 LIGHTSYNC Gaming Mouse] on usb-0000:00:14.0-1/input1
Jun 24 21:28:57 user9823net kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 240
Jun 24 21:28:57 user9823net kernel:
Jun 24 21:28:57 user9823net kernel: nvidia 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
Jun 24 21:28:57 user9823net kernel: nvidia 0000:01:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=none:owns=none
Jun 24 21:28:57 user9823net kernel: NVRM: The NVIDIA GPU 0000:01:00.0
NVRM: (PCI ID: 10de:2520) installed in this system has
NVRM: fallen off the bus and is not responding to commands.
Jun 24 21:28:57 user9823net kernel: nvidia 0000:01:00.0: probe with driver nvidia failed with error -1
Jun 24 21:28:57 user9823net kernel: NVRM: The NVIDIA probe routine failed for 1 device(s).
Jun 24 21:28:57 user9823net kernel: NVRM: None of the NVIDIA devices were initialized.
Jun 24 21:28:57 user9823net kernel: nvidia-nvlink: Unregistered Nvlink Core, major device number 240
Jun 24 21:28:57 user9823net kernel: usb 3-6: new high-speed USB device number 3 using xhci_hcd
Jun 24 21:28:57 user9823net kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 240
Jun 24 21:28:57 user9823net kernel:
Jun 24 21:28:57 user9823net kernel: nvidia 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
Jun 24 21:28:57 user9823net kernel: nvidia 0000:01:00.0: vgaarb: VGA decodes changed: olddecodes=none,decodes=none:owns=none
Jun 24 21:28:57 user9823net kernel: NVRM: The NVIDIA GPU 0000:01:00.0
NVRM: (PCI ID: 10de:2520) installed in this system has
NVRM: fallen off the bus and is not responding to commands.
Jun 24 21:28:57 user9823net kernel: nvidia 0000:01:00.0: probe with driver nvidia failed with error -1
Jun 24 21:28:57 user9823net kernel: NVRM: The NVIDIA probe routine failed for 1 device(s).
Jun 24 21:28:57 user9823net kernel: NVRM: None of the NVIDIA devices were initialized.
The GPU doesn't respond at all.
Is this on battery or on AC?
Do you have any related config options in the UEFI settings?
Jun 24 21:28:57 user9823net kernel: DMI: CASPER BILGISAYAR SISTEMLERI EXCALIBUR G900/NLCC 001, BIOS CP141 11/23/2021
Also, is there a firmware update available for the device?
And, as I keep saying:
Jun 24 21:28:57 user9823net kernel: nvme0n1: p1 p2 p3 p4 p5 p6 p7 p8
Is there a parallel Windows installation?
=> 3rd link below. Mandatory.
Disable it (it's NOT the BIOS setting!) and reboot Windows and Linux twice each, for voodoo reasons.
Offline
Yes, I have a parallel Windows installation, and I already disabled Fast Startup with
powercfg /H off
a week ago. Also, I tried rebooting twice and it didn't work.
Offline
Does the GPU work on Windows?
The GPU doesn't respond at all.
Is this on battery or on AC?
Might be a race condition (too much stuff drawing too much power at the same time); does it help to add "rcutree.gp_init_delay=1" to the kernel parameters (https://wiki.archlinux.org/title/Kernel_parameters)?
Or does "nvidia-drm.fbdev=0" help? (Though that's unlikely.)
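For reference, a sketch of how such a parameter can be appended to the GRUB_CMDLINE_LINUX_DEFAULT line (the `add_grub_param` helper name and the optional file argument are mine, not from the wiki; afterwards the config has to be regenerated with `grub-mkconfig -o /boot/grub/grub.cfg`):

```shell
# Sketch: append a kernel parameter to GRUB_CMDLINE_LINUX_DEFAULT.
# The sed expression keeps everything up to the closing quote and
# inserts the new parameter just before it.
add_grub_param() {
    # $1 = parameter to add, $2 = file (defaults to /etc/default/grub)
    sed -i "s/^\(GRUB_CMDLINE_LINUX_DEFAULT=\".*\)\"/\1 $1\"/" "${2:-/etc/default/grub}"
}
# Usage (as root), then regenerate the config:
#   add_grub_param rcutree.gp_init_delay=1
#   grub-mkconfig -o /boot/grub/grub.cfg
```

Editing /etc/default/grub by hand works just as well; the point is only that the new parameter must land inside the existing quotes and grub-mkconfig must be re-run afterwards.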
Offline
I tried:
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet nvidia-drm.modeset=1 nvidia-drm.fbdev=1 rcutree.gp_init_delay=1"
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet nvidia-drm.modeset=1 nvidia-drm.fbdev=0 rcutree.gp_init_delay=1"
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet nvidia-drm.modeset=1 nvidia-drm.fbdev=0"
Still "Unable to change power state from D3cold to D0, device inaccessible".
Offline
I also want to mention one more thing: I have 2 SSDs in this computer, and I get the same error for the second SSD. I can't remember about the GPU, but I do remember that I couldn't use that SSD on Ubuntu, my distro before Arch Linux.
[ 1.425058] nvme 0000:02:00.0: Unable to change power state from D3cold to D0, device inaccessible
Last edited by user9823 (2025-06-25 21:48:32)
Offline
Did you only edit /etc/default/grub or did you also run grub-mkconfig afterwards?
Does the GPU work on Windows?
Is this on battery or on AC?
Do you have any related config options in the UEFI settings?
The bus doesn't get enough power; the question is how systemic the problem is and whether you can do something about it.
Did you add the second nvme yourself?
Does it help to re-scan the bus later, after the system has booted?
https://stackoverflow.com/questions/323 … f-pcie-bus
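Concretely, such a re-scan can be tried like this (a sketch, not a guaranteed fix: 0000:01:00.0 is the GPU address from the logs above, the real writes need root, and the DRY_RUN switch and helper name are my additions so the steps can be previewed first):

```shell
# Sketch: drop the unresponsive device from the bus, then force a re-scan.
# Run as root; set DRY_RUN=1 to only print what would be written.
pci_remove_rescan() {
    dev="/sys/bus/pci/devices/$1/remove"
    bus="/sys/bus/pci/rescan"
    if [ "${DRY_RUN:-0}" = 1 ]; then
        printf 'echo 1 > %s\n' "$dev" "$bus"
    else
        echo 1 > "$dev"   # detach the dead device
        sleep 1
        echo 1 > "$bus"   # re-enumerate the bus; check dmesg afterwards
    fi
}
DRY_RUN=1 pci_remove_rescan 0000:01:00.0
```

If the device genuinely has no power, the re-scan will not bring it back, but the dmesg output after the attempt is still informative.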
Offline
Did you only edit /etc/default/grub or did you also run grub-mkconfig afterwards?
Yes, I ran:
sudo grub-mkconfig -o /boot/grub/grub.cfg
Did you add the second nvme yourself?
Does it help to re-scan the bus later, after the system has booted?
https://stackoverflow.com/questions/323 … f-pcie-bus
Yes, I added the nvme myself. The second SSD works on Windows without any problem.
When I run:
echo 1 > /sys/bus/pci/rescan
I don't see my nvme in lsblk.
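One pitfall worth ruling out (an assumption on my part about how the command was run): in `sudo echo 1 > /sys/bus/pci/rescan` the redirection is performed by the unprivileged shell, so the write never reaches the root-only sysfs node. A sketch of a form that does work (the `write_sysfs` helper name is mine):

```shell
# The `>` in `sudo echo 1 > node` runs in the *unprivileged* shell and
# fails on root-only sysfs nodes.  Let tee, running as root, do the write:
#   echo 1 | sudo tee /sys/bus/pci/rescan > /dev/null
# Same idea as a tiny helper (no sudo here; run the whole script as root):
write_sysfs() {
    printf '%s\n' "$2" | tee "$1" > /dev/null
}
# write_sysfs /sys/bus/pci/rescan 1
```

Running the rescan from a root shell (`sudo -i`) sidesteps the issue entirely.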
Offline
Second SSD works on Windows without any problem.
And what about the GPU?
Also: can you remove the second nvme again and does that revive the GPU?
Offline
Second SSD works on Windows without any problem.
And what about the GPU?
The GPU works too.
Also: can you remove the second nvme again and does that revive the GPU?
I can try but not today.
Offline
You could also try to add "pcie_aspm=off", but
1. added nvme myself
2. nvme and gpu on next lane don't power up
sounds sketchy, so let's see whether the nvme knocks both out.
Offline
You could also try to add "pcie_aspm=off", but
1. added nvme myself
2. nvme and gpu on next lane don't power up
sounds sketchy, so let's see whether the nvme knocks both out.
I previously tried "pcie_aspm=off" as well. If you think the problem is the nvme, I can take it out. But what should I do if that is the problem? I think I installed the nvme correctly.
I also want to ask one more thing: in the dmesg output the nvme error comes later than the nvidia one. Could it be affecting nvidia anyway?
Offline
Could it be affecting nvidia anyway?
Yes, if they end up stealing power from each other.
This likely isn't a software thing.
What should I do if that's the problem because I think I installed the nvme correctly.
If it's a power draw spike, structure the access.
By the way, you've not addressed the question about the power source.
Offline
By the way, you've not addressed the question about the power source.
Are you asking:
Is this on battery or on AC?
If so: It's plugged in and working on AC.
Offline