#1 2021-02-12 22:25:21

duskrider
Member
Registered: 2020-02-12
Posts: 7

[Solved] nvme Disk loads in installer, but not in tailored initrd

Hello everyone!

I am setting up a brand new Lenovo Yoga 7. I shrank the Windows install, disabled Secure Boot, and the installation itself seemed to go well. Unfortunately, the installed system can't boot because the disk does not show up.

What I tried so far: I added "nvme_load=YES" to the kernel parameters, added "vmd" to the MODULES array in mkinitcpio.conf, added modconf to HOOKS and removed autodetect, and added lspci and nvme to BINARIES.
None of it has made any difference so far.
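
For reference, the relevant lines of my /etc/mkinitcpio.conf now look roughly like this (just a sketch trimmed to the parts I touched; the rest of HOOKS is whatever the install guide gave me), and I regenerated the images afterwards with mkinitcpio -P:

# /etc/mkinitcpio.conf (excerpt)
MODULES=(vmd)
BINARIES=(lspci nvme)
# modconf added, autodetect removed; everything else left alone
HOOKS=(base udev modconf block filesystems keyboard fsck)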

I'm happy about any debugging hints (or solutions, haha) you can give me!

lspci in the live system says:

0000:00:00.0 Host bridge: Intel Corporation 11th Gen Core Processor Host Bridge/DRAM Registers (rev 01)
0000:00:02.0 VGA compatible controller: Intel Corporation UHD Graphics (rev 01)
0000:00:04.0 Signal processing controller: Intel Corporation Device 9a03 (rev 01)
0000:00:07.0 PCI bridge: Intel Corporation Tiger Lake-LP Thunderbolt PCI Express Root Port #0 (rev 01)
0000:00:07.1 PCI bridge: Intel Corporation Tiger Lake-LP Thunderbolt PCI Express Root Port #1 (rev 01)
0000:00:07.2 PCI bridge: Intel Corporation Tiger Lake-LP Thunderbolt PCI Express Root Port #2 (rev 01)
0000:00:07.3 PCI bridge: Intel Corporation Tiger Lake-LP Thunderbolt PCI Express Root Port #3 (rev 01)
0000:00:0a.0 Signal processing controller: Intel Corporation Device 9a0d (rev 01)
0000:00:0d.0 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt USB Controller (rev 01)
0000:00:0d.2 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt NHI #0 (rev 01)
0000:00:0d.3 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt NHI #1 (rev 01)
0000:00:0e.0 RAID bus controller: Intel Corporation Volume Management Device NVMe RAID Controller
0000:00:12.0 Serial controller: Intel Corporation Tiger Lake-LP Integrated Sensor Hub (rev 20)
0000:00:14.0 USB controller: Intel Corporation Tiger Lake-LP USB 3.2 Gen 2x1 xHCI Host Controller (rev 20)
0000:00:14.2 RAM memory: Intel Corporation Tiger Lake-LP Shared SRAM (rev 20)
0000:00:14.3 Network controller: Intel Corporation Wi-Fi 6 AX201 (rev 20)
0000:00:15.0 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP Serial IO I2C Controller #0 (rev 20)
0000:00:15.1 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP Serial IO I2C Controller #1 (rev 20)
0000:00:16.0 Communication controller: Intel Corporation Tiger Lake-LP Management Engine Interface (rev 20)
0000:00:1d.0 System peripheral: Intel Corporation Device 09ab
0000:00:1e.0 Communication controller: Intel Corporation Tiger Lake-LP Serial IO UART Controller #0 (rev 20)
0000:00:1e.3 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP Serial IO SPI Controller #1 (rev 20)
0000:00:1f.0 ISA bridge: Intel Corporation Tiger Lake-LP LPC Controller (rev 20)
0000:00:1f.3 Multimedia audio controller: Intel Corporation Tiger Lake-LP Smart Sound Technology Audio Controller (rev 20)
0000:00:1f.4 SMBus: Intel Corporation Tiger Lake-LP SMBus Controller (rev 20)
0000:00:1f.5 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP SPI Controller (rev 20)
10000:e0:1d.0 PCI bridge: Intel Corporation Tiger Lake-LP PCI Express Root Port #9 (rev 20)
10000:e1:00.0 Non-Volatile memory controller: Sandisk Corp Device 5006

In the emergency shell of the installed system, the last line (the NVMe controller itself) is just missing, and I also only get numeric IDs for the devices:

0000:00:00.0 Class 0600: Device 8086:9a14 (rev 01)
0000:00:02.0 Class 0300: Device 8086:9a49 (rev 01)
0000:00:04.0 Class 1180: Device 8086:9a03 (rev 01)
0000:00:07.0 Class 0604: Device 8086:9a23 (rev 01)
0000:00:07.1 Class 0604: Device 8086:9a25 (rev 01)
0000:00:07.2 Class 0604: Device 8086:9a27 (rev 01)
0000:00:07.3 Class 0604: Device 8086:9a29 (rev 01)
0000:00:0a.0 Class 1180: Device 8086:9a0d (rev 01)
0000:00:0d.0 Class 0c03: Device 8086:9a13 (rev 01)
0000:00:0d.2 Class 0c03: Device 8086:9a1b (rev 01)
0000:00:0d.3 Class 0c03: Device 8086:9a1d (rev 01)
0000:00:0e.0 Class 0104: Device 8086:9a0b
0000:00:12.0 Class 0700: Device 8086:a0fc (rev 20)
0000:00:14.0 Class 0c03: Device 8086:a0ed (rev 20)
0000:00:14.2 Class 0500: Device 8086:a0ef (rev 20)
0000:00:14.3 Class 0280: Device 8086:a0f0 (rev 20)
0000:00:15.0 Class 0c80: Device 8086:a0e8 (rev 20)
0000:00:15.1 Class 0c80: Device 8086:a0e9 (rev 20)
0000:00:16.0 Class 0780: Device 8086:a0e0 (rev 20)
0000:00:1d.0 Class 0880: Device 8086:09ab
0000:00:1e.0 Class 0780: Device 8086:a0a8 (rev 20)
0000:00:1e.3 Class 0c80: Device 8086:a0ab (rev 20)
0000:00:1f.0 Class 0601: Device 8086:a082 (rev 20)
0000:00:1f.3 Class 0401: Device 8086:a0c8 (rev 20)
0000:00:1f.4 Class 0c05: Device 8086:a0a3 (rev 20)
0000:00:1f.5 Class 0c80: Device 8086:a0a4 (rev 20)
10000:00:1d.0 Class 0604: Device 8086:a0b0 (rev 20)

Some excerpts from dmesg in the working live system:

libata version 3.00 loaded.
...
[    9.476846] vmd 0000:00:0e.0: PCI host bridge to bus 10000:e0
[    9.476848] pci_bus 10000:e0: root bus resource [bus e0-ff]
[    9.476849] pci_bus 10000:e0: root bus resource [mem 0x50000000-0x51ffffff]
[    9.476850] pci_bus 10000:e0: root bus resource [mem 0x605f102000-0x605f1fffff 64bit]
[    9.476874] pci 10000:e0:1d.0: [8086:a0b0] type 01 class 0x060400
[    9.476977] pci 10000:e0:1d.0: PME# supported from D0 D3hot D3cold
[    9.477016] pci 10000:e0:1d.0: PTM enabled (root), 4ns granularity
[    9.477061] pci 10000:e0:1d.0: Adding to iommu group 9
[    9.477073] pci 10000:e0:1d.0: Primary bus is hard wired to 0
[    9.477138] pci 10000:e1:00.0: [15b7:5006] type 00 class 0x010802
[    9.477169] pci 10000:e1:00.0: reg 0x10: [mem 0x50000000-0x50003fff 64bit]
[    9.477215] pci 10000:e1:00.0: reg 0x20: [mem 0x00000000-0x000000ff 64bit]
[    9.477393] pci 10000:e1:00.0: Adding to iommu group 9
[    9.477434] pci 10000:e0:1d.0: PCI bridge to [bus e1]
[    9.477436] pci 10000:e0:1d.0:   bridge window [io  0x0000-0x0fff]
[    9.477438] pci 10000:e0:1d.0:   bridge window [mem 0x50000000-0x500fffff]
[    9.477444] pci 10000:e0:1d.0: Primary bus is hard wired to 0
[    9.477452] pci 10000:e0:1d.0: BAR 14: assigned [mem 0x50000000-0x500fffff]
[    9.477453] pci 10000:e0:1d.0: BAR 13: no space for [io  size 0x1000]
[    9.477454] pci 10000:e0:1d.0: BAR 13: failed to assign [io  size 0x1000]
[    9.477455] pci 10000:e1:00.0: BAR 0: assigned [mem 0x50000000-0x50003fff 64bit]
[    9.477469] pci 10000:e1:00.0: BAR 4: assigned [mem 0x50004000-0x500040ff 64bit]
[    9.477483] pci 10000:e0:1d.0: PCI bridge to [bus e1]
[    9.477488] pci 10000:e0:1d.0:   bridge window [mem 0x50000000-0x500fffff]
[    9.477505] pcieport 10000:e0:1d.0: can't derive routing for PCI INT A
[    9.477505] pcieport 10000:e0:1d.0: PCI INT A: no GSI
[    9.477549] pcieport 10000:e0:1d.0: PME: Signaling with IRQ 184
[    9.477670] nvme nvme0: pci function 10000:e1:00.0
[    9.477686] vmd 0000:00:0e.0: Bound to PCI domain 10000
[    9.477687] pcieport 10000:e0:1d.0: can't derive routing for PCI INT A
[    9.477688] nvme 10000:e1:00.0: PCI INT A: no GSI
[    9.487648] nvme nvme0: 8/0/0 default/read/poll queues
[    9.492942]  nvme0n1: p1 p2 p3 p4 p5

The broken system's dmesg looks pretty similar, but I cannot find the libata line anywhere there. Is this significant?

[    1.706251] vmd 0000:00:0e.0: PCI host bridge to bus 10000:00
[    1.706253] pci_bus 10000:00: root bus resource [bus 00-1f]
[    1.706253] pci_bus 10000:00: root bus resource [mem 0x50000000-0x51ffffff]
[    1.706254] pci_bus 10000:00: root bus resource [mem 0x605f102000-0x605f1fffff 64bit]
[    1.706268] pci 10000:00:1d.0: [8086:a0b0] type 01 class 0x060400
[    1.706377] pci 10000:00:1d.0: PME# supported from D0 D3hot D3cold
[    1.706405] pci 10000:00:1d.0: PTM enabled (root), 4ns granularity
[    1.706468] pci_bus 10000:e1: busn_res: can not insert [bus e1] under [bus 00-1f] (conflicts with (null) [bus 00-1f])
[    1.706469] pci 10000:00:1d.0: PCI bridge to [bus e1]
[    1.706471] pci 10000:00:1d.0:   bridge window [io  0x0000-0x0fff]
[    1.706473] pci 10000:00:1d.0:   bridge window [mem 0x50000000-0x500fffff]
[    1.706489] pci 10000:00:1d.0: BAR 14: assigned [mem 0x50000000-0x500fffff]
[    1.706490] pci 10000:00:1d.0: BAR 13: no space for [io  size 0x1000]
[    1.706491] pci 10000:00:1d.0: BAR 13: failed to assign [io  size 0x1000]
[    1.706491] pci 10000:00:1d.0: PCI bridge to [bus e1]
[    1.706496] pci 10000:00:1d.0:   bridge window [mem 0x50000000-0x500fffff]
[    1.706513] pcieport 10000:00:1d.0: can't derive routing for PCI INT A
[    1.706513] pcieport 10000:00:1d.0: PCI INT A: no GSI
[    1.706556] pcieport 10000:00:1d.0: PME: Signaling with IRQ 149
[    1.706621] vmd 0000:00:0e.0: Bound to PCI domain 10000

Last edited by duskrider (2021-02-13 10:09:11)

Offline

#2 2021-02-13 00:04:04

loqs
Member
Registered: 2014-03-06
Posts: 17,321

Re: [Solved] nvme Disk loads in installer, but not in tailored initrd

Can you please post the full dmesg output from both the good and the bad boot, as well as the lsmod output from both? See the pastebin tip box for how to pipe the output from the console.
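
For example, something like this works straight from a console (just a sketch; it assumes network access and that a netcat is available, and uses the termbin.com service, but any pastebin client will do):

# pipe the logs to a pastebin without copy/paste
dmesg | nc termbin.com 9999
lsmod | nc termbin.com 9999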

Offline

#3 2021-02-13 07:00:20

duskrider
Member
Registered: 2020-02-12
Posts: 7

Re: [Solved] nvme Disk loads in installer, but not in tailored initrd

Thanks for your reply!

Here are the logs:

Healthy dmesg: https://pastebin.com/RePhSUZN
Healthy lsmod: https://pastebin.com/f2hy7end
Broken dmesg: https://pastebin.com/ZC0Sxn46
Broken lsmod: https://pastebin.com/MwsKXUNS

Thank you very much!

Offline

#4 2021-02-13 08:56:00

loqs
Member
Registered: 2014-03-06
Posts: 17,321

Re: [Solved] nvme Disk loads in installer, but not in tailored initrd

[    1.706468] pci_bus 10000:e1: busn_res: can not insert [bus e1] under [bus 00-1f] (conflicts with (null) [bus 00-1f])

What if you add the boot parameter iommu=soft?
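
A quick way to test it (assuming GRUB; adjust for systemd-boot or whatever you use) is to press e at the boot menu and append it to the linux line, or make it permanent with roughly:

# /etc/default/grub (sketch)
GRUB_CMDLINE_LINUX_DEFAULT="... iommu=soft"
# then regenerate the config
grub-mkconfig -o /boot/grub/grub.cfg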
Edit:
You could also try linux 5.10.11 from the ALA (Arch Linux Archive), which is the kernel used on the ISO that worked for you.
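Roughly like this (a sketch; check the archive listing for the exact file name, which may differ):

# install the 5.10.11 kernel package straight from the Arch Linux Archive
pacman -U https://archive.archlinux.org/packages/l/linux/linux-5.10.11.arch1-1-x86_64.pkg.tar.zst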

Last edited by loqs (2021-02-13 09:19:43)

Offline

#5 2021-02-13 10:08:41

duskrider
Member
Registered: 2020-02-12
Posts: 7

Re: [Solved] nvme Disk loads in installer, but not in tailored initrd

loqs wrote:

You could also try using linux 5.10.11 from the ALA which is used on the ISO that worked for you.

That's it, thanks a lot! I feel a bit stupid now, but it seems linux-lts is actually too old for this machine. For the record, iommu=soft barely changed anything.

Conclusion for anyone else stumbling on this:
* Lenovo Yoga 7 with a Tiger Lake chipset
* NVMe drive visible in the installer but not in the installed system's initramfs
* The current linux-lts kernel (5.4.x here) is too old for this hardware; use a 5.10-series kernel instead (rough commands below)
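
What I ran, roughly (a sketch; assumes an arch-chroot from the live ISO and a GRUB setup, adjust for your bootloader):

# from the live ISO, chroot into the installed system and switch kernels
arch-chroot /mnt
pacman -S linux                        # current kernel (5.10 series at the time)
mkinitcpio -P                          # rebuild the initramfs images
grub-mkconfig -o /boot/grub/grub.cfg   # regenerate the boot entries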

Last edited by duskrider (2021-02-13 10:45:08)

Offline
