I have reached out to Lexar:
Based on the images you have shared, particularly the SSD label and front sticker, we can confirm that your device is a genuine Lexar NQ790 2TB NVMe SSD. Variations in component layout can occur between different production batches, so slight differences in appearance compared to online images are normal and do not indicate that the product is not authentic.
Offline
So the drive seems genuine, badblocks doesn't turn up obvious problems, but the FS won't fix…
Add "nvme_core.default_ps_max_latency_us=0 iommu=soft pcie_aspm=off" to the kernel parameters (https://wiki.archlinux.org/title/Kernel_parameters), reboot, and then run
sudo fsck.ext4 -fvp /dev/nvme0n1 # post the output, you can "| tee /tmp/fsck.log" it
mount /dev/nvme0n1 /mnt/test # not read-only as written; use "-o ro,noload" if the FS has a journal
Speaking of the journal, did you tune2fs it off or anything like that?
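In case it helps: assuming GRUB is the boot loader (an assumption — systemd-boot or syslinux users edit their loader entry instead), the parameters can be appended to GRUB_CMDLINE_LINUX_DEFAULT and the config regenerated:

```shell
# Prepend the NVMe workaround parameters to the default kernel command line
# (GRUB is an assumption; adjust for your boot loader).
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&nvme_core.default_ps_max_latency_us=0 iommu=soft pcie_aspm=off /' /etc/default/grub
sudo grub-mkconfig -o /boot/grub/grub.cfg
```

The `&` in the sed replacement re-emits the matched prefix, so the existing parameters are preserved after the inserted ones.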
Offline
I purchased a USB NVMe enclosure and attached this drive to a different PC, but the drive has the same issue there: it cannot be mounted, and trying to mount it corrupts the filesystem again.
On this device it is listed as sdc. Because it is connected via USB, I cannot use nvme-cli.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 931,5G 0 disk
└─sda1 8:1 0 931,5G 0 part
sdb 8:16 0 232,9G 0 disk
└─sdb1 8:17 0 232,9G 0 part
sdc 8:32 0 1,8T 0 disk
nvme1n1 259:0 0 931,5G 0 disk
├─nvme1n1p1 259:1 0 512M 0 part /boot
├─nvme1n1p2 259:2 0 638,7G 0 part /
├─nvme1n1p3 259:3 0 16M 0 part
├─nvme1n1p4 259:4 0 291,8G 0 part
└─nvme1n1p5 259:5 0 515M 0 part
nvme0n1 259:6 0 931,5G 0 disk
├─nvme0n1p1 259:7 0 16M 0 part
├─nvme0n1p2 259:8 0 698,1G 0 part
└─nvme0n1p3 259:9 0 233,4G 0 part
✘ user@ranger ~ sudo fdisk -l /dev/sdc
Disk /dev/sdc: 1,82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Generic PCIE
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
user@ranger ~ sudo fsck /dev/sdc
fsck from util-linux 2.42
e2fsck 1.47.4 (6-Mar-2025)
Lexar_2TB: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Feature orphan_present is set but orphan file is clean.
Clear<y>? yes
Lexar_2TB: ***** FILE SYSTEM WAS MODIFIED *****
Lexar_2TB: 133860/122101760 files (7.5% non-contiguous), 82102852/488378646 blocks
✘ user@ranger ~ sudo fsck /dev/sdc
fsck from util-linux 2.42
e2fsck 1.47.4 (6-Mar-2025)
Lexar_2TB: clean, 133860/122101760 files, 82102852/488378646 blocks
user@ranger ~ sudo mount /dev/sdc /mnt/test
mount: /mnt/test: fsconfig() failed: Structure needs cleaning.
dmesg(1) may have more information after failed mount system call.
✘ user@ranger ~ sudo fsck /dev/sdc
fsck from util-linux 2.42
e2fsck 1.47.4 (6-Mar-2025)
Lexar_2TB: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Feature orphan_present is set but orphan file is clean.
Clear<y>? yes
Lexar_2TB: ***** FILE SYSTEM WAS MODIFIED *****
Lexar_2TB: 133860/122101760 files (7.5% non-contiguous), 82102852/488378646 blocks
I have not disabled the journal.
I will put the disk back into the original PC and change the kernel parameters tomorrow.
Offline
when you are able to mount the drive you should create a backup, wipe it and start over with regular partitioning
if you have the space on another drive you could create a backup with dd and mount that via loop
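A minimal sketch of that approach, assuming the faulty disk shows up as /dev/sdc and /mnt/backup has ~2 TB free (both paths are assumptions):

```shell
# Image the whole device; conv=sync,noerror keeps going past read errors
# and pads short blocks, so the image stays aligned.
sudo dd if=/dev/sdc of=/mnt/backup/lexar.img bs=4M conv=sync,noerror status=progress
# Mount the image read-only through a loop device; "noload" skips
# journal replay so nothing is written back into the image.
sudo mkdir -p /mnt/test
sudo mount -o ro,noload,loop /mnt/backup/lexar.img /mnt/test
```

For a suspect disk, ddrescue is usually preferable to plain dd, since it retries bad areas and keeps a log of what it could not read.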
i'm a bit surprised that mkfs allows such a structure - but we had similar confusing topics in the past where users came up with rather strange setups
Offline
wipe it and start over with regular partitioning
I'd suggest discarding all blocks before partitioning:
# blkdiscard /dev/nvme0n1 # !!! Warning !!! All data will be lost!
and later either mount with -o discard or enable periodic trim (https://wiki.archlinux.org/title/Solid_state_drive#TRIM)
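For the periodic-trim route, a sketch (util-linux ships the timer; systemd is assumed):

```shell
# Enable the weekly fstrim timer instead of the "discard" mount option;
# it trims all mounted filesystems that support discard.
sudo systemctl enable --now fstrim.timer
# Verify the timer is scheduled:
systemctl list-timers fstrim.timer
```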
Offline
I'm a bit surprised that mkfs allows such a structure
mkfs will put a filesystem onto a file - whether that is a regular file or whatever block device - and an unpartitioned disk isn't *that* uncommon; it's sometimes referred to as a "superdisk" or "superfloppy", transferring the norm of that medium onto another.
But
[ 3468.393306] EXT4-fs error (device nvme0n1): ext4_init_orphan_info:583: inode #12: comm mount: iget: special inode unallocated
Before you wipe the disk, you might want to give testdisk a run - I'm starting to suspect that there has never been an ext4 FS directly on /dev/nvme0n1, but that the header overwrote a partition table, or that this has been using a different FS all along.
Offline
FYI I have a second identical disk with identical setup (FS on the namespace, not partitions) that still works properly in a different computer. I have mentioned this before. I can run commands and provide output from that device.
I have not formatted or partitioned the faulty disk yet, because that would prevent me from finding out why this situation suddenly happened and from helping others avoid it.
The facts are: the disk worked for months, then at a random moment it unmounted. I ran fsck, which took a long time; after it finished, the disk mounted once and allowed me to fish the files out of the lost+found directory; and from that point on, after a reboot, it has been unable to mount.
I could have wiped the disk from the start, but the fact that it's in this state might uncover a bug somewhere.
I'll also have you know that this disk only contains a library of Steam games, so its contents are not valuable.
Should I proceed with the kernel parameters on the original PC, or is that now irrelevant given the new finding that mounting via USB on a different PC fails too, and should I instead proceed with testdisk?
Offline
I'll also have you know that this disk only contains a library of Steam games, so its contents are not valuable.
You can perform a destructive test to check whether the disk can store data uncorrupted.
First, fill the disk with reproducible pseudorandom data. For example:
# openssl enc -aes-256-ctr -pass pass:12345 -nosalt 2>/dev/null < /dev/zero | dd of=/dev/nvme0n1 obs=4096 status=progress
Then reproduce the sequence and compare it to the disk content:
# openssl enc -aes-256-ctr -pass pass:12345 -nosalt 2>/dev/null < /dev/zero | cmp /dev/nvme0n1 -
If at the end you see something like
cmp: EOF on ‘/dev/nvme0n1’ after byte ..., line ...
it means that the disk at least doesn't corrupt stored data.
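As an aside, the determinism of this openssl stream can be sanity-checked on a small scratch file before committing to a full-disk write (the file path here is illustrative):

```shell
# Dry run on a 64 KiB scratch file instead of the real disk; with -nosalt
# the keyed CTR stream is deterministic, so two runs produce identical bytes.
openssl enc -aes-256-ctr -pass pass:12345 -nosalt </dev/zero 2>/dev/null \
    | dd of=/tmp/pattern.bin bs=4096 count=16 iflag=fullblock status=none
openssl enc -aes-256-ctr -pass pass:12345 -nosalt </dev/zero 2>/dev/null \
    | cmp /tmp/pattern.bin -
# cmp stops with "EOF on /tmp/pattern.bin" when the file matches the stream
# (and exits non-zero, since the stream side is longer).
```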
In case of corruption you'll see something like
/dev/nvme0n1 - differ: byte ..., line ...
Offline
Before that, what does https://man.archlinux.org/man/dumpe2fs.8 report?
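Note that a plain dumpe2fs on a 2 TB filesystem dumps every block group and produces enormous output; the -h flag limits it to the superblock summary, which is the relevant part here (the device path is whatever the disk currently enumerates as):

```shell
# -h prints only the superblock information, skipping the per-group dump
sudo dumpe2fs -h /dev/sdc
```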
Offline
It printed over 100000 lines of output.
Here are the first, probably relevant, lines:
Filesystem volume name: Lexar 2TB
Last mounted on: <not available>
Filesystem UUID: 0df9ba34-3a02-4db4-aa1c-6678949413ec
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index orphan_file filetype extent 64bit flex_bg metadata_csum_seed sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 122101760
Block count: 488378646
Reserved block count: 24418932
Overhead clusters: 7947216
Free blocks: 406275794
Free inodes: 121967900
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 1024
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Thu Dec 25 20:12:24 2025
Last mount time: Wed Apr 15 22:12:04 2026
Last write time: Wed Apr 15 22:12:07 2026
Mount count: 0
Maximum mount count: -1
Last checked: Wed Apr 15 22:12:07 2026
Check interval: 0 (<none>)
Lifetime writes: 101 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: de7ca367-a3a5-4964-8947-c49870208649
Journal backup: inode blocks
Checksum type: crc32c
Checksum: 0x75eec39d
Checksum seed: 0x6e07d8d3
Orphan file inode: 12
Journal features: journal_incompat_revoke journal_64bit journal_checksum_v3
Total journal size: 1024M
Total journal blocks: 262144
Max transaction length: 262144
Fast commit length: 0
Journal sequence: 0x00000716
Journal start: 0
Journal checksum type: crc32c
Journal checksum: 0xafe6bcb8
Group 0: (Blocks 0-32767) csum 0x0e78
Primary superblock at 0, Group descriptors at 1-233
Reserved GDT blocks at 234-1257
Block bitmap at 1258 (+1258), csum 0x57f21b7d
Inode bitmap at 1274 (+1274), csum 0x60cf9852
Inode table at 1290-1801 (+1290)
23283 free blocks, 8181 free inodes, 2 directories, 8181 unused inodes
Free blocks: 9485-32767
Free inodes: 12-8192
Group 1: (Blocks 32768-65535) csum 0xe9b9 [INODE_UNINIT]
Backup superblock at 32768, Group descriptors at 32769-33001
Reserved GDT blocks at 33002-34025
Block bitmap at 1259 (bg #0 + 1259), csum 0xfe0b5cf3
Inode bitmap at 1275 (bg #0 + 1275), csum 0x00000000
Inode table at 1802-2313 (bg #0 + 1802)
1345 free blocks, 8192 free inodes, 0 directories, 8192 unused inodes
Free blocks: 34026-34303, 40120-40447, 40463-40959, 63260-63487, 65010-65023
Free inodes: 8193-16384
Group 2: (Blocks 65536-98303) csum 0x44e4 [INODE_UNINIT]
Block bitmap at 1260 (bg #0 + 1260), csum 0x6ba97360
Inode bitmap at 1276 (bg #0 + 1276), csum 0x00000000
Inode table at 2314-2825 (bg #0 + 2314)
107 free blocks, 8192 free inodes, 0 directories, 8192 unused inodes
Free blocks: 84179-84191, 84216-84223, 88033-88063, 92137-92159, 93649-93663, 93695, 94192-94207
Free inodes: 16385-24576
Group 3: (Blocks 98304-131071) csum 0x21b0 [INODE_UNINIT]
Backup superblock at 98304, Group descriptors at 98305-98537
Reserved GDT blocks at 98538-99561
Block bitmap at 1261 (bg #0 + 1261), csum 0xb5f08b36
Inode bitmap at 1277 (bg #0 + 1277), csum 0x00000000
Inode table at 2826-3337 (bg #0 + 2826)
424 free blocks, 8192 free inodes, 0 directories, 8192 unused inodes
Free blocks: 99562-99583, 111751-111807, 112299-112383, 124853-124927, 126901-126975, 128919-129023, 131067-131071
Free inodes: 24577-32768
Group 4: (Blocks 131072-163839) csum 0xae86 [INODE_UNINIT]
Block bitmap at 1262 (bg #0 + 1262), csum 0xe6cba190
Inode bitmap at 1278 (bg #0 + 1278), csum 0x00000000
Inode table at 3338-3849 (bg #0 + 3338)
890 free blocks, 8192 free inodes, 0 directories, 8192 unused inodes
Free blocks: 132591-132607, 134058-134143, 147115-147199, 147695-147711, 150110-150271, 150494-150527, 154285-154367, 154574-154623, 155002-155135, 159325-159487, 162843-162901
Free inodes: 32769-40960
...
Offline
Filesystem state: clean
Last mount time: Wed Apr 15 22:12:04 2026
Last write time: Wed Apr 15 22:12:07 2026
You did succeed in mounting this??
But
Mount count: 0
Filesystem created: Thu Dec 25 20:12:24 2025
Lifetime writes: 101 MB
looks nonsensical. There are 82102852 used 4k blocks, ~313 GiB
=> how many filesystems/partitions/whatever does testdisk find?
Offline