I initially had the exact same problem as another user in this topic: https://bbs.archlinux.org/viewtopic.php?id=258959
My system is quite new and I'm new to Arch Linux, but I also have a really complex setup ^^'
After my /dev/md126 / /dev/md126p1 didn't show up anymore, I was able to copy my files by booting the Arch Linux live CD (2020.08.01) and mounting all my disks back onto the system disk:
mount /dev/nvme0n1p2 /mnt
mount /dev/nvme0n1p1 /mnt/boot
mount /dev/md126p1 /mnt/md126p1
mount --bind /mnt/md126p1/opt /mnt/opt
Then I moved everything from opt to a temporary location, ran umount /mnt/opt, and moved everything back to /mnt/opt.
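For future readers, the shuffle described above (move the bind-mounted contents aside, unmount, move them back) can be sketched like this. It is demonstrated on throwaway directories created with mktemp rather than the real /mnt/opt bind mount, so all paths below are made up for illustration:

```shell
# Sketch of the opt shuffle, demonstrated on throwaway directories
# instead of the real /mnt/opt bind mount (paths here are made up).
set -e
root=$(mktemp -d)                       # stands in for /mnt
mkdir -p "$root/opt" "$root/opt-backup"
echo "appdata" > "$root/opt/app.conf"   # pretend content under /opt
mv "$root"/opt/* "$root/opt-backup/"    # 1) move everything to a temp location
# 2) at this point the real recovery ran: umount /mnt/opt
mv "$root"/opt-backup/* "$root/opt/"    # 3) move everything back
ls "$root/opt"
```

The point of unmounting in between is that after umount the writes land in the directory on the root filesystem instead of on the (bind-mounted) RAID.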
I modified fstab and booted back into the system, and everything worked.
But linux 5.8.8.arch1-1 really cannot find any of the /dev/md* devices anymore.
I read a lot about RAID and found hints like putting dmraid in HOOKS and dm_mod in MODULES, and so on.
After setting up many new things, I now see /dev/mapper/isw_bebfjfaiib_SSD-RAID0.
"SSD-RAID0" is the name I gave the RAID in the RAID controller menu.
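For reference, the HOOKS/MODULES changes hinted at above might look like this. This is only a sketch for a dmraid-based fakeraid; the exact hook order and the rest of the HOOKS line depend on your setup, so merge it with what is already in your /etc/mkinitcpio.conf rather than copying verbatim:

```shell
# /etc/mkinitcpio.conf (excerpt, sketch for a dmraid-based fakeraid)
MODULES=(dm_mod)
HOOKS=(base udev autodetect modconf block dmraid filesystems keyboard fsck)

# afterwards, regenerate the initramfs for all installed kernels:
# mkinitcpio -P
```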
$ sudo blkid
/dev/nvme0n1p1: LABEL_FATBOOT="EFIBOOT" LABEL="EFIBOOT" UUID="CB6C-3316" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="80981458-b09e-0f45-9936-f4adbebf3ce2"
/dev/nvme0n1p2: UUID="a837fee0-84f6-4d49-b08b-5e0ae8188db0" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="bbc113f0-25c3-1d42-855f-8ac14badcd0b"
/dev/nvme0n1p3: UUID="77a686d8-b652-40ab-8a8a-17e5d5208cbb" TYPE="swap" PARTUUID="36e431f3-376f-0c49-af27-7b3e6ba4a876"
/dev/sda: TYPE="isw_raid_member"
/dev/sdb: TYPE="isw_raid_member"
/dev/sdc: TYPE="isw_raid_member"
/dev/sdd1: LABEL_FATBOOT="EFIBOOT" LABEL="EFIBOOT" UUID="CC39-484D" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="be6d8b72-c7cb-d04a-87ad-2cd228707f66"
/dev/sdd2: UUID="4bd6b4c1-91ea-44ff-856c-d03dca2a8604" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="68e41026-220c-534a-99b5-34c10ef956cc"
/dev/sdd3: UUID="f3aa0e6f-f055-47c8-bfea-243f4151c869" TYPE="swap" PARTUUID="46ff6dd7-cad4-e242-8073-d01a0a0f37a8"
/dev/sr0: BLOCK_SIZE="2048" UUID="2020-08-01-09-10-05-00" LABEL="ARCH_202008" TYPE="iso9660" PTUUID="5f20069e" PTTYPE="dos"
/dev/sde2: LABEL="Shinigamis Externe" BLOCK_SIZE="4096" UUID="D87060D17060B7C0" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="b37ee969-8dba-46ac-8f07-6fe912b1e9d4"
/dev/mapper/isw_bebfjfaiib_SSD-RAID0: PTUUID="4acdd913-3995-b64d-80b1-cd41e9835867" PTTYPE="gpt"
/dev/sde1: PARTLABEL="Microsoft reserved partition" PARTUUID="203efda6-4bfb-4d3d-b3d0-d48c75209fc0"
$ sudo fdisk -l
Disk /dev/nvme0n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 970 EVO Plus 2TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 99E8FD5B-C8B4-1F49-AC63-4B28C6E46CC1
Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 2099199 2097152 1G EFI System
/dev/nvme0n1p2 2099200 3638593535 3636494336 1.7T Linux root (x86-64)
/dev/nvme0n1p3 3638593536 3907029134 268435599 128G Linux swap
Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: MKNSSDRE2TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/sda1 1 4294967295 4294967295 2T ee GPT
Disk /dev/sdb: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: MKNSSDRE2TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: CT2000MX500SSD1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdd: 238.47 GiB, 256060514304 bytes, 500118192 sectors
Disk model: Samsung SSD 840
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A6F3BC60-E3D2-CB44-91DB-D60AD8CD0F3B
Device Start End Sectors Size Type
/dev/sdd1 2048 2099199 2097152 1G EFI System
/dev/sdd2 2099200 466563071 464463872 221.5G Linux root (x86-64)
/dev/sdd3 466563072 500118158 33555087 16G Linux swap
Disk /dev/mapper/isw_bebfjfaiib_SSD-RAID0: 5.46 TiB, 6001189453824 bytes, 11721073152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 131072 bytes / 393216 bytes
Disklabel type: gpt
Disk identifier: 4ACDD913-3995-B64D-80B1-CD41E9835867
Device Start End Sectors Size Type
/dev/mapper/isw_bebfjfaiib_SSD-RAID0-part1 2048 11721073118 11721071071 5.5T Linux RAID
Disk /dev/sde: 2.73 TiB, 3000558944256 bytes, 732558336 sectors
Disk model: My Book 1140
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B6CD5155-8D7F-4045-A275-377DF565DCAF
Device Start End Sectors Size Type
/dev/sde1 6 32773 32768 128M Microsoft reserved
/dev/sde2 33024 732558079 732525056 2.7T Microsoft basic data
So now I'm confused about one thing!
/dev/sda1 1 4294967295 4294967295 2T ee GPT
Is it correct that the first disk of the RAID has its own GPT table?
And is it correct or wrong that it shows "Disklabel type: dos"?
After I created an ext4 partition in GParted, I found out that
/dev/dm-0 was the same as /dev/mapper/isw_bebfjfaiib_SSD-RAID0
and
/dev/dm-1 was the same as /dev/mapper/isw_bebfjfaiib_SSD-RAID0-part1
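That equivalence is expected: /dev/dm-N are the kernel's raw device-mapper nodes, and on udev systems the friendly /dev/mapper/&lt;name&gt; entries are symlinks pointing back at them. Since no real device-mapper devices can be assumed here, the idea is demonstrated below on a throwaway symlink (paths are made up; on the real system the equivalent check would be readlink -f /dev/mapper/isw_bebfjfaiib_SSD-RAID0):

```shell
# /dev/mapper/<name> is (on udev systems) typically a symlink to the kernel's
# /dev/dm-N node; readlink -f resolves it. Demonstrated on a fake directory
# tree, since no real device-mapper devices are assumed here.
set -e
dev=$(mktemp -d)          # stands in for /dev
mkdir -p "$dev/mapper"
touch "$dev/dm-0"
ln -s ../dm-0 "$dev/mapper/isw_bebfjfaiib_SSD-RAID0"
readlink -f "$dev/mapper/isw_bebfjfaiib_SSD-RAID0"   # resolves to .../dm-0
```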
I moved all the opt files back onto the new RAID, modified /etc/fstab and then rebooted.
$ cat /etc/fstab
# Static information about the filesystems.
# See fstab(5) for details.
# <file system> <dir> <type> <options> <dump> <pass>
# /dev/nvme0n1p2
UUID=a837fee0-84f6-4d49-b08b-5e0ae8188db0 / ext4 rw,relatime 0 1
# /dev/nvme0n1p1 LABEL=EFIBOOT
UUID=CB6C-3316 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro 0 2
# /dev/dm-1
UUID=13b716f0-7466-48f0-8d1d-5c414e7b52c2 /mnt/ssdraid0 ext4 rw,relatime,stripe=96 0 2
# /mnt/ssdraid0/opt
/mnt/ssdraid0/opt /opt none rw,stripe=96,bind 0 0
# /dev/nvme0n1p3
UUID=77a686d8-b652-40ab-8a8a-17e5d5208cbb none swap defaults 0 0
# QEMU
hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=992 0 0
But now it failed:
I don't have /dev/dm-0 or /dev/dm-1 after the reboot.
And when I try to run
sudo mount /dev/mapper/isw_bebfjfaiib_SSD-RAID0-part1 /mnt/ssdraid0
manually, I get:
mount: /mnt/ssdraid0: special device /dev/mapper/isw_bebfjfaiib_SSD-RAID0-part1 does not exist.
https://user-images.githubusercontent.c … f731ad.png
Is there any chance to get my files on the RAID back? I have a Windows 10 VM with working PCI passthrough in it.
It's not a big deal to lose this VM, but recovering it could save me some time and also teach me something about RAID / Arch Linux.
If I cannot fix this variant of the RAID setup, I may just wait until linux 5.8.8.arch1-1 is fixed and I can see the /dev/md* devices again, because the old variant was much better: I could access my RAID files from the Arch Linux live CD.
Thanks for any help
Mod Edit - Replaced oversized image with link (CoC - Pasting pictures and code).
Last edited by Shinigami92 (2020-09-13 11:16:13)
OK, I have learned my lesson...
I installed linux-lts and fell back to my old, nicely working strategy.
linux 5.8.8 will NOT work for me, and I need to wait until this bug is fixed.
I also learned that the "dm" in dm_mod and dmraid stands for "device mapper".
Last edited by Shinigami92 (2020-09-13 11:14:32)