I'm new to Arch. I'm setting up a NAS and I'm stuck on the RAID setup.
I have an SSD (sdb) for the filesystem (non-RAID), and I'm trying to set up my two 2 TB HDDs, sda1 and sdc1, in a software RAID 0 array following the Arch RAID wiki. These drives were pulled out of my old HTPC, but were not previously in a RAID setup.
The NAS is currently headless (kinda), so I've been doing the setup over SSH. When I got to the part of the RAID setup that says to securely wipe the drives, my SSH session ended before the wipe completed. I don't think this has any effect on my problem; I just thought I'd mention it.
I completed the "Build the array" step. The wiki's example command is:
# mdadm --create --verbose --level=5 --metadata=1.2 --chunk=256 --raid-devices=5 /dev/md/<raid-device-name> /dev/<disk1> /dev/<disk2> /dev/<disk3> /dev/<disk4> /dev/<disk5>
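Adapted to my actual setup (two-disk RAID 0, with the array name "nas" that shows up later in my mdadm.conf), the command I ran would have looked something like this:
# mdadm --create --verbose --level=0 --metadata=1.2 --chunk=256 --raid-devices=2 /dev/md/nas /dev/sda1 /dev/sdc1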
When I tried the next step, updating the mdadm.conf file, it said the file was busy.
I skipped this step and formatted the array successfully, and was able to mount it and copy files to it.
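(Roughly speaking, assuming the same paths that come up later in the thread, that was something like:
# mkfs.ext4 /dev/md/nas
# mount /dev/md/nas /mnt/nas
though I may have passed different mkfs options.)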
The next step said "If you selected the Non-FS data partition code the array will not be automatically recreated after the next boot." I used GPT partition tables with the fd00 hex code, so I felt comfortable skipping that step.
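(If anyone wants to double-check that, something like
# gdisk -l /dev/sda
lists each partition with its type code; FD00 is Linux RAID.)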
I also skipped the next step, "Add to kernel image". I don't know why; my files were copying over just fine, so I guess I figured I was done.
Then I rebooted, and it dropped into emergency mode.
After reading up on my problem, I learned more about this process, and I figured my problem was one of the steps I skipped. So I went back and finished the remaining steps, albeit out of order.
The first thing I did was update my configuration file. From the looks of it, that should have been done before putting the filesystem on the array. Do I have to re-format? I definitely want to avoid that.
When I read about this problem I thought for sure it was the mkinitcpio hooks that I was missing, so I added mdadm_udev to the HOOKS line of mkinitcpio.conf, as well as ext4 and raid456 to the MODULES line, per the wiki. Then I regenerated the initramfs image and rebooted, but the problem remains.
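The relevant lines in my /etc/mkinitcpio.conf now look roughly like this (everything in HOOKS other than mdadm_udev is just the stock set, from memory):
MODULES="ext4 raid456"
HOOKS="base udev autodetect modconf block mdadm_udev filesystems keyboard fsck"
and I regenerated the image with:
# mkinitcpio -p linux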
Here's what I copied from the output of the boot process. The first hint of a problem appears during boot:
A start job is running for dev-disk-by\x2duuid-5e553f3d:c258f28b:d07571ea:ff289d13.device
Then it times out:
Timed out waiting for device dev-disk-by\x2duuid-5e553f3d:c258f28b:d07571ea:ff289d13.device
Dependency failed for /mnt/nas.
Dependency failed for Local File Systems.
And drops me to emergency mode.
Here are the relevant excerpts from journalctl -xb:
kernel: md: bind<sda1>
kernel: md: bind<sdc1>
kernel: md: raid0 personality registered for level 0
kernel: md/raid0:md127: md_size is 7814053888 sectors.
kernel: md: RAID0 configuration for md127 - 1 zone
kernel: md: zone0=[sda1/sdc1]
kernel: zone-offset= 0KB, device-offset= 0KB, size=3907026944KB
kernel:
kernel: md127: detected capacity change from 0 to 4000795590656
kernel: md127: unknown partition table
So it looks like the kernel sees the RAID, right?
systemd[1]: Job dev-disk-by\x2duuid-5e553f3d:c258f28b:d07571ea:ff289d13.device/start timed out.
systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-5e553f3d:c258f28b:d07571ea:ff289d13.device.
-- Subject: Unit dev-disk-by\x2duuid-5e553f3d:c258f28b:d07571ea:ff289d13.device has failed
-- The result is timeout.
systemd[1]: Dependency failed for /mnt/nas.
-- Unit mnt-nas.mount has failed.
-- The result is dependency.
systemd[1]: Dependency failed for Local File Systems.
-- Subject: Unit local-fs.target has failed
-- Unit local-fs.target has failed.
--
-- The result is dependency.
Any help is appreciated.
Last edited by lewispm (2013-08-27 11:42:59)
Are you able to manually assemble the array again in emergency mode?
FWIW, I use the 'mdadm' hook in mkinitcpio, not 'mdadm_udev'. Perhaps you should try that?
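For example, something along the lines of:
# mdadm --assemble --scan
or assembling it explicitly from its member partitions, should tell you whether the array itself is still fine.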
I tried the mdadm hook instead of mdadm_udev, and it has the same result.
I tried to assemble the array in emergency mode and it assembled without any output (success?).
But then when I tried to mount:
mount: can't find UUID=5e553f3d:c258f28b:d07571ea:ff289d13
I double-checked the UUID and it is the same.
What shows up in /proc/mdstat? Before and after manually assembling the array?
You should stick with mdadm_udev; the alternative is no longer supported...
After it couldn't find the array by UUID, I changed it to /dev/md127 in fstab and it works.
I think I'll mark it solved, but should I investigate going back to UUID, or is the current fstab acceptable?
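For reference, the fstab line now looks something like this (ext4 on /mnt/nas, as before; the options are just the usual defaults):
/dev/md127   /mnt/nas   ext4   defaults   0   2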
After it couldn't find the array by UUID, I changed it to /dev/md127 in fstab and it works.
That suggests your /etc/mdadm.conf file isn't right.
What shows up in /proc/mdstat? Before and after manually assembling the array?
Here's after assembling the array, since it works now:
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid0]
md127 : active raid0 sda1[0] sdc1[1]
3907026944 blocks super 1.2 256k chunks
unused devices: <none>
You should stick with mdadm_udev; the alternative is no longer supported...
I switched back, and it's still working with /dev/md127 in fstab.
That suggests your /etc/mdadm.conf file isn't right.
Here's /etc/mdadm.conf:
ARRAY /dev/md/nas metadata=1.2 name=lewis-nas:nas UUID=5e553f3d:c258f28b:d07571ea:ff289d13
and the output of
mdadm --detail --scan
is identical, since the wiki directed me to generate the mdadm.conf file like this:
mdadm --detail --scan > /etc/mdadm.conf
but the array is /dev/md127, not /dev/md/nas. Is that the sticking point? Is it the UUID? If so, how can I check?
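I guess one way to compare them would be something like:
# mdadm --detail /dev/md127
which shows the array UUID that mdadm.conf uses, versus
# blkid /dev/md127
which shows the filesystem UUID that a UUID= entry in fstab normally refers to.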
ARRAY /dev/md/nas metadata=1.2 name=lewis-nas:nas UUID=5e553f3d:c258f28b:d07571ea:ff289d13
....
but the array is /dev/md127, not /dev/md/nas. Is that the sticking point? Is it the UUID? If so, how can I check?
I don't know if pointing the block device to a custom name inside /dev/md/ works; I've never done it. It probably should work, though, so if it doesn't, it's probably worth a bug report [1].
Try changing mdadm.conf to create /dev/md0 instead of /dev/md/nas and see if that works. That will narrow the issue down to either the mdadm_udev hook not assembling correctly (which could be bug-reported, IMHO) or something else entirely.
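In other words, the ARRAY line would become something like:
ARRAY /dev/md0 metadata=1.2 name=lewis-nas:nas UUID=5e553f3d:c258f28b:d07571ea:ff289d13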
[1] Not exactly the same issue, but similar: https://mailman.archlinux.org/pipermail … 31416.html