I have a system with two 1TB hard drives. It uses BIOS to boot, no UEFI. I would like to install Arch on a software RAID. I tried more or less following this:
https://www.serveradminz.com/blog/insta … tware-raid
But with modifications because my system doesn't have UEFI.
I can create the software RAID like so:
mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[a-b]1
I can then make partitions on /dev/md0 and pacstrap Arch onto one of the partitions as usual for an Arch install.
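In case it helps, this is roughly the sequence; the single ext4 partition on the array, the mount point and the package list are just what I happen to use, not anything canonical:

# after the mdadm --create above:
fdisk /dev/md0                                  # make e.g. /dev/md0p1 on the array
mkfs.ext4 /dev/md0p1
mount /dev/md0p1 /mnt
pacstrap /mnt base linux linux-firmware mdadm
genfstab -U /mnt >> /mnt/etc/fstab
mdadm --detail --scan >> /mnt/etc/mdadm.conf    # record the array for the installed system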
However, when I'm configuring the bootloader, I can't get it to recognize the RAID partition. I've tried both grub and syslinux.
My questions are: (1) Is there some documentation I could read about this (a lot of what I find assumes UEFI)? (2) Should I be making the software RAID the boot device, or do I need a master boot record on the disk to start the boot process? And (3) is there a bootloader that is better suited to this? I'm not sure what, if anything, I need to do to assemble the software RAID as part of the boot process.
I think what I'm trying to do is a little different from the usual install which is why I'm struggling to find appropriate documentation.
Last edited by nilesOien (2024-05-21 23:41:36)
-- "Make it as simple as possible, but no simpler" - Albert Einstein
Offline
Have you read https://wiki.archlinux.org/title/GRUB#RAID ?
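Roughly what that ends up looking like on a BIOS system (a sketch only; it assumes /dev/sda and /dev/sdb are the array members and that you run it from the arch-chroot):

# add the mdadm_udev hook to HOOKS in /etc/mkinitcpio.conf so the initramfs can assemble the array,
# e.g. HOOKS=(base udev autodetect modconf block mdadm_udev filesystems fsck)
mkinitcpio -P
# write the BIOS boot code to the MBR of every member disk, so either disk can start the boot
grub-install --target=i386-pc /dev/sda
grub-install --target=i386-pc /dev/sdb
grub-mkconfig -o /boot/grub/grub.cfg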
Online
It turns out that the system has an Intel Rapid Storage Technology card in it, as mentioned here:
https://wiki.archlinux.org/title/Partit … is_enabled
As far as I can tell, even if I go into the Intel setup on boot and set both drives to what Intel terms "Non-RAID drives", there is a very strange interaction if I then try to set up a software RAID on those drives. Newly created partitions can't have a filesystem created on them because they are "busy", even though they are brand new.
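For what it's worth, these are the checks I have in mind, on the guess (only a guess) that leftover Intel RST/IMSM metadata is what keeps the devices busy:

cat /proc/mdstat                     # is an IMSM container being auto-assembled?
mdadm --examine /dev/sda /dev/sdb    # look for Intel/imsm metadata on the raw disks
lsblk                                # see what is actually holding the new partitions
# only if old metadata is the culprit and nothing on the disks matters any more:
mdadm --stop /dev/md127              # stop any auto-assembled array/container (md127 is just an example name)
mdadm --zero-superblock /dev/sd[a-b]1
wipefs -a /dev/sda /dev/sdb          # destructive: removes all RAID/filesystem signatures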
I'll poke around a bit more but at this point I strongly suspect that I'll just install Arch on the Non-RAID drives and have done with it. Attempting to set up a software RAID seems to be triggering something odd in this hardware/firmware (which is over 20 years old). Marking as solved.
-- "Make it as simple as possible, but no simpler" - Albert Einstein
Offline
From experience with fake RAID: spare yourself a lot of trouble and just don't use it for your OS.
Unless there's some specific need, which I don't see when reading "raid 0" anyway, I recommend avoiding booting from a multi-disk array.
Reason: standard consumer-grade hardware only supports booting from one physical device anyway, so when setting up any sort of array you rely on the firmware being smart enough to fail over automatically in the event of a failure.
This means: setting up boot sectors/partitions on all drives that are part of the array and keeping them synchronized; setting up the boot order so the BIOS/UEFI can fail over to at least one other disk if the usual main one fails; and using a bootloader able to handle the array well enough to at least load the kernel and initrd, which in turn has to be able to assemble the array before pivoting to the real root.
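For syslinux that looks roughly like this (a sketch; it assumes /boot sits on a RAID1 md device created with --metadata=1.0 so the filesystem still starts at the beginning of each partition, which syslinux needs since it reads /boot directly):

syslinux-install_update -i -a -m        # install files, set the boot flag, write MBR code to each disk
# or by hand, then repeated for every member disk:
extlinux --install /boot/syslinux
dd bs=440 count=1 conv=notrunc if=/usr/lib/syslinux/bios/mbr.bin of=/dev/sda
dd bs=440 count=1 conv=notrunc if=/usr/lib/syslinux/bios/mbr.bin of=/dev/sdb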
You should avoid fake RAID at all costs!
Most of them rely on Windows-only drivers and will cause issues with Linux.
It's cheap, bad-quality software RAID implemented at the level of a UEFI option ROM, and it's prone to failing and to killing your disks (the AMD ones are very bad: mine randomly started causing bad sectors; after dd'ing /dev/zero over the drives and switching to ZFS I have not had any failures since).
"Hardware RAID" (as in a physical HBA with RAID firmware) might be an option, but at that point you are in the realm of professional/server-grade hardware and should be able to just afford 20 bucks for a cheap boot SSD.
As for data: go with software RAID like ZFS. It's open source and widely available, and it works in a rescue live system for recovery in the event of a failure. And keep in mind: RAID is not a backup. It's not meant to prevent data loss but to increase performance and availability (and, back in the early days, to bring down cost).
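E.g. a minimal two-disk mirror for a data pool (just a sketch; the pool name and the by-id paths are placeholders you'd replace with your own disks):

zpool create -o ashift=12 -m /data tank mirror /dev/disk/by-id/ata-DISK_SERIAL_1 /dev/disk/by-id/ata-DISK_SERIAL_2
zpool status tank                    # verify both disks show up under the mirror vdev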
Offline