
#1 2011-04-12 12:34:44

evilgold
Member
Registered: 2008-10-30
Posts: 120

How does i make RAID?

TL;DR: My fake RAID-1 setup only shows the main device in /dev/mapper, not the partitions. Maybe I need an isw module?

I'm having a bit of trouble configuring my system for RAID. I'm not sure, but I may have already lost my backup due to my own ignorance, so before I dig myself in any further I thought I'd ask for help here.

My desired setup is as follows:
2x 320 GB drives in RAID-1, holding / (20 GB), /boot (64 MB) and /home (300 GB)
2x 1 TB drives in RAID-1 as a storage/backup drive.

On top of this, the individual partitions are encrypted with LUKS, with ext4 and btrfs as the filesystems (when I can get that far).
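
For reference, this is roughly what I'm aiming for on top of each array once it shows up properly. The device and mapping names below are placeholders rather than the exact commands I ran, so treat it as a sketch:

    # LUKS on top of one of the fake-RAID partitions (names are placeholders)
    cryptsetup luksFormat /dev/mapper/isw_herpderp_blahp3
    cryptsetup luksOpen /dev/mapper/isw_herpderp_blahp3 crypthome
    # then a filesystem inside the opened container
    mkfs.btrfs /dev/mapper/crypthome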

I configured the 1 TB RAID first, setting it up in my BIOS and following this guide: https://wiki.archlinux.org/index.php/In … _Fake_RAID
Once I had this set working, I backed up all of my home directory and /etc onto it, then went back to my BIOS to set up the 320 GB RAID (which previously held all the files on a single drive).
That's when the problems started.
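
If it matters, the activation steps I followed were roughly the ones from that wiki page (quoting from memory, so this may not be letter-for-letter what I typed):

    modprobe dm_mod    # make sure device-mapper is available
    dmraid -ay         # activate all of the fake-RAID (BIOS RAID) sets
    ls /dev/mapper     # the isw_* devices and their partitions should show up here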

So first I noticed that the 1 TB array was no longer showing up properly: /dev/mapper/isw_herpderp_blah would appear, but not /dev/mapper/isw_herpderp_blahp1. When I tried looking at the partition table, cfdisk complained that it wasn't valid, yet fdisk listed the partition table properly (and showed the partition I expected). This led me to think the partition map had been overwritten during the Arch setup process, but that seems unlikely, as I was sure to double- and triple-check everything before making any changes to the disks.

After a bit of fussing, I finally gave up on my data and moved on to installing onto the 320 GB drives. That went smoothly enough thanks to the wiki page. After rebooting and getting things a bit more set up (installing LXDE, configuring X, etc.), I went on to tinker with the 1 TB RAID some more. After another reboot I noticed my system's RAID configuration was reporting the 1 TB set as "initialized" while the other was "normal" (and I do recall the 1 TB set previously being listed as normal, before things went nuts). Thinking this was the issue, I reconfigured (deleted and recreated) the RAID set between the 1 TB drives (NOT touching the 320s at all).

Now both show up on the RAID boot screen as "normal", but Linux fails to see the partitions on either of them; only fdisk (and maybe cfdisk) will show the proper partitions. This leaves me with no root partition.
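
Is there something I'm supposed to run to create the partition mappings by hand? From what I've read, something like kpartx might do it, but that's just a guess on my part and I haven't confirmed it's the right approach for dmraid sets:

    kpartx -l /dev/mapper/isw_herpderp_blah    # list the partitions kpartx can find
    kpartx -a /dev/mapper/isw_herpderp_blah    # add partition mappings under /dev/mapper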

Another thing of note: fdisk seems to show the disks as /dev/mapper/isw_herpderp_blah1, whereas previously, when they were showing up properly, they were listed as /dev/mapper/isw_herpderp_blahp1 (p1 instead of just 1). This was also an issue with GRUB, mentioned and solved on the wiki page.

Is this normal behaviour for a RAID? This is my first time attempting to set one up, but I was under the impression they are for redundancy, so I would expect that tinkering with one array shouldn't break the other, yet that seems to be what's happening to me.

I'm using x86_64 and have tried both the current kernel26 package and the LTS kernel. My /etc/mkinitcpio.conf is configured as per the Fake RAID wiki page, and I have two other non-RAID disks that are LUKS-encrypted with btrfs and show up properly from a live CD.
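
For what it's worth, my HOOKS line looks roughly like this (typed from memory, so the exact list and ordering may be slightly off; the point is that dmraid and encrypt come before filesystems, as the wiki describes):

    # /etc/mkinitcpio.conf (approximate)
    HOOKS="base udev autodetect pata scsi sata dmraid encrypt filesystems"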

I'm not sure, but the wiki page hints at loading a module... would that only be needed if I weren't seeing anything in /dev/mapper aside from 'control' (as the page seems to indicate), or could the kernel be failing to load/autodetect the module properly? I tried modprobe sata_isw as a guess, but had no luck.
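
In case it helps, this is the sort of checking I've been doing. If I'm reading the wiki right, the module it means is dm_mod, but I'm not certain of that, and sata_isw was purely a guess:

    lsmod | grep dm_    # see which device-mapper modules are actually loaded
    modprobe dm_mod     # the module the wiki seems to mean
    dmraid -ay          # then try activating the sets again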

Thoughts? Suggestions?

Thanks for reading my plight.
