I have a standalone server running Arch 2009.08 x64, installed via the 'Netinst' CD. At the time of installation the server had only one physical disk. I installed the system as follows:
* /dev/sda1 = swap
* /dev/sda2 = boot
* /dev/sda3 = /
* /dev/sda4 = /var
I have since added two identical drives to this system, which show up as /dev/sdb and /dev/sdc. The system runs perfectly, and I would now like to set up /dev/sdb and /dev/sdc as a mirror. I can use the 'cfdisk' utility to create RAID partitions (type 'fd') on the new drives. Can I then simply use the 'mdadm' utility to create a mirror, without messing up my configured, working Arch Linux server?
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
The Wiki has a write-up on RAID/LVM (which I have never been able to get working), but it appears to cover configuration during the Arch installation, which I have already completed. I have followed the Wiki guide dozens of times to no avail. I even gave up on putting the swap and root partitions on RAID and tried just /home, and it still fails.
Anyone know if I will run into any issues?
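For reference, the whole sequence I have in mind would look roughly like this. A sketch only: the device names are the ones above, and 'mdadm --create' destroys whatever is on the member partitions, so double-check with 'fdisk -l' first.

```shell
# Sketch: mirror the two new drives (wipes sdb1/sdc1!)
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# The array is usable immediately; the initial resync runs in the background.
grep -q 'active raid1' /proc/mdstat && echo "array assembled"

# The filesystem goes on the md device, never on the member partitions.
mkfs.ext4 /dev/md0

# Persist the array definition so it is assembled on every boot.
mdadm -D --scan >> /etc/mdadm.conf
```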
./
As long as the drives don't hold any data it shouldn't be a problem. Make sure you add the mdadm hook to mkinitcpio.conf so the array is picked up and assembled during boot.
Got Leenucks? :: Arch: Power in simplicity :: Get Counted! Registered Linux User #392717 :: Blog thingy
Correct, I need to add the 'mdadm' hook before 'filesystems' (not after, as the Wiki indicates), but is that all I need to do? What about running this command?
mdadm -D --scan >>/etc/mdadm.conf
Last edited by Carlwill (2009-11-23 20:09:59)
To my knowledge, yes. I never used it that way since I set it up the old way (with the arrays specified in GRUB's configuration etc.). I had / on RAID 1 and my data on RAID 5.
> To my knowledge, yes. I never used it that way since I set it up the old way (with the arrays specified in GRUB's configuration etc.). I had / on RAID 1 and my data on RAID 5.
So you followed the Arch Wiki for RAID configuration during the installer and it worked without any problems? I have three drives, just like the Wiki, and I have never been able to get it to work; I have spent weeks trying. Obviously I skip the LVM section of the Wiki since I don't use LVM, but I would like to know what I am doing wrong. It's killing me that I can't do this, and I have basically memorised the Wiki from doing it so many times...
This is how my GRUB entry looked:
# (0) Arch Linux Server
title Arch Linux Server
root (hd0,4)
kernel /boot/vmlinuz26server root=/dev/md0 ro md=0,/dev/sda5,/dev/sdb5 md=1,/dev/sda6,/dev/sdb6,/dev/sdc6 vga=794
initrd /boot/kernel26server.img
My mdadm.conf:
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=d8e1f142:46e9f433:9fece339:f6ae56e9
ARRAY /dev/md1 level=raid5 num-devices=3 UUID=a5e90f7f:964646c3:1b73e96b:d94bc95b
My mkinitcpio.conf (this is reconstructed since I removed the RAID setup to minimise power consumption):
HOOKS="base udev autodetect sata raid usbinput keymap"
# RAID setups defined
md=0,/dev/sda5,/dev/sdb5
md=1,/dev/sda6,/dev/sdb6,/dev/sdc6
Note I have no filesystems hook specified since I either built the FS drivers into my kernel statically or added them to the cpio image manually with the MODULES= directive.
I hope that helps you. Keep in mind 'raid' is an old hook; 'mdadm' is the new one, if I'm not mistaken. It might also very well be that you don't need the hook at all with the way I set up the RAIDs (assembled by GRUB etc.), but I wouldn't bet on that. You can always test, of course.
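For the hook route (instead of GRUB-assembled arrays), I'd expect the mkinitcpio.conf lines to look something like this on a 2009-era install. A sketch only, since hook names changed over time:

```shell
# /etc/mkinitcpio.conf -- sketch for a hook-assembled array.
# The mdadm hook must come before filesystems so the array exists
# by the time the root filesystem is mounted.
MODULES=""
HOOKS="base udev autodetect pata scsi sata mdadm filesystems"
```

After editing, regenerate the image with 'mkinitcpio -p kernel26'.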
Well here is what I am doing:
1. Use 'cfdisk' to partition all three of my disks identically.
2. Use 'mdadm' to create all three RAID arrays as noted in the Wiki.
3. Run the following command:
rm /etc/mdadm.conf
mdadm -D --scan >> /etc/mdadm.conf
4. Run the set up as noted in the Wiki via /arch/setup
Now, when I am configuring my system via the installer and it tells me to add the 'mdadm' hook to 'mkinitcpio.conf', do I also need to add anything to the MODULES line of the 'mkinitcpio.conf' file? I was told I do, but the Wiki says nothing about that, only that I need to add 'mdadm' to the HOOKS line.
Perhaps you need to establish a file system on your md RAID device before running mkinitcpio...
Do you find your array listed in /etc/mdadm.conf after running the scan?
If so, the only thing left to break the RAID is the missing file system. The file system must be applied to the md device, AFAICT, and not to the constituent drives.
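The distinction can be tried without touching real disks by using a scratch file in place of /dev/md0 (purely illustrative; with a real array the target would be /dev/md0 itself):

```shell
# Scratch file standing in for /dev/md0; with a real array you would
# run mkfs.ext4 /dev/md0 -- never mkfs on the member partitions.
truncate -s 16M /tmp/fake-md0
mkfs.ext4 -q -F /tmp/fake-md0   # -F because the target is a plain file
```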
Prediction...This year will be a very odd year!
Hard work does not kill people but why risk it: Charlie Mccarthy
A man is not complete until he is married..then..he is finished.
When ALL is lost, what can be found? Even bytes get lonely for a little bit! X-ray confirms Iam spineless!
> Perhaps you need to establish a file system on your md raid device before running mkinitcpio...
> Do you find your array listed in /etc/mdadm.conf after running the scan?
> If so, only the missing file system can abort the raid. The file system must be applied to the md device AFAICT and not the constituent drives.
What do you mean by establishing the file system on the RAID? I thought I put my filesystem (e.g. ext4) on /dev/md0 and then mount /dev/md0 at /. When that's all said and done, I think the installer runs mkinitcpio, no?
This procedure for the md device is standard for RAID setups using mdadm; I found it necessary for my software RAID devices. My member devices each use a single partition spanning the whole drive and show no data in GParted. The composite md device carries the file system, which must be established after assembly, AFAICT.
The mdadm system is further covered by the array data recorded in /etc/mdadm.conf by the --scan you performed; 'mkinitcpio -p kernel26' is then used to permit assembly during the boot process.
So my process was: delete the devices' present state in GParted, establish the md device with the assemble procedure, apply the filesystem to the md device thus generated, then generate /etc/mdadm.conf, and finally run mkinitcpio.
I initially tried to establish a RAID array by first applying a Linux file system in GParted to the member devices. That did not produce a RAID array; only when the md device was addressed by mkfs did an array result.
This may not be applicable to your arrangement, but it would seem reasonable that if the mdadm procedure calls for these parameters, they should be reflected in it.
I comment on the problem in hopes some clue may show up which aids in solving your difficulty.
It is proposed that nearly any mixture of devices can be established in a RAID array (though that may not include mixing file systems). Thus an IDE drive and several USB devices can make up a RAID array, or just several USB devices.
My experience is with RAID 0, using SATA-to-CF adapters with four devices: one OCX device and three CF cards. I had to establish the filesystem on the md device after assembling the array. My RAID 0 reads at 176 MB/s in hdparm.
I can examine the array in /etc/mdadm.conf and/or with 'cat /proc/mdstat', among others.
Mebbe there is a clue in this detail?
I copied a googled description of creating a RAID array with mdadm:
If you are using mdadm, a single command like
mdadm --create --verbose /dev/md0 --level=linear --raid-devices=2 /dev/sdb6 /dev/sdc5
should create the array. The parameters talk for themselves. The output might look like this
mdadm: chunk size defaults to 64K
mdadm: array /dev/md0 started.
Have a look in /proc/mdstat. You should see that the array is running.
Now, you can create a filesystem, just like you would on any other device, mount it, include it in your /etc/fstab and so on.
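The fstab step mentioned at the end would look something like this (the mount point here is only an example, not from the thread):

```shell
# /etc/fstab -- example entry; /home is a hypothetical mount point
/dev/md0   /home   ext4   defaults   0   2
```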
This refers to the filesystem creation on the md device: a very important step.
cat /proc/mdstat
should give you info on your array (whether it's syncing, degraded, fully operational, etc., etc.).
If what's in the wiki doesn't work, carlwill, then why don't you try what I pasted and report back, and if it works, adapt the wiki? And by the way, my mdadm.conf is the result of an mdadm scan, too.
I guess the easiest thing for me to do is write the most detailed post I can of exactly what I am doing. My hardware configuration is extremely simple: an x64 system with only two identical SATA drives, which I would like to mirror. My partitions for the two drives are very straightforward:
- /dev/sda1 = 1024 MB (/boot) *bootable*
- /dev/sda2 = 249 GB (Linux RAID Autodetect)
- /dev/sdb1 = 1024 MB (swap)
- /dev/sdb2 = 249 GB (Linux RAID Autodetect)
Now, as you can see, I am not using RAID on my /boot partition, so I should be able to just install GRUB in the MBR of /dev/sda, right? Here is what I am doing, step by step...
1. Boot from Netinst 2009.08 disk.
2. Login as root.
3. Load the RAID1 modules via the command:
[root@archiso ~]# modprobe raid1
4. Create the partitions listed above exactly in the 'cfdisk' utility.
5. Create the RAID array using the 'mdadm' command listed below:
[root@archiso ~]# mdadm --create --verbose /dev/md0 --level=linear --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm: chunk size default to 64k
mdadm: array /dev/md0 started.
At this point my /etc/mdadm.conf has no info about the RAID array I just created above so I do the following command:
[root@archiso ~]# rm /etc/mdadm.conf
[root@archiso ~]# mdadm -D --scan >> /etc/mdadm.conf
[root@archiso ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 level=linear num-devices=2 metadata=0.90 UUID=d76f90c5:1c5e4a1b:62dde03e:d4284c62
So now, according to the Wiki, I am free to run the '/arch/setup' command and do "Setup Network", "Set Clock", "Prep Hard Drives", "Install Packages", "Configure System", and "Install Bootloader".
Does the above so far look 100% correct to you guys? Did I miss something or should I change anything before I move on?
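One quick sanity check that can be run before entering /arch/setup (a sketch; assumes /dev/md0 was just created as above):

```shell
# Confirm the array is assembled before installing onto it.
grep -q '^md0 : active' /proc/mdstat && echo "md0 assembled"
mdadm -D /dev/md0 | grep 'State :'
```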
Looks ok to me... Keep in mind you need to copy that mdadm.conf to your installation root after setup.
> Looks ok to me... Keep in mind you need to copy that mdadm.conf to your installation root after setup.
I am a little unclear: how do I do this? I don't see any mention of this in the Wiki, so perhaps this has been my issue all along.
After I install the GRUB bootloader to /dev/sda and exit the installer, I am back at a root prompt. Do I simply:
cp -a /etc/mdadm.conf /mnt/etc/
If not, can you explain exactly when and how I should execute this?
That should be correct. Alternatively, you can append the output of 'mdadm -D --scan' to the new mdadm.conf, just like you did before.
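Concretely, from the live CD after the installer exits, either of these should work (a sketch; /mnt is where the installer leaves the target root mounted):

```shell
# Option 1: copy the file built earlier in the live environment.
cp /etc/mdadm.conf /mnt/etc/mdadm.conf

# Option 2: regenerate it directly into the installed system.
mdadm -D --scan >> /mnt/etc/mdadm.conf

# Sanity check: expect one ARRAY line per array.
grep -c '^ARRAY' /mnt/etc/mdadm.conf
```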
So you're saying that once I exit the installer and am back at the command prompt on the live CD, I simply re-run the command I ran before? When you say 'new mdadm.conf', what absolute path are you expecting:
/mnt/etc/mdadm.conf or /etc/mdadm.conf?
I would think the first, since that is the file it will read when it boots, no?
Seriously.... Yes.
Hate to bring this back from the dead, but I have yet to get this to work. When I am in the "Configure System" step:
rc.conf
mkinitcpio.conf
hosts.conf
etc etc etc
Do I need to add any modules in rc.conf? I know I need to add the 'mdadm' hook in the 'mkinitcpio.conf' file, but what do I need to add to rc.conf? The Wiki makes no mention of needing to add anything there.
Do I need to add 'md_mod' and 'raid1' to the MODULES list in rc.conf? I was told I do, yet the Wiki makes no mention of this at all. Can anyone please clarify?
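For reference, if modules were needed there, it would presumably look like this. An assumption only: with a working mdadm hook in mkinitcpio.conf this should be unnecessary, since the hook loads the RAID modules itself.

```shell
# /etc/rc.conf -- hypothetical MODULES line, only relevant if the
# initramfs hook route were not used (md_mod and raid1 as advised above)
MODULES=(md_mod raid1)
```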
Last edited by Carlwill (2009-12-01 18:27:06)
Well...
I think I finally got this working as a RAID1 mirror! I am pretty stoked right now, because I have no idea what I did differently every other time, but it's working and she looks good...
FDISK
Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0003e44e
Device Boot Start End Blocks Id System
/dev/sda1 * 1 498 4000153+ 83 Linux
/dev/sda2 499 5478 40001850 83 Linux
/dev/sda3 5479 38913 268566637+ fd Linux raid autodetect
Disk /dev/sdb: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000ba720
Device Boot Start End Blocks Id System
/dev/sdb1 1 498 4000153+ 82 Linux swap / Solaris
/dev/sdb2 499 5478 40001850 83 Linux
/dev/sdb3 5479 38913 268566637+ fd Linux raid autodetect
Disk /dev/md0: 275.0 GB, 275012124672 bytes
2 heads, 4 sectors/track, 67141632 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
MDSTAT
[root@mail ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda3[0] sdb3[1]
268566528 blocks [2/2] [UU]
MDADM -D
/dev/md0:
Version : 0.90
Creation Time : Tue Dec 1 11:00:43 2009
Raid Level : raid1
Array Size : 268566528 (256.13 GiB 275.01 GB)
Used Dev Size : 268566528 (256.13 GiB 275.01 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Tue Dec 1 13:37:24 2009
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 94c95cea:6ead73dd:d14ae7e0:061e6ea4
Events : 0.36
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
Looks like this is exactly what I wanted!
Last edited by Carlwill (2009-12-01 18:47:09)