Hi all,
I posted this (http://bbs.archlinux.org/edit.php?id=771157) in the workstation section.
Since I got no reply, I'm searching for an alternative solution. A "drastic" solution.
That's because, on top of everything, I'm experiencing problems with the utility provided by the RAID controller's ROM.
I really don't know what's happening!
After some tries, when I press Ctrl+A at boot, the controller says there is not enough memory to start the utility!
Damn!!!
So... I've detached the two disks from the RAID controller and put them on the motherboard controller (which has no built-in RAID function).
Now I want (need) to set up a software RAID array, but I can't find any useful guide or tool.
All I can find are guides on installing Arch on RAID, and I'm probably not able to extract just the part I need.
If possible, I want to:
- Set up software RAID 1 with two disks.
- Move my home partition there.
Some people on the internet say there is no significant performance degradation from hardware to software RAID when the setup consists of only two disks in RAID 1.
Is that true?
I'm counting on you (again)!
Ale
Last edited by Alexbit (2010-06-17 12:51:35)
If you have two extra blank disks installed (i.e. you have one drive you are running Linux on NOW, plus two fresh/new drives), you just need to use mdadm to assemble a RAID-1 on them (which will create /dev/md0); then you format /dev/md0 with the usual filesystem tools and modify your fstab to put your home directory there.
You will want to look at a guide or manual for mdadm, but you will be using similar commands to:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
Which will use the first partition on disks b and c to make a RAID-1 called /dev/md0. You then format that with ext3/xfs etc. Your 'fstab' mounts filesystems at boot time and is in the /etc directory.
If you want your entire system on a RAID, then just follow the guide on the wiki. You will have to boot from a live-CD in this case and go from scratch.
Since RAID-1 is just a 'mirror', there isn't much load put on the CPU by software RAID. The real improvement from hardware RAID comes when you have many disks (and a lot of I/O requests) with RAID-5 or RAID-6, as those require parity calculations.
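The steps described above can be sketched end to end as follows. This is a minimal sketch, not a tested recipe: the device names /dev/sdb1 and /dev/sdc1 and the mount point /media/data are assumptions, so adjust them to your hardware, and run everything as root.

```shell
# Sketch of the procedure described above. Assumptions: /dev/sdb1 and
# /dev/sdc1 are empty partitions on the two new disks, and the mirror
# will be mounted at /media/data.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
cat /proc/mdstat                          # watch the initial resync progress
mkfs -t ext4 /dev/md0                     # format the array device itself
mkdir -p /media/data
echo '/dev/md0 /media/data ext4 defaults 0 1' >> /etc/fstab
mount /media/data
```

Note that /dev/md0 is formatted directly; there is no need to partition the array first.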
Well...
Probably I'll have to move (and permanently stay) under the newbie section.
:-(
I've done:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
It said (last lines):
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Then I formatted the partition as ext4 (named md0p1) and rebooted the system.
But when I open Dolphin I still see two disks, while I expected only one (RAID 1).
Can you (or someone else) give me more detailed instructions?
Again, thank you all.
Ale
Last edited by Alexbit (2010-06-14 13:38:39)
Do you have a line such as
/dev/md0 /home ext4 defaults 0 1
in your /etc/fstab? Also make sure there are no references to the individual drives/partitions in fstab.
If /dev/sdb1 and/or /dev/sdc1 are mounted you will need to unmount them ("sudo umount /dev/sdb1" etc) and then mount /dev/md0 instead.
"mdadm --detail /dev/md0" should give you information on the array should it be correctly initialised. The file "/proc/mdstat" will also output information.
OK, after "n" tries it seems to work.
This is what I did:
# mdadm --build --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: array /dev/md0 built and started.
# fdisk -cu /dev/md0
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-293041601, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-293041601, default 293041601):
Using default value 293041601
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 83
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
# mkfs -t ext4 /dev/md0p1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
9158656 inodes, 36629944 blocks
1831497 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
1118 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
# fsck -f -y /dev/md0p1
fsck from util-linux-ng 2.17.2
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md0p1: 11/9158656 files (0.0% non-contiguous), 622801/36629944 blocks
# nano /etc/fstab
(added: "/dev/md0p1 /media/data ext4 defaults 0 1")
# mount /dev/md0p1
mount: mount point /media/data does not exist
# mkdir /media/data
# mount /dev/md0p1
# mkdir /media/data/tempdir
#
BUT
when I reboot I get the "superblock error", so I have to remount the filesystem read-write, comment out the "/dev/md0p1 /media/data ext4 defaults 0 1" line in /etc/fstab, and reboot.
The system restarts, but the RAID "disappears".
I'm sure I'm failing to load something, but I don't know what.
:-(
Essential steps:
* 'fdisk /dev/sdb' and create single partition /dev/sdb1
* 'fdisk /dev/sdc' and create single partition /dev/sdc1
* mdadm --build --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
* (skip 'fdisk /dev/md0')
* mkfs -t ext4 /dev/md0
Add /dev/md0 to /etc/fstab
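One more step is worth adding so the array is recognised after a reboot: record it in /etc/mdadm.conf, as the Arch wiki does. A sketch, assuming the array was made with --create (an array made with --build has no on-disk superblock, so --examine will not find it):

```shell
# Persist the array definition so it can be auto-assembled at boot.
mdadm --examine --scan >> /etc/mdadm.conf   # appends an ARRAY line for md0
cat /etc/mdadm.conf                         # verify the ARRAY line is there
```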
Nothing changed.
When I reboot I always get this message:
/dev/sda3: clean, ...
/dev/sda1: clean, ...
/dev/sda4: clean, ...
/dev/md0
The superblock could not be read or does not describe a correct ext2 filesystem; if the device ....
:-(
After booting up all the way, what does the following show?
lsmod | grep ext
You may need to do:
sed -i /etc/rc.conf -e 's|^\(MODULES=[^)]*\))|\1 ext4)|'
(or update /etc/mkinitcpio.conf)
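For illustration, this is what that sed expression does to a sample MODULES line (the sample line is hypothetical; your rc.conf will differ):

```shell
# Feed a sample rc.conf MODULES line through the same sed expression:
# it appends "ext4" inside the closing parenthesis.
echo 'MODULES=(loop dm_mod)' | sed -e 's|^\(MODULES=[^)]*\))|\1 ext4)|'
# prints: MODULES=(loop dm_mod ext4)
```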
After booting up, "lsmod | grep ext" says:
$ lsmod | grep ext
ext2 55924 1
ext4 302709 2
mbcache 4278 2 ext2,ext4
jbd2 63651 1 ext4
crc16 1041 1 ext4
Check /etc/fstab? Any typos?
Anything contending for the /media/data mount point?
In /etc/fstab, comment out # /dev/md0 /media/data ext4 defaults 0 1
Reboot and manually mount:
# mkdir /testing
# rmmod ext2 (unmount /boot first?)
# rmmod ext4
# modprobe ext4
# mount -t ext4 -o ro /dev/md0 /testing
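When checking fstab for typos, one quick sanity test is that every non-comment entry has exactly six whitespace-separated fields (the candidate line below is the one from this thread; substitute the line you are checking):

```shell
# <file system> <dir> <type> <options> <dump> <pass>  -- six fields expected
line='/dev/md0 /media/data ext4 defaults 0 1'
echo "$line" | awk '{print NF}'   # prints 6 for a well-formed entry
```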
This is my fstab now (before trying your last tip):
# /etc/fstab: static file system information
#
# <file system> <dir> <type> <options> <dump> <pass>
none /dev/pts devpts defaults 0 0
none /dev/shm tmpfs defaults 0 0
#/dev/cdrom /media/cd auto ro,user,noauto,unhide 0 0
#/dev/dvd /media/dvd auto ro,user,noauto,unhide 0 0
#/dev/fd0 /media/fl auto user,noauto 0 0
/dev/sda1 /boot ext2 defaults 0 1
/dev/sda2 swap swap defaults 0 0
/dev/sda3 / ext4 defaults 0 1
/dev/sda4 /home ext4 defaults 0 1
#/dev/md0 /media/data ext4 defaults 0 1
I've tried, but it still doesn't work: when I reboot I get the well-known error (see my post above).
In my opinion I'm failing to load some module at boot.
Is there a module to put in rc.conf, or a script to launch, that would let the system see my software RAID setup?
help - help - help
I've got many problems setting up my workstation ... :
RAID (I can't get it to work via hw - http://bbs.archlinux.org/viewtopic.php?id=98632 - or sw - this thread)
iscsi (I can't get it to work - http://bbs.archlinux.org/viewtopic.php?id=98636)
proxy (same story - still working on ntlmap - http://bbs.archlinux.org/viewtopic.php?id=98635)
Sync WinMobile (unable to find destination - http://bbs.archlinux.org/viewtopic.php?id=99088)
QuadroFX performance (http://bbs.archlinux.org/viewtopic.php?id=99168)
I'm starting to think this is an undertaking bigger than me...
Last edited by Alexbit (2010-06-16 08:09:25)
TODAY is a GOOD day!
Unexpectedly -> SOLVED!!!
Yeah!
BEFORE you start, note my assumptions:
- sdb and sdc are the two disks that we are going to put in the RAID
- /mnt/DATAraid is the folder where we mount the array
- /mnt/exHOME is the "old" home folder
So... let's do it!
Open a terminal and type:
# init 1
# fdisk -cu /dev/sdb
d
n
p
1
[enter]
[enter]
t
fd
p
w
Repeat the same steps for sdc, then:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# cat /proc/mdstat
# mkfs -t ext4 -j -L DATAraid /dev/md0
# mkdir /mnt/DATAraid
# mkdir /mnt/exHOME
# mount /dev/md0 /mnt/DATAraid
# nano /etc/fstab
This is my altered fstab; I've commented out the original lines and replaced them with the lines that follow:
#/dev/sda4 /home ext4 defaults,noatime 0 2
/dev/sda4 /mnt/exHOME ext4 defaults,noatime 0 2
#/dev/md0 /mnt/DATAraid ext4 defaults 0 0
/dev/md0 /home ext4 defaults,noatime 0 1
then
# rm /etc/mdadm.conf
# mdadm --examine --scan >> /etc/mdadm.conf
# nano /etc/mkinitcpio.conf
and add "mdadm" to your HOOKS string just before "autodetect".
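For reference, a HOOKS line edited this way might look like the following. This is a hypothetical example based on the default hooks of the era; keep whatever hooks your file already has and only add mdadm. Rebuilding the initramfs afterwards is assumed to be needed for the change to take effect:

```shell
# /etc/mkinitcpio.conf (fragment): "mdadm" inserted just before "autodetect"
HOOKS="base udev mdadm autodetect pata scsi sata filesystems"
# then rebuild the initramfs (the preset name may differ on your system):
mkinitcpio -p kernel26
```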
Save and reboot; when you're at the login screen, choose a terminal login and, after logging in, type:
# init 1
# rsync -avH --delete --progress /mnt/exHOME/arcuser /home
# reboot
Now YOU'RE DONE!
USEFUL wiki pages:
- http://wiki.archlinux.org/index.php/Con … em_to_RAID
- http://wiki.archlinux.org/index.php/Mkinitcpio
- http://wiki.archlinux.org/index.php/Ins … AID_or_LVM
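After the final reboot, the result can be double-checked with commands already used earlier in this thread (a sketch; run as root):

```shell
cat /proc/mdstat          # md0 should be listed as an active raid1
mdadm --detail /dev/md0   # shows array state, and resync progress if any
df -h /home               # /home should now live on /dev/md0
```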
Last edited by Alexbit (2010-07-16 08:26:12)