I am attempting to grow a RAID system as I have done in the past.
I am currently on the linux 3.1.2-1 kernel, trying to grow a two-device raid0 into a three-device raid0.
Prior to this attempt, I had successfully used mdadm to grow a raid0 from two devices to five.
With this kernel, initscripts, and mdadm version, I cannot obtain the interim raid4 level necessary to grow the raid0. Previous attempts produced raid4 as an intermediate stage of the restriping.
With the raid0 currently at two devices, I used the following:
mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdd2
This produces the error: could not open raid4.
Perhaps someone knows how I can perform the indicated grow operation.
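One guess worth checking first: "could not open raid4" may just mean the kernel has no raid4 personality loaded at that moment. A minimal check, assuming the standard module layout for linux 3.x:

cat /proc/mdstat    # the 'Personalities' line should list [raid4] before the grow
modprobe raid456    # raid4/5/6 all live in the raid456 module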
EDIT: My former use of --grow is depicted in this post:
https://bbs.archlinux.org/viewtopic.php?id=122577
Changed my strategy: create a larger bootable raid0 Arch Linux system.
I originally installed Arch Linux under kernel 2.6.39 via FTP, with a partitioned boot on one device and the 8GB second partitions of two devices assembled as md0. The raid data is thus limited to 8GB on these devices.
Wanting to increase the size of the array using 16GB devices, I copied the raid devices onto two partitioned 16GB devices. This resulted in the same 8GB-limited raid0, which booted, and I was eventually able to upgrade to linux3 with several of these 16GB devices.
The size of the array remained at twice the 8GB device size, typically 14.98GB.
For whatever reason, no mdadm operation to grow the raid0 onto the larger devices works in linux3.
Therefore I changed strategy: set up a new array on two 16GB devices, accessing the CF cards used for the raid arrays over USB. First, I formatted the two devices with a 100MB partition #1 and the rest as an ext3 partition #2. I copied the first partitions to both devices from the corresponding devices of my upgraded 16GB (linux3.1.2-1) set. Thus one device has the boot partition in partition #1 and the other has swap in its first partition. Partition #2 of each device will be root in the raid array.
With the 16GB CF cards connected via USB adapters, ready to receive the full-size raid0, I created a new raid array named /dev/md2:
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/sdd2 /dev/sde2
This created the array, empty apart from the partition #1 data already placed by the cp steps.
With the 16GB devices holding the established linux3 system assembled as /dev/md0 and installed in the sda and sdb ports, I was then able to cp each partition #2 to the corresponding USB-mounted device.
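For clarity, a sketch of how I read the copy step (my assumption that this is a filesystem-level copy between the two assembled arrays, since the new array carries its own superblocks; the mount points are hypothetical):

mount /dev/md0 /mnt/old      # existing array, 8GB per member
mount /dev/md2 /mnt/new      # new full-size array on the USB-attached cards
cp -a /mnt/old/. /mnt/new/   # -a preserves ownership, modes, and symlinks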
After completing that, gparted reports /dev/md0 as 14.98GB and /dev/md127 as 29GB, so the new array is twice the /dev/md0 size. Linux3 introduces the device-naming change to /dev/md127; hopefully adding that to the kernel line in grub will allow booting.
This establishes a means of growing the size of a raid0 array: create a larger one and copy the original into it.
Further steps are needed to set up grub, fstab, and perhaps /etc/mdadm.conf.
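A sketch of those remaining steps, with assumed values (the ARRAY line is best generated rather than hand-typed; the fstab and grub entries below are placeholders for my setup, not verified):

mdadm --detail --scan >> /etc/mdadm.conf   # record the array for assembly
# /etc/fstab: point root at the array device, e.g.
#   /dev/md127   /   ext3   defaults   0 1
# grub legacy: use root=/dev/md127 on the kernel line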
The future looks much better after performing this activity!!!
I am hopeful I can get this pair of CF devices to boot...............
After many tries I finally have the following to report:
sh-4.2# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Jul  7 19:14:55 2011
     Raid Level : raid0
     Array Size : 31090176 (29.65 GiB 31.84 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Jul  7 19:14:55 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : (none):0
           UUID : 476e7e0a:fb08db59:d2592e55:73d19345
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
sh-4.2#
This indicates that a raid0 can be expanded by copying from a smaller raid0 array.
EDIT: The raid device is identified in gparted as /dev/md0p1, 29+GB.
The copy procedure seems viable.
The boot partition is ext2 and is 100MB.
The swap partition is swap and 100MB.
The root partition is /dev/md0 and is ext3; it consists of the second ext3 partition of each device.
Therefore all elements can be addressed in normal unix/linux fashion, allowing upgrades, fsck, chroot, and copying to larger or faster devices. Chroot was not needed to go from 14.9GB to 29.6GB, just a copy into the partitioned devices followed by e2fsck on /dev/md0.
The system is raid0, bootable with the grub bootloader in partitioned mode, currently linux3.1.2-1, and 29.6GB in size.
hdparm reads for two 533x ADATA devices are 122MB/s while booted into an Arch install on HDD.
I suspect raid devices may be reported poorly by hdparm when it is run from within the raid system itself.
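For reference, the measurement I am describing is the buffered read timing (device name assumed to be the assembled array):

hdparm -t /dev/md0   # buffered disk reads; run a few times and average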
Thus a bootable raid0 can be resized by using a second pair (or more) of devices to produce the resize, then copying the resized devices back onto the original pair to resize it as well.
I fiddle with this bootable raid0 setup to see what can be done, because it is doable and produces some surprises.
Boot time to the xfce4 desktop is ~8-9 seconds, which is probably a reasonable limit for the 4GB of installed packages.
Surprise: two Maxell 400x UDMA devices show an hdparm read speed for the raid of 179+MB/s. This indicates differences in the architecture of the CF cards, AFAICT.
Every CF card pair runs at about 50% faster read speed than normal true-IDE mode when using SATA-to-CF adapters for the array devices. All devices must be in true IDE mode and run on master ports. My system allows a BIOS boot enable for the SATA port containing the CF boot partition. The raid array can also be mounted from a normal HDD boot and handled there in unix/linux fashion to modify its config; e2fsck can be applied afterwards to resync the changes.
Maybe I will try a three-device raid0 array with the Maxell devices soon.
FWIW:
Repeated the copy procedure to verify.
Prepared the two 16GB 400x Maxell CF cards with two partitions each: the first 100MB, the second the remaining capacity.
Copied the existing 8GB raid0 CF devices into the new partitions, one partition at a time: boot into partition #1 of CF card #1 and swap into partition #1 of CF card #2.
Partition #2 of each raid0 device contained root, and these partitions were loaded into the new devices.
The result: /dev/md0 is now 29GB (versus ~15GB previously). Read speed is reported by hdparm as 182MB/s.
This procedure also brings fresh cells into the raid system, extending the life of the array given the limited write endurance of flash cells (the same cells are not written all the time).
I am interested in growing this combo to three devices with the mdadm --grow command, but it does not work in linux3 kernels. That would extend the life of the array even further, and each device in such an array could also be copied to new devices, extending the life of the array indefinitely.
Completed the grow operation with:
sh-4.2# resize2fs /dev/md0
resize2fs 1.41.14 (22-Dec-2010)
Resizing the filesystem on /dev/md0 to 7754240 (4k) blocks.
The filesystem on /dev/md0 is now 7754240 blocks long.
sh-4.2#
The mdadm --grow /dev/md0 --size=max must be followed by e2fsck on /dev/md0, and then by:
resize2fs /dev/md0
The result is shown above.
The previous size was 3897216 blocks.
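The whole sequence as a sketch (the -f flag on e2fsck is my addition; resize2fs generally insists on a forced check first):

mdadm --grow /dev/md0 --size=max   # let md use the full member partitions
e2fsck -f /dev/md0                 # forced check, required before resizing
resize2fs /dev/md0                 # grow ext3 to fill the enlarged device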
The experiments with mdadm --grow have shown some progress.
One last step remains: increasing the raid to three devices.
So far, no success.
Latest efforts produced the following:
sh-4.2# cat /proc/mdstat
Personalities : [raid0]
md0 : inactive sdd2[1] sdc2[0] sda2[2](S)
      23398400 blocks super 1.2
unused devices: <none>
sh-4.2# mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sda2
mdadm: level of /dev/md0 changed to raid4
mdadm: Cannot open /dev/sda2: Device or resource busy
mdadm: /dev/md0: Cannot get array details from sysfs
sh-4.2#
EDIT: The block count indicates that /dev/sda2 has been added to the array, since each member is ~8GB.
I don't know why a drive partition can be busy while merely acting as a spare!
Perhaps there is a way to proceed with adding this partition to the array /dev/md0. According to man mdadm, the array is resynced as raid4 and then returned to raid0 when fully synced.
As indicated above, the array is inactive but does show the three partitions involved.
If /dev/sda2 can somehow be opened, the process will probably continue.
How to?
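One thing worth trying (a guess on my part, not a verified fix): stop the inactive array and reassemble the original pair so the kernel releases its claim on /dev/sda2, then retry the grow:

mdadm --stop /dev/md0                           # release the member devices
mdadm --assemble /dev/md0 /dev/sdc2 /dev/sdd2   # bring the original pair back
cat /proc/mdstat                                # confirm it is active raid0 again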
EDIT: Trying desperate methods!!!
sh-4.2# mdadm --run /dev/md0
mdadm: started /dev/md0
sh-4.2# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid4 sdd2[1] sdc2[0] sda2[2](S)
      0 blocks super 1.2 level 4, 512k chunk, algorithm 0 [2/2] [UU]
unused devices: <none>
sh-4.2#
This indicates that the array is running, but with 0 blocks at level 4. That is consistent with the earlier report of raid4 being initiated.
What can happen next?
Removing the added device restored the array to raid0, which operates normally.
Perhaps a backup file is required to fulfill the --grow function and establish a three-device raid0.
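A sketch of what that might look like (the backup file path is a placeholder and must live on a filesystem outside the array being reshaped, e.g. a USB stick):

mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sda2 \
      --backup-file=/mnt/usb/md0-grow.backup   # placeholder, off-array path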
Prediction...This year will be a very odd year!
Hard work does not kill people but why risk it: Charlie Mccarthy
A man is not complete until he is married..then..he is finished.
When ALL is lost, what can be found? Even bytes get lonely for a little bit! X-ray confirms Iam spineless!
Offline
Latest attempt results:
sh-4.2# mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sda2
mdadm: level of /dev/md0 changed to raid4
mdadm: Cannot open /dev/sda2: Device or resource busy
mdadm: /dev/md0: Cannot get array details from sysfs
sh-4.2# mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sda2
mdadm: Cannot understand this RAID level
sh-4.2#
You got that right!!!!
Attempting to mount the added device:
sh-4.2# mount /dev/sda2 /mnt/md1
mount: unknown filesystem type 'linux_raid_member'
sh-4.2#
Thus it is set up as a raid member of a three-device raid, but the array shows 0 blocks and level raid4 in cat /proc/mdstat.
Obviously the procedure has not completed the raid0 --grow to three devices.
What is the correct way to grow a raid0 to additional devices?
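For the record, an alternative path I have not verified on this kernel: make the level conversion explicit instead of letting --grow do everything in one command, so the add happens while the array is already raid4:

mdadm --grow /dev/md0 --level=4           # convert raid0 -> raid4 explicitly
mdadm /dev/md0 --add /dev/sda2            # raid4 can accept a new member
mdadm --grow /dev/md0 --raid-devices=3    # reshape onto the third device
# once cat /proc/mdstat shows the reshape finished:
mdadm --grow /dev/md0 --level=0           # convert back to raid0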
Reset the added drive with dd, cfdisk, and mke2fs -j, then retried the mdadm --grow as follows:
sh-4.2# mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdb2
mdadm: /dev/md0: could not set level to raid4
sh-4.2# mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdb2
mdadm: /dev/md0: could not set level to raid4
sh-4.2# mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdb2
mdadm: /dev/md0: could not set level to raid4
sh-4.2# mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdb2
mdadm: /dev/md0: could not set level to raid4
sh-4.2#
Thus it is apparent that the process isn't functioning for a raid0 --grow, even though the man mdadm description of --add allows for it.
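If the partition was ever a raid member, stale md metadata could be interfering (a guess, given the earlier 'linux_raid_member' report); clearing it before retrying costs nothing:

mdadm --zero-superblock /dev/sdb2   # wipe old md metadata from the partition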
Performed the --grow procedure with three drives again. This attempt resulted in the following:
sh-4.2# mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sda2
mdadm: level of /dev/md0 changed to raid4
mdadm: added /dev/sda2
mdadm: Need to backup 3072K of critical section..
sh-4.2# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid4 sda2[3] sdc2[0] sdd2[1]
      31029248 blocks super 1.2 level 4, 512k chunk, algorithm 5 [4/2] [UU__]
      resync=DELAYED
The system returned to the root prompt, and no activity seems to be occurring on the three drives.
Perhaps it is a hidden process, or a very slow one requiring the system to stay on for a long period of time?
Any ideas?
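A couple of ways to see whether the reshape is actually pending or stuck (standard md sysfs/proc paths):

cat /sys/block/md0/md/sync_action       # e.g. reshape, resync, idle
cat /sys/block/md0/md/sync_completed    # progress, or 'none'
# a DELAYED resync sometimes just needs the speed floor raised (KB/s):
echo 50000 > /proc/sys/dev/raid/speed_limit_min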