The subject of this post is RAID0 expansion, or "grow", which is described in the wiki as the method to perform the expansion.
However, after the latest kernel upgrades the method fails.
[root@n6re ~]# mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sde2
mdadm: /dev/md0: could not set level to raid4
As shown, the interim raid4 conversion necessary to perform the sync operation is not allowed.
What can be done to add devices to an existing array without requiring a new create command?
It might well be a good idea to have a separate indexed RAID thread to consolidate all the RAID problems in one arena.
Prediction...This year will be a very odd year!
Hard work does not kill people but why risk it: Charlie McCarthy
A man is not complete until he is married..then..he is finished.
When ALL is lost, what can be found? Even bytes get lonely for a little bit! X-ray confirms I am spineless!
Can you show us the output of "cat /proc/mdstat"?
The following is the file for /dev/md0:
[root@n6re ~]# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid0 sdb2[0] sda2[1]
15588864 blocks super 1.2 512k chunks
unused devices: <none>
[root@n6re ~]#
There is no indication of a third device in this file because it was never added; the --add procedure failed.
EDIT: The array /dev/md0 is ~15GB and is bootable through a partitioned boot. The array /dev/md0 is partition #2 of the two drives and serves as root in the booted system. Thus the system is a 100MB boot, a small 100MB swap, and the rest root, across two 16GB CF cards.
I assume that I must refer to the added device as partition #2 since it is partitioned the same as the other drives. I can try to --add it as just /dev/sde but don't expect it will change things. EDIT: no change occurred.
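A quick way to confirm the new card's layout before retrying --add is to compare its partition table against a working member. Device names here are taken from the commands above; adjust if yours differ:

```shell
# Compare the new card's partition table with an existing array member.
# /dev/sde is the new CF card, /dev/sda a current member.
fdisk -l /dev/sde
fdisk -l /dev/sda
# sde2 should match sda2 in partition type, start sector and size
# before it is offered to the array with --add.
```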
Last edited by lilsirecho (2011-12-07 16:16:58)
Try adding the device first:
[root@n6re ~]# mdadm --add /dev/md0 /dev/sde2
Use cat /proc/mdstat to track the progress, then once it's done and the device has been added, grow the array.
[root@n6re ~]# mdadm --grow /dev/md0 --raid-devices=3
I recently did this for a Raid6 storage array and it worked fine, though it took a while since I was adding two 1TB drives.
Edit: Probably a silly question, but you do have a /dev/sde2 partition, right? If you mirrored the partition structure on /dev/sde as you have on /dev/sda and /dev/sdb then you should be good to go. If not, you'll need to replace the --add /dev/sde2 with /dev/sde* (the * being whatever you have).
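If the layout hasn't been mirrored yet, one way to copy it is with sfdisk. This is only a sketch: it assumes /dev/sda is a current member and /dev/sde the new, empty card, and it overwrites sde's partition table:

```shell
# Dump the partition table of an existing member and replay it onto the
# new device. WARNING: this overwrites /dev/sde's partition table.
sfdisk -d /dev/sda > sda-layout.txt   # keep a copy of the layout
sfdisk /dev/sde < sda-layout.txt      # write the same layout to sde
blockdev --rereadpt /dev/sde          # have the kernel re-read it
```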
Last edited by imatechguy (2011-12-07 18:45:04)
imatechguy:
The mdadm man page makes specific reference to raid0 in --grow mode, stipulating that --grow and --add are needed in the same command.
The problem here seems to require a kernel mod to enable raid4, which is used to re-sync raid0 and then re-establish raid0.
This procedure worked before linux3 but no longer has kernel support.
What is required in the kernel to support this function is something I have no clue about.
That is my understanding of the problem at the moment.
EDIT: Since the mdadm man page info does not produce the --grow, it is possible that the capability is history and the mdadm man page needs updating.
Last edited by lilsirecho (2011-12-07 18:47:25)
Ah, just now realizing you were trying to grow a Raid0, not a Raid4. If I'm not mistaken, mdadm doesn't support adding drives to Raid0 or Raid10 arrays, only replacing existing devices with larger-capacity drives, so I'm not certain it's a kernel thing.
Depending on size and your access to spare HDDs, you could copy/move your existing array to a single drive, kill off md0 and recreate it with the three drives you have for it now.
Sorry about that, I just mis-read the raid level you were using.
imatechguy is right: it is impossible to grow raid0. You can look at the article, in the chapter "Contrasting RAID-0 and LVM":
In the case of mdadm and software RAID-0 on Linux, you cannot grow a RAID-0 group. You can only grow a RAID-1, RAID-5, or RAID-6 array. This means that you can't add drives to an existing RAID-0 group without rebuilding the entire RAID group and restoring all the data from a backup.
The article was written in 2009; I think that behavior hasn't changed since.
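For reference, the rebuild-from-backup route the article describes would look roughly like this. It is only a sketch: the mount points, device names and filesystem type are assumptions, and --create destroys whatever is on the member partitions:

```shell
# 1. Back up the filesystem that lives on the array.
mount /dev/md0 /mnt/md
rsync -aHx /mnt/md/ /mnt/backup/      # /mnt/backup = spare disk (assumed)

# 2. Tear down the old two-device raid0.
umount /mnt/md
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda2 /dev/sdb2

# 3. Recreate with three members, make a filesystem, restore.
mdadm --create /dev/md0 --level=0 --raid-devices=3 \
      /dev/sda2 /dev/sdb2 /dev/sde2
mkfs.ext3 /dev/md0                    # filesystem type is an assumption
mount /dev/md0 /mnt/md
rsync -aHx /mnt/backup/ /mnt/md/
```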
Last edited by kurych (2011-12-07 20:54:44)
Imatechguy:
The understanding I have is that I have done this --grow function in the past, but not since linux 3. So the problem is somewhere in the linux 3 setup.
Man mdadm allows for raid0 to grow as I described.
I'm not at home and don't have access to a Linux machine, so the closest I have to a man page is this:
http://linux.die.net/man/8/mdadm
Scrolling down the page a bit you can see this, which is probably what you are seeing in the .conf file.
Grow
Grow (or shrink) an array, or otherwise reshape it in some way. Currently supported growth options including changing the active size of component devices and changing the number of active devices in Linear and RAID levels 0/1/4/5/6, changing the RAID level between 0, 1, 5, and 6, and between 0 and 10, changing the chunk size and layout for RAID 0,4,5,6, as well as adding or removing a write-intent bitmap.
However if you scroll down a bit more you'll see this section:
For create, build, or grow:
-n, --raid-devices= Specify the number of active devices in the array. This, plus the number of spare devices (see below) must equal the number of component-devices (including "missing" devices) that are listed on the command line for --create. Setting a value of 1 is probably a mistake and so requires that --force be specified first. A value of 1 will then be allowed for linear, multipath, RAID0 and RAID1. It is never allowed for RAID4, RAID5 or RAID6.
This number can only be changed using --grow for RAID1, RAID4, RAID5 and RAID6 arrays, and only on kernels which provide the necessary support.
I think mdadm has been that way for some time, no idea on how long but you can google and easily find related posts going back to 2008. Is it possible you remember using the --grow option with a different Raid level or perhaps when replacing existing HDDs with larger capacity ones instead of adding additional devices?
Edit: Note emphasis in second quote is mine.
Last edited by imatechguy (2011-12-07 21:37:35)
imatechguy:
Much in mdadm regards the non-raid0 setups.
Further down in the mdadm man page there is a grow section which describes what I have reported.
My use of --grow, with the procedure I have posted, was to grow a two-device CF raid0 array into an eventual 5-device CF card array running 306MB/sec hdparm.
This 5-device raid0 array was bootable in kernel26 v.39.
When I upgraded to linux 3 the raid array no longer functioned. I have since restarted raid0 with 2 devices using kernel26 v.39 as a starter.
Some linux3 versions allow the raid0 to boot, after modifying the grub kernel line.
Most linux3 versions kill the boot and refuse to respond to the uuid for booting.
One linux version, 3.1.1-1, permits booting with added kernel data.
None of the linux kernels allow --grow as described in the man pages for mdadm.
I am informed that there is an mdadm_udev hook which doesn't appear anywhere in the man pages.
I have had success with this raid0 but not with --grow in raid0.
I hope some mdadm expert can explain what is necessary to enable --grow in raid0 format with linux3 versions of the kernel (verified that linux3 introduces the fail to boot and the fail to --grow).
It has been done and should still be possible.
(I have also been able to re-size the devices to produce a 29GB /dev/md0.)
I am anxious to produce a three-device raid0 before I resize again.
So this post has presented the problem.
EDIT: I am 86 years young and would appreciate a solution before I am 87.
EDIT: My system is x86_64 with 4GB RAM.
Last edited by lilsirecho (2011-12-08 03:02:25)
Perhaps the linux3 kernels have a hangup with mdadm arrays and cannot recognize /dev/md0.
During boot-up, mdadm starts /dev/md0 as raid0 with 2 drives.
Very shortly thereafter the root device (/dev/md0) uuid fails to be found and ramfs is initiated.
This suggests that /dev/md0 is the problem and not the raid4 initiation.
If there is no /dev/md0 then raid4 won't happen. So, ls /dev...
sh-4.2# ls /dev
adsp parport0 tty17 tty52 vcs23 vcs59 vcsa36
agpgart port tty18 tty53 vcs24 vcs6 vcsa37
audio ppp tty19 tty54 vcs25 vcs60 vcsa38
autofs psaux tty2 tty55 vcs26 vcs61 vcsa39
block ptmx tty20 tty56 vcs27 vcs62 vcsa4
bsg pts tty21 tty57 vcs28 vcs63 vcsa40
btrfs-control random tty22 tty58 vcs29 vcs7 vcsa41
bus rtc tty23 tty59 vcs3 vcs8 vcsa42
char rtc0 tty24 tty6 vcs30 vcs9 vcsa43
console sda tty25 tty60 vcs31 vcsa vcsa44
core sda1 tty26 tty61 vcs32 vcsa1 vcsa45
cpu sda2 tty27 tty62 vcs33 vcsa10 vcsa46
cpu_dma_latency sdb tty28 tty63 vcs34 vcsa11 vcsa47
disk sdb1 tty29 tty7 vcs35 vcsa12 vcsa48
dri sdb2 tty3 tty8 vcs36 vcsa13 vcsa49
dsp sdc tty30 tty9 vcs37 vcsa14 vcsa5
fb0 sdc1 tty31 ttyS0 vcs38 vcsa15 vcsa50
fd sdc2 tty32 ttyS1 vcs39 vcsa16 vcsa51
full sdd tty33 ttyS2 vcs4 vcsa17 vcsa52
fuse sdd1 tty34 ttyS3 vcs40 vcsa18 vcsa53
hidraw0 sdd2 tty35 uinput vcs41 vcsa19 vcsa54
hidraw1 sdd3 tty36 urandom vcs42 vcsa2 vcsa55
hidraw2 sdd4 tty37 usb vcs43 vcsa20 vcsa56
hpet shm tty38 vcs vcs44 vcsa21 vcsa57
initctl snapshot tty39 vcs1 vcs45 vcsa22 vcsa58
input snd tty4 vcs10 vcs46 vcsa23 vcsa59
kmsg stderr tty40 vcs11 vcs47 vcsa24 vcsa6
log stdin tty41 vcs12 vcs48 vcsa25 vcsa60
loop0 stdout tty42 vcs13 vcs49 vcsa26 vcsa61
mapper tty tty43 vcs14 vcs5 vcsa27 vcsa62
mcelog tty0 tty44 vcs15 vcs50 vcsa28 vcsa63
md tty1 tty45 vcs16 vcs51 vcsa29 vcsa7
md0 tty10 tty46 vcs17 vcs52 vcsa3 vcsa8
mem tty11 tty47 vcs18 vcs53 vcsa30 vcsa9
mixer tty12 tty48 vcs19 vcs54 vcsa31 vga_arbiter
net tty13 tty49 vcs2 vcs55 vcsa32 watchdog
network_latency tty14 tty5 vcs20 vcs56 vcsa33 zero
network_throughput tty15 tty50 vcs21 vcs57 vcsa34
null tty16 tty51 vcs22 vcs58 vcsa35
sh-4.2#
So /dev/md0 is recognized? But not by some other process?
Gparted does not list /dev/md0 when booted in raid0:
sh-4.2# gparted
======================
libparted : 3.0
======================
Could not stat device /dev/md/0 - No such file or directory.
As reported, gparted shows /dev/md0 as /dev/md/0. This indicates some ID variance within the kernel which I cannot explain.
sh-4.2# mount /dev/md0 /mnt/md
sh-4.2#
This command mounts /dev/md0 on /mnt/md.
Thus:
sh-4.2# ls
bin etc kdenlive lib64 mnt root srv usr
boot home lib lost+found opt run sys var
dev jumanji-git lib32 media proc sbin tmp
sh-4.2#
That is the full archlinux listing and it is obtained from /dev/md0.
Why the discrepant ID in gparted?
Perhaps mdadm has been upgraded, which means the same in linux land as it does in windows: your package is "improved" such that it fails.
One pertinent note: this raid0 was upgraded yesterday without installing the linux kernel package.
Thus, all upgrades have occurred except the kernel.
I have another pair of CF cards also upgraded without linux initially, and then linux 3.1.1-1 was added with pacman -U. That linux build can boot with kernel line entries, but it does not allow --grow /dev/md0 either. In fact, it boots with ...root=/dev/md0 in the kernel line.
Without that in the kernel line, it fails to find the uuid for /dev/md0, even though the array was started with mdadm just prior to that failure.
Cannot --grow /dev/md0 if it isn't recognized!
UUID is supposed to solve all problems in boot! He he...
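When the UUID can't be found at boot even though the array assembles, one thing worth checking is which UUID is being referenced: the md array's metadata UUID and the UUID of the filesystem on top of it are different values. A quick comparison:

```shell
mdadm --detail /dev/md0 | grep -i uuid   # the array's own (metadata) UUID
blkid /dev/md0                           # the filesystem UUID on the array
mdadm --detail --scan                    # ARRAY line suitable for mdadm.conf
```

A root=UUID=... kernel parameter has to use the filesystem UUID, not the array's metadata UUID; mixing the two produces exactly a "root device not found" drop to ramfs.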
Edit: From 2.6.35, the Linux Kernel is able to convert a RAID0 in to a RAID4 or RAID5. mdadm uses this functionality and the ability to add devices to a RAID4 to allow devices to be added to a RAID0. When requested to do this, mdadm will convert the RAID0 to a RAID4, add the necessary disks and make the reshape happen, and then convert the RAID4 back to RAID0.
This doesn't happen!!!!!!!
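For what it's worth, that documented conversion can also be attempted step by step instead of in one command, which at least shows exactly which stage the kernel rejects. This is only a sketch; every step depends on the same kernel takeover support that appears to be missing here:

```shell
mdadm --grow /dev/md0 --level=4           # step 1: raid0 -> raid4 (degraded)
mdadm --manage /dev/md0 --add /dev/sde2   # step 2: add the new member
mdadm --grow /dev/md0 --raid-devices=3    # step 3: reshape onto 3 devices
# ...wait for the reshape to finish (watch cat /proc/mdstat)...
mdadm --grow /dev/md0 --level=0           # step 4: convert back to raid0
```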
Last edited by lilsirecho (2011-12-08 00:27:28)
I finally got a chance to read the mdadm man page at home and did indeed find the following after scrolling way down to the "RAID-DEVICES CHANGES" subsection under "GROW MODE".
From 2.6.35, the Linux Kernel is able to convert a RAID0 in to a RAID4 or RAID5. mdadm uses this functionality
and the ability to add devices to a RAID4 to allow devices to be added to a RAID0. When requested to do this,
mdadm will convert the RAID0 to a RAID4, add the necessary disks and make the reshape happen, and then convert
the RAID4 back to RAID0.
According to that, it should technically be possible to use the --grow option to add devices to a Raid0 array, though to be honest I've never before heard anyone say it could be done. So going back to your original command, I notice you didn't tell mdadm which array to add the /dev/sde2 device to; also try putting the add option before the grow option, like so:
# mdadm --add /dev/md0 /dev/sde2 --grow /dev/md0 --raid-devices=3
If that doesn't work, I'd reiterate the conventional understanding that it's not possible to add devices to a Raid0 array with mdadm, but perhaps filing a bug report will get the right people's attention for a fix, or at least get the documentation updated to accurately reflect available functionality. Either way, good luck; I hope you get it sorted out.
There are several options to utilize in mdadm for raid arrays. These are prefaced in commands to perform changes in raid arrays. One of these is --grow. There is no preface named --add, so that cannot be the first option. Therefore the arrangement you present is not an option.
This does seem like an item to report as a bug, since it did function, was outlined as such in mdadm in previous kernel releases, and I have utilized it many times.
I appreciate your comments.
Most of the googled data on the subject is completely old and based on a patch added to mdadm in the year 2009.
I will try to obtain help via a bug report.
Entered flyspray bug with FS#27507.........................
There is no preface named --add so that cannot be the first option. Therefore the arrangement you present is not an option.
I'll respectfully disagree with that statement. It may or may not work in your situation, but if you don't try it you won't know whether or not it works. Give the below links a read, as well as the man page, and you'll see several references to "--add". Also, depending on whether you are adding a device or recovering from a failed device, there are other references to:
# mdadm --manage --add <array> <device>
Might as well try both options.
http://en.wikipedia.org/wiki/Mdadm
The command that you specified gives the error that --add cannot be used as the mode; --grow is required.
I have tried all possible modes for a solution; all give errors. The function and command I listed is the only command that can perform the --grow of a raid0 array... if the kernel allows!
The mdadm man pages state it is kernel related by implication; therefore I filed a bug. Perhaps not the correct thing to do, but it may result in some action.
A kernel request might be better?
I appreciate your interest in my dilemma and assure you I have investigated it thoroughly. Because it is a special case involving the dreaded raid0 (I love it), it isn't well-documented, nor is it well-understood.
My system runs at flank speed, booting in ten seconds, and has been functioning nicely for months. I desire to --grow and to re-size eventually, but --grow comes first, to establish the performance available with three CF cards for comparison with other raid methods.
I have 4GB RAM so I don't use swap, even though I have 100MB assigned in one partition. I don't play games either. Just boot and root, and it flies.
I note that one mdadm patch for the raid0 problem was generated in 2009; therefore, it can be done with kernel support. What exactly is required therein is not in my purview.
Thanks for your interest.
EDIT: Example of error statements:
sh-4.2# mdadm --grow /dev/md0 --add /dev/sdc2
mdadm: can only add devices to linear arrays
sh-4.2#
sh-4.2# mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdc2
mdadm: /dev/md0: could not set level to raid4
sh-4.2#
As the error implies, mdadm uses raid4 to re-sync raid0 but cannot set up raid4 because the kernel hasn't the means to do so.
EDIT: I am posting while booted in raid0.
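Before blaming the kernel outright, it may be worth confirming what it actually supports. A hedged sketch (whether /proc/config.gz exists depends on the kernel having been built with CONFIG_IKCONFIG_PROC):

```shell
head -1 /proc/mdstat                      # personalities registered with md
zgrep 'CONFIG_MD_RAID' /proc/config.gz    # raid0/1/456 built in (y) or module (m)?
dmesg | grep -iE 'md0|raid'               # the failed takeover usually logs a reason
```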
Last edited by lilsirecho (2011-12-09 16:45:56)
Okay, last idea here: did you run modprobe for raid4 before trying to add the device and grow the array? You've obviously got raid0 going or you wouldn't be able to run it, but since mdadm uses raid4 to grow a raid0 I was wondering if you have that. Probably no need to add it to your rc.conf, since you should only need it long enough for the array to grow.
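In case the personality simply isn't loaded, loading it by hand before retrying costs nothing. On recent kernels the raid4, raid5 and raid6 personalities are all provided by the single raid456 module (module names are the usual ones, but may vary by build):

```shell
modprobe raid456    # provides the raid4/raid5/raid6 personalities
cat /proc/mdstat    # the Personalities line should now list [raid4]
```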
All references to --grow found in googling report that modprobe does not cut it; the support must be built into the kernel (or patched in).
When using cat /proc/mdstat while booted into arch from hdd, the personalities include raid0, raid4, raid5 and raid6. It is in this arena that I try to --grow raid0.
Since the personalities are correct, the kernel has those elements installed and no modprobe is required. Additionally, it seems the support must be included at kernel boot-up rather than modprobed.
This makes sense, since the re-sync is a major operation requiring many kernel functions, and it must be built in (according to all references found in googling).
This kernel support is provided for the many raid modes that are more safely used in linux, but raid0 is not projected as desirable. It would be desirable for users whose activities require many, many writes when using SSDs or CF cards. It is risky with hdds: if one fails, the raid0 is lost.
I do not expect to lose a CF raid0 for years, and I do have backups. The disposition of data in CF cards is not identical to that in hdds, and writes don't occur in the same cells over and over.
The kernel needs to be modded to allow the --grow function as described in the mdadm man pages.
EDIT: Be aware that this raid0 is even more complex than a grow option. The raid0 bootable that I run is partitioned as well, and setting up a partitioned boot is not probable in the new linux3.
Last edited by lilsirecho (2011-12-09 21:51:01)
The main purpose of --grow is to provide more new cells to write to and thereby extend the life of the Compact Flash devices, since algorithms internal to the devices control the writes, eliminating bad cells and writing to new cells.
Thus it isn't speed that is needed but capacity!
Added info:
sh-4.2# lsinitcpio -a /boot/initramfs-linux.img
==> Image: /boot/initramfs-linux.img
==> Kernel: 3.1.1-1-ARCH
==> Compressed with: gzip
-> Compression ratio: .553
-> Estimated decompression time: 0.096s
==> Included modules:
ata_piix [explicit] hid-magicmouse hid-uclogic
ehci-hcd [explicit] hid-microsoft hid-wacom
ext2 [explicit] hid-monterey hid-waltop
ext3 [explicit] hid-multitouch hid-wiimote
fb_sys_fops hid-ntrig hid-zpff
ff-memless hid-ortek hid-zydacron
hid hid-petalynx jbd
hid-a4tech hid-picolcd lcd
hid-apple hid-pl libata
hid-axff hid-prodikeys mbcache
hid-belkin hid-quanta md-mod
hid-cherry hid-roccat pata_acpi
hid-chicony hid-roccat-arvo raid0
hid-cypress hid-roccat-common scsi_mod
hid-dr hid-roccat-kone sd_mod
hid-elecom hid-roccat-koneplus snd
hid-emsff hid-roccat-kovaplus snd-rawmidi
hid-ezkey hid-roccat-pyra snd-seq-device
hid-gaff hid-samsung soundcore
hid-gyration hid-sjoy syscopyarea
hid-holtekff hid-sony sysfillrect
hid-kensington hid-speedlink sysimgblt
hid-keytouch hid-sunplus uhci-hcd [explicit]
hid-kye hid-tmff usbcore
hid-lcpower hid-topseed usbhid
hid-logitech hid-twinhan
==> Included binaries:
/sbin/switch_root
/sbin/udevadm
/sbin/modprobe
/sbin/blkid
/sbin/mdassemble
/bin/mount
/bin/busybox
==> Hook run order:
udev
mdadm
sh-4.2#
This is the list when booted into raid0 as I am now.