I just set up RAID-1 on one hard drive (by first creating empty md devices, then copying everything over to them from the old drive, and then activating them). I have since emptied the old drive and re-created its partitions (using sectors as the size unit).
The problem: whenever I try to add one of the old drive's partitions to an mdX device like this:
mdadm /dev/md1 --add /dev/hda1
I get this error:
mdadm: add new device failed for /dev/hda1: Bad file descriptor
I tried rebooting after writing the partition table, but that did not help.
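To clarify, each array was originally created degraded, roughly like this (a sketch, with md1 as the example):
# create a degraded mirror with only the new drive's partition present
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/hdc1
# after copying the data over, the old drive's partition should be addable:
mdadm /dev/md1 --add /dev/hda1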
Here is the output of cat /proc/mdstat:
Personalities : [raid1]
md1 : active raid1 hdc1[1]
192640 blocks [2/1] [_U]
md5 : active raid1 hdc5[1]
289024 blocks [2/1] [_U]
md6 : active raid1 hdc6[1]
497856 blocks [2/1] [_U]
md7 : active raid1 hdc7[1]
2505984 blocks [2/1] [_U]
md8 : active raid1 hdc8[1]
96256 blocks [2/1] [_U]
md9 : active raid1 hdc9[1]
19968640 blocks [2/1] [_U]
md10 : active raid1 hdc10[1]
19534912 blocks [2/1] [_U]
Show us the output of "fdisk -l".
Here you go:
Disk /dev/hde: 40.0 GB, 40060403712 bytes
255 heads, 63 sectors/track, 4870 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hde1 2 4870 39110242+ f W95 Ext'd (LBA)
/dev/hde5 2 2424 19462716 7 HPFS/NTFS
/dev/hde6 2425 4870 19647463+ 83 Linux
Disk /dev/hda: 122.9 GB, 122942324736 bytes
1 heads, 1 sectors/track, 240121728 cylinders, total 240121728 sectors
Units = cylinders of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 2 385561 192780 83 Linux
Disk /dev/hdc: 61.4 GB, 61475807232 bytes
16 heads, 63 sectors/track, 119117 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Device Boot Start End Blocks Id System
/dev/hdc1 * 1 383 192748+ fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/hdc2 383 85489 42893550 5 Extended
/dev/hdc5 383 957 289138+ fd Linux raid autodetect
/dev/hdc6 957 1945 497983+ fd Linux raid autodetect
/dev/hdc7 1945 6917 2506108+ fd Linux raid autodetect
/dev/hdc8 6917 7109 96358+ fd Linux raid autodetect
/dev/hdc9 7109 46729 19968763+ fd Linux raid autodetect
/dev/hdc10 46729 85489 19535008+ fd Linux raid autodetect
Disk /dev/md10: 20.0 GB, 20003749888 bytes
2 heads, 4 sectors/track, 4883728 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md10 doesn't contain a valid partition table
Disk /dev/md9: 20.4 GB, 20447887360 bytes
2 heads, 4 sectors/track, 4992160 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md9 doesn't contain a valid partition table
Disk /dev/md8: 98 MB, 98566144 bytes
2 heads, 4 sectors/track, 24064 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md8 doesn't contain a valid partition table
Disk /dev/md7: 2566 MB, 2566127616 bytes
2 heads, 4 sectors/track, 626496 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md7 doesn't contain a valid partition table
Disk /dev/md6: 509 MB, 509804544 bytes
2 heads, 4 sectors/track, 124464 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md6 doesn't contain a valid partition table
Disk /dev/md5: 295 MB, 295960576 bytes
2 heads, 4 sectors/track, 72256 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md5 doesn't contain a valid partition table
Disk /dev/md1: 197 MB, 197263360 bytes
2 heads, 4 sectors/track, 48160 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md1 doesn't contain a valid partition table
This seems so weird.
I made a dummy test of creating a RAID array; here are the steps I took:
1. made an 8 MB partition of type FD in cfdisk
2. created the md device with one disk missing, like this:
mdadm --create /dev/md12 --level=1 --raid-disks=2 missing /dev/hda2
3. formatted it as swap (reiserfs would have needed 33 MB of space)
4. made another partition of equal size on /dev/hde2 using cfdisk
5. and then the critical part: mdadm -a /dev/md12 /dev/hde2
and again that error:
mdadm: add new device failed for /dev/hde2: Bad file descriptor
The disks seem OK, because I tried this both ways, using each disk as the receiving/giving party. I also checked the kernel config and enabled all the md options as Y except the last one. I think I must be missing something here...
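For reference, here is the whole test as one sequence (a sketch; the device names are from my setup):
# 1. make an 8 MB partition of type FD with cfdisk, then:
mdadm --create /dev/md12 --level=1 --raid-disks=2 missing /dev/hda2
# 2. format the degraded array as swap
mkswap /dev/md12
# 3. make an equal-sized FD partition on the other disk, then the failing step:
mdadm -a /dev/md12 /dev/hde2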
Still no luck... I also tried different partition sizes and combinations: using different disks, making sure the partition to be added was of equal size or larger, even adding a partition on the same disk. I am starting to suspect my custom-built kernel. It has got to be the kernel... I'll try the newest official Arch kernel now with the default settings.
Using the newest kernel (2.6.11.7) I am still getting "mdadm: add new device failed for /dev/hda1: Bad file descriptor", but this time I also tried creating the array complete right away, without adding the new partition later. That worked, and I could also make a reiserfs on that md device.
I have three hard drives connected to this workstation, so I can build complete arrays using the third drive as start-up/temp space. On a failure it will just take much longer to get back up and running on a full RAID-1 array: I would have to copy the contents of those mds to the temp drive, delete the mds, recreate them, and copy the files back. That is a way around the problem for now, but it is not what RAID-1 should be about.
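For the record, building an array complete from the start looked like this (a sketch; the names are from my test):
# create the mirror with both members present from the beginning
mdadm --create /dev/md12 --level=1 --raid-disks=2 /dev/hda2 /dev/hde2
# this worked, and so did putting a filesystem on it
mkreiserfs /dev/md12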
You say you format the partitions before adding them to the RAID array. This is wrong: you have to mark both partitions as "Linux raid autodetect" (type FD), as you did, and then make a RAID array out of them. Only once the array is created can you format /dev/md1 with whatever filesystem you want.
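Roughly, the order should be (a sketch; substitute your own partitions):
# 1. mark both partitions as type fd in fdisk/cfdisk
# 2. build the array from the raw partitions
mdadm --create /dev/md1 --level=1 --raid-disks=2 /dev/hda1 /dev/hdc1
# 3. only then put a filesystem on the md device itself
mkreiserfs /dev/md1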
I think I am doing it right; I am doing just as you described. During my tests I first emptied hda and wrote a new partition table. Then I made the test partition of type fd and created an array from it with a missing disk:
mdadm --create /dev/md13 --level=1 --raid-disks=2 /dev/hda1 missing
I also get this sometimes:
/dev/hda1 appears to contain a reiserfs file system
while cfdisk shows that the partition type is fd. Perhaps cfdisk doesn't cleanly delete the old information?
The disks I am trying to make RAID arrays from previously had partitions on them, even boot sectors. Could it be that some leftover information is confusing mdadm? I will check whether there are any disk-wiping tools for Linux.
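Something like this might do it, without a dedicated wiping tool (a sketch, assuming the partition is /dev/hda1 and it is not part of an active array):
# remove any stale md superblock from the partition
mdadm --zero-superblock /dev/hda1
# zero the first megabyte to clear old filesystem and boot signatures
dd if=/dev/zero of=/dev/hda1 bs=1M count=1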
cfdisk doesn't erase any actual data on the disk; it just alters the partition table, which makes the data on the disk inaccessible although it is still there.
I honestly can't remember how I did it anymore. I remember creating a few RAID arrays, and they survived a new install; I didn't have to reconfigure the drives, as they were recognized and initialized automatically.
Sorry I can't be of any more help.
No problem, and thanks for sticking with me for a while. I am already building this RAID another way: making two partitions into the array at once. At least that worked.
Problem solved! The mdadm version included with Arch has a bug in it; that is why --add always failed.
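If anyone else hits this: check which mdadm you are running and upgrade it (a sketch; assuming the package is simply called mdadm):
# show the installed mdadm version
mdadm --version
# sync and upgrade the package from the Arch repositories
pacman -Sy mdadm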
After struggling all day with this botch job of a RAID program, I stumbled across this nugget of info. Thanks for the heads-up, sven!