Hi fellow Archers,
This is my current setup:
Atom+ION board with 3 SATA ports + 1 PCI card with 2 SATA ports.
Storage:
2x 2TB
1x 1.5TB
1x 1TB
They are all part of the same VG:
[root@ion ~]# vgdisplay
  --- Volume group ---
  VG Name               lvmvolume
  System ID
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  46
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               5.89 TiB
  PE Size               4.00 MiB
  Total PE              1544372
  Alloc PE / Size       1544372 / 5.89 TiB
  Free  PE / Size       0 / 0
  VG UUID               E15rQM-mRJc-OLJH-C3Jo-NL6Y-g2ZQ-MP7PSz
I now have a new mainboard and a SATA controller card (PCIe x1) on the way, along with two new 2TB drives and a cheap small SSD for the OS.
What I want to do: Migrate all data to a new RAID5 array consisting of 4x 2TB drives.
Is this possible without losing any data?
Thanks!
Shameless self-reply bump.
I now have all the new parts except the controller. Once that arrives I'll set up the new system and migrate all data off one 2TB drive so I can pull it out of the LVM volume group.
Then I can create a new RAID5 array from the two new drives plus that one old 2TB drive and start moving data onto it. When that's done, I'll move the remaining 2TB drive from the volume group into the RAID5 array as well. That way I should end up with four 2TB HDDs in RAID5.
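For reference, a rough sketch of that first step, assuming the drive I want to pull holds the PV /dev/sda1 (device names here are placeholders, not my actual layout):

[root@ion ~]# pvmove /dev/sda1              # evacuate all extents (needs enough free PE elsewhere in the VG)
[root@ion ~]# vgreduce lvmvolume /dev/sda1  # drop the now-empty PV from the VG
[root@ion ~]# pvremove /dev/sda1            # wipe the LVM label so the drive can join the new array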
I need to do some research first on what chunk size and filesystem to use, as well as whether it's better to put the HDDs on the controller card (PCIe x1) or on the onboard SATA controller (which should be faster, since it's not bottlenecked by the PCIe x1 bus).
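On the filesystem side: if I go with ext4, the geometry options should match the array layout. A sketch assuming a 128 KiB chunk, 4 KiB blocks, and a 4-drive RAID5 (3 data disks), so stride = 128/4 = 32 and stripe-width = 32 x 3 = 96; untested numbers on my part:

[root@ion ~]# mkfs.ext4 -b 4096 -E stride=32,stripe-width=96 /dev/md0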
Thanks
Edit: New system installed, data migration ongoing. I managed to borrow a 2TB drive from a friend and migrated data onto it so I could take one old 2TB drive out of the LVM, giving me three unused 2TB HDDs, on which I created a RAID5 array. Currently migrating data to it while it initializes the parity:
[root@ion ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raid0]
md0 : active raid5 sdc1[3] sdb1[1] sda1[0]
      3907024640 blocks super 1.2 level 5, 128k chunk, algorithm 2 [3/2] [UU_]
      [========>............]  recovery = 42.6% (832443904/1953512320) finish=18091.5min speed=1032K/sec

unused devices: <none>
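For anyone following along, an array like the one above (metadata 1.2, 128k chunk, three members) can be created with something like the command below; mdadm then builds the parity by "recovering" onto the last member, which is the resync you see in mdstat:

[root@ion ~]# mdadm --create /dev/md0 --metadata=1.2 --level=5 --chunk=128 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1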
[root@ion ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
udev                            10M  236K  9.8M   3% /dev
/dev/mapper/lvmsystem-root      20G  7.6G   12G  41% /
shm                            881M     0  881M   0% /dev/shm
/dev/sdd1                       97M   20M   73M  21% /boot
tmpfs                          881M   11M  871M   2% /tmp
/dev/loop0                     124M   27M   96M  22% /var/lib/pacman
/dev/mapper/lvmvolume-home     5.8T  4.7T  1.2T  81% /home
/dev/mapper/lvstorage-storage  3.6T  2.1T  1.6T  57% /raid5volume
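The remaining step, once the last old 2TB drive is freed, is to grow the array onto it and then expand the LVM stack on top. Roughly, assuming md0 is the only PV in the lvstorage VG and /raid5volume is ext4 (the drive name is a placeholder):

[root@ion ~]# mdadm --add /dev/md0 /dev/sdX1            # add the fourth drive as a spare
[root@ion ~]# mdadm --grow /dev/md0 --raid-devices=4    # reshape the RAID5 from 3 to 4 members
[root@ion ~]# pvresize /dev/md0                         # grow the PV to the new array size
[root@ion ~]# lvextend -l +100%FREE /dev/lvstorage/storage
[root@ion ~]# resize2fs /dev/lvstorage/storage          # finally grow the filesystem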
Edit: BEWARE!! The kernel's sata_mv driver warns:
sata_mv: Highpoint RocketRAID BIOS CORRUPTS DATA on all attached drives, regardless of if/how they are configured. BEWARE!
sata_mv: For data safety, do not use sectors 8-9 on "Legacy" drives, and avoid the final two gigabytes on all RocketRAID BIOS initialized drives.
That was the culprit! I shrank the device usage per partition like so:
Avail Dev Size : 3907025072 (1863.01 GiB 2000.40 GB)
    Array Size : 11702108160 (5580.00 GiB 5991.48 GB)
 Used Dev Size : 3900702720 (1860.00 GiB 1997.16 GB)
So it's using 1860 of the roughly 1863 GiB per partition; that way I should be safe corruption-wise. It would be best to shrink the actual partitions (sd[bcdg]1), but I don't know how to do that without messing everything up.
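For reference, the shrink above can be done with mdadm's --size option, which takes KiB per member (1860 GiB = 1950351360 KiB). A sketch matching my numbers; if the array already carries data, the filesystem and LV on top must be shrunk below the new size first:

[root@ion ~]# mdadm --grow /dev/md0 --size=1950351360   # cap Used Dev Size at 1860 GiB per member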