Hi there,
Been running Arch64 for about 5 months now, very happy with it. Until recently I was using RAID10 with ext4 as my /, but I converted to XFS after a little fsck scare on a bad shutdown. Since switching from ext4 to XFS, though, I've noticed some slowdowns... I originally went with RAID10 for the speed improvement without the worry of one drive dying and taking everything with it, so I want some of my performance back.
When I made the RAID10 array I didn't bother with the chunk size, so it's the default (64k). I took a bit more care when I reformatted with XFS, though; I'm pretty sure I used sunit=128 and swidth=512, which should be optimal. As far as I can tell, XFS is generally better with larger files while ext4 is possibly a little better with smaller ones. Bearing this in mind, I'm now considering splitting / and /home, since the speed improvements I want (booting, opening programs, etc.) would come from the / partition. I was thinking of using a 16k chunk size in mdadm for that array with ext4, and leaving 64k with XFS for /home.
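For what it's worth, the geometry can be sanity-checked with a little arithmetic. This is only a sketch, and it assumes a 4-disk RAID10 in the default near-2 layout, where only half the disks carry unique data per stripe; under that assumption swidth would come out to 256 rather than 512:

```shell
# Sketch: derive XFS sunit/swidth for an mdadm RAID10.
# Assumption: 4 disks, near-2 layout => 2 "data" disks per stripe.
# sunit/swidth are expressed in 512-byte sectors.
chunk_kib=64
data_disks=2
sunit=$((chunk_kib * 1024 / 512))   # 64 KiB chunk -> 128 sectors
swidth=$((sunit * data_disks))      # stripe width across data disks
echo "sunit=$sunit swidth=$swidth"
# mkfs.xfs can also take the same values in bytes/disks via su/sw:
#   mkfs.xfs -d su=${chunk_kib}k,sw=${data_disks} /dev/md0
```

Newer mkfs.xfs versions usually read the md geometry automatically, so comparing its defaults against a hand calculation like this is a decent cross-check.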
Just wanted to know anyone's opinion before I commit.
FWIW, CPU intensity shouldn't matter (Q6600 OCed from 2.4 to 3.51) and the HDDs are 4x WD6401AALS.
Thanks for any input
Arch x86_64
From where did you read how to calculate your chunk size?
Sounds like an interesting setup.
I have 6x 1TB disks, would someone recommend a kickass setup?
From the man page, I think. I also seem to remember an article on RAID myths confirming what I entered.
But yeah... my machine is nowhere near as snappy as one would hope with this setup...
Once I buy a 1TB external HDD I'll back up my /home and start testing chunk sizes along with filesystems. Hopefully a 40GB / with a 16k chunk and ext4 is a winning combo.
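When you get to the testing stage, even a crude sequential-throughput check per chunk-size/filesystem combination is enough to rank candidates. A minimal sketch, assuming the target path sits on the array under test (the path here is just an assumption):

```shell
# Rough sequential write benchmark -- a sketch only.
# Assumption: /tmp/bench.img lands on the filesystem being tested;
# point it somewhere on the candidate array in real use.
target=/tmp/bench.img
# conv=fdatasync forces the flush time into dd's reported throughput,
# so the page cache doesn't flatter the result.
dd if=/dev/zero of="$target" bs=1M count=64 conv=fdatasync 2>&1 | tail -n1
rm -f "$target"
```

Repeating the run a few times and dropping caches between reads gives steadier numbers; a tool like bonnie++ or fio would cover the small-file case that ext4 tends to win.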
Still haven't had the funds/time to try out my setup, but just wanted to know if anyone thinks this would work:
Can I RAID0 my / (without /home) across 4 drives and have that md device mirrored (RAID1) with a partition on a 5th drive? Should one of my 4 drives die, I can just use the RAID1 copy on the 5th drive to restore my / onto the rebuilt stripe.
Yes no maybe?
It would work, but personally I wouldn't recommend it... It's messy...
              md0 (RAID-1)
             /            \
     md1 (RAID-0)     /dev/sdd1
     /     |     \
  sda1   sdb1   sdc1
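If you did want to build it anyway, the layering would look roughly like this with mdadm. This is a sketch only; the device names are assumptions, and the mirror partition has to be at least as large as the whole stripe:

```shell
# Sketch only -- device names are assumptions.
# 1) Stripe three partitions into a RAID-0:
mdadm --create /dev/md1 --level=0 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1
# 2) Mirror the whole stripe against one large partition
#    (sdd1 must be at least the size of md1):
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/md1 /dev/sdd1
# Then make the filesystem on md0, not md1.
```

The messiness shows up at rebuild time: mdadm has to assemble md1 before md0 can come up, and a resync copies the entire stripe in one go.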
Personally, I'm a fan of making multiple RAID-1 arrays from pairs of drives (for redundancy), then running LVM on top of them for the striping ("pseudo-RAID0"):
          lvmData (striped LV)
          /               \
    md0 (RAID-1)      md1 (RAID-1)
     /      \           /      \
  sda1    sdc1       sdb1    sdd1
Last edited by fukawi2 (2009-08-23 09:11:36)