I have 4 disks, and this is the current layout:
---------------------------------------------
|        RAID 1       |        RAID 1       |
---------------------------------------------
|  Disk 1  |  Disk 2  |  Disk 3  |  Disk 4  |
Both RAID 1 arrays are created with a hardware RAID card.
My questions are:
Q1:
Which of the layouts below is best practice if I want the best read/write performance (ignoring boot problems and the like)?
1.
| / | /var | .... |
---------------------------------------------
|              software RAID 0              |
---------------------------------------------
|        RAID 1       |        RAID 1       |
---------------------------------------------
|  Disk 1  |  Disk 2  |  Disk 3  |  Disk 4  |
2.
| LV-ROOT | LV-VAR | .... |
---------------------------------------------
|                LVM VolGroup               |
---------------------------------------------
|        RAID 1       |        RAID 1       |
---------------------------------------------
|  Disk 1  |  Disk 2  |  Disk 3  |  Disk 4  |
3.
| LV-ROOT | LV-VAR | .... |
---------------------------------------------
|                LVM VolGroup               |
---------------------------------------------
|              software RAID 0              |
---------------------------------------------
|        RAID 1       |        RAID 1       |
---------------------------------------------
|  Disk 1  |  Disk 2  |  Disk 3  |  Disk 4  |
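For reference, here is a sketch of how layout 3 could be assembled on Linux. The device names (/dev/sda, /dev/sdb for the two hardware mirrors) and sizes are assumptions, and these commands are destructive, so this is illustration only:

```shell
# Layout 3: software RAID 0 across the two hardware RAID 1 arrays,
# with LVM on top. /dev/sda and /dev/sdb are assumed to be the two
# block devices the hardware RAID card exposes.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

# LVM on top of the stripe.
pvcreate /dev/md0
vgcreate VolGroup /dev/md0
lvcreate -L 20G -n LV-ROOT VolGroup
lvcreate -L 50G -n LV-VAR  VolGroup
```

Layout 2 would skip the mdadm step and run pvcreate/vgcreate on /dev/sda and /dev/sdb directly, leaving the volume group to span both mirrors without striping.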
And some other questions:
Q2:
How much performance is lost between layout 1 and layout 3? (1%? 50%? More? Just a rough figure to give me an idea.)
Q3:
Do layouts 1 and 2 have the same performance if the top-level filesystem is only used to store data? I mean like this:
|       /data (all data stored here)        |
---------------------------------------------
|          software RAID 0 or LVM           |
---------------------------------------------
|        RAID 1       |        RAID 1       |
---------------------------------------------
|  Disk 1  |  Disk 2  |  Disk 3  |  Disk 4  |
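Worth noting for the "or LVM" case: LVM can do the striping itself, so a separate software RAID 0 layer isn't strictly needed. A sketch (device names are assumptions; -i 2 stripes the logical volume across both physical volumes, much like RAID 0):

```shell
# Use both hardware RAID 1 arrays as physical volumes.
pvcreate /dev/sda /dev/sdb
vgcreate datavg /dev/sda /dev/sdb

# -i 2: stripe across both PVs; -I 64: 64 KiB stripe size.
lvcreate -i 2 -I 64 -l 100%FREE -n data datavg

mkfs.ext4 /dev/datavg/data
mount /dev/datavg/data /data
```

Whether this performs identically to md RAID 0 under LVM is exactly the kind of thing worth benchmarking rather than assuming.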
When I was a database administrator on UNIX systems, we always built the (hardware) RAID arrays first, then created the logical volumes on them. We also always used RAID-10 arrays for the best performance, but you don't really have enough drives for that. It is possible to have a 4-drive RAID-10, but it doesn't really buy you anything.
Tim
Why doesn't it "buy me anything"?
As I understand it, software RAID 0 across the two RAID 1 arrays should give me roughly double the read/write performance?
It means your stripe width would be only two drives. It buys a little performance, but not very much. I seriously doubt that it would double your performance.
The best thing would be to benchmark your throughput on your system with the drives non-striped, then repeat the exact same tests with a two-drive RAID stripe. Then you'll know what you will get, instead of just guessing.
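A minimal way to run that comparison with plain dd (assuming a Linux box and a throwaway mount point, /mnt/test here, on the array under test; a tool like fio would give more realistic mixed-workload numbers, and results will vary by machine):

```shell
# Sequential write test; oflag=direct bypasses the page cache
# so the drives, not RAM, are being measured.
dd if=/dev/zero of=/mnt/test/bench bs=1M count=1024 oflag=direct

# Sequential read test; drop caches first so reads hit the disks.
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/test/bench of=/dev/null bs=1M iflag=direct

rm /mnt/test/bench
```

Run the same commands on the non-striped and striped configurations and compare the MB/s figures dd reports.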
Tim