
#1 2015-03-11 13:36:53

Resistance
Member
Registered: 2012-09-04
Posts: 26

Hard drive benchmarking.

Hi,
I have two hard drives, one of which I'd like to use for Arch (/, /boot, /home, ...), so I'm trying to figure out which one would be best suited for the task.

I have the following questions:
-Are these commands an accurate way of ranking them:

dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc

echo 3 > /proc/sys/vm/drop_caches
dd if=tempfile of=/dev/null bs=1M count=1024 

-Does the fact that one of the disks is mounted as / significantly alter the results?
-Is the LVM overhead measurable?
-The rudimentary results I obtained show that one disk is better at reading while the other is better at writing (R-W: 70 MB/s - 57 MB/s vs. 108 MB/s - 38 MB/s); my intuition tells me that for a root partition, read speed matters more than write speed. Is this correct? At what "ratio" or absolute value does this stop being the case? Are there other factors (e.g. for a system with high swap usage, write speed should matter more)?
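For reference, here is the full sequence as a sketch, with a sync added before dropping the caches so no dirty pages linger (the cache drop needs root, and I pulled the size into a variable so the test is easy to scale down; the thread's original value is 1024, i.e. 1 GiB):

```shell
#!/bin/sh
# Write test: stream zeroes through the filesystem. conv=fdatasync makes
# dd wait for the data to actually reach the disk before reporting a rate.
SIZE_MB=${SIZE_MB:-8}   # set to 1024 (1 GiB) for a real run; small default here
dd if=/dev/zero of=tempfile bs=1M count="$SIZE_MB" conv=fdatasync,notrunc

# Flush remaining dirty pages, then drop the page cache so the read test
# hits the disk instead of RAM (the cache drop only works as root).
sync
if [ "$(id -u)" -eq 0 ]; then
    echo 3 > /proc/sys/vm/drop_caches
fi

# Read test: with the cache dropped, this must come from the platters.
dd if=tempfile of=/dev/null bs=1M count="$SIZE_MB"
rm -f tempfile
```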

Feel free to offer links or additional information.

Thank you for reading, and potentially helping (me, and other readers from the future).

Last edited by Resistance (2015-03-11 13:37:18)

Offline

#2 2015-03-18 00:51:18

rep_movsd
Member
Registered: 2013-08-24
Posts: 135

Re: Hard drive benchmarking.

Typically, hdparm is the tool used to benchmark reads rather than a simple dd command.
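Something like this (/dev/sda is just an example device; hdparm needs root, so the sketch skips itself otherwise):

```shell
#!/bin/sh
# Sequential read timing with hdparm. /dev/sda is only a placeholder --
# point dev at the drive you actually want to test.
dev=/dev/sda
if [ -b "$dev" ] && [ "$(id -u)" -eq 0 ] && command -v hdparm >/dev/null 2>&1; then
    hdparm -tT "$dev"   # -T: cached (RAM) reads, -t: buffered disk reads
else
    echo "skipped: need root, hdparm, and a block device at $dev"
fi
```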

I have been successfully using a software RAID-0 setup for my / partition, and it makes a huge impact.

Offline

#3 2015-03-18 16:32:56

mr.MikyMaus
Member
From: disabled
Registered: 2006-03-31
Posts: 285

Re: Hard drive benchmarking.

Resistance wrote:

Hi,

I have the following questions:
-Are these commands an accurate way of ranking them:

dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc

echo 3 > /proc/sys/vm/drop_caches
dd if=tempfile of=/dev/null bs=1M count=1024 

In short, no. Not by a long shot. For a precise measurement you'd have to test below the volume-management layer, let alone the filesystem.


-Does the fact that one of the disks is mounted as / significantly alter the results?

Most likely yes. Depends on how the box is used during the test.

-Is the LVM overhead measurable?

No. The testing should be done below LVM, i.e. on the block layer, or even lower (ATA), as long as you're interested in hardware-related numbers and not in benchmarking filesystems. But even low-level test results are questionable, see below.
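For a block-layer read test you can point dd straight at the device with O_DIRECT, which takes the page cache and the filesystem out of the picture (a sketch; /dev/sda is a placeholder, and reading is non-destructive, but double-check the device name anyway):

```shell
#!/bin/sh
# Block-layer read test: read 256 MiB straight from the raw device with
# O_DIRECT, bypassing the page cache and any filesystem or LVM layers.
# /dev/sda is a placeholder; usually requires root to read the device.
dev=/dev/sda
if [ -b "$dev" ] && [ -r "$dev" ]; then
    dd if="$dev" of=/dev/null bs=1M count=256 iflag=direct
else
    echo "skipped: no readable block device at $dev"
fi
```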

-The rudimentary results I obtained show that one disk is better at reading while the other is better at writing (R-W: 70 MB/s - 57 MB/s vs. 108 MB/s - 38 MB/s); my intuition tells me that for a root partition, read speed matters more than write speed. Is this correct? At what "ratio" or absolute value does this stop being the case? Are there other factors (e.g. for a system with high swap usage, write speed should matter more)?

These results can be biased heavily by the testing conditions. If you want to measure a hard drive's performance, go for more low-level tools. And even when you finish the tests, the results won't tell you which drive will behave better under regular working conditions. Many performance factors depend on software, be it the controller firmware, the hard drive firmware or the operating system. Unless the two drives differ significantly at the hardware level (say, higher RPM, larger cache, etc.), it doesn't make much sense to benchmark them against each other.

Go for reliability, not performance. Run some SMART tests, badblocks, etc. and then choose the worse drive for OS and the better one for data.
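For example (again /dev/sda is a placeholder; smartctl comes from smartmontools, and badblocks in this read-only form does not touch your data):

```shell
#!/bin/sh
# Reliability checks rather than speed: SMART self-test plus a read-only
# surface scan. /dev/sda is a placeholder; all of this needs root.
dev=/dev/sda
if [ -b "$dev" ] && [ "$(id -u)" -eq 0 ] && command -v smartctl >/dev/null 2>&1; then
    smartctl -t short "$dev"   # start a short SMART self-test
    smartctl -a "$dev"         # dump SMART attributes and the test log
    badblocks -sv "$dev"       # read-only surface scan; -w would be destructive!
else
    echo "skipped: need root, smartmontools, and a block device at $dev"
fi
```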


What happened to Arch's KISS? systemd sure is stupid but I must have missed the simple part ...

... and who is general Failure and why is he reading my harddisk?

Offline
