same disk, same fs, same fstab parameters and same partitioning
suse 10.1:
hdparm -tT /dev/sda2
/dev/sda2:
Timing cached reads: 5312 MB in 2.00 seconds = 2658.23 MB/sec
Timing buffered disk reads: 102 MB in 3.06 seconds = 33.38 MB/sec
Arch 0.72
hdparm -Tt /dev/sda2
/dev/sda2:
Timing cached reads: 992 MB in 2.00 seconds = 496.35 MB/sec
Timing buffered disk reads: 100 MB in 3.02 seconds = 33.07 MB/sec
Is there any way to improve SATA performance under Arch? In both cases hdparm was run just after boot.
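A single hdparm run can be noisy, so a fairer comparison is to drop the page cache and repeat the test a few times. A rough sketch (as root, assuming a kernel new enough to have /proc/sys/vm/drop_caches, i.e. 2.6.16+):
# flush dirty pages, drop the page cache, then time the disk three times
sync
echo 3 > /proc/sys/vm/drop_caches
for i in 1 2 3; do hdparm -tT /dev/sda2; done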
my fstab:
none /dev/pts devpts defaults 0 0
none /dev/shm tmpfs defaults 0 0
usbfs /proc/bus/usb usbfs noauto 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
/dev/cdrom /mnt/cd iso9660 ro,user,noauto,unhide 0 0
/dev/dvd /mnt/dvd udf ro,user,noauto,unhide 0 0
/dev/fd0 /mnt/fl vfat user,noauto 0 0
/dev/sda3 swap swap defaults 0 0
/dev/sda2 / xfs defaults,noatime,nodiratime,barrier 1 1
/dev/sda1 /boot ext2 defaults,noatime 1 2
/dev/sda4 /home xfs defaults,noatime,nodiratime,barrier 1 2
Offline
is dma on? hdparm -cuda /dev/sda2
This may help:
http://gentoo-wiki.com/HOWTO_Use_hdparm … erformance
You may also want to probe around in dmesg and see if the proper module is being loaded for your controller. Otherwise you may need to recompile your kernel with support for it.
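For example, something like this (the module names are just the usual suspects, yours may differ) should show whether libata claimed the controller and what link speed and UDMA mode it negotiated:
dmesg | grep -i -E 'libata|ahci|sata|ata[0-9]'
lsmod | grep -E 'ahci|ata_piix|sata_'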
Offline
SATA does not support DMA, or rather DMA is "built in"
as far as I know, hdparm does not support manipulating SATA drives
libata, ahci and piix are loaded
I am running a custom kernel with only the stuff I need. I don't think I would be able to boot a kernel with the wrong disk module.
Offline
DESCRIPTION
hdparm provides a command line interface to various hard disk ioctls
supported by the stock Linux ATA/IDE device driver subsystem. Some
options may work correctly only with the latest kernels. For best
results, compile hdparm with the include files from the latest kernel
source code.
from man hdparm
I could post figures from my drives if it helps ...
Not sure about DMA in SATA, would have to read up first ....
Odd that you are timing partitions, not drives .... ie sda
Mr Green I like Landuke!
Offline
This is a small 80GB disk, so running hdparm on the whole disk or on a partition does not matter. Anyway, the results are exactly the same.
No, hdparm does not support SATA:
http://gentoo-wiki.com/HOWTO_Use_hdparm … erformance
http://www.thinkwiki.org/wiki/Problems_ … _hard_disk
for example:
hdparm -i /dev/sda generates the typical error for SATA:
/dev/sda:
HDIO_GET_IDENTITY failed: Inappropriate ioctl for device
however you can run
hdparm -I /dev/sda
which provides some information; see below.
Anyway, it shows that the disk runs in udma5 mode and that write cache is enabled.
The problem is that I am getting much worse results with Arch when compared to suse with exactly the same settings.
hdparm -I /dev/sda
Capabilities:
LBA, IORDY(can be disabled)
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Advanced power management level: 128 (0x80)
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=240ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
* Advanced Power Management feature set
* 48-bit Address feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* General Purpose Logging feature set
* IDLE_IMMEDIATE with UNLOAD
* SATA-I signaling speed (1.5Gb/s)
* Host-initiated interface power management
* Phy event counters
Device-initiated interface power management
* Software settings preservation
* SMART Command Transport (SCT) feature set
* SCT LBA Segment Access (AC2)
* SCT Error Recovery Control (AC3)
* SCT Features Control (AC4)
* SCT Data Tables (AC5)
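If you only care about the transfer mode and write cache out of that wall of output, a plain grep does it, e.g.:
hdparm -I /dev/sda | grep -E 'udma|Write cache'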
Offline
Yeah, I had a feeling you might say that ....
http://pastebin.archlinux.org/69
my output if it helps....
Mr Green I like Landuke!
Offline
Funny, I got similar results for reading from cache:
Timing cached reads: 1108 MB in 2.00 seconds = 554.04 MB/sec
Timing buffered disk reads: 218 MB in 3.02 seconds = 72.22 MB/sec
Offline
Mr.Green
your output is similar to mine. Still, the speed is below expectations (and below the disk's capabilities). It may be (not sure if this is true) that suse has some extra patches for SATA.
"Funny, I got similar results for reading from cache:
Timing cached reads: 1108 MB in 2.00 seconds = 554.04 MB/sec"
yes, not really good,
and this (for a SATA disk) does not look great either:
write test:
#date && dd if=/dev/zero of=deleteme.now count=1000000 && date
Sun Nov 19 10:18:14 PST 2006
1000000+0 records in
1000000+0 records out
512000000 bytes (512 MB) copied, 17.6471 s, 29.0 MB/s
Sun Nov 19 10:18:32 PST 2006
read test:
dd if=/dev/sda of=/dev/null
442337+0 records in
442336+0 records out
226476032 bytes (226 MB) copied, 6.49549 s, 34.9 MB/s
in case someone points to the inaccuracy of the hdparm test
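For what it's worth, dd with the default 512-byte blocks partly measures syscall overhead, and without a sync some of the data can still be sitting in the page cache when dd prints its rate. A rough variant, assuming a GNU dd recent enough to understand conv=fdatasync:
# write 512 MB in 1 MB blocks and flush to disk before the rate is reported
dd if=/dev/zero of=deleteme.now bs=1M count=512 conv=fdatasync
# same read test, just with a larger block size
dd if=/dev/sda of=/dev/null bs=1M count=512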
Offline
I am wondering about the output of hdparm.
My laptop is yielding better results with a laptop IDE drive than my SATA drive in my server.
I am going to run some bonnie++ tests, to see if it is 'real', or if hdparm is just pulling numbers out of its posterior.
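Probably something along these lines (the parameters are only a guess; the size should be about twice RAM so the cache can't hide the disk, and the user/directory are placeholders):
# run against a directory on the disk under test
bonnie++ -d /tmp/bench -s 2g -f -u nobody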
"Be conservative in what you send; be liberal in what you accept." -- Postel's Law
"tacos" -- Cactus' Law
"t̥͍͎̪̪͗a̴̻̩͈͚ͨc̠o̩̙͈ͫͅs͙͎̙͊ ͔͇̫̜t͎̳̀a̜̞̗ͩc̗͍͚o̲̯̿s̖̣̤̙͌ ̖̜̈ț̰̫͓ạ̪͖̳c̲͎͕̰̯̃̈o͉ͅs̪ͪ ̜̻̖̜͕" -- -̖͚̫̙̓-̺̠͇ͤ̃ ̜̪̜ͯZ͔̗̭̞ͪA̝͈̙͖̩L͉̠̺͓G̙̞̦͖O̳̗͍
Offline
well if hdparm does not support SATA then I would not trust its readings either
what about using SMART tools?
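smartctl from smartmontools at least identifies SATA drives; something like this (older versions may need -d ata for a SATA disk behind libata):
smartctl -i /dev/sda   # identify/model info
smartctl -A /dev/sda   # SMART attribute table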
Not sure why Suse should read faster .... I would have to load it [no thanks!]
woah!!! Cactus has entered the building ... 8)
Mr Green I like Landuke!
Offline
bonnie++ -f
output: http://pastebin.archlinux.org/72
"Be conservative in what you send; be liberal in what you accept." -- Postel's Law
"tacos" -- Cactus' Law
"t̥͍͎̪̪͗a̴̻̩͈͚ͨc̠o̩̙͈ͫͅs͙͎̙͊ ͔͇̫̜t͎̳̀a̜̞̗ͩc̗͍͚o̲̯̿s̖̣̤̙͌ ̖̜̈ț̰̫͓ạ̪͖̳c̲͎͕̰̯̃̈o͉ͅs̪ͪ ̜̻̖̜͕" -- -̖͚̫̙̓-̺̠͇ͤ̃ ̜̪̜ͯZ͔̗̭̞ͪA̝͈̙͖̩L͉̠̺͓G̙̞̦͖O̳̗͍
Offline
like that means something :shock:
Mr Green I like Landuke!
Offline
well if hdparm does not support SATA then I would not trust its readings either
that is why I also provided the dd results
I had suse installed on desktop SATA (different drives, mainly WD and Seagate) and on laptop SATA. In both cases the results were better than Arch, and SATA on the desktop was at least 30% faster than the laptop.
I don't really know what may cause this problem (slow cache reads). I am quite new to Arch (but not to Linux - since '97) and I like it so far. I have only two issues so far: this is one of them, and the second would require a separate post. So, not bad.
Offline
No issues only solutions my friend
Mr Green I like Landuke!
Offline
have you been testing with the vanilla kernel, or have you tried a beyond kernel too?
"Be conservative in what you send; be liberal in what you accept." -- Postel's Law
"tacos" -- Cactus' Law
"t̥͍͎̪̪͗a̴̻̩͈͚ͨc̠o̩̙͈ͫͅs͙͎̙͊ ͔͇̫̜t͎̳̀a̜̞̗ͩc̗͍͚o̲̯̿s̖̣̤̙͌ ̖̜̈ț̰̫͓ạ̪͖̳c̲͎͕̰̯̃̈o͉ͅs̪ͪ ̜̻̖̜͕" -- -̖͚̫̙̓-̺̠͇ͤ̃ ̜̪̜ͯZ͔̗̭̞ͪA̝͈̙͖̩L͉̠̺͓G̙̞̦͖O̳̗͍
Offline
that is vanilla 2.6.18.2 and I am just compiling 2.6.19-rc6
in my opinion beyond was never a speed champ, so in short I did not use it in Arch
Offline
This interests me; from what I have read, others have the same issue .. only they look to hdparm for help
Will keep looking .....
Mr Green I like Landuke!
Offline
I have used kernel26-ck on Arch64 for the test above...
Offline
well if hdparm does not support SATA then I would not trust its readings either
that is why I also provided the dd results
I had suse installed on desktop SATA (different drives, mainly WD and Seagate) and on laptop SATA. In both cases the results were better than Arch, and SATA on the desktop was at least 30% faster than the laptop.
I don't really know what may cause this problem (slow cache reads). I am quite new to Arch (but not to Linux - since '97) and I like it so far. I have only two issues so far: this is one of them, and the second would require a separate post. So, not bad.
Can you copy over your suse kernel and use that to boot into your arch distro? I don't see a suse live 10.2 released yet but I may download the install CD and try to pull out the kernel and give it a go..
Offline
Maybe SuSE 10.1 already has the ADMA+NCQ patch for nForce chipsets? It will be in vanilla 2.6.20.
However, the difference shouldn't be so big anyway. :?
to live is to die
Offline
Can you copy over your suse kernel and use that to boot into your arch distro? I don't see a suse live 10.2 released yet but I may download the install CD and try to pull out the kernel and give it a go..
I doubt that simple copying will work out-of-the-box.
A better way IMHO would be to see what patches they have applied and what kernel config they have
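If both kernels were built with CONFIG_IKCONFIG_PROC you can pull the running config straight out of /proc and diff the storage bits; otherwise grab the config files from /boot or the suse kernel package. Roughly (file names are just examples):
zcat /proc/config.gz | grep -i -E 'sata|ata_|scsi' > arch-storage.cfg
# do the same on the suse install, then:
diff arch-storage.cfg suse-storage.cfg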
to live is to die
Offline
I've noticed this problem as well, but I always wrote it off as an issue with hdparm and sata drives. IIRC, last time I dicked with bonnie I got reasonable results.
Unthinking respect for authority is the greatest enemy of truth.
-Albert Einstein
Offline
Is this definitely an hdparm issue?
Offline
that is vanilla 2.6.18.2 and I am just compiling 2.6.19-rc6
in my opinion beyond was never a speed champ, so in short I did not use it in Arch
It was never intended to be a "speed champ". It is just intended to make available, in one kernel, a variety of popular features that have not made it into mainline.
My results seem fine.
[root@sara iphitus]# hdparm -tT /dev/sda /dev/sdb /dev/hdc
/dev/sda:
Timing cached reads: 1926 MB in 2.00 seconds = 963.06 MB/sec
Timing buffered disk reads: 208 MB in 3.03 seconds = 68.74 MB/sec
/dev/sdb:
Timing cached reads: 1896 MB in 2.00 seconds = 948.35 MB/sec
Timing buffered disk reads: 228 MB in 3.02 seconds = 75.62 MB/sec
/dev/hdc:
Timing cached reads: 1900 MB in 2.00 seconds = 949.76 MB/sec
Timing buffered disk reads: 206 MB in 3.02 seconds = 68.29 MB/sec
all are 7200RPM Seagates; the first two are SATA, the third is IDE. I'm not surprised that there's no difference, as the bottleneck isn't the interface, it's the drive itself. A 7200RPM drive only spins so fast, and many of the improvements in SATA are things like NCQ, which won't affect a synthetic test like hdparm.
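If you really wanted to see NCQ do anything you'd need concurrent, seek-heavy I/O rather than a single sequential stream. A crude sketch (the offsets are arbitrary, just far enough apart to force seeking):
# two simultaneous reads from different parts of the disk
dd if=/dev/sda of=/dev/null bs=1M count=256 &
dd if=/dev/sda of=/dev/null bs=1M count=256 skip=40000 &
wait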
James
Offline
For cached reads I'm getting about 900 MB/sec via hdparm in Arch. Under suse or any other live CD I get almost 1200. Gentoo gives me 2200. Fairly wide swing. I booted the gentoo kernel into my arch system and again got 900. Then I booted into gentoo, chrooted into the arch system, ran it, and still got 900. Seems to be an issue with hdparm itself? I do seem to notice a lot more system latency during disk reads... firefox completely stops during a pacman sync, for example... but I don't know if this is normal or not.
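Might be worth checking whether the distros simply ship different hdparm versions or different readahead defaults, e.g.:
hdparm -V                   # hdparm version
hdparm -a /dev/sda          # current readahead setting
blockdev --getra /dev/sda   # same value as seen by the block layer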
Offline