Hi,
with hdparm -tT /dev/sda4 I get this output:
/dev/sda4:
Timing cached reads: 1686 MB in 2.00 seconds = 843.04 MB/sec
Timing buffered disk reads: 212 MB in 3.00 seconds = 70.66 MB/sec
This looks OK. But when I try it with real files (dd if=/home/hanna/temp/bigfile.txt of=/dev/null)
I get only:
1419547960 bytes (1.4 GB) copied, 69.6285 s, 20.4 MB/s
I have 2 GB of RAM, and if the file is smaller I get a much better read speed. But why only 20 MB/s? That looks so slow.
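The cache effect is easy to reproduce with a throwaway file; a minimal sketch (the temp file and the 64 MB size are arbitrary, not anything from this system):

```shell
# Write a 64 MB throwaway file, then read it twice with dd.
testfile=$(mktemp)
dd if=/dev/zero of="$testfile" bs=1M count=64 2>/dev/null
sync
# First read: may still be cached because we just wrote it; as root,
# "echo 3 > /proc/sys/vm/drop_caches" beforehand forces a real disk read.
dd if="$testfile" of=/dev/null bs=1M
# Second read: served from the page cache, close to hdparm's cached-read speed.
dd if="$testfile" of=/dev/null bs=1M
rm -f "$testfile"
```

The second dd run should report a rate near the hdparm "Timing cached reads" figure, while an uncached read reflects the real disk.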
Regards,
papierschiff
Offline
To compare with your results:
hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 780 MB in 2.00 seconds = 389.84 MB/sec
Timing buffered disk reads: 184 MB in 3.03 seconds = 60.79 MB/sec
dd if=/home/evramp/bigfile.txt of=/dev/null
(595 MB) copied, 15.1516 s, 39.2 MB/s
What HDD do you have? What filesystem are you using?
What does hdparm -I /dev/sda show? Look at the line saying "Recommended acoustic management value: 254, current value: xxx" and the line above it saying "Advanced power management level: xxx"; these settings can slow your drive a bit.
Last edited by EVRAMP (2009-05-09 14:53:20)
Offline
Hi, thanks for your reply. It's a Seagate drive, 500 GB, ext3.
sudo hdparm -I /dev/sda
/dev/sda:
ATA device, with non-removable media
Model Number: ST3500620AS
Serial Number: 9QM30RKX
Firmware Revision: LC11
Transport: Serial
Standards:
Used: unknown (minor revision code 0x0029)
Supported: 8 7 6 5
Likely used: 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 976773168
Logical/Physical Sector size: 512 bytes
device size with M = 1024*1024: 476940 MBytes
device size with M = 1000*1000: 500107 MBytes (500 GB)
cache/buffer size = 16384 KBytes
Nominal Media Rotation Rate: 7200
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 1
Recommended acoustic management value: 254, current value: 0
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
SET_MAX security extension
* 48-bit Address feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* General Purpose Logging feature set
* 64-bit World wide name
* Write-Read-Verify feature set
* WRITE_UNCORRECTABLE_EXT command
* {READ,WRITE}_DMA_EXT_GPL commands
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Native Command Queueing (NCQ)
* Phy event counters
Device-initiated interface power management
* Software settings preservation
* SMART Command Transport (SCT) feature set
* SCT Long Sector Access (AC1)
* SCT Error Recovery Control (AC3)
* SCT Features Control (AC4)
* SCT Data Tables (AC5)
unknown 206[12] (vendor specific)
Security:
Master password revision code = 65534
supported
not enabled
not locked
not frozen
not expired: security count
supported: enhanced erase
100min for SECURITY ERASE UNIT. 100min for ENHANCED SECURITY ERASE UNIT.
Logical Unit WWN Device Identifier: 5000c5000b9cef47
NAA : 5
IEEE OUI : 000c50
Unique ID : 00b9cef47
I don't see a big "problem" there. Or is it perhaps just the hard disk itself?
Regards,
papierschiff
Offline
What does "filefrag /home/hanna/temp/bigfile.txt" show?
Offline
23754 extents found, perfection would be 11 extents
What does this mean?
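For a sense of scale, simple arithmetic on the numbers in this thread (1419547960 bytes read by dd, 23754 extents reported by filefrag) gives the average extent size:

```shell
# Average contiguous chunk the disk can stream before seeking again.
bytes=1419547960   # size reported by dd earlier in the thread
extents=23754      # extent count reported by filefrag
echo $(( bytes / extents ))   # prints 59760, i.e. ~58 KB per extent
```

Roughly 58 KB of sequential data per seek is why throughput collapses compared with an unfragmented sequential read.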
Offline
That is the problem: the file is fragmented into 23754 parts (or it is a sparse file). You can see the real allocated size with du -sh /home/hanna/temp/bigfile.txt and compare it with ls -lh /home/hanna/temp/bigfile.txt.
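The sparse-file case is easy to demonstrate with a throwaway file (a hypothetical temp file, not the bigfile from this thread):

```shell
# A sparse file: large apparent size (ls) but almost no allocated blocks (du).
f=$(mktemp)
truncate -s 100M "$f"   # 100 MB apparent size, nothing actually written
ls -lh "$f"             # reports 100M
du -sh "$f"             # reports (close to) 0
rm -f "$f"
```

If du and ls roughly agree on bigfile.txt, it is not sparse and fragmentation is the culprit.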
What is your filesystem? If you use XFS, you can run (as root) "xfs_fsr -v /home/hanna/temp/bigfile.txt".
Offline
OK, this makes sense. It's ext3; I thought I didn't get fragmentation with ext3?
Offline
Under some conditions filesystems don't fragment much (the *ideal world*), but when those conditions are broken (for example, when free space drops below 20%), newly stored files may become fragmented. In the *real world*, files are always fragmented to some degree.
If you have sufficient space, you can try copying the file to another location, deleting the original, and then moving the copy back. Ideally this would be done with a special copy that avoids fragmentation, but you can try a simple "cp" and see whether it at least reduces the number of fragments.
Try this:
filefrag fragmented-file
cp fragmented-file fragmented-file.new
filefrag fragmented-file.new #now compare if the fragmentation is reduced
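The same recipe with the replacement step spelled out (the path is the one from this thread; cmp is an extra safety check before overwriting the original):

```shell
f=/home/hanna/temp/bigfile.txt
filefrag "$f"
cp "$f" "$f.new"
filefrag "$f.new"                        # fewer extents here means the copy helped
cmp "$f" "$f.new" && mv "$f.new" "$f"    # mv within one filesystem replaces atomically
```

If the copy is just as fragmented, the free space itself is fragmented and copying won't help much.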
Offline
Hm, I have 150 GB free; that should be enough free space. When I copy, there are "only" >4000 extents... but I see it is now a little faster with dd: 78.5 MB/s.
I will try the tool shake, or I will copy all the files to an external HD and copy them back.
Thanks for your help
Offline