I'm hoping to bring together a hodgepodge of old parts lying about into a sweet smoking Arch machine.
I've got everything I need, except what I thought would be the easiest part of the whole box... a usable hard disk.
After finding two reasonably new disks, I threw them into the setup I had at the time, and everything was peachy-keen.
Turns out, both disks flop with I/O errors after about 36 hours of uptime; I think they have bad sectors... maybe I dreamt this, but I thought
hard-disk firmware hides bad sectors from the OS by remapping them... maybe not.
Anyway, I want to zero both disks, forcing the sectors to be remapped or quarantined off or whatever the manufacturers have them do.
So I booted the box with my trusty Arch install disk, brought up the live CD, and am sitting here at the root prompt
trying to run
dd of=/dev/sda if=/dev/null bs=512
the above returns the stats of execution:
0+0 records in
0+0 records out
0 bytes (0 B) copied, etc.
Is the above command not attempting to do what I think it is?
plz halp! Thanks for reading.
/dev/null returns nothing but EOF on read. Use /dev/zero for this.
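You can see the difference without touching a real disk; a quick sketch writing to temp files instead of /dev/sda:

```shell
# /dev/null returns EOF immediately, so dd reads zero records and writes nothing:
dd if=/dev/null of=/tmp/null_test bs=512 count=4 2>/dev/null
wc -c < /tmp/null_test    # 0 bytes written

# /dev/zero is an endless stream of zero bytes, so dd copies until count is hit:
dd if=/dev/zero of=/tmp/zero_test bs=512 count=4 2>/dev/null
wc -c < /tmp/zero_test    # 2048 bytes (4 * 512)
```

Swap of= to the real device (and drop count=) when you actually want to wipe it.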
In any case, by the time you see I/O errors, you're probably hosed anyway. Modern drives DO remap bad blocks for you, letting you see such I/O errors only when all the spares are already gone and the disk is dying.
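If you want to see how far gone the drive is, smartmontools can report the remap counter (assuming smartctl is installed; the exact attribute name varies by vendor). The sample output line below is made up for illustration:

```shell
# On a real drive (needs root and the smartmontools package):
#   smartctl -A /dev/sda | grep -i reallocated
# The raw count is the last field; parsing a sample attribute line with awk:
echo " 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 12" \
    | awk '{print $NF}'    # prints the raw remapped-sector count: 12
```

A nonzero and growing raw value there means the drive is burning through its spare sectors.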
Thanks for the reply and the information; the change made everything work.
Totally bogus that a 500 GB Western Digital hard drive made in 2009 would have so many bad blocks...
What usually produces a bad block, a hard kill of a system?
Hard shutdowns usually only cause data errors (files not yet written to disk). Bad blocks are usually a result of physical abuse (sudden motion, shock, etc...).
Steven [ web : git ]
GPG: 327B 20CE 21EA 68CF A7748675 7C92 3221 5899 410C
Do not email: honeypot@stebalien.com