I am preparing my HDDs for a RAID, using dd to write random data to the disks. The power blinked off during this process. I would like to resume where I left off writing to avoid starting completely over. Here is roughly where I was before the power stopped:
cryptsetup open --type plain /dev/sda1 sda --key-file /dev/random
dd if=/dev/zero of=/dev/mapper/sda status=progress
623302675968 bytes (623 GB) copied, 51509.329075 s, 12.1 MB/s
I found the seek option of dd in the man page that should do what I want.
seek=N skip N obs-sized blocks at start of output
...
obs=BYTES write BYTES bytes at a time (default: 512)
So, using the numbers from my output, do I just divide the bytes written by 512 to get the seek value?
623302675968/512=1217388039
Would this be the command to resume?
dd if=/dev/zero of=/dev/mapper/sda status=progress seek=1217388039
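(For what it's worth, the division can be sanity-checked in a shell. A sketch, using the byte count reported above and dd's default 512-byte output block size:)

```shell
# Sanity-check the seek arithmetic for the default 512-byte output block size.
bytes=623302675968   # byte count dd reported before the power cut
obs=512              # dd's default output block size
seek=$((bytes / obs))
echo "$seek"                    # prints 1217388039
echo $((seek * obs == bytes))   # prints 1: the count is an exact multiple,
                                # so no partial block was left behind
```

Resuming one block earlier (seek=$((seek - 1))) would also be safe; it just rewrites the last block.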
Last edited by maggie (2015-12-11 16:40:26)
bs=1M seek=623000
actually that's a bit off... 594000 perhaps, or bs=1MB
Your math isn't wrong, but dd's default block size of 512 bytes is just horrible performance-wise. You should always use bs=1M unless you have a good reason not to.
Last edited by frostschutz (2015-12-11 16:53:05)
Well, I am following the wiki, which warns against it.
Last edited by maggie (2015-12-11 17:00:21)
the wiki is wrong
the wiki is wrong
Well if I use the bs=1M do I just divide the bytes written by 1000 to get the seek value?
623302675968/1000 = 623302676
Well if I use the bs=1M do I just divide the bytes written by 1000 to get the seek value?
623302675968/1000 = 623302676
You're off by 3 logs.
N and BYTES may be followed by the following multiplicative suffixes: c =1, w =2, b =512, kB =1000, K =1024, MB =1000*1000, M =1024*1024, xM =M, GB =1000*1000*1000, G =1024*1024*1024, and so on for T, P, E, Z, Y.
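A sketch of what "off by 3 logs" means with these suffixes (M is the binary unit, MB the decimal one; the byte count is the one from the interrupted run):

```shell
# M is binary (1024*1024 bytes); MB is decimal (1000*1000 bytes).
echo $((1024 * 1024))              # M  -> 1048576
echo $((1000 * 1000))              # MB -> 1000000
# Dividing the byte count by a bare 1000 gives neither unit:
echo $((623302675968 / 1000))      # 623302675
echo $((623302675968 / 1048576))   # 594427, the actual 1 MiB block count
echo $((623302675 / 594427))       # ~1048, i.e. roughly 10^3 too big
```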
Last edited by graysky (2015-12-11 18:32:29)
CPU-optimized Linux-ck packages @ Repo-ck • AUR packages • Zsh and other configs
the wiki is wrong
I don't think the note on the wiki is wrong in this context, since the goal is to cover the whole disk or partition with encrypted random gibberish. If users select a 1M block size, which is 2048x bigger than the default, couldn't the last bit of the partition remain unwritten when the partition size is not perfectly divisible by the 1048576-byte chunk size we're forcing?
EDIT: Perhaps a 2 step procedure can be used:
1) Use dd with 1M bs for speed knowing that the last tiny bit of the device/partition will be untouched.
2) Use dd with the default bs of 512 and the correct seek value to hit that last little bit.
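A sketch of that two-step idea against a throwaway file rather than a real device (the mktemp scratch file and its size are made up for illustration; the size is deliberately not a 1 MiB multiple):

```shell
# Two-step wipe sketch on a scratch file standing in for the device.
img=$(mktemp)
truncate -s $((3 * 1048576 + 3 * 512)) "$img"   # 3 MiB + 1536 bytes
size=$(stat -c %s "$img")
full=$((size / 1048576))                        # whole 1 MiB blocks: 3

# Step 1: the bulk of the target in 1 MiB blocks.
dd if=/dev/zero of="$img" bs=1M count="$full" conv=notrunc status=none

# Step 2: the leftover tail in 512-byte blocks, seeking past step 1
# (1 MiB = 2048 blocks of 512 bytes).
dd if=/dev/zero of="$img" bs=512 seek=$((full * 2048)) \
   count=$(((size % 1048576) / 512)) conv=notrunc status=none

stat -c %s "$img"   # prints 3147264: the seek arithmetic covered the tail exactly
rm -f "$img"
```

conv=notrunc matters here: without it, step 1 would truncate the file and destroy the illustration.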
the last bit of the partition could remain unwritten
nope
# truncate -s $((512*654321)) foobar.img
# cryptsetup open --type=plain --cipher=aes-xts-plain64 foobar.img cryptfoobar
# dd bs=4G iflag=fullblock if=/dev/zero of=/dev/mapper/cryptfoobar
dd: error writing ‘/dev/mapper/cryptfoobar’: No space left on device
1+0 records in
0+0 records out
335012352 bytes (335 MB) copied, 1.31574 s, 255 MB/s
# blockdev --getsize64 /dev/mapper/cryptfoobar
335012352
dd is perfectly happy with an incomplete last block; it writes until the device is full.
It should work for odd byte counts (not a multiple of 512) too, but the block layer doesn't deal in such odd sizes, so it doesn't come up.
if I use the bs=1M do I just divide the bytes written by 1000
bytes/1024/1024 = M
bytes/kilobytes/megabytes
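In other words, a quick sketch with the numbers from this thread (the resume command is left commented out, since /dev/mapper/sda is the original poster's device):

```shell
# Seek value for resuming with bs=1M (1 MiB = 1024*1024 = 1048576 bytes).
bytes=623302675968              # progress figure from the interrupted run
seek=$((bytes / 1024 / 1024))   # floor division
echo "$seek"                    # prints 594427
# Rounding down means the last partial MiB gets written again, which is
# harmless; the resume command would then look like:
# dd if=/dev/zero of=/dev/mapper/sda status=progress bs=1M seek=$seek
```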
Last edited by frostschutz (2015-12-11 18:19:53)
I don't understand the code output with truncate and blockdev. If it proves that dd will write partial blocks, then the wiki should be updated, because on my hardware the bs=1M option takes the write speed up to 90 MB/s, while with no option it is around 12 MB/s.
It seems that bs=1M should be a safe choice:
sudo fdisk -l /dev/mapper/container
Disk /dev/mapper/container: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
sudo dd if=/dev/zero of=/dev/mapper/container status=progress bs=1M
2000371580928 bytes (2.0 TB) copied, 17199.345220 s, 116 MB/s
dd: error writing '/dev/mapper/container': No space left on device
1907730+0 records in
1907729+0 records out
2000398934016 bytes (2.0 TB) copied, 17214.7 s, 116 MB/s
Anyway, it is strange to me that
2000371580928 bytes (2.0 TB) copied
is different from
2000398934016 bytes (2.0 TB) copied
I don't know why.