
#1 2019-09-16 00:31:17

Registered: 2019-09-16
Posts: 1

Why block size needs to be 4M, in bootable USB.

Hi Community,

First, sorry for any English mistakes; it isn't my native language.

Well, I'm migrating from Ubuntu to Arch Linux, both to learn a little about how it works and to make it my primary system.
So I'm going through the whole installation process slowly, reading the wonderful documentation carefully, and trying to understand at least what each command does
and why it needs to be done.

I built the bootable USB with Arch Linux using the dd command, as described in … tion_media,
copying the ISO to the unmounted USB drive:
dd bs=4M if=path/to/archlinux.iso of=/dev/sdx status=progress oflag=sync

I read the manual for the command and understood all the flags, but I couldn't figure out why the bs (block size) needs to be 4 MiB.
Does the block size need to be 4 MiB to make the USB bootable? Is there a problem if I use a different block size?


#2 2019-09-16 01:57:01

Registered: 2014-08-02
Posts: 260

Re: Why block size needs to be 4M, in bootable USB.

Short answer: It is just there to speed up the process.

If you know the proper block sizes, you want to select either the exact one or a multiple of it. The install medium is formatted as FAT32, whose default cluster size is 4 KiB, which I assume is the relevant size here.
So basically dd tries to write batches of 4 MiB at a time, which may or may not be optimized by buffering, caching, and controller magic (if the device has a cache, the value should be smaller than its capacity, e.g. half of it). Otherwise the system has to wait before writing the next block anyway.
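You can see the effect of the block size yourself. Here is a minimal sketch that times a few bs values; it assumes GNU coreutils (dd, numfmt) and writes a 64 MiB test file to /tmp rather than a real USB device, so the absolute numbers differ, but the trend between tiny and large block sizes still shows up:

```shell
# Rough throughput comparison for a few block sizes.
total=$((64 * 1024 * 1024))   # 64 MiB of test data
for bs in 4K 64K 1M 4M; do
    count=$((total / $(numfmt --from=iec "$bs")))
    printf '%4s: ' "$bs"
    # conv=fsync flushes to disk before dd reports, so the timing is honest
    dd if=/dev/zero of=/tmp/dd_bs_test.img bs="$bs" count="$count" conv=fsync 2>&1 | tail -n 1
done
rm -f /tmp/dd_bs_test.img
```

On a real flash drive the gap between bs=4K and bs=4M is usually much larger than on a cached file write like this.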

What is more interesting, and actually bad, is the opposite: what happens if the selected block size does not align with that of your storage device? In that case the device has to work extra hard, because there is a smallest possible unit that can be written at once. I think for most flash drives it is 512 bytes or 4 KiB, but don't quote me on that. In any case, this means that once the first few bytes are written and more are added, the whole block has to be written again: read the data and cache it, erase the block (especially expensive), then write the data back. This is like pulling a brake, and you see major performance degradation (think 10-100 times slower).
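You don't have to guess those sizes: on Linux the kernel exposes what each drive reports. A small sketch (replace sdx with your device, as in the wiki's dd command; both values are in bytes):

```shell
# Smallest addressable unit (what software can ask for):
cat /sys/block/sdx/queue/logical_block_size
# What the hardware really reads/writes internally:
cat /sys/block/sdx/queue/physical_block_size
# The same via util-linux (needs root):
# blockdev --getss --getpbsz /dev/sdx
```

Picking a bs that is a multiple of the physical block size avoids the read-modify-write penalty described above.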

Overall I assume the value was picked either by trial and error or as a rule of thumb, because the optimal value is configuration-specific and, more importantly, device-specific.

What I haven't mentioned yet is the utility function of the parameter: in combination with other parameters you can extract or write exact blocks of data. A practical example is copying and restoring the MBR of a drive.

# Extract MBR (first 512 bytes) and write into file
dd if=/dev/sda of=mbr.dat bs=512 count=1
# Write it back
dd if=mbr.dat of=/dev/sda bs=512 count=1
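As a quick sanity check on the extracted copy, you can inspect the last two bytes: a valid MBR ends with the boot signature 0x55 0xAA at offsets 510-511. A sketch, assuming mbr.dat is the file written by the dd command above:

```shell
# Print bytes 510-511 of the saved sector in hex;
# a bootable disk should show: 55 aa
dd if=mbr.dat bs=1 skip=510 count=2 2>/dev/null | od -An -tx1
```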

