If my root and home partitions are encrypted with aes-cbc-essiv:sha256, will my CPU (i7-3517U) support hardware encryption, or is it better to change it to aes-xts-plain64?
I've loaded the aesni-intel module with modprobe,
but "dd if=file of=/dev/null bs=1M count=1024" gives just 258 MB/s (aes-cbc-essiv:sha256) and 209 MB/s (aes-xts-plain)
Meanwhile I use a Samsung 840 Pro, where "hdparm -Tt /dev/sda" reports "Timing buffered disk reads: 509 MB/s".
Last edited by ZeroLinux (2013-03-29 08:37:34)
It is using the aes-ni silicon. To compare the crypto performance of the two ciphers for your setup, have a look at
cryptsetup benchmark
Judging from https://wiki.archlinux.org/index.php/SS … Partitions
your crypt performance looks good though. However, take care to use "conv=fdatasync,notrunc" with dd when writing (see the examples on the wiki page); otherwise you might just be measuring the cache.
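For example, something along the lines of the wiki examples (file name and size are just placeholders):
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
# echo 3 > /proc/sys/vm/drop_caches
# dd if=tempfile of=/dev/null bs=1M count=1024
fdatasync makes dd flush the data to the device before it reports the write speed, and dropping the caches before the read makes sure you read from the SSD rather than from RAM.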
Yes, AES-NI is working on your system. You do not need to do anything for it to work. Without it you would be seeing speeds of ~100MB/s and high CPU usage. (Also note that the hardware random number generator in your CPU is mapped to /dev/urandom, so use that as the random source whenever you create SSH keys or whatever.)
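If you want to double-check it anyway, something like this shows the CPU flag and the loaded module (just a sanity check, nothing needs to be configured):
# grep -m1 -o aes /proc/cpuinfo
# lsmod | grep aesni
If the first prints "aes" and the second lists aesni_intel, the instructions are available and the module is in place.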
This guy seemed to figure out how to get the best speeds. My report is right below his.
https://wiki.archlinux.org/index.php/SS … 30_256GB_2
I suspect that the reason you are only getting ~200MB/s instead of ~300MB/s is that either...
your low voltage CPU can only push that, or
other SSD optimizations need to be done.
fdisk, gdisk, LUKS, and cryptsetup should all align everything by default. However, ext4 settings can be changed. In my report I show how I have ext4 write in 512KB blocks. You also want to use the noatime mount option (nodiratime is not needed if you use noatime).
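Roughly, and only as an illustration (the device name and values are made up for the example, and mkfs wipes the filesystem, so this is for a fresh format): with 4KiB blocks, stride=128 and stripe-width=128 correspond to 512KB, and noatime goes into the fstab options.
# mkfs.ext4 -b 4096 -E stride=128,stripe-width=128 /dev/mapper/cryptroot
/dev/mapper/cryptroot  /  ext4  defaults,noatime  0  1
Check my report on the wiki for the values that actually fit your drive.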
Your speeds are still VERY fast compared to an HDD. That is like 4 or 6 HDDs in a RAID-10. So, do what you can and don't worry about it. 200MB/s+ is more than you will ever need on your laptop. Personally, I loaded up with 16GB of RAM and mount /tmp and /home/user/Downloads as tmpfs's with 10GB limits. Then when I need to compile software or do anything that needs really fast speeds I do it in there at multi-GB/s speeds.
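For reference, the tmpfs mounts are just fstab lines roughly like these (paths, size and uid are specific to my setup, adjust to yours):
tmpfs  /tmp                  tmpfs  defaults,noatime,size=10G                              0  0
tmpfs  /home/user/Downloads  tmpfs  defaults,noatime,size=10G,uid=1000,gid=1000,mode=0755  0  0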
It sounds right that CBC mode is faster. I am 80% sure I read that Intel AES-NI has hardware circuits to do some of the CBC calculations. Personally, I don't use CBC because AES in that mode is mostly broken (well, not quite, just close)(and I think only really if AES-CBC is being used as a stream cipher, like in WPA2); however, you should be covered with the essiv:sha256 .... also note that SHA-512 is faster than SHA-256 on x86_64 computers... in most applications. << basically it is worth researching this stuff, don't take my word for it >>
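If you do decide to re-create a volume with XTS and a different hash, the relevant cryptsetup options look roughly like this (the device name is a placeholder, and luksFormat destroys the existing data, so back up first):
# cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 --hash sha512 /dev/sdaX
Note that with XTS a 512-bit key size means AES-256, since the key is split in two.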
Last edited by hunterthomson (2013-03-30 13:23:09)
OpenBSD-current Thinkpad X230, i7-3520M, 16GB CL9 Kingston, Samsung 830 256GB
Contributor: linux-grsec
Thank you very much, hunterthomson.
I tried different settings and came to the conclusion that in my case SHA-512 is NOT faster than SHA-256 on x86_64.
Also:
cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1 482769 iterations per second
PBKDF2-sha256 258524 iterations per second
PBKDF2-sha512 168689 iterations per second
PBKDF2-ripemd160 369216 iterations per second
PBKDF2-whirlpool 189137 iterations per second
# Algorithm | Key | Encryption | Decryption
aes-cbc 128b 558.4 MiB/s 2059.0 MiB/s
serpent-cbc 128b 75.6 MiB/s 280.0 MiB/s
twofish-cbc 128b 171.1 MiB/s 329.0 MiB/s
aes-cbc 256b 419.0 MiB/s 1538.0 MiB/s
serpent-cbc 256b 76.9 MiB/s 281.0 MiB/s
twofish-cbc 256b 171.3 MiB/s 334.0 MiB/s
aes-xts 256b 1041.0 MiB/s 1041.0 MiB/s
serpent-xts 256b 259.7 MiB/s 250.0 MiB/s
twofish-xts 256b 288.0 MiB/s 291.0 MiB/s
aes-xts 512b 889.0 MiB/s 889.1 MiB/s
serpent-xts 512b 262.0 MiB/s 251.0 MiB/s
twofish-xts 512b 287.0 MiB/s 290.1 MiB/s
An interesting comparison between the ext4 and btrfs filesystems:
# df -T /dev/mapper/cryptroot
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/mapper/cryptroot ext4 30105552 7735568 20834036 28% /
# cryptsetup status /dev/mapper/cryptroot
/dev/mapper/cryptroot is active and is in use.
type: LUKS1
cipher: aes-cbc-essiv:sha256
keysize: 128 bits
device: /dev/sda4
offset: 2048 sectors
size: 61437952 sectors
mode: read/write
flags: discards
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/root/1.tar.gz of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.11265 s, 261 MB/s
and btrfs
# df -T /dev/mapper/crypthome
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/mapper/crypthome btrfs 391362560 294580 388948620 1% /home
# cryptsetup status /dev/mapper/crypthome
/dev/mapper/crypthome is active and is in use.
type: LUKS1
cipher: aes-xts-plain
keysize: 256 bits
device: /dev/sda5
offset: 8192 sectors
size: 782725120 sectors
mode: read/write
flags: discards
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/home/1.tar.gz of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.94485 s, 552 MB/s
The difference between the ciphers is not significant, but the speed depends mostly on the filesystem.
Why do I have such a big difference between the two filesystems?
# cat /etc/fstab
UUID=29e20d52-fb6d-2457-a664-568f0d9df255 / ext4 defaults,noatime,data=ordered,discard 0 1
# It was before changing FS from ext4 to btrfs
# UUID=544666f1-67e4-4beb-ae96-e109ed76e22b /home ext4 defaults,noatime,data=ordered,discard 0 2
UUID=8721d590-b928-4db9-a4a9-e96528f56ec0 /home btrfs defaults,max_inline=256 0 0
UUID=e248a667-670a-4fc0-a582-e8afcfb0b5ea /boot ext2 defaults 0 1
UUID=24B5-D44B /boot/efi vfat defaults 0 1
Last edited by ZeroLinux (2013-04-01 14:56:02)
ZeroLinux wrote:
The difference between the ciphers is not significant, but the speed depends mostly on the filesystem.
Why do I have such a big difference between the two filesystems?
That's an interesting comparison. A contributing factor may be that you have discards enabled for ext4 but apparently not for btrfs (edit: or not, I overlooked how you dd). A fun fact is also the comparison between your encrypted btrfs I/O and the raw read in your first post (benchmarks..).
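If you want to check the discard point, something like this shows which mount options each filesystem actually got (mount points taken from your fstab):
# findmnt -o TARGET,FSTYPE,OPTIONS /
# findmnt -o TARGET,FSTYPE,OPTIONS /home
A periodic fstrim -v on the mount point is an alternative to the discard mount option.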
The following comparison is a bit old, but I find it very thorough, and it comes to the same conclusion as you (last paragraph/bullet point): http://www.mayrhofer.eu.org/ssd-linux-benchmark
When you look at the result table in there, you will see that the two filesystems swap places depending on the application/what is being done.
Last edited by Strike0 (2013-04-01 15:40:50)
Is it a good idea to leave /home under btrfs?
Does it sound like a good idea to move the root (/) partition to btrfs as well?