I just got a Samsung 960 Evo and the write performance quickly drops from 2 GB/s to around 380 MB/s and stays there. Read performance is fine and sustains at about 2 GB/s. I don't believe this is due to thermal throttling, since the temperatures are higher when reading a 100 GB file with dd than when writing one. The highest temperature I've seen was 148 °F (64 °C), but that was only for about 30-60 seconds while reading the file. I just purchased a few RAM heatsinks; when they arrive I'll stick them on and see if anything improves.
At first I was using ZFS on it, since I'll be using this drive in FreeNAS 10, and I thought that maybe the copy-on-write (COW) behavior of ZFS was causing the slow write performance. So I destroyed everything and formatted it with ext4, but I got the same performance, so COW isn't the issue. I read in a few places that mounting with nobarrier slightly increases performance, but for me it actually reduced throughput by about 60 MB/s.
It's attached to an M.2 slot on my SuperMicro X10SDV-F.
write performance
[root@server ~]# dd if=/dev/zero of=/mnt/nvme/test.img bs=2M count=50k status=progress
107239964672 bytes (107 GB, 100 GiB) copied, 278.004 s, 386 MB/s
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 278.168 s, 386 MB/s
With nobarrier
[root@server ~]# dd if=/dev/zero of=/mnt/nvme/test.img bs=2M count=50k status=progress
106705190912 bytes (107 GB, 99 GiB) copied, 329.001 s, 324 MB/s
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 329.448 s, 326 MB/s
barrier re-enabled
[root@server nvme]# dd if=/dev/zero of=test.img bs=2M count=5k status=progress
10292822016 bytes (10 GB, 9.6 GiB) copied, 6.00024 s, 1.7 GB/s
5120+0 records in
5120+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 6.37312 s, 1.7 GB/s
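One caveat about these dd numbers (an aside, not a claim from the thread): without `oflag=direct` or `conv=fdatasync`, the first few GB of a write land in the page cache, so a short run like the 10 GiB test above can report inflated speeds. Two fairer variants, using the same path and sizes as above:

```shell
# oflag=direct bypasses the page cache entirely
dd if=/dev/zero of=/mnt/nvme/test.img bs=2M count=5k oflag=direct status=progress

# conv=fdatasync forces a flush to disk before dd reports its final rate
dd if=/dev/zero of=/mnt/nvme/test.img bs=2M count=5k conv=fdatasync status=progress
```

For a 100 GiB run the difference is small (the cache is a few GB at most), but for the 10 GiB run it can be significant.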
read performance
[root@server ~]# dd if=/mnt/nvme/test.img of=/dev/null bs=2M status=progress
105939730432 bytes (106 GB, 99 GiB) copied, 55.0003 s, 1.9 GB/s
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 55.7034 s, 1.9 GB/s
[root@server ~]# gdisk -l /dev/nvme0n1
GPT fdisk (gdisk) version 1.0.1
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/nvme0n1: 488397168 sectors, 232.9 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 342769B0-FB96-461D-8C27-EF7CEFF432C7
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 488397134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 488397134 232.9 GiB 8300 Linux filesystem
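As a quick sanity check on the gdisk output above (start sector 2048, 512-byte logical sectors), the partition's 4K alignment can be verified with shell arithmetic:

```shell
# Values taken from the gdisk listing above
start=2048; lss=512
# A partition is 4K-aligned when its byte offset is a multiple of 4096
if [ $(( start * lss % 4096 )) -eq 0 ]; then
  echo "partition 1 is 4K-aligned"
fi
```

So misalignment isn't a suspect here.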
iozone run
Run began: Fri Jan 13 19:52:41 2017
Include fsync in write timing
O_DIRECT feature enabled
Auto Mode
File size set to 10240 kB
Record Size 4 kB
Command line used: iozone -e -I -a -s 10M -r 4k -i 0 -i 1 -i 2
Output is in kBytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 kBytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
                                                     random   random
       kB  reclen    write  rewrite     read   reread   read    write
    10240       4   121585   159065   201621   210609  63754   158516
iozone test complete.
Anyone have any idea what the issue is?
Last edited by brando56894 (2017-01-14 01:23:49)
Perhaps it's something about TRIM? These low write speeds might be how the drive behaves when it's completely "full" without unused cells available internally? What are you doing with regards to TRIM?
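For reference, one way to check this (device and mount point taken from earlier in the thread; needs root):

```shell
# Non-zero DISC-GRAN / DISC-MAX columns mean the device advertises discard (TRIM)
lsblk -D /dev/nvme0n1

# Trim all unused blocks on the filesystem and report how much was trimmed
fstrim -v /mnt/nvme
```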
I doubt that's it, because I manually ran fstrim on it before the last two write tests. With small writes it's quick: my /var partition was full on my SSD, so I copied it over to the 960 and it moved 56 GB in a few minutes. It seems like it's only slow when writing files in excess of about 35 GB... which will almost never happen on it anyway, haha. It's going to be used for VMs (zvols [block devices] in FreeNAS) and Docker configs.
Last edited by brando56894 (2017-01-14 05:24:08)
Isn't this the standard performance for the Samsung Evo and other drives that use TLC flash? They usually have a fast SLC cache in front of the slow TLC flash, and when you fill it the speed drops to the TLC flash speed. It's more noticeable on smaller-capacity (128-256 GB) drives: they have a smaller cache, and with fewer flash chips they don't get the performance boost from parallelization.
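That theory is easy to probe (a sketch only; the chunk size is a hypothetical choice, the path is the mount point from earlier posts): write the file in fixed-size chunks with `oflag=direct` and watch the per-chunk rate. If an SLC cache is the cause, the first chunks report GB/s and the later ones fall to TLC speed.

```shell
# Write 10 x 5 GiB chunks into the same file and print each chunk's rate;
# a speed cliff partway through points at SLC-cache exhaustion
for i in $(seq 0 9); do
  dd if=/dev/zero of=/mnt/nvme/test.img bs=1M count=5120 \
     seek=$((i * 5120)) conv=notrunc oflag=direct 2>&1 | tail -n 1
done
```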
Last edited by kevku (2017-01-14 11:01:40)
Use iozone, not dd.
@graysky If you look at the bottom of my first post, I did: in fact, I copied the command string exactly from your thread about your 950. I just never really understood how to read iozone's results, or which tests were worth running.
Isn't this the standard performance for the Samsung Evo and other drives that use TLC flash? They usually have a fast SLC cache in front of the slow TLC flash, and when you fill it the speed drops to the TLC flash speed. It's more noticeable on smaller-capacity (128-256 GB) drives: they have a smaller cache, and with fewer flash chips they don't get the performance boost from parallelization.
I don't know, as this is my first M.2/NVMe drive, but that does seem to be the case. I've been reading about NVMe for a while, and it always slightly confused me how the 1 TB (and now 2 TB) drives were usually faster than the 256 GB and 512 GB ones. Up until yesterday I didn't know (or had forgotten) that the Pros use different flash chips; I thought it was just the usual extended warranty.
Last edited by brando56894 (2017-01-14 17:14:45)
I get low write performance on my 950 Pro 512 GB (NVMe) as well. I'm using ext4 and I get around 900-950 MB/s (the drive is rated for 1.5 GB/s), while my reads are a nice 2.4-2.5 GB/s (which is what the drive is rated for). I tested the drive under Windows 10 shortly after I built the computer, before I fully went back to Linux. Under the native Windows NVMe driver my write speeds were about 800 MB/s-1.2 GB/s and my reads were 1.7-2 GB/s; with the Samsung driver they were 1.5 GB/s / 2.5 GB/s. So it just might be that the native Linux NVMe driver isn't as optimized yet for write speeds on these drives. NVMe drives are still newish.
Last edited by orlfman (2017-01-15 04:48:06)
Thanks for the info! I was tempted to try it in my desktop which also has an M.2 slot and runs Windows 10, but I didn't feel like ripping two PCs apart haha.
I'm also experiencing mixed performance with a 960 Evo on Linux. I seem to get good performance on Windows 10 (about the rated 3800 MB/s reads), but almost half that on Linux (about 2200 MB/s reads). See the link for a screenshot of my iozone benchmark on Windows and Linux.
The 960 Evo 250 GB has an SLC cache of 13 GB. The specified write speed is just the speed of the cache; once the SSD runs out of cache, writes drop to the drive's sustained speed of ~300 MB/s.
See https://topnewreview.com/samsung-960-evo-250gb/ for more information:
In terms of the official specs, the peak sequential numbers look solid at first glance. Samsung says it’s good for 3.2GB/s reads and 1.5GB/s writes. The latter is a fair distance off the 2.1GB/s of the smallest 960 Pro, for instance, but it’s still a big old number.
Look closely at the spec sheet, however, and you’ll note that write performance is enabled by the SLC write cache, a block of flash memory running in single-level mode and acting as a write buffer. For this Samsung 960 Evo 250GB drive, it measures 13GB.
If you exhaust the cache, performance drops off dramatically. Samsung says sustained performance once the cache is filled comes in at just 300MB/s. Yikes.
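Those numbers line up with the dd runs earlier in the thread. A back-of-the-envelope model (assuming, per the review, a 13 GB cache at 1.5 GB/s followed by 0.3 GB/s sustained) predicts the blended rate for a 100 GB write:

```shell
# 13 GB at the SLC-cache rate, then the remaining 87 GB at the TLC rate
awk 'BEGIN {
  total = 100; cache = 13; fast = 1.5; slow = 0.3   # GB and GB/s
  secs = cache / fast + (total - cache) / slow
  printf "time: %.0f s, average: %.0f MB/s\n", secs, total * 1000 / secs
}'
# → time: 299 s, average: 335 MB/s
```

~335 MB/s average is in the same ballpark as the ~386 MB/s the 100 GiB dd run reported, with the gap plausibly explained by the cache partially recovering during the write.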
Last edited by jkhsjdhjs (2019-09-18 17:50:58)
Closing this very old topic.