Hi,
I've set up an LVM-on-LUKS partition scheme on my laptop's SSD. I enabled TRIM at the filesystem level (in fstab), in LVM and in LUKS, and ran a test to see whether it works (create a small random file, delete it, and check whether its sectors were cleared). The result is that it doesn't work...
I also have an unencrypted boot partition on the same disk. I ran the same test on that partition, again with a negative result.
So it seems the problem is that automatic discard (via the corresponding option in fstab) is not working...
I double-checked that my SSD supports TRIM (via hdparm), with a positive result. Running fstrim manually on both partitions (/ and /boot) succeeds, so TRIM itself should work.
Any ideas why automatic TRIM via the fstab option is not working? If I can't get it to work, I'll fall back to fstrim, but how often should I run it from a cron job?
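For anyone wanting to reproduce the sector-level test, here is a rough sketch (the device path, mount point, awk field numbers and the amount of hdparm header output are all assumptions; --fibmap output varies between hdparm versions, so inspect yours first, and only run this against a throwaway file):

```shell
#!/bin/bash
# Manual TRIM check: write a random file, note its first physical LBA,
# delete it, then read that sector back raw and look for zeros.
set -u

# succeed only if the hex dump contains nothing but zeros
all_zero() { ! tr -d ' 0\n' <<<"$1" | grep -q .; }

trim_test() {   # trim_test /dev/sdX /mount/point
    local dev=$1 mnt=$2 f lba dump
    f=$mnt/trimtest.bin
    dd if=/dev/urandom of="$f" bs=1M count=1 conv=fsync
    # first LBA of the file's first extent -- the line/field numbers
    # are a guess; check `hdparm --fibmap` output on your system
    lba=$(hdparm --fibmap "$f" | awk 'NR==5 {print $2}')
    rm -f "$f"
    sync
    sleep 120                     # give the drive time to act on the discard
    # tail skips hdparm's header lines before the hex dump
    dump=$(hdparm --read-sector "$lba" "$dev" | tail -n +3)
    if all_zero "$dump"; then
        echo "sector $lba trimmed"
    else
        echo "sector $lba NOT trimmed"
    fi
}
```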
Thanks
Offline
TRIM is disabled by default on dm-crypt devices due to security considerations. See the wiki. TL;DR? As I understand it, TRIM basically wipes the random 'noise' you filled the drive with before encrypting, revealing which blocks are unused. I'm still mulling over whether or not to enable it when I set up encryption on a new SSD.
Anyway, if you have any doubts about the results of your second test (unencrypted volume), I would run it again to double check. Maybe you were expecting it not to work and that's what you saw.
But whether the Constitution really be one thing, or another, this much is certain - that it has either authorized such a government as we have had, or has been powerless to prevent it. In either case, it is unfit to exist.
-Lysander Spooner
Offline
I know dm-crypt disables TRIM by default. But you can force it to allow TRIM via a specific option (allow_discards, set with the discard flag in crypttab), which I did, and dmsetup reports it as active.
Moreover, I'm sure of the result for the unencrypted volume: I double-checked and got exactly the same result.
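For anyone else checking the same thing, note that the spelling differs per layer: in crypttab the flag is discard, and it surfaces as allow_discards in the device-mapper table (device names below are placeholders):

```shell
# /etc/crypttab -- "discard" asks dm-crypt to pass discards through
#   cryptroot  UUID=<your-uuid>  none  luks,discard

# Verify the flag actually reached device-mapper:
dmsetup table cryptroot
# when enabled, the crypt line ends with "... 1 allow_discards"
```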
Offline
I know this might seem basic, but make sure that TRIM is properly supported by your drive. I have an older SSD, and when I used the discard mount flag my system essentially ground to a halt.
I ended up using a cron job to handle it instead.
Here is a link to my issue if you are interested:
https://bbs.archlinux.org/viewtopic.php?id=162408
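In case it helps, the batch approach can be as simple as a weekly cron script (the mount points are examples for the setup described in this thread; fstrim -v just reports how much it discarded):

```shell
#!/bin/bash
# /etc/cron.weekly/fstrim -- batch-discard free space once a week
fstrim -v /
fstrim -v /boot
```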
Offline
I think the cron-job approach is the better way of handling it in general. It's what I actually use, as I like the idea of batch trimming better.
But I noticed the same thing not too long ago. I used to be able to delete a file and then check that its blocks read back as all zeros, but now this does not happen... at least not right away. So my best guess as to why this is happening is that it has something to do with the drive's (and maybe the kernel's) handling of TRIM and garbage collection in general. That is, instead of immediately "trimming" the discarded blocks, they are marked as unused and then erased in a batch. Again, this is just a guess. But on my drives at least, I noticed that I could indeed get all zeros... eventually.
Given the way wear levelling works, it shouldn't matter anyway whether the blocks are cleared right away. It is not as if those same parts of the drive are immediately going to be used again. If you think of all the blocks in the drive as being in a line, then conceptually, wear levelling works by simply continuing to write while moving down the line of blocks. It gets to the end and starts over again, theoretically evening out the writes (of course it is more complex than that in practice). The controller keeps track of the "location" of these blocks, i.e. where they would have been if the drive were laid out like a rotational disk with true physical bounds to a partition. So for this reason, it is not imperative that the blocks get truly set back to zero right away.
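The mental model above -- writes walking down a line of physical blocks while the controller remaps logical addresses, with stale copies erased later in a batch -- can be sketched as a toy script (purely illustrative; no real flash translation layer works this simply):

```shell
#!/bin/bash
# Toy flash translation layer: round-robin writes, deferred erase.
NPHYS=8
phys=()                 # physical block contents ("" = erased)
declare -A map          # logical block -> physical index
declare -A stale        # physical index -> 1 while awaiting erase
next=0                  # round-robin write pointer

write_block() {         # write_block <logical> <data>
    local lba=$1 data=$2
    # the old copy is only marked stale, not erased immediately
    if [[ -n ${map[$lba]:-} ]]; then
        stale[${map[$lba]}]=1
    fi
    map[$lba]=$next
    phys[$next]=$data
    next=$(( (next + 1) % NPHYS ))
}

batch_trim() {          # later, erase all stale blocks in one pass
    local i
    for i in "${!stale[@]}"; do
        phys[$i]=""
        unset "stale[$i]"
    done
}
```

In this model, rewriting a logical block lands on a fresh physical block and the old data lingers, still readable at the raw level, until the batch erase runs -- which would match seeing zeros only "eventually".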
Again, I want to point out that this is my understanding (and assumptions), and not by any means any kind of official word on how and why things work the way they do. It could very well be a bug, but when I came across it, it made sense in my head.
Offline
@phyberoptycs: As I said in my first post, hdparm on my disk returns the following lines:
* Data Set Management TRIM supported (limit 8 blocks)
* Deterministic read data after TRIM
So trim is supported by my disk.
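(For anyone wanting to run the same check, those lines come from the drive's identify data; the device name is an example:)

```shell
hdparm -I /dev/sda | grep -i trim
```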
@WonderWoofy: what you say makes sense. But an SSD is somewhat fragile and expensive (at least, there are some concerns about drive lifetime...), so I'd like to take a bit of care of mine ^^
If someone with some knowledge of the recent evolution of the kernel's TRIM support could confirm this (or confirm that it's actually a bug), that would be great!
I'll try deleting a file, noting the position of its blocks, and checking whether they get cleared after a while.
Offline
This is a series of articles dating from 2009 up to around 2011. Admittedly, SSD technology is well ahead of what was current when these articles were written. Honestly, though, these articles are the most comprehensive overview of SSD tech that I've read (and believe me, I search a lot for stuff about SSDs!).
First article: http://anandtech.com/show/2829
Second article: http://www.anandtech.com/show/2738/6
For the TL;DR people, or those looking only to understand the underpinnings of how SSDs store and read/write information, see this: http://www.anandtech.com/show/2738/8
Finally, for the intellectual masochists, the really geeky/tech-inclined, and those who want to understand the very technical aspects of how SSDs work and perform, please see this: https://www.usenix.org/legacy/event/use … index.html
Warning: the last article is not for the faint of heart but it is in reasonably understandable English for the layman.
Offline