Hi all,
Apologies if this is in the wrong section; it just seemed the most appropriate for my hunch. I'm an Arch newbie after years of CentOS/RHEL/Debian-based distros, and I'm having an odd problem trying to get TRIM to work on my SSD running Arch. I have 2 x OCZ Vertex 2E 60GB SSDs: one running Ubuntu 11.04 and one running the latest Arch x64 build. My Arch install doesn't appear to TRIM, yet my Ubuntu one does.
I have the EXACT same drive model running Ubuntu 11.04 and TRIM is working fine there. The /etc/fstab for my Ubuntu install is:
# / was on /dev/sdf1 during installation
UUID=[removed] / ext4 discard,noatime,barrier=0,nobh,commit=100,nouse$
# swap was on /dev/sdf5 during installation
UUID=[removed] none swap sw 0 0
And when I run the TRIM test found here (http://andyduffell.com/techblog/?p=852) it returns the desired result (all 0's after deleting a file and re-checking the sectors).
Now when I run that test on my new SSD running Arch, it doesn't clear the sectors. I have tried everything to get it to work, including moving the SSDs between SATA ports to rule out BIOS issues (Ubuntu works fine in both ports; Arch fails in both).
I downloaded the OCZ firmware upgrader tool and upgraded both drives to 1.33; again, Ubuntu works and Arch doesn't.
I added "discard" to my /etc/fstab for Arch along with noatime etc. as per the guide. Anyone got any idea why it's not working? Here is the /etc/fstab for my Arch:
UUID=[removed] /boot ext2 defaults 0 1
UUID=[removed] / ext4 discard,noatime,defaults 0 1
UUID=[removed] /home ext4 discard,noatime,defaults 0 1
UUID=[removed] swap swap defaults 0 0
From what I can remember without booting into it, my Ubuntu install is on 2.6.38.10, while my Arch shows "3.0" from uname -r.
The only thing I can come up with is that this is possibly a kernel regression or something similar? Has anyone had any experience of this?
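A first sanity check worth running here (my addition, not something already covered in the thread): confirm the drive even advertises TRIM to the kernel. A minimal sketch, assuming hdparm is installed and /dev/sdX stands in for the SSD; the exact wording of hdparm's output varies by version, so the helper matches loosely:

```shell
# Small helper: succeed if "hdparm -I" output says the drive supports TRIM.
# hdparm typically prints a "Data Set Management TRIM supported" line.
has_trim() {
  grep -qi "TRIM supported"
}

# Usage (replace /dev/sdX with the SSD's device node):
#   sudo hdparm -I /dev/sdX | has_trim && echo "drive advertises TRIM"
```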
Best,
Sam
Last edited by vcolonel (2011-08-22 08:23:41)
Personal Blog
Registered Linux User #542318
I would move "defaults" to the beginning of the options so as not to override the previous ones.
I tried the test on my SSD but it doesn't work. hdparm shows what must be the wrong sector, because it's obviously not filled with random numbers... It's probably not working because I'm using LVM.
Edit: I tried on /boot which is not on LVM and hdparm shows what looks like random data, but again, deleting the file doesn't fill the area with zeroes. My SSD is a 64 GB OCZ Onyx, BTW.
Last edited by stqn (2011-08-22 15:56:59)
Thanks for looking into it, stqn. I may have got it totally wrong, but I tried downloading and booting a different kernel to see if that was the issue. As I said, I'm new to Arch, but what I did was go to the AUR, get the Kernel2.6-PF build, download the tar.gz, run "makepkg" and then "pacman -U ker....". Then I modified the menu.lst in /boot/grub.
I booted the new kernel and, from the CLI (Nvidia / GDM / X didn't work on the new kernel, unsurprisingly), ran the aforementioned test and got the same negative result (no 000's after removing the file). I wonder why this could be happening? I'm thoroughly stumped.
I would move "defaults" to the beginning of the options so as not to override the previous ones.
vcolonel, have you not tried this?
There are only two ways to live your life: One is as though nothing is a miracle. The other is as though everything is a miracle. - Albert Einstein
How wonderful it is that nobody need wait a single moment before starting to improve the world. - Anne Frank
Sorry, yes I have, and it didn't make a difference. Out of interest, I mounted my Ubuntu SSD at /mnt/old with the /etc/fstab entry:
/dev/sdg1 /mnt/old ext4 discard,noatime 0 0
And ran the test, and it worked (all 0's), so I'm now looking at the SSD again wondering what on earth is happening with it. I changed the /etc/fstab entries for "/" and "/home" from 0 1 to 0 0 and tried again; no luck. I also tried (just for fun) removing the UUIDs and mounting the "/" and "/home" partitions by /dev/sd* instead, again no joy.
How would one go about checking that LVM isn't in use on this SSD? (Just thinking of anything now.)
Just did a "vgs" from "lvm>" and I get:
0 disks
11 partitions
0 LVM physical volume whole disks
0 LVM physical volumes
So it doesn't look like an LVM issue either. So the conundrum is: why, with 2 identical SSDs with identical firmware, in identical SATA port configurations, on (presumably) identical ext4 file systems, with identical /etc/fstab mount configurations, would one fail the TRIM test and one pass easily?
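One more check worth doing at this point (my suggestion, not something the thread has verified yet): confirm that discard is actually among the live mount options, since fstab only records what was requested. On Linux, /proc/mounts shows the options in effect; a sketch, assuming a single mount line is piped in:

```shell
# Succeed if the given /proc/mounts line carries the discard option.
# Expects exactly one mount line on stdin; the 4th field is the option list.
mount_has_discard() {
  awk '{ n = split($4, opts, ","); for (i = 1; i <= n; i++) if (opts[i] == "discard") exit 0; exit 1 }'
}

# Usage: check the root filesystem's live mount options:
#   grep ' / ' /proc/mounts | mount_has_discard && echo "discard active on /"
```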
The same problem is discussed in this thread: https://bbs.archlinux.org/viewtopic.php … 02#p978702
Edit: And here: https://bbs.archlinux.org/viewtopic.php … 84#p924784 but this one is less informative.
Last edited by stqn (2011-08-23 15:18:59)
The same problem is discussed in this thread: https://bbs.archlinux.org/viewtopic.php … 02#p978702
Edit: And here: https://bbs.archlinux.org/viewtopic.php … 84#p924784 but this one is less informative.
That's interesting; it may explain why it would show 0's on Ubuntu and not Arch, for example. But running the same test in Arch, on one drive it works and on the other it doesn't.
"hdparm -I /dev/sd..." for both drives is pretty much identical as well. Very odd.
Turns out trimming works on my SSD. The test was flawed.
I'm not an SSD expert, but I believe that erase blocks may be as big as 1MiB (and as small as 128KiB?), and that when you delete a file, the SSD won't TRIM a block if there is another file on it.
Anyway the following test did return sectors filled with zeroes on my Onyx SSD:
# test on /boot because it's not on top of LVM, and nothing is writing to it
cd /boot
# write 3 times the biggest possible erase block size
sudo dd if=/dev/urandom of=tmpfile bs=1M count=3 && sync
# note beginning and end sectors of file
sudo hdparm --fibmap tmpfile
# erase and wait a bit
sudo rm tmpfile && sync && sleep 120
# check if the middle of tmpfile is filled with zeroes or not...
sudo hdparm --read-sector <address halfway between beginning and end of tmpfile> /dev/sda
Last edited by stqn (2011-08-23 15:56:37)
Just to make sure: you mounted an SSD with 'discard,noatime' and saw the expected behaviour. Did you change the other SSD's fstab to the same settings, minus "defaults" (which was most likely overriding the previously stated options), and get different results?
UUID=[removed] /boot ext2 defaults 0 1
UUID=[removed] / ext4 discard,noatime 0 1
UUID=[removed] /home ext4 discard,noatime 0 1
UUID=[removed] swap swap defaults 0 0
Basically, we have SSD "UbuntuX" and SSD "ArchX".
They are both now mounted the same in /etc/fstab on both distros (ext4 discard,noatime 0 0).
From either distro, when I run the test I get the expected result for UbuntuX (all 0's after removal of the file), and the unexpected one for ArchX (random data remains after removal of the file). I have tried with different kernels on Arch, but this was pretty much rendered null and void when I mounted UbuntuX under Arch as "/dev/sdg1 /mnt/old ext4 discard,noatime 0 0", ran the test from WITHIN Arch, and got all 0's on UbuntuX.
I then did the same in reverse: added "/dev/sde4 /mnt/old ext4 discard,noatime 0 0" to my Ubuntu /etc/fstab, ran the test against ArchX from WITHIN Ubuntu, and got random data after deletion. I am thoroughly stumped. The only things I can think of now are either a knackered SSD, or that the ext4 FS on Arch is somehow different from my ext4 on Ubuntu (daft as it sounds).
Next step is to dd the Arch image from ArchX onto UbuntuX and test again. Other than that, I'm baffled.
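Given the suspicion that the two ext4 file systems differ, one way to compare them (my suggestion; tune2fs ships with e2fsprogs, and "Filesystem features:" is the label its -l output uses) is to diff the superblock feature lists of the two root partitions:

```shell
# Print an ext4 partition's feature list, one feature per line and sorted,
# so the two drives can be compared with a plain diff.
ext4_features() {
  # expects "tune2fs -l" output on stdin; keeps only the features line
  sed -n 's/^Filesystem features:[[:space:]]*//p' | tr ' ' '\n' | sort
}

# Usage (replace the devices with the Arch and Ubuntu root partitions):
#   diff <(sudo tune2fs -l /dev/sdf3 | ext4_features) \
#        <(sudo tune2fs -l /dev/sdg1 | ext4_features)
```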
OK chaps, Update.
I dd'd the Arch image off the SSD and installed a vanilla Ubuntu 11.04 onto it. I then edited the fstab so it was:
/dev/sda1 / ext4 discard,noatime 0 1
After a reboot, I ran the TRIM test and it worked, returning all 0's. So the issue is now definitely with the Arch install, and more specifically with the ext4 file system as set up by the Arch install. Are there any logs I can look at to get more information on this?
Bit more info from /var/log/kernel.log and /var/log/messages.log:
Aug 24 16:52:11 localhost kernel: [ 4.508520] EXT4-fs (sdf3): INFO: recovery required on readonly filesystem
Aug 24 16:52:11 localhost kernel: [ 4.508523] EXT4-fs (sdf3): write access will be enabled during recovery
Aug 24 16:52:11 localhost kernel: [ 4.513568] EXT4-fs (sdf3): recovery complete
Aug 24 16:52:11 localhost kernel: [ 4.513731] EXT4-fs (sdf3): mounted filesystem with ordered data mode. Opts: (null)
Aug 24 16:52:11 localhost kernel: [ 6.063363] EXT4-fs (sdf3): re-mounted. Opts: discard
Aug 24 16:52:11 localhost kernel: [ 6.109409] EXT4-fs (sdf4): mounted filesystem with ordered data mode. Opts: discard
Aug 24 16:52:15 localhost kernel: [ 12.044804] EXT4-fs (sdf3): re-mounted. Opts: discard,commit=0
Aug 24 16:52:15 localhost kernel: [ 12.061707] EXT4-fs (sdf4): re-mounted. Opts: discard,commit=0
Aug 24 16:52:15 localhost kernel: [ 12.071259] EXT4-fs (sdd1): re-mounted. Opts: commit=0
Aug 24 16:52:15 localhost kernel: [ 12.073541] EXT4-fs (sdg1): re-mounted. Opts: discard,commit=0
/dev/sdf is Arch, /dev/sdg is Ubuntu.
Still no joy with TRIM, and my speeds have dropped from 250MB/s to 150MB/s as well:
[sam@archbox ~]$ sudo hdparm -t /dev/sdd
/dev/sdd:
Timing buffered disk reads: 468 MB in 3.01 seconds = 155.38 MB/sec
Does it matter if /boot is ext2? That's the only thing I can think of that might be stopping this from working.
You could also try wiper.sh (included with hdparm.)
I had the same problem with my Agility 3; the only solution I found was to tar my / to a safe location and recreate the partition with GParted.
The only difference is that the partition was previously a logical ext4 partition and is now a primary ext4 partition. Afterwards I updated the partition's UUID in fstab and grub.cfg, and now TRIM is working fine.
ArchLinux amd64 , kernel 3.2.9, on OCZ Agility 3
Does TRIM require a GPT map? I don't have an SSD, but I set one up for somebody before Christmas and I thought that was why I used GPT there. (Whereas I use GPT here out of sheer bloody-mindedness on a regular HDD.)
Ubuntu probably defaults to GPT if you let it auto-prepare the drive. The regular Arch install uses MBR.
A logical partition must be MBR. But if you just recreated the partition as primary rather than recreating the map, GPT can't be required?
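For anyone wanting to check which partition table a disk actually carries, a quick sketch (my addition; recent blkid can report a PTTYPE value, which is "gpt" for GPT and "dos" for MBR):

```shell
# Map blkid's PTTYPE value to a human-readable partition table name.
pt_name() {
  case "$1" in
    gpt) echo "GPT" ;;
    dos) echo "MBR" ;;
    *)   echo "unknown ($1)" ;;
  esac
}

# Usage (replace /dev/sdX with the whole disk, not a partition):
#   pt_name "$(sudo blkid -o value -s PTTYPE /dev/sdX)"
```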
CLI Paste | How To Ask Questions
Arch Linux | x86_64 | GPT | EFI boot | refind | stub loader | systemd | LVM2 on LUKS
Lenovo x270 | Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz | Intel Wireless 8265/8275 | US keyboard w/ Euro | 512G NVMe INTEL SSDPEKKF512G7L
No, GPT isn't required.