Hi all,
I moved to an SSD for my root partition recently and have been trying to migrate /var off the SSD onto a spinning disk. I rsync'd the existing /var directory to the spinning disk partition, and then created a new, empty /var directory on the SSD's root partition to serve as the mount point. In /etc/fstab I have:
UUID=48dadb35-4c37-4cb7-9153-9e23163a7d08 /var ext4 x-initrd.mount 0 2
With this configuration I end up with a black screen where I would normally get the display manager login screen. If I boot in single-user mode and manually mount /var, then exit the shell and let the boot continue normally, everything works fine and the separate /var is mounted.
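For reference, the workaround in the single-user shell is just something like:
mount /var   # resolves the device and options from the fstab entry above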
From what I've been able to gather, the x-initrd.mount flag (I originally had defaults as well) is supposed to make the partition mount as early as possible in the boot sequence, but that doesn't seem to be happening.
What am I missing?
Last edited by merc68k (2015-01-02 02:36:44)
Offline
This is entirely unnecessary -- see this link:
http://techreport.com/review/27436/the- … -petabytes
tl;dr:
At this rate, it'll take me a thousand years to reach that total.
Modern SSDs have greater longevity than their spinning rust cousins.
Freedom for Öcalan!
Offline
I don't think that x-initrd.mount is a valid mount option (in Arch/mkinitcpio anyway). Systemd is able to handle mount dependencies and understands things like /var, /boot, /home, etc. You don't need to tell it that it is important... it knows.
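If you want to see what systemd actually derived from your fstab, something like this should show it:
systemctl list-dependencies local-fs.target   # mount units generated from fstab
systemctl cat var.mount                       # the generated unit for /var and its settings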
Though Head_on_a_Stick is right: unless you really need to free up the actual storage space, you are probably wasting your time moving /var off the SSD. In fact, you are probably just slowing your system down while gaining nothing.
Offline
merc68k wrote:
From what I've been able to gather, the x-initrd.mount flag (I originally had defaults as well) is supposed to make the partition mount as early as possible in the boot sequence, but that doesn't seem to be happening.
What am I missing?
I did the same thing as you (with the same motivation) a couple of months ago. You don't need the "x-initrd.mount" flag -- I think mount doesn't understand it and systemd reaches a timeout or something. FYI, here is my fstab:
# System partitions (on SSD)
/dev/mapper/takahe-root / ext4 defaults,ro,nodev,discard 1 1
UUID=63073278-ff55-4234-a4ee-4638f59cc250 /boot ext4 defaults,ro,noexec,nosuid,nodev,discard 1 2
/dev/mapper/takahe-home /home ext4 defaults,nosuid,nodev,noexec,discard 1 2
/dev/mapper/takahe-swap swap swap defaults,discard 0 0
# /var, containers and scratch (on 1st HDD)
/dev/mapper/takahe-var /var ext4 defaults,nosuid,nodev,noexec 1 2
/dev/mapper/takahe-virt /var/lib/lxc ext4 defaults,nodev 0 2
/dev/mapper/takahe-scratch /export/scratch ext4 defaults,nosuid,nodev 0 2
# Pseudo FS (general hardening)
tmpfs /dev/shm tmpfs noexec,nosuid,nodev 0 0
tmpfs /tmp tmpfs noexec,nosuid,nodev 0 0
tmpfs /run tmpfs noexec,nosuid,nodev 0 0
tmpfs /media tmpfs noexec,nosuid,nodev,size=128K,mode=0755 0 0
# Backup (on 2nd HDD)
/dev/sdc1 /export/backup ext4 defaults,nosuid,nodev,noexec 0 2
So, there are 3 drives: 1 SSD and 2 HDDs. The SSD houses /, /boot, /home and swap (which is never written to in practice). The 1st HDD is for /var, LXC containers and scratch space (for running codes). The 2nd HDD is for backups. You can tell the SSD and HDD entries apart by the "discard" mount flag. The only difference from your setup is that I have the SSD and 1st HDD in one LVM volume group.
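If you want to double-check which devices actually support TRIM before adding "discard", lsblk can tell you:
lsblk --discard   # non-zero DISC-GRAN/DISC-MAX values mean the device supports TRIM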
BTW, check what is in /var underneath the mount point. Even if the mount command failed, the system should proceed normally, just writing to the / filesystem (which I assume is not full). The fact that you cannot boot means that there is a bug in systemd, as it should be able to handle invalid fstab entries...
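Two quick ways to check, assuming /mnt is free as a temporary mount point:
mount --bind / /mnt             # re-mount the root FS without anything stacked on top
ls /mnt/var                     # anything here was written while /var was not mounted
umount /mnt
journalctl -b -1 -u var.mount   # what systemd logged for /var during the previous boot (needs a persistent journal)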
Head_on_a_Stick wrote:
Modern SSDs have greater longevity than their spinning rust cousins.
Yeah? What about situations where your firewall log (iptables.log) grows at 10MB/hour? And the stupid journald only amplifies the problem (it is about 5x larger than syslog-ng text files)? And I am not even talking about cronie (this is a multi-user system with lots of cronjobs)...
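For what it's worth, journald's disk usage can at least be capped in /etc/systemd/journald.conf (example values):
[Journal]
SystemMaxUse=200M       # example cap on total persistent journal size
SystemMaxFileSize=50M   # example rotation size for individual journal files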
Last edited by Leonid.I (2014-12-31 02:10:58)
Arch Linux is more than just GNU/Linux -- it's an adventure
pkill -9 systemd
Offline
10MB/hr is 240MB/day... the lowest rated SSD that I own is rated at 20GB/day for three years (not sure if that is exact... somewhere around there). I just got an Intel 730, which is rated at 70GB/day for five years. So even on a machine running 24/7, and taking your statement that the journal makes it "about 5x larger" at face value, that still doesn't come remotely close.
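Working through the numbers:
10 MB/h * 24 h = 240 MB/day of plain-text logs
240 MB/day * 5 (journald overhead) = 1.2 GB/day
1.2 GB/day is 6% of a 20 GB/day rating, and under 2% of 70 GB/day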
Honestly, if you bought an SSD, your intention is to make your machine faster. Putting all the frequent writes on an HDD works directly against that newly purchased speed advantage. Why purchase an SSD if you are not going to use it?
Also, if you read the article that Head_on_a_Stick linked to, they did a very informal test of SSD endurance. All of the drives lasted well beyond their rated endurance. So presumably the manufacturer-stated limits tend to be on the very conservative side.
Offline
Leonid.I wrote:
Yeah? What about situations where your firewall log (iptables.log) grows at 10MB/hour? And the stupid journald only amplifies the problem (it is about 5x larger than syslog-ng text files)? And I am not even talking about cronie (this is a multi-user system with lots of cronjobs)...
Freedom for Öcalan!
Offline
Thanks everyone for weighing in. I'd actually read that Tech Report article before and realized moving /var probably wasn't worth worrying about, but I figured it likely wouldn't hurt (except maybe in cases of heavy IO to that partition), and it would ease my paranoia a little at the same time.
It did annoy me that just setting up the fstab and rebooting didn't work, so even if there was no benefit I wanted to understand why.
Annoyingly, I took out x-initrd.mount today and put in just the defaults flag, and it worked right away! This is how I had it originally, so I don't know why it didn't work the first time... Oh well.
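For anyone who finds this thread later, the entry that ended up working is simply:
UUID=48dadb35-4c37-4cb7-9153-9e23163a7d08 /var ext4 defaults 0 2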
Thanks again for the input and happy new year!
Offline
Don't forget to mark your thread as [Solved] if you are satisfied with the outcome.
Happy new year to you too!
Offline