One host:
# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 79G 37G 39G 49% /
dev 1.9G 0 1.9G 0% /dev
run 1.9G 404K 1.9G 1% /run
/dev/sda2 79G 37G 39G 49% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 1.9G 317M 1.6G 17% /tmp
Other host:
# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 823G 1.5G 780G 1% /
dev 30G 0 30G 0% /dev
run 30G 372K 30G 1% /run
/dev/sda1 823G 1.5G 780G 1% /
tmpfs 30G 0 30G 0% /dev/shm
tmpfs 30G 0 30G 0% /sys/fs/cgroup
tmpfs 2.0G 480K 2.0G 1% /tmp
I suppose it's a systemd-related question: why are all those pseudo-filesystems sized 1.9G and 30G? Can I limit their size (to help prevent resource-exhaustion DoS attacks)?
The size of a tmpfs is usually half your RAM size by default. Those 30G seem strange. You can set the size with the size= mount option.
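The half-of-RAM default can be sanity-checked against /proc/meminfo. A minimal sketch (the helper name half_ram_kb is mine, and the sample MemTotal line stands in for a live /proc/meminfo):

```shell
# tmpfs defaults to size=50% of physical RAM; derive that figure from
# a MemTotal line as found in /proc/meminfo (values are in kB).
half_ram_kb() {
  awk '/^MemTotal:/ { printf "%d\n", $2 / 2 }'
}

# Sample line for a ~58G machine; on a real system, pipe /proc/meminfo in
# instead. The result is in the same ballpark as the 30G df reports.
echo "MemTotal:       60817408 kB" | half_ram_kb
```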
I think you need to supply more information. No one can tell you why your filesystems are of different sizes with no information about your systems.
Last edited by WonderWoofy (2012-11-21 16:21:53)
The size of a tmpfs is usually half your RAM size by default. Those 30G seem strange. You can set the size with the size= mount option.
Thanks. The 30G is OK:
# free -h
total used free shared buffers cached
Mem: 58G 700M 58G 0B 38M 179M
-/+ buffers/cache: 482M 58G
Swap: 0B 0B 0B
You can set the size with the size= mount option.
Beg your pardon: what mount option? Those filesystems are systemd-managed, no fstab entries...
I think you need to supply more information. No one can tell you why your filesystems are of different sizes with no information about your systems.
I'm not asking about filesystems. Those are systemd-managed pseudo-filesystems ;-)
Yeah, that is what I meant. In other words, you are asking about the sizes of tmpfs without telling us how much ram you have. This is an important factor as Awebb pointed out.
Beg your pardon: what mount option? Those filesystems are systemd-managed, no fstab entries...
You can still create one.
$ grep '/tmp' /etc/fstab
tmpfs /tmp tmpfs nodev,nosuid 0 0
$ systemctl status tmp.mount
tmp.mount - /tmp
Loaded: loaded (/etc/fstab; static)
Active: active (mounted) since Wed, 2012-11-21 11:24:01 GMT; 5h 36min ago
Where: /tmp
What: tmpfs
Process: 272 ExecMount=/bin/mount tmpfs /tmp -t tmpfs -o nodev,nosuid (code=exited, status=0/SUCCESS)
CGroup: name=systemd:/system/tmp.mount
Nov 21 11:24:01 sakura systemd[1]: Mounted /tmp.
Last edited by WorMzy (2012-11-21 17:04:26)
Sakura:-
Mobo: MSI MAG X570S TORPEDO MAX // Processor: AMD Ryzen 9 5950X @4.9GHz // GFX: AMD Radeon RX 5700 XT // RAM: 32GB (4x 8GB) Corsair DDR4 (@ 3000MHz) // Storage: 1x 3TB HDD, 6x 1TB SSD, 2x 120GB SSD, 1x 275GB M2 SSD
Making lemonade from lemons since 2015.
What is the recommended place for it: /etc/fstab or /usr/lib/systemd/system/tmp.mount or any other file?
Last edited by quayasil (2012-11-21 17:09:44)
Never modify anything in /usr/lib/systemd, as these changes will be overwritten when you update/downgrade/reinstall the package that provides the file you've modified.
If you want to modify the .mount files, copy them from /usr/lib/systemd/system to /etc/systemd/system, and then edit them there. The /etc copies will be used instead of the /usr equivalents.
As for *.mount files vs fstab, it all comes down to user preference.
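For instance, a local copy might look like the sketch below. This is illustrative only, not the shipped unit's contents; in particular the Options= values and the 200M figure are assumptions you would adapt:

```ini
# /etc/systemd/system/tmp.mount -- local override of the shipped unit
[Unit]
Description=Temporary Directory (/tmp)

[Mount]
What=tmpfs
Where=/tmp
Type=tmpfs
Options=mode=1777,nosuid,nodev,size=200M
```

After editing, a daemon-reload and remount (or reboot) is needed for the new size to take effect.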
Yes... and no. By editing any of those files I can set a limit on /tmp, for example to 200MB:
# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 79G 37G 39G 49% /
dev 1.9G 0 1.9G 0% /dev
run 1.9G 404K 1.9G 1% /run
/dev/sda2 79G 37G 39G 49% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 200M 317M 1.6G 17% /tmp
That's not quite all I was trying to do, though. The other tmpfses are 2GB (half of my RAM). How about, for example, /sys/fs/cgroup? I don't think setting an entry for it in /etc/fstab is a good idea.
# grep -R cgroup /usr/lib/systemd/system
gives no matches. So where is it defined? I still suspect systemd, but where?
Yes... and no. By editing any of those files I can set limit on /tmp, for example to 200MB: [...]
How about, for example: /sys/fs/cgroup? I don't think setting an entry for this in /etc/fstab is a good idea. [...] So where is it? Still I suspect the systemd but where?
AFAIK, the cgroup mounts are hard-coded into systemd, as they are required for systemd to work properly. As you can see, they don't actually use any space, so I wouldn't worry about them.
btw, systemd will mount /tmp by default, even without an fstab entry (falconindy said that the /tmp entry in the default fstab will be removed at some point since it is redundant). If you want to override the /tmp settings, use fstab. Overriding tmp.mount will also work, but it is more complex and even upstream recommends using fstab.
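So overriding via fstab just means supplying your own entry with a size= option. A sketch (the 200M cap is illustrative; pick a size that suits your workload):

```
# /etc/fstab -- cap /tmp at 200M; systemd generates tmp.mount from this
tmpfs   /tmp   tmpfs   nodev,nosuid,size=200M   0 0
```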
afaik the cgroup mounts are hardcoded into systemd as they are required for systemd to work properly. As you can see they don't actually use space, so I wouldn't worry about them.
Sounds like an Apple-style "it's kind of magic, but don't worry". I'm afraid there is a growing contradiction between systemd and the KISS principle.
How about, for example: /sys/fs/cgroup? I don't think setting an entry for this in /etc/fstab is a good idea.
I don't see why you'd think setting it in a mount unit would be better or worse than setting it in fstab. At the end of the day, they both do the same thing.
Anyway, as a proof of concept, I made an entry in my fstab for that tmpfs:
$ grep /sys/fs/cgroup /etc/fstab
tmpfs /sys/fs/cgroup tmpfs rw,nosuid,nodev,noexec,mode=755,size=200M 0 0
$ mount | grep "/sys/fs/cgroup "
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,relatime,size=204800k,mode=755)
$ /bin/df -h /sys/fs/cgroup
Filesystem Size Used Avail Use% Mounted on
tmpfs 200M 0 200M 0% /sys/fs/cgroup
As you can see, it worked fine.
I don't see why you're so worried about these tmpfs mounts in the first place. Unless you're planning on moving a 2TB file into one of them or something, you really don't need to worry about them.