So the issue is that I started doing a bunch of Python machine learning tutorials and the libraries are filling up my root partition. Python alone is taking up 3.1 GB and CUDA wants another 4 GB. Then there's pytorch, keras, tensorflow, etc. It all adds up.
I don't want to mess with re-partitioning to make the root partition bigger, because I don't have a backup drive big enough to back up my data.
I would like to just install these libraries on the other, terabyte-sized partition on the same SSD.
Is there any way to do that, or do I really need to re-partition everything?
Last edited by Wupwup (2022-09-11 19:53:40)
Offline
Not trivially. If you absolutely must do that, use bind mounts to redirect certain large paths to the secondary partition. See https://wiki.archlinux.org/title/EFI_sy … bind_mount for a small primer on how to do that; read the man page if you need more information.
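For example, assuming the big partition were already mounted at, say, /data and the directory had already been moved onto it, a bind mount looks roughly like this (hypothetical paths, just to illustrate the idea):
mount --bind /data/opt/cuda /opt/cuda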
Offline
Offline
As a warning https://freedesktop.org/wiki/Software/s … is-broken/
----------
And just for the record, because it still pisses me off:
systemd itself is actually mostly fine with /usr on a separate file system
…
A popular way to do this is for example via udev rules. The binaries called from these rules are sometimes located on /usr/bin, or link against libraries in /usr/lib, or use data files from /usr/share.
…
It isn't systemd's fault. systemd mostly works fine with /usr on a separate file system that is not pre-mounted at boot.
…
Don't blame us
https://en.wikipedia.org/wiki/Udev#History
In April 2012, udev's codebase was merged into the systemd source tree, making systemd 183 the first version to include udev.[5][10][11] In October 2012, Linus Torvalds criticized Kay Sievers's approach to udev maintenance and bug fixing related to firmware loading, stating:[12]
Yes, doing it in the kernel is "more robust". But don't play games, and stop the lying. It's more robust because we have maintainers that care, and because we know that regressions are not something we can play fast and loose with. If something breaks, and we don't know what the right fix for that breakage is, we revert the thing that broke. So yes, we're clearly better off doing it in the kernel. Not because firmware loading cannot be done in user space. But simply because udev maintenance since Greg gave it up has gone downhill.
Online
Not trivially. If you absolutely must do that, use bind mounts to redirect certain large paths to the secondary partition. See https://wiki.archlinux.org/title/EFI_sy … bind_mount for a small primer on how to do that; read the man page if you need more information.
I am unclear on which man pages I would need to refer to for this. I am not sure from the wiki link provided how to do something like a mount --bind for my /usr/lib directory, but that seems like what I want to be doing.
Do you have any suggested reading?
Offline
If you are intent on moving /usr/lib you need to do it in the initrd, as that will have a mount command with its required libraries present. /usr/lib needs to be populated before the switch to the root file system or you will find most commands are broken due to missing libraries. This is why it is strongly recommended to keep /usr on the root filesystem.
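You can see why for yourself: on a typical Arch system, running
ldd /usr/bin/mount
lists the shared libraries under /usr/lib that the mount binary itself links against.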
Last edited by loqs (2022-09-08 22:15:37)
Offline
The man page of mount regarding bind mounts, which is linked from the section I pointed you to and is available on your system via man mount.
You basically need to mount your big secondary partition somewhere, e.g. /bigpartition, and then do a bind mount of a path you want to land there. So e.g. you make a directory /bigpartition/opt/cuda, move all the current contents of /opt/cuda there, and then have an fstab entry
/bigpartition/opt/cuda /opt/cuda none defaults,bind 0 0
you can have multiple bind mounts referring to different base directories.
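For instance, two independent entries could look like this in fstab (the second path is purely a hypothetical example):
/bigpartition/opt/cuda /opt/cuda none defaults,bind 0 0
/bigpartition/srv/datasets /srv/datasets none defaults,bind 0 0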
If you need more help than that, you need to elaborate on your partition setup with the outputs of "lsblk -f" and "mount" and your current /etc/fstab in code tags.
And to elaborate on the comments above, you probably don't want to move /usr/lib as a whole to that kind of setup, but a select few dirs/tools that are not vital to your system booting up but take large amounts of space
Last edited by V1del (2022-09-08 22:22:46)
Offline
And to elaborate on the comments above, you probably don't want to move /usr/lib as a whole to that kind of setup, but a select few dirs/tools that are not vital to your system booting up but take large amounts of space
Yeah, that sounds like the better plan. I could move blender, cuda, and tensorflow over.
I only have one physical storage drive on this computer.
Input: lsblk
Output:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
zram0 254:0 0 4G 0 disk [SWAP]
nvme0n1 259:0 0 1.8T 0 disk
├─nvme0n1p1 259:1 0 511M 0 part /boot
├─nvme0n1p2 259:2 0 19.5G 0 part /
└─nvme0n1p3 259:3 0 1.8T 0 part /home
Input: lsblk -f
Output:
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
zram0 [SWAP]
nvme0n1
├─nvme0n1p1 vfat FAT32 5CE0-CED0 443.5M 13% /boot
├─nvme0n1p2 ext4 1.0 6aab966b-c1ca-420f-b938-1e4d4e8840e9 3.2G 78% /
└─nvme0n1p3 ext4 1.0 b7f149ca-4770-4368-85a7-c0186608119b 1.6T 6% /home
Input: mount
Output:
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
dev on /dev type devtmpfs (rw,nosuid,relatime,size=16376320k,nr_inodes=4094080,mode=755,inode64)
run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755,inode64)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
/dev/nvme0n1p2 on / type ext4 (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=24594)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,size=16384436k,nr_inodes=1048576,inode64)
/dev/nvme0n1p3 on /home type ext4 (rw,relatime)
/dev/nvme0n1p1 on /boot type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=3276884k,nr_inodes=819221,mode=700,uid=1000,gid=1000,inode64)
This is what my /etc/fstab looks like:
# Static information about the filesystems.
# See fstab(5) for details.
# <file system> <dir> <type> <options> <dump> <pass>
# /dev/nvme0n1p2
UUID=6aab966b-c1ca-420f-b938-1e4d4e8840e9 / ext4 rw,relatime 0 1
# /dev/nvme0n1p1
UUID=5CE0-CED0 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro 0 2
# /dev/nvme0n1p3
UUID=b7f149ca-4770-4368-85a7-c0186608119b /home ext4 rw,relatime 0 2
---------------------------------------------------
So would I just put this text in fstab?
# Static information about the filesystems.
# See fstab(5) for details.
# <file system> <dir> <type> <options> <dump> <pass>
# /dev/nvme0n1p2
UUID=6aab966b-c1ca-420f-b938-1e4d4e8840e9 / ext4 rw,relatime 0 1
/home/usr/lib/cuda /usr/lib/cuda none defaults,bind 0 0
# /dev/nvme0n1p1
UUID=5CE0-CED0 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro 0 2
# /dev/nvme0n1p3
UUID=b7f149ca-4770-4368-85a7-c0186608119b /home ext4 rw,relatime 0 2
EDIT: Fixed the output blocks
Last edited by Wupwup (2022-09-09 14:09:50)
Offline
As mentioned, please wrap outputs in code tags; read the BBCode link under every posting box.
The bind mount entry needs to come after the entry for the partition that holds the data you are bind mounting, so it should be after the /home mount point. And as mentioned, the directory you are bind mounting to needs to exist, so you'd need to prepare that directory structure beforehand and pay attention that you are picking the correct paths. All of the cuda stuff from standard Arch lands in /opt/cuda, so for this example you'd do
mkdir -p /home/opt/
mv /opt/cuda /home/opt/cuda
mount --bind /home/opt/cuda /opt/cuda
After the last step /opt/cuda should look like it did before, but will actually reference the contents of /home/opt/cuda. Once you've verified this you can add that to the fstab, at the end, in that form. For tensorflow I'm not sure how you'd do that trivially, as the big libraries sit directly in /usr/lib, and bind mounting over /usr/lib would mask its existing contents, so you can't really use this approach there.
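Concretely, sticking with the /opt/cuda example from above, the entry you'd append after the /home line in /etc/fstab would be:
/home/opt/cuda /opt/cuda none defaults,bind 0 0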
To be frank, I consider 20 GB in general to be very little/too low for a root that wants to run all of this stuff (... is this an archinstall default? Is this easily configurable? It should be...), so all in all actually doing the repartitioning might be the more hassle-free option in the long run. If you'd opt for that, you'd need to make sure you have a backup of important data, and then I'd recommend you use a gparted live disk for doing the resizing/moving operations.
Last edited by V1del (2022-09-09 03:21:44)
Offline
To be frank, I consider 20 GB in general to be very little/too low for a root that wants to run all of this stuff (... is this an archinstall default? Is this easily configurable? It should be...), so all in all actually doing the repartitioning might be the more hassle-free option in the long run. If you'd opt for that, you'd need to make sure you have a backup of important data, and then I'd recommend you use a gparted live disk for doing the resizing/moving operations.
Yeah, it was an archinstall default. I did it manually back in 2013, but I got frustrated trying to get my chromebook to run Arch this year while my other PC parts were in the mail, so I ended up just using archinstall. Though to be honest, it took a few months of daily driving this machine before I learned that I may have wanted a much larger root partition, and why.
I think I do need to just bite the bullet and re-size the partitions. You all have given me good options, but it seems like the main source of my storage issue, /usr/lib, is not one I want to mess with, and my /opt folder has few programs in it. (I ran into this issue when trying to put cuda on my system, so it is not there yet.)
Offline
I made a backup to an external drive with rsync using
rsync -aAXHv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / /mnt/Backup
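(For reference: -a preserves permissions, ownership and timestamps, -A, -X and -H additionally keep ACLs, extended attributes and hard links, -v is just verbose output, and the excludes skip pseudo-filesystems and mount points that shouldn't be copied.)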
Before running that, I first had to mount the external HDD with
sudo mount -t ntfs3 /dev/sda1 /mnt
because it is NTFS (mounted with the ntfs3 driver), which in my setup meant I needed to mount it manually.
I made a gparted live disk using the unetbootin AUR package, after downloading a gparted ISO from their website.
Then I resized my root partition to 300 GB. (I used Google to convert my desired GiB figure to the MiB numbers that gparted uses.)
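For example, 300 GiB is 300 × 1024 = 307200 MiB.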
I did not lose any data or break anything [to future person with same issue]
Offline
Well done. But it's a good thing you did not need the backup; read a little more about NTFS and Linux file permissions. It's best to back up to a partition with a Linux file system.
Please remember to mark your thread [SOLVED] (edit the title of your first post).
Offline