#1 2014-04-22 07:58:58

pypi
Wiki Maintainer
Registered: 2014-04-22
Posts: 250

'Depends based' 'task' management - is this a good idea?

### Sum up ###

EDIT: Probably the wrong subforum - I will 'move' this tomorrow unless a moderator/admin does. Sorry...

I am currently playing around with managing multiple Arch machines/virtual machines.
I am posting this to get feedback on whether my current solution is a 'good idea', and perhaps to collect alternatives if it is not.
By not a 'good idea' I mean anything likely to break my system, be a security risk, be a hazard (easy for me to stuff up), and so on.
I only have about 1 1/2 years of Linux experience, but I did get the 'ancient machine' running CLFS - up to the temporary-system stage, where I realised that 32MB of RAM might not be quite enough to finish compiling the system - so I can at least claim that I am not a complete Linux noob.

What I want is a way to sync the packages installed across multiple machines. However, the packages on each machine are not identical - one might be used for gaming, for instance, while another might be dedicated to compiling software.

### More detailed... ###

Here is an example setup - what I want to end up with in the near future, leaving aside planned (and, for now, unplanned) hardware acquisitions.

Machine A: (Main machine)
  - General CLI use
  - Development (python programming)
  - Gaming
  - DE for Blender/non-framebuffer applications

Machine B: (Raspberry Pi)
  - General CLI use
  - 'Gaming' - running any games I finish
  - DE (so that it can be used as a backup machine while Machine A is being borrowed by my brother for Blender)
  - Package server (for my custom packages)
  - Backup server

Machine C: (16-year-old laptop, also known as the 'ancient machine'/'ancient laptop')
  - General CLI use
  - Development (python programming)

I could do this by setting up each machine individually and repeating my changes on every machine each time I modify something (CLI tools, for instance).
However, that would mean a lot of duplicated time and effort, and if I ever get another machine (a better laptop, or another tower PC) I would prefer not to have to redo everything. Further complicating matters, each machine (so far) uses a different architecture: x86_64, i586 (not yet running Arch - TODO), and ARM.
The variation in power doesn't make things easy either, but the main PC is a reasonably modern tower computer.

My current solution is to build special packages with makepkg, one per task. These group the real packages together: if I add a package to "General CLI use" (also known as <my_name>-base), it will be added across all the machines at the next update. If I want to move a task from one machine to another, I remove the task package and its dependencies on the old machine and install it again on the new one. For instance, if I were moving the package server from the old machine (the Raspberry Pi) to another machine (a new Raspberry Pi, or a gruntier machine that also compiles the packages), I could just uninstall on the old machine and reinstall on the new. Of course, the config would have to be synced as well, but that part is very much WIP.

The PKGBUILDs are fairly simple; they 'depend' on whatever packages the specific task requires (see the sketch below). As an extra, they can also install any custom config and data as they go, since they are only used on my machines: I can 'sync' the files I add under /etc/profile.d, for instance, and "General CLI use" can add my main user as part of its installation.
This only goes so far, though. I can't sensibly change existing config files, and to avoid conflicts I can't do much scripting that does things like start an HTTP server on port 80, given that another task might also do that. Of course, I could always set the two tasks up to conflict...
A better example would be removing users and their home directories when the corresponding user package is removed (my brother's user package requires '<my_name>-desktop', for instance) - doing that automatically could catch me unawares and delete stuff if I had forgotten to move things first. A Bad Idea in my books...
I still have to change architecture-specific things like drivers manually, but since those also change per machine (even per x86_64 machine!) it doesn't matter - I can't see a way of keeping them synced short of special packages for each hardware setup...
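
To make this concrete, here is a cut-down sketch of one of these task PKGBUILDs. The package names, the 'myname-base' stand-in for <my_name>-base, and the profile-custom.sh file are illustrative placeholders rather than my real setup:

    # Sketch of a 'task' metapackage - all names are placeholders.
    pkgname=myname-base       # stands in for <my_name>-base ("General CLI use")
    pkgver=1
    pkgrel=1
    pkgdesc="Task metapackage: general CLI tools plus shared config"
    arch=('any')              # pure metapackage, so it installs on all three architectures
    depends=('vim' 'tmux' 'htop' 'openssh')    # the real packages for this task
    install=myname-base.install                # post_install() adds my main user
    source=('profile-custom.sh')
    md5sums=('SKIP')

    package() {
        # ship my /etc/profile.d additions along with the task
        install -Dm644 profile-custom.sh "$pkgdir/etc/profile.d/profile-custom.sh"
    }

Tasks that would clash (two HTTP servers on port 80, say) could then declare each other in conflicts=().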

### Somewhat unrelated - possibly a no-go under 'bikeshed' and/or offtopic ###

The remaining syncing - home directories and config - is WIP, but I have my eye on Unison for home directory syncing. Ideas on config syncing would be appreciated as well; I have yet to do much research into existing solutions, although I did try a VCS (git). That kind of worked, but files with multiple layers of changes, depending on which packages were installed, were not manageable - although I may be able to use includes and config-modifying tools to work around that. Pacnew/pacsave/pacorig file management would be nice as well, but would definitely require some form of user intervention. 'Patch based' is the next thing I will try unless I can find a ready-made solution.
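
For the Unison part, I am picturing a profile along these lines - the hostnames and paths are made up, and I have not actually tested this yet:

    # ~/.unison/home.prf - hypothetical and untested
    root = /home/myuser
    root = ssh://machine-b//home/myuser    # the Pi, in its backup-server role
    path = projects                        # only sync selected directories
    path = .config
    ignore = Name *.tmp                    # skip junk files
    batch = true                           # no prompts, so it can run from cron

Running 'unison home' would then sync just those paths between the two machines.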

Another thing I have not yet researched thoroughly, apart from checking pacman's options, is whether I can update from just one server. That would avoid spamming the mirrors with repeated update requests when I really only want to update from my own repo. The 'spamming' happens when I am debugging problems with a package (incorrect file permissions, for example) and cannot simply deinstall/reinstall a local version to test first.
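
From a first look at the man page, a second config file containing only my repo might do it - untested, and the 'custom' repo name, URL and paths below are placeholders:

    # /etc/pacman-local.conf - copy of the real config, but with only my repo
    [options]
    Architecture = auto

    [custom]
    SigLevel = Optional TrustAll
    Server = http://machine-b/repo/$arch

    # update from just this repo:
    #   pacman --config /etc/pacman-local.conf -Syu
    # or bypass the repo entirely while debugging a package:
    #   pacman -U ~/packages/myname-base-1-1-any.pkg.tar.xz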

### Back to the issue at hand ###

Is using PKGBUILDs - depends-based 'task' management as outlined above - a 'good idea'?
Either way, why or why not?

Thanks for reading this far,
pypi

P.S. If there are any stray instances of 'dd' or ':w' in this, please forgive me - I normally use vim.

Last edited by pypi (2014-04-22 08:57:11)


#2 2014-04-22 09:43:28

Slithery
Administrator
From: Norfolk, UK
Registered: 2013-12-01
Posts: 5,776

Re: 'Depends based' 'task' management - is this a good idea?

For managing your configs, take a look at dots.

https://github.com/EvanPurkhiser/dots
https://bbs.archlinux.org/viewtopic.php?id=174793

Last edited by Slithery (2014-04-22 09:43:54)




#3 2014-04-22 09:58:47

Awebb
Member
Registered: 2010-05-06
Posts: 6,285

Re: 'Depends based' 'task' management - is this a good idea?

If you do not need to manage those packages individually, then your meta packages will work; a lot of distros do this. However, I would prefer a script or a set of scripts calling pacman, because you could then also automate AUR packages and config files in a single command.


#4 2014-04-23 20:39:43

pypi
Wiki Maintainer
Registered: 2014-04-22
Posts: 250

Re: 'Depends based' 'task' management - is this a good idea?

Thanks for the opinions!

@Awebb: As long as it works, I don't mind. I am a fan of scripts, but I prefer to use existing solutions whenever possible, unless they are overly complex or have lots of extra dependencies. What kind of scripts did you have in mind, though? I am imagining some form of sed script for the config, or a patch file plus a list of dependencies to install.
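
Something like the following is what I had pictured - the file names are hypothetical and I have not tried this:

    # deps.txt holds one package name per line for this machine's tasks;
    # '-' makes pacman read its targets from stdin, --needed skips reinstalls
    pacman -S --needed - < deps.txt

    # patch the stock config; -N ignores patches that are already applied
    patch -N /etc/ssh/sshd_config < sshd_config.patch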

@slithery: If I read it correctly, 'dots' manages files stored in a user's home directory - I was thinking more of managing and syncing changes to files in /etc, such as the basic firewall and ssh config. It does look like a cool program, though (I had come across it before, but discarded it as not applicable), so I will check it out again and see whether I can use it for the few config files in my home directory.

pypi


#5 2014-04-23 21:47:03

Awebb
Member
Registered: 2010-05-06
Posts: 6,285

Re: 'Depends based' 'task' management - is this a good idea?

pypi wrote:

@Awebb: As long as it works, I don't mind. I am a fan of scripts, but I prefer to use existing solutions whenever possible, unless they are overly complex or have lots of extra dependencies. What kind of scripts did you have in mind, though? I am imagining some form of sed script for the config, or a patch file plus a list of dependencies to install.

Very simple stuff, actually: a bash script with an array of packages that is fed to pacman -S, a loop using cower to download an array of AUR packages, cd'ing into each folder and building and installing them, and a git repository for your config files. You might even tune the script over time to accept arguments indicating which machine it is running on (or identify the machine by reading the hostname). The script could then also sit in the git repo, and all you'd need to do is fetch it, tell it what setup you want (simple keywords like "fileserver, cad, gamestation"), and be done with it. It's also easier to modify than patching your meta-PKGBUILD and reinstalling it every time: adding dependencies is not a problem there, but you would have to uninstall any dropped packages yourself.
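
Roughly this kind of skeleton - the package lists and paths are only examples, not something tested as-is:

    #!/bin/bash
    # Pick a setup by argument, falling back to the hostname.
    setup="${1:-$(hostname)}"

    case "$setup" in
        fileserver)  packages=(nfs-utils samba) ;;
        gamestation) packages=(wine) ;;
        *)           packages=(vim tmux) ;;
    esac
    aur_packages=()        # AUR package names for this setup go here

    # everything from the official repos in one transaction
    sudo pacman -S --needed "${packages[@]}"

    # cower only downloads; build and install each AUR package ourselves
    build_dir=$(mktemp -d)
    for pkg in "${aur_packages[@]}"; do
        cd "$build_dir"
        cower -d "$pkg"    # fetches the PKGBUILD into ./$pkg
        cd "$pkg"
        makepkg -si        # build, then install via pacman
    done

    # config files live in the same git repo as the script itself
    git -C "$(dirname "$0")" pull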

