Pages: 1
Topic closed
Continued from this thread...
Summary
Some folks (myself included) want to build packages from source to benefit from the processor-specific CFLAGS users will enter into /etc/makepkg.conf. Now, whether or not these will make a significant difference isn't the point. I wanted an app to do this automatically for me and found several including srcpac, pacbuilder, and bauberbill. I think that I'll use bauberbill to manage new packages (updates), but to back-fill my system of 700+ packages, milomouse made the following suggestion:
@graysky: Isn't the unsupervised "overnight" fashion blindly trusting, anyway?
I think you might just want to use the ABS program and write a simple shell script for makepkg. If so, run abs and then do something like..
yourbuilddir=$HOME/abs/ ; packages=(blah blah) ; for i in ${packages[@]} ; do blah blah.. ; for d in $(find /var/abs/*/$i -type d); do cp -rp $d $yourbuilddir ; done ; cd $yourbuilddir/$i ; makepkg -scf --noconfirm && cp -p *.pkg.tar* ../ ; done
and to install after all are done: for post in ${packages[@]}*pkg.tar* ; do blah blah.. ; if [[ -f $post ]]; then pacman -U --noconfirm $post ; fi ; done
Something along those lines (much more refined [and working], of course). I guess the $packages variable could be a list you manually type in or scrape from pacman for base and base-devel. Then you can check $yourbuilddir in the morning and, if you added the code to install, see your pacman.log. It should be really easy to do although a little "blind" as well. If this is a horrible idea someone please delete this. I've used similar methods before and they work fine so..
Great idea! How about:
#!/bin/bash
# Note that since the repos have begun using the tar.xz format you MUST
# UNCOMMENT the following line in /etc/makepkg.conf if it is not already
#
# PKGEXT='.pkg.tar.xz'
# Select a scratch drive for the building
# If you have quite a bit of RAM you should
# consider using /dev/shm/somedir to spare your HDD the I/O
mybuilddir=/path/to/builddir
# Define which packages you want to rebuild using one of the three lines below
# Comment the remaining two lines
packages=($(pacman -Qq | grep -vx "$(pacman -Qmq)")) # this reads in all installed native packages (foreign/AUR packages are excluded)
# . /path/to/list # or you can read in a list you manually generated
# packages=(pkg1 pkg2) # or you can manually define them here
if [ ! -d $mybuilddir ]; then
    mkdir $mybuilddir
fi
# Copy packages from your abs tree to your build space and build them one-by-one
# This would be a good time to update your abs tree and to make sure that you have
# the correct CFLAGS etc. in your /etc/makepkg.conf
# See, http://en.gentoo-wiki.com/wiki/Safe_Cflags
for i in ${packages[@]}; do
    for d in $(find /var/abs/*/$i -type d); do
        cp -a $d $mybuilddir
    done
    cd $mybuilddir/$i
    makepkg -src --noconfirm && mv *.pkg.tar.{x,g}z* ../
done
The script is chugging away now, building these guys. I guess the question I have about installing them once they're built is how to handle the "--asdeps" flag for those packages which are optional deps...?
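One possible approach to the --asdeps question, sketched here as an assumption rather than anything this thread settled on: snapshot which packages pacman currently records as dependencies *before* the mass `pacman -U` (which marks everything explicitly installed), then restore that flag afterwards with `pacman -D --asdeps`. The helper name `select_deps` and the file names are illustrative.

```shell
#!/bin/bash
# Pure helper: print the names present in both a full package list ($1)
# and a saved dependency list ($2), one per line.
select_deps() {
    comm -12 <(sort "$1") <(sort "$2")
}

# Intended usage (needs root; shown as comments so the sketch stays safe):
#   pacman -Qq  > all.txt       # every installed package
#   pacman -Qqd > deps.txt      # the ones installed as dependencies
#   for p in "$mybuilddir"/*.pkg.tar.xz; do pacman -U --noconfirm "$p"; done
#   pacman -D --asdeps $(select_deps all.txt deps.txt)
```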
Last edited by graysky (2010-11-28 21:10:44)
CPU-optimized Linux-ck packages @ Repo-ck • AUR packages • Zsh and other configs
So far I've built 328 of them using the script. I'm wondering if a better strategy to install the packages would be to simply copy them all (overwriting) to /var/cache/pacman/pkg/ and then simply run: pacman -S $(pacman -Qq)
Thoughts?
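For anyone who tries the cache route, a minimal sketch of the staging step (the function name and paths are illustrative, not from this thread):

```shell
#!/bin/bash
# Stage freshly built packages into pacman's cache so a subsequent
# reinstall picks up the custom-built versions.
stage_packages() {
    local builddir=$1
    local cachedir=${2:-/var/cache/pacman/pkg}
    cp -f "$builddir"/*.pkg.tar.xz "$cachedir"/
}

# Then, as root, reinstall everything so pacman uses the staged files:
#   stage_packages /path/to/builddir
#   pacman -S $(pacman -Qq)
```

Note this only helps if the staged files match the exact versions pacman wants to install; otherwise pacman will download from the repos as usual.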
Last edited by graysky (2010-11-28 12:27:25)
Have you checked pacbuilder?
Have you checked pacbuilder?
Yep
I wanted an app to do this automatically for me and found several including srcpac, pacbuilder, and bauerbill.
It gave problems whose details I can't remember. I actually really like milomouse's suggestion and have been building since 4 AM this morning.
thx for your contribution, i will try it in a chroot environment because i don't want to clutter up my system with unneeded make dependencies
@arch0r - the -r switch on makepkg will remove them when the package is successfully built. No need for a chroot.
yes i know, but i'm too much of a perfectionist to install and remove the make deps after each build; they might even leave some orphaned files behind after removal. i'd better keep them jailed in a chroot, far away from my minimal system :>
Heh, I understand your compulsion. After all, here I am building my entire set of packages so I can have my customized CFLAGS enabled!
of course there are a few variables to consider when running something like this. for one, you might end up building an older version from ABS if the tree hasn't been updated to the newer version yet, as it's only synced once(?) a day. i've also found that sometimes i'll download something from, say, "staging" or "testing" and it won't be in the ABS tree yet, so when i try to recompile it using ABS i won't be able to find it. just something to think about if you're really particular about rebuilding the exact version that's currently installed. i could be wrong in some aspects of this, but i build basically all my packages from pkgbuilds (usually from ABS), so i've come across this a few times.
i think the script will be good for most packages, but i don't know about <all> packages. it sounds like there's a bit of room for error when doing an entire system, also taking into account that if the list is scraped from pacman it will be in alphabetical order(?) instead of ordered by which depends on which, etc. i like to (re)build dependencies first, so you might need something a bit more complicated.
regardless, i still like the idea and, like i said, i think it will work well for small batches of programs. maybe have a case $1 for command-line-defined packages so you won't always have to edit the script, or have it read from a txt file; should be pretty simple to do. glad you followed up on this though, and good job :}
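The command-line / list-file idea above can be sketched like this (the function name, list-file location, and fallback packages are all assumptions, not from the thread):

```shell
#!/bin/bash
# Decide which packages to build: command-line arguments win, then a
# plain-text list file (one name per line), then a hard-coded default.
pick_packages() {
    local listfile=${PKGLIST:-$HOME/pkglist.txt}
    if (( $# > 0 )); then
        printf '%s\n' "$@"
    elif [[ -f $listfile ]]; then
        grep -v '^$' "$listfile"      # skip blank lines
    else
        printf '%s\n' bash coreutils  # fallback defaults
    fi
}

# e.g. at the top of the build script:
#   packages=( $(pick_packages "$@") )
```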
i have never noticed any improvements in speed after building the same package again, even with custom cflags (-march=amdfam10 -O2 -mmmx -m3dnow -msse -msse3 -mfpmath=sse,387 -pipe -ffast-math). i would rather look into stripping packages down and removing unneeded options, like pidgin-mini, where gstreamer support and other stuff have been completely removed, to minimize the list of dependencies, security holes, space, range of functions, etc.
for d in $(find /var/abs/*/$i -type d); do
So what happens when a package is both in testing and in core/extra? or in community and community-testing? You build both? What if a user doesn't want to build from testing?
You could fix this by not forking to find and testing iteratively...
for repo in testing core extra community{-testing,}; do
    [[ -d /var/abs/$repo/$i ]] && { cp -a "/var/abs/$repo/$i" "$mybuilddir"; break; }
done
This also would let the user pick whether or not to include the testing repos.
You could also optionally allow a list of packages to be piped in, as such:
# if stdin is not a terminal, it's redirected from a file or pipe we can read
if [[ ! -t 0 ]]; then
    read -a packages
else
    packages=( ... ) # some default listing
fi
Also, bash syntax nitpicking:
- If you declare a /bin/bash shebang, please use Bash syntax. [[, not [.
- please use more quotes around variables. You can't know if things like $mybuilddir will or won't have white space in them.
for d in $(find /var/abs/*/$i -type d); do
So what happens when a package is both in testing and in core/extra? or in community and community-testing? You build both? What if a user doesn't want to build from testing?
You could fix this by not forking to find and testing iteratively...
for repo in testing core extra community{-testing,}; do
    [[ -d /var/abs/$repo/$i ]] && { cp -a "/var/abs/$repo/$i" "$mybuilddir"; break; }
done
This also would let the user pick whether or not to include the testing repos.
I have two ABS scripts right now which use the following approach which pulls the enabled repos out of pacman.conf in the proper order:
# get enabled repos out of pacman.conf in the proper order
enabled_repos() { sed '/^\[\(.*\)\]$/!d;s//\1/g;/options/d' /etc/pacman.conf; }
for repo in $(enabled_repos); do
    dir="/var/abs/$repo/$i"
    if [[ -d "$dir" ]]; then
        # do work
        break
    fi
done
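That sed one-liner can be sanity-checked against a throwaway pacman.conf-style file (the sample repos below are just an example; the parameterized wrapper is mine, not falconindy's):

```shell
#!/bin/bash
# Same sed expression as above, parameterized over the file so it can be
# tried on a sample config instead of the real /etc/pacman.conf:
# keep only "[name]" lines, strip the brackets, drop [options].
enabled_repos() {
    sed '/^\[\(.*\)\]$/!d;s//\1/g;/options/d' "${1:-/etc/pacman.conf}"
}
```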
//github/
thanks, that is the script I was searching for. will newer versions have a deep check of dependencies?
Last edited by akemrir (2011-07-23 11:34:18)
Wow this is still working!
dslink, please do not bump ancient threads with empty posts.
https://wiki.archlinux.org/index.php/Fo … bumping.22 and https://wiki.archlinux.org/index.php/Fo … mpty_posts
Closing.
Are you familiar with our Forum Rules, and How To Ask Questions The Smart Way?
BlueHackers // fscanary // resticctl