Goal: each time I spawn a gzip PID, keep track of it, and when it is no longer running, copy the resulting tar.gz somewhere.
I have a backup script that creates tar files for various mountpoints, then gzips them and finally copies them to a NAS. Currently a while loop just watches for `pidof gzip` to return nothing and then copies all the files at once. I'd like to make each copy aware of its gzip parent, so that when that gzip dies (i.e. compresses successfully) it triggers the copy of only that tar.gz rather than waiting for the entire batch. How is this best accomplished?
...
for i in foo bar thud grunt; do
    mount LABEL="$i" "$mp"
    cd "$mp"
    bsdtar cf "/scratch/$i.tar" --exclude='*.pkg.tar.xz' ./
    cd / && umount "$mp"
    gzip -9 "/scratch/$i.tar" &
done

while [[ -n $(pidof gzip) ]]; do
    echo "Will move gzips to media when gzip is finished... waiting 5 sec and testing again."
    sleep 5s
done

mv /scratch/*.tar.gz "$BACKUP"
Maybe something like the following (I've not tested it, though):
# Here we will store the pid of each gzip process, keyed by the
# file that gzip will produce
declare -A gzip_pids=()

for i in foo bar thud grunt; do
    mount LABEL="$i" "$mp"
    cd "$mp"
    bsdtar cf "/scratch/$i.tar" --exclude='*.pkg.tar.xz' ./
    cd / && umount "$mp"
    gzip -9 "/scratch/$i.tar" &
    # Store the pid of the last background command (that was gzip)
    gzip_pids["/scratch/$i.tar.gz"]="$!"
done

pending=1
while [[ $pending -gt 0 ]]; do
    pending=0
    for bkp_file in "${!gzip_pids[@]}"; do
        pid="${gzip_pids[$bkp_file]}"
        # Skip files whose mv has already been triggered
        [[ -z "$pid" ]] && continue
        if ! kill -0 "$pid" &> /dev/null; then
            # This gzip has finished, so move its file
            mv --verbose "$bkp_file" "$BACKUP" &
            # Clearing the pid prevents running mv twice on the same file
            gzip_pids[$bkp_file]=''
        else
            pending=1
        fi
    done
    [[ $pending -gt 0 ]] && sleep 5s
done
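For what it's worth, kill -0 sends no signal at all; it only asks the kernel whether the process still exists and may be signalled, which makes it a cheap liveness test:

if kill -0 "$pid" 2>/dev/null; then
    echo "$pid is still running"
else
    echo "$pid has exited (or is not ours to signal)"
fi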
Have you considered using functions? Something like this (you will have to adapt to your needs):
do_some_work() {
    sleep 5
    echo "$1"
}

for i in a b c d; do
    do_some_work "$i" &
done
Instead of polling for whether it has finished, I would rather collect the PIDs of the gzip sub-processes and `wait` on each of them:
gzip_pids=''
for ...; do
    ...
    gzip ... &
    gzip_pids="$gzip_pids $!"
done

for gp in $gzip_pids; do
    wait $gp 2>/dev/null # some sub-processes may have finished already, hence the 2>/dev/null
done

mv ...
EDIT: Of course the PIDs could also be stored in an array, given that you seem to be using /bin/bash, but I don't know how that works.
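Something like this, perhaps (untested): keep an associative array mapping each gzip PID to the file it will produce, so that wait can check each one's exit status before the corresponding mv:

declare -A gzip_files=()    # pid -> resulting .tar.gz
for i in foo bar thud grunt; do
    # ... tar and umount steps as above ...
    gzip -9 "/scratch/$i.tar" &
    gzip_files[$!]="/scratch/$i.tar.gz"
done

for gp in "${!gzip_files[@]}"; do
    if wait "$gp"; then    # wait returns gzip's exit status
        mv "${gzip_files[$gp]}" "$BACKUP"
    else
        echo "gzip failed for ${gzip_files[$gp]}" >&2
    fi
done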
I ended up solving this by backgrounding a compress function. Thanks to all who replied!
...
compress() {
    # $i is inherited by the backgrounded subshell at fork time
    gzip -9 "/scratch/$i.tar"
    mv "/scratch/$i.tar.gz" "$BACKUP"
}
...
for i in foo bar thud grunt; do
    mount LABEL="$i" "$mp"
    cd "$mp"
    bsdtar cf "/scratch/$i.tar" --exclude='*.pkg.tar.xz' ./
    cd / && umount "$mp"
    compress &
done
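If anything later in the script needs the copies to be complete, a bare wait after the loop will block until every backgrounded compress job (the gzip plus its mv) has exited:

for i in foo bar thud grunt; do
    # ... as above ...
    compress &
done
wait    # returns once all background jobs have finished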
gzip -9 /scratch/$i.tar &
pid=$!
wait $pid
That's a pointless complication - you're backgrounding gzip, only to wait for it to finish anyway, so the backgrounding serves no purpose.
You're already doing the desired backgrounding, with "compress &"
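In other words, as far as the script's control flow goes, these two behave the same:

# backgrounding and then immediately waiting...
gzip -9 "/scratch/$i.tar" &
wait $!

# ...is no different from just running it in the foreground:
gzip -9 "/scratch/$i.tar"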
@brebs - Ah, you are correct. *red face*