I just wrote a bash script that uses reflector to update the rsync servers in use by powerpill. I also have a systemd service and timer for running it weekly. I'm just looking for advice on how to improve it.
#!/bin/bash
declare -a servers
servers=($(reflector -p rsync -f 30 -l 20 -p 20 --sort rate | grep -o 'rsync://.*'))
declare -i i=0
regex='^[[:blank:]]+"rsync://.*'
( IFS=$'\n'  # $'\n' is a real newline; '\n' would be the two characters \ and n
  while read -r line; do
      if [[ ! $line =~ $regex ]]; then
          echo "$line"
      elif ((i < ${#servers[@]})); then
          lead=$(expr index "$line" '"')
          server=
          if ((i < ${#servers[@]} - 1)); then
              server="${servers[$i]}\","
          else
              server="${servers[$i]}\""
          fi
          echo "${line:0:$lead}${server}"
          ((i++))
      fi
  done < updateTest > o
  mv o updateTest )
Instead of that while loop, why don't you just use something simple like:
printf "\"%s\",\n" "${servers[@]}"|head -c -2
Managing AUR repos The Right Way -- aurpublish (now a standalone tool)
Running it through ShellCheck throws up a couple of pointers.
Instead of that while loop, why don't you just use something simple like:
|head -c -2
Does this use of head just grab the file from the end? Starting the write at the last lines?
printf "\"%s\",\n" "${servers[@]}"|head -c -2
printf -- print out each element in the "servers" array, in the format:
"element1",
"element2",
"element3",
Obviously, the trailing comma + linebreak is unwanted, and the comma in particular will break JSON parsing. Rather than looping and checking each array index for the last element in order to switch between echo commands... just take all but the last two bytes by piping it to the `head` command.
I don't know of any way to tell printf to use a different format string for the last argument, so that will have to do.
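As a quick sketch (with made-up mirror URLs, and note that the negative byte count for `head -c` is a GNU coreutils feature), the pipeline produces the comma-separated list directly:

```shell
#!/bin/bash
# Made-up mirror URLs, just to demonstrate the printf|head trick
servers=("rsync://mirror.one/arch" "rsync://mirror.two/arch")

# Each element becomes "element",\n; head -c -2 drops the final ",\n"
printf '"%s",\n' "${servers[@]}" | head -c -2
echo
```

This prints each URL quoted, comma-terminated, one per line, except the last entry has no trailing comma.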
printf "\"%s\",\n" "${servers[@]}"|head -c -2
That makes sense with how the server list ends up printing out.
The only thing I'm at a loss for how to do is overwriting part of a file starting at a certain line without a while loop.
Last edited by zerophase (2016-06-16 01:54:15)
printf is not a file, it goes to stdout. As I said, it is piped to `head`.
What file do you think is being overwritten?
...
Ah, I think I see it now.
I was having a hard time seeing what all that while loop did. But I guess the "updateTest" file is the powerpill config?
You can probably use something like sed to replace it -- might improve the readability.
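For instance, a sed sketch along those lines (the placeholder token and file layout here are made up, not the real powerpill.json):

```shell
#!/bin/bash
# Hypothetical config with a placeholder where the server list goes
cat > updateTest <<'EOF'
{
    "rsync": {
        "servers": [ @SERVERS@ ]
    }
}
EOF

servers=("rsync://mirror.one/arch" "rsync://mirror.two/arch")
list=$(printf '"%s", ' "${servers[@]}")
list=${list%, }    # drop the trailing ", "

# '|' as the sed delimiter avoids clashing with the '/' in the URLs
sed -i "s|@SERVERS@|${list}|" updateTest
grep '"servers"' updateTest
```

Using `|` as the substitution delimiter keeps the rsync:// URLs from needing escaped slashes.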
Oh, I meant I was redirecting the output to the file I'm writing to with >>.
I'm just wondering if there is any way to redirect to certain lines, other than with sed.
I ended up using perl over sed for it. The sed command needed was just more complicated to write out.
#!/bin/bash
# TODO rename to reflector-rsync.sh
declare -a servers
cd /home/zerophase # remove for final iteration
servers=($(reflector -p rsync -f 30 -l 20 -p 20 --sort rate | grep -o 'rsync:\/\/.*'))
linesToWrite=$(printf "\t\t\t\"%s\",\\\n" "${servers[@]}" | head -c -4 )
perl -i -0pE 'use strict; my $repo = "\$repo"; my $arch = "\$arch";'\
's|([[:blank:]]+\"rsync:\/\/[^\]]+)|'"${linesToWrite}"'\n\t\t|gs' testSed # change to powerpill.json
Much nicer to read now.
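The general idea behind that perl invocation can be shown with a toy file (the file name, contents, and pattern below are made up, not the real powerpill.json):

```shell
#!/bin/bash
# Toy demo of multiline in-place substitution with perl
cat > demo.txt <<'EOF'
servers = [
    old-entry-1
    old-entry-2
]
EOF

# -0777 slurps the whole file; the trailing /s lets . match newlines,
# so one substitution can span several lines
perl -0777 -pi -e 's|\[.*?\]|[\n    new-entry\n]|s' demo.txt
cat demo.txt
```

Slurping the whole file with `-0` is what lets the substitution replace a multiline block in one pass, which is exactly the part that gets awkward in sed.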
By the way, is there a suggested place to put bash scripts in use by systemd? Just /bin so they're accessible from everywhere, or just /etc/systemd/system?
By the way, is there a suggested place to put bash scripts in use by systemd? Just /bin so they're accessible from everywhere, or just /etc/systemd/system?
I created a /etc/systemd/scripts directory for mine.
My only comment is that you create ./o without looking to see if it exists. Then, at the end, you move it to ./updateTest. You might consider checking to see if that interim file does not exist before you obliterate it.
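One way to sidestep that clobbering concern is mktemp, which hands back a freshly created file that is guaranteed not to exist already. A sketch (file names and content are made up):

```shell
#!/bin/bash
# Sketch: write to a mktemp file instead of a fixed "./o", so an
# existing file by that name is never clobbered.
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT    # clean up the temp file if we bail out early

printf '%s\n' "filtered output" > "$tmp"
mv "$tmp" updateTest        # replace the target only once writing succeeded
trap - EXIT
```

The trap means an early failure leaves no stray temp file behind, and the final mv swaps the result into place in one step.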
Also, are the parentheses starting in front of IFS='\n' and ending after mv o updateTest not superfluous? [IDK, I am not a Bash Jockey]
Nothing is too wonderful to be true, if it be consistent with the laws of nature -- Michael Faraday
Sometimes it is the people no one can imagine anything of who do the things no one can imagine. -- Alan Turing
---
How to Ask Questions the Smart Way
The parentheses run that in a subshell, which is mostly pointless except it means the IFS change only happens in the subshell.
You could get around that by saving and restoring the IFS, assuming there was an actual concern that you'd need a default IFS within the same script anyway... still, as a style thing it isn't the worst idea ever to get used to always putting things back just in case.
Since the whole thing has been replaced by a perl substitution it doesn't matter anymore.
Last edited by eschwartz (2016-06-16 14:27:20)
Eschwartz: Thanks, that makes sense.
I just scoped the IFS since I didn't know if it could potentially escape my script and cause issues elsewhere. I assumed it would avoid any potential headaches.
Scripts are run as a child bash shell (same as a subshell except subshells also get to see non-exported variables and a few other things). Variables cannot escape from a child process to a parent process.
In this case it was unnecessary. But like I said, it isn't a terrible idea to do that on general principle. Though I would use SAVEIFS="$IFS" to back up and restore rather than adding an unnecessary subshell. It's also more readable.
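That save/restore idiom would look something like this (toy splitting example, variable names are just illustrative):

```shell
#!/bin/bash
# Back up IFS, change it for one operation, then put it back
SAVEIFS=$IFS
IFS=$'\n'           # split on newlines only for the next bit

line="one two"
words=($line)       # the space is NOT a separator while IFS is newline
IFS=$SAVEIFS        # restore the default splitting behaviour

echo "${#words[@]}" # → 1
```

With newline-only splitting, "one two" stays a single array element; after the restore, unquoted expansions split on spaces and tabs again as usual.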