That was really helpful. Thank you guys!
Been a while...
#!/usr/bin/bash
# Update mail certs
target="$HOME/.local/certs"
mkdir -p "$target" # make sure the destination directory exists
PS3='Select a host: '
options=("dom1" "dom2" "dom3")
select opt in "${options[@]}"
do
  case "$opt" in
    dom1) domain="mail.dom1.com"
          name="one"
          break
          ;;
    dom2) domain="mail.dom2.com"
          name="two"
          break
          ;;
    dom3) domain="mail.dom3.com"
          name="three"
          break
          ;;
    *) printf "%s\n" "Invalid host"
       exit 1
       ;;
  esac
done
openssl s_client -connect "$domain":993 -showcerts 2>&1 < /dev/null |
  awk '/BEGIN/,/END/{print}/END/{exit}' > "$target"/"$name".pem
# vim:set ts=2 sts=2 sw=2 et:
#!/bin/bash
wd="~/.scripts/feedgen"
for i in $(cat $wd/urls) ; do
  title=$(curl -s -L $i 2> /dev/null | grep -m 1 title | awk -F \> '{print $2}' | awk -F \< '{print $1}')
  test -z "$title" && continue
  feed=$(echo $title | sed 's/ //g')
  if !(find $wd -type d | grep -q $feed) ; then
    mkdir -p $wd/$feed
    w3m $i > $wd/$feed/$feed.stale || exit 1
    echo "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>" >> $wd/$feed/$feed.xml
    echo "<rss version=\"2.0\">" >> $wd/$feed/$feed.xml
    echo "" >> $wd/$feed/$feed.xml
    echo "<channel>" >> $wd/$feed/$feed.xml
    echo " <title>"$title"</title>" >> $wd/$feed/$feed.xml
    echo " <description>Generated by feedgen.sh</description>" >> $wd/$feed/$feed.xml
    echo " <link>"$i"</link>" >> $wd/$feed/$feed.xml
    echo " <pubDate>"$(date '+%a, %d %b %Y %H:%M:%S %z')"</pubDate>" >> $wd/$feed/$feed.xml
    echo "" >> $wd/$feed/$feed.xml
    echo " <item>" >> $wd/$feed/$feed.xml
    echo " <title>Now tracking "$title"</title>" >> $wd/$feed/$feed.xml
    echo " <description>Now tracking "$title"</description>" >> $wd/$feed/$feed.xml
    echo " <link>"$i"</link>" >> $wd/$feed/$feed.xml
    echo " <pubDate>"$(date '+%a, %d %b %Y %H:%M:%S %z')"</pubDate>" >> $wd/$feed/$feed.xml
    echo " </item>" >> $wd/$feed/$feed.xml
    echo "" >> $wd/$feed/$feed.xml
    echo "</channel>" >> $wd/$feed/$feed.xml
    echo "</rss>" >> $wd/$feed/$feed.xml
  else
    w3m $i > $wd/$feed/$feed.fresh || exit 1
    change=$(diff $wd/$feed/$feed.stale $wd/$feed/$feed.fresh)
    if ! test -z "$change" ; then
      head -n -2 $wd/$feed/$feed.xml > $wd/$feed/$feed.xml.tmp
      mv $wd/$feed/$feed.xml.tmp $wd/$feed/$feed.xml
      echo " <item>" >> $wd/$feed/$feed.xml
      echo " <title>"$title" has changed!</title>" >> $wd/$feed/$feed.xml
      echo " <description>"$change"</description>" >> $wd/$feed/$feed.xml
      echo " <link>"$i"</link>" >> $wd/$feed/$feed.xml
      echo " <pubDate>"$(date '+%a, %d %b %Y %H:%M:%S %z')"</pubDate>" >> $wd/$feed/$feed.xml
      echo " </item>" >> $wd/$feed/$feed.xml
      echo "" >> $wd/$feed/$feed.xml
      echo "</channel>" >> $wd/$feed/$feed.xml
      echo "</rss>" >> $wd/$feed/$feed.xml
    fi
    mv $wd/$feed/$feed.fresh $wd/$feed/$feed.stale
  fi
done
for i in $(cat $wd/urls) ; do
...
done
Don't do this: for loops don't loop over lines, they loop over whitespace-separated words (split according to $IFS). Instead use:
while IFS= read -r i ; do
...
done < $wd/urls
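A quick way to see the difference, using a throwaway file:

```shell
#!/bin/sh
f=$(mktemp)
printf '%s\n' 'one two three' > "$f"   # one line, three words

# Word splitting gives three iterations:
for i in $(cat "$f"); do
    printf 'for saw: [%s]\n' "$i"
done
# [one] [two] [three]

# Line-wise reading gives one iteration:
while IFS= read -r i; do
    printf 'while saw: [%s]\n' "$i"
done < "$f"
# [one two three]
```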
wd="~/.scripts/feedgen"
for i in $(cat $wd/urls) ; do
The $wd variable contains the literal text ~/ and this will resolve to the relative path ./~/.scripts/feedgen (a directory literally named ~), which I doubt you want. Don't quote home directory tilde expansions. Better yet, use $HOME, which gets expanded properly even inside double quotes.
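The difference is easy to see directly:

```shell
#!/bin/sh
quoted="~/.scripts/feedgen"      # tilde is literal inside quotes
unquoted=~/.scripts/feedgen      # tilde expands before assignment
with_home="$HOME/.scripts/feedgen"

printf '%s\n' "$quoted"      # prints: ~/.scripts/feedgen
printf '%s\n' "$unquoted"    # prints e.g.: /home/you/.scripts/feedgen
printf '%s\n' "$with_home"   # same value as $unquoted
```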
title=$(curl -s -L $i 2> /dev/null | grep -m 1 title | awk -F \> '{print $2}' | awk -F \< '{print $1}')
This is a lot of chained commands for something awk is not very good at. I would recommend sed here:
title=$(curl -s -L $i 2> /dev/null | sed -nr '/title/{s/.*>(.*?)<.*/\1/p;q}')
But in truth, you should use a real xml parser, like xmllint or xmlstarlet.
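For instance with xmllint from libxml2 (reading stdin here via "-"; curl's output can be piped in the same way):

```shell
#!/bin/sh
# Sample document stands in for the curl output:
html='<html><head><title>Example Feed</title></head><body></body></html>'
# --html enables the forgiving HTML parser; the XPath expression
# returns the text content of the first <title> element.
title=$(printf '%s' "$html" | xmllint --html --xpath 'string(//title)' - 2>/dev/null)
printf '%s\n' "$title"    # Example Feed
```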
if !(find $wd -type d | grep -q $feed) ; then
The !(...) is not needed here, simply use
if ! find $wd -type d | grep -q $feed ; then
You don't need to run the find | grep pipeline inside an *additional* subshell, and it's not part of the grammar of the "if" keyword.
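In fact, since the pipeline only asks "does the feed directory exist yet?", a plain directory test (or just mkdir -p, which is a no-op for an existing directory) avoids both find and grep; a sketch with stand-in values for $wd and $feed:

```shell
#!/bin/sh
wd=$(mktemp -d)     # stand-in for the script's feed directory
feed="SomeFeed"     # stand-in feed name
if [ ! -d "$wd/$feed" ]; then
    mkdir -p "$wd/$feed"    # only taken for a feed we haven't seen yet
fi
[ -d "$wd/$feed" ] && echo "tracked"    # prints: tracked
```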
change=$(diff $wd/$feed/$feed.stale $wd/$feed/$feed.fresh)
if ! test -z "$change" ; then
Since you don't care about creating a patch file, but only wish to assert that the two files are different, consider using "cmp -s" instead of diff.
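cmp exits 0 for identical files and nonzero otherwise, so the whole diff capture collapses to a test:

```shell
#!/bin/sh
a=$(mktemp); b=$(mktemp)
printf 'same\n' > "$a"
printf 'same\n' > "$b"
cmp -s "$a" "$b" && echo "unchanged"    # prints: unchanged
printf 'different\n' > "$b"
cmp -s "$a" "$b" || echo "changed"      # prints: changed
```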
Last edited by eschwartz (2020-06-01 03:39:17)
Managing AUR repos The Right Way -- aurpublish (now a standalone tool)
Performs a full system backup with rsync, either automatically or interactively. It is invoked from a systemd timer in a non-interactive shell. I will be glad to receive any criticism, since this is one of my first shell scripts longer than ten lines.
Style issues I have found but didn't know how to fix:
1. The script calculates the backup_samples count twice:
- at the start to check whether any sample exists to decrypt and unpack it
- in the function "remove_obsolete" after(if) new sample was successfully created
2. I don't know if this is common practice, but I declared the variable "latest" in the function "check_first_run" and used it in "remove_obsolete". This works perfectly here, but in other languages such usage would be impossible, since "latest" would be inaccessible to the latter function "remove_obsolete"
3. Since script is invoked from non-interactive shell in "check_first_run" I have used some kind of a popup terminal window for receiving user password to decrypt latest gpg backup sample. Please, let me know if I could do this in any way better
PS: Probably I should even cut out the whole part that checks for existing backup samples and stop unpacking the latest gpg file. Instead, just keep a mirror of the whole system in "/var/tmp/data". That would drastically reduce backup time: I have a lot of media files (audio/video/pictures), and moving them with rsync every run and then compressing already-compressed video/audio formats, which yields no reduction in the backup's total size, seems like a waste of energy
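On point 3: one alternative to the popup terminal is to feed gpg a passphrase over a file descriptor in batch mode (the passphrase can be obtained however you like, e.g. from systemd-ask-password). A self-contained sketch of the mechanism, using symmetric encryption and throwaway paths/passphrase for brevity:

```shell
#!/bin/sh
# Encrypt and decrypt without any interactive pinentry by passing the
# passphrase on stdin (file descriptor 0).
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
plain=$(mktemp)
echo 'backup payload' > "$plain"

printf 'pw' | gpg --batch --quiet --pinentry-mode loopback \
    --passphrase-fd 0 -c -o "$plain.gpg" "$plain"
printf 'pw' | gpg --batch --quiet --pinentry-mode loopback \
    --passphrase-fd 0 -o "$plain.out" -d "$plain.gpg"

cat "$plain.out"    # backup payload
```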
#!/bin/sh
recepient="assertion9@gmail.com"
dir="/var/tmp/fsbackup"
archive="/var/tmp/fsbackup/backup.tar.zst"
gpg_file="${dir:?}/$(date '+%I-%M%p_%d_%b_%Y').gpg"
MAX_BACKUP_SAMPLES=2
backup_samples=$(find "$dir" -maxdepth 1 -name "*.gpg" | wc -l)
dirs_skip_list="$dir /dev/ /proc/ /sys/ /tmp/ /run/ /mnt/"
get_user_input(){
  message=$1
  printf "%s\n" "$message"
  read -r answer
  # Continue on a y/Y answer, abort the script on anything else
  if [ "$answer" != "${answer#[Yy]}" ]; then
    return 0
  else
    exit
  fi
}
remove_obsolete(){
  backup_samples=$(find "$dir" -maxdepth 1 -name "*.gpg" | wc -l)
  [ "$backup_samples" -ge "$MAX_BACKUP_SAMPLES" ] && rm -f "${oldest:?}"
}
check_first_run(){
  if [ -d "$dir" ] && [ "$backup_samples" -gt 0 ]; then
    samples="$(find "$dir" -maxdepth 1 -type f -name "*.gpg" -printf '%T+ %p\n' | sort | awk '{printf $NF " "}' | sed 's/.$//')"
    latest=${samples##* }
    oldest=${samples%% *}
    urxvt -name popup -e dash -c "gpg --homedir=/home/assertion9/.local/share/gnupg -o $archive -d $latest"
    if [ -f "$archive" ];then
      tar --zstd -xf "$archive"
      rm "$archive"
    else
      su assertion9 -c 'dunstify -u critical -i luckybackup "Bad passphrase! Backup aborted!"'
      exit 1
    fi
  else
    mkdir "$dir"
    chmod 700 "$dir"
  fi
}
backup(){
  src="/"
  dest="/var/tmp/fsbackup/data"
  msg="The full system backup will be created at $dir as a file $gpg_file. Proceed? (y/n)"
  [ "$1" != "auto" ] && get_user_input "$msg"
  su assertion9 -c 'dunstify -u critical -i luckybackup "Starting system backup..."'
  check_first_run
  exclude="$(printf ' --exclude=%s' $dirs_skip_list)"
  rsync -aAX --delete $exclude "$src" "$dest"
  tar --zstd -cf "$archive" "$dest"
  gpg --homedir=/home/assertion9/.local/share/gnupg -o "$gpg_file" -e -r "$recepient" "$archive"
  rm -rf "$dest" "$archive"
  remove_obsolete
  su assertion9 -c 'dunstify -u critical -i luckybackup "System backup finished!"'
}
restore(){
  msg="System will restore data from a chosen gpg backup sample to '/'. Proceed? (y/n)"
  get_user_input "$msg"
  while :
  do
    printf "Enter absolute path of gpg file: "
    read -r gpg_file_path
    if [ -z "$gpg_file_path" ];then
      printf "(!) Empty input (!)\n"
    elif [ -d "$gpg_file_path" ];then
      printf "(!) Input is a directory, not a gpg file (!)\n"
    elif [ -f "$gpg_file_path" ];then
      case "$gpg_file_path" in
        *.gpg )
          # The sample holds a tarball of the backed-up tree, so it has
          # to be decrypted and unpacked before rsync can mirror it back
          gpg --homedir=/home/assertion9/.local/share/gnupg -o "$archive" -d "$gpg_file_path" || exit 1
          tar --zstd -xf "$archive" -C /
          rm "$archive"
          src="/var/tmp/fsbackup/data/"
          dest="/"
          exclude="$(printf ' --exclude=%s' $dirs_skip_list)"
          rsync -aAX --info=progress2 $exclude "$src" "$dest"
          break
          ;;
        *)
          printf "(!) Input isn't a gpg file (!)\n"
          return
          ;;
      esac
    else
      printf "(!) Input isn't a gpg file (!)\n"
    fi
  done
}
case "$1" in
  (auto)
    backup auto
    exit 0
    ;;
  (backup)
    backup
    exit 0
    ;;
  (restore)
    restore
    exit 0
    ;;
  (*)
    echo "Usage: $0 {backup|restore|auto}"
    exit 2
    ;;
esac
Last edited by assertion9 (2020-07-04 12:25:51)
If you have a cluttered $HOME (like I do), this automatic Git updater will, at least, take care of having your local Git stuff up-to-date:
#!/opt/schily/bin/bosh
# Update all Git repositories in/below $HOME.
# ---- VARIABLES
# Contains a list of (substrings of) directories to skip.
BLACKLIST="$HOME/go/ $HOME/Library"
# Contains a list of Git repository directories.
# Currently, this is collected automatically, but it can be a
# manual list as well.
GITDIRS=`find $HOME -maxdepth 3 -name '.?*' -prune -o -type d -call '[ -e "$1/.git" ]' {} \; -prune -print`
# ---- CODE
for DIR in ${GITDIRS}; do
  # Blacklist check:
  allowed=1
  for BAN in ${BLACKLIST}; do
    substrings=`expr "$DIR" : "$BAN"`
    if [ $substrings -ne 0 ] ; then
      # This directory is in/below a blacklisted one.
      allowed=0
      break
    fi
  done
  if [ $allowed -eq 0 ]; then
    continue
  fi
  # This directory is guaranteed not to be blacklisted.
  # Process it.
  cd $DIR
  # Check the remote state:
  git fetch
  UPSTREAM=${1:-'@{u}'}
  LOCAL=`git rev-parse @`
  REMOTE=`git rev-parse "$UPSTREAM"`
  BASE=`git merge-base @ "$UPSTREAM"`
  if [ $LOCAL = $REMOTE ]; then
    # This repository is up-to-date.
    # Do nothing with it.
    continue
  elif [ $LOCAL = $BASE ]; then
    # The remote version is newer.
    echo "Updating Git repository: $DIR"
    git pull
  else
    # Something is different.
    echo "The Git repository in $DIR needs to be merged."
    echo "This cannot be done automatically."
  fi
done
Note that you’ll need to change the GITDIRS call if you’re using a non-Schily POSIX shell.
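For reference, on a find without the -call primitive, something like this should be equivalent (untested sketch; note -maxdepth is a widespread extension rather than strict POSIX):

```shell
#!/bin/sh
# Collect directories up to 3 levels below $HOME that contain a .git
# entry, skipping hidden directories and not descending into found
# repositories.  The -exec sh -c '…' replaces bosh's -call.
GITDIRS=$(find "$HOME" -maxdepth 3 -name '.?*' -prune -o \
    -type d -exec sh -c '[ -e "$1/.git" ]' sh {} \; -prune -print)
printf '%s\n' "$GITDIRS"
```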
Last edited by YesItsMe (2020-07-08 07:18:09)
Here's one I use to move my camera files out of a folder and into parallel folders named from the file dates.
The workflow being something like this:
Copy files from the camera into a folder "Newstuff"
cd to "Newstuff" and run the script
The new files are then moved to date-named folders, parallel to "Newstuff"
It has lots of improvement potential; I don't think it handles files with spaces well, and it could use a nicer confirmation dialogue to avoid calling it in the wrong folder (which can cause a real mess)
#!/bin/bash
group_by_date () {
  for file in `ls`; do
    filedate=`stat -c %y $file | cut -d ' ' -f 1`
    if [ -d ../$filedate ]; then
      mv $file ../$filedate
    else
      mkdir ../$filedate
      mv $file ../$filedate
    fi
  done
}
# Make sure I do this from the correct directory
echo "Do you want to group by date the files in $PWD?"
select yn in "Yes" "No"; do
  case $yn in
    Yes ) group_by_date; break;;
    No ) exit;;
  esac
done
Another way, without parsing ls or using cut:
for file in *; do
filedate=$(date +%F -r "$file")
echo "$filedate"
done
This will do everything in the group_by_date function:
stat -c "%y %n" * | while read date time offset file; do
  mkdir -p ../$date
  mv "$file" ../$date
done
No need to parse ls, and only one subshell and one stat process total, rather than one subshell per file and a stat process per file.
EDIT: changed $date to ../$date as this is intended to be a relative directory.
Last edited by Trilby (2020-07-30 13:45:57)
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
exiftool can also do this; for example:
exiftool '-Directory<DateTimeOriginal' -d %Y/%m/%d dir
Here is a script I use that notifies what's playing through pulseaudio. Works nicely with mpv.
#!/bin/bash
#
# Init variables
#
notification_title="Now Playing"
while [ true ]
do
  #
  # Get the current media.name playing according to pactl
  #
  name=$(pactl list | grep "media\.name.* - " | awk -F "=" '{print substr($2,3,length($2)-3)}')
  #
  # Only display the name if it differs from the previous one
  #
  if [ -n "$name" ] && [ "$name" != "$plays" ]; then
    plays="$name"
    #
    # If X is running, notify, else echo to the terminal
    #
    if [[ $DISPLAY ]]; then
      notify-send "$notification_title" "$plays" > /dev/null 2>&1
    else
      echo "$plays"
    fi
  fi
  #
  # Check every 5 seconds for a changed title playing
  #
  sleep 5
done
Poire, why do you use [ and [[ interchangeably? You have bash as the shebang, so there shouldn't be any need for [. Most notably, the [ true ] test is a complete no-op (apart from the resources it uses) each time through the while loop. Don't test for an infinite loop:
while true
...
There's also no point in piping grep to awk:
name=$(pactl list | awk -F = '/media\.name.* -/ {print ...
You could also skip checking if name is non-empty every time through the loop, simply by setting 'plays' to some arbitrary string before the loop.
I thought the point of piping grep into awk was not actually knowing any awk, giving up on googling "how to X with sed and grep", and using that one line from Stack Overflow
Yeah, pretty much. But in this case it's trivial: line matching is as fundamental an aspect of awk as search and replace is of sed. I may get irked when that's all people know how to do with it, but most people know at least how to do that.
Last edited by Trilby (2020-08-19 18:57:39)
how to X with sed and grep
Without grep …
Inspired by this thread, I thought I'd play around a bit with the AURWeb RPC, curl and jq.
It needs jq, curl and expac.
#!/usr/bin/bash
url='https://aur.archlinux.org/rpc.php'
preamble='/rpc/?v=5&type=info'
packages=$(pacman -Qqm)
searchstring=$(echo '&arg[]='$packages | sed 's/\ /\&arg[]=/g')
searchresult=$(curl -s $url$preamble$searchstring)
for pkg in $packages
do
  remoteversion=$(echo $searchresult | jq --arg PKG "$pkg" '.results[] | select(.Name == $PKG) | .Version' | sed 's/"//g')
  localversion=$(expac %v $pkg)
  if [[ $remoteversion > $localversion ]]
  then
    echo "$pkg needs an update $localversion -> $remoteversion"
  elif [[ $localversion > $remoteversion ]]
  then
    if [[ $pkg =~ "-git" ]]
    then
      echo "$pkg AUR is behind local: $localversion > $remoteversion"
    else
      echo "$pkg AUR is behind local or version string format mismatch! $localversion > $remoteversion"
    fi
  else
    if [[ $pkg =~ "-git" ]]
    then
      echo "$pkg is up-to-date according to the AUR. Check upstream."
    else
      echo "$pkg is up-to-date."
    fi
  fi
done
I tried to include some error handling, but there is no flood control for the response limit yet. This is mostly a POC showing that anybody with an hour to spare can write their own AUR helper.
Last edited by Awebb (2020-08-26 13:12:33)
FYI:
searchstring=$(printf '&arg[]=%s' $packages)
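printf reuses its format string for each remaining argument, which is what makes this work (foo/bar/baz are placeholder package names):

```shell
#!/bin/sh
packages="foo bar baz"
# Left unquoted on purpose: each word becomes a separate printf argument,
# and the format is applied once per argument.
searchstring=$(printf '&arg[]=%s' $packages)
printf '%s\n' "$searchstring"
# prints: &arg[]=foo&arg[]=bar&arg[]=baz
```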
Also, I don't think shell test comparisons will be very robust for version checks.
That is about how my tool written in python works though, where it can use LooseVersion from distutils.version for the checks.
Last edited by Trilby (2020-08-26 13:19:41)
FYI:
searchstring=$(printf '&arg[]=%s' $packages)
That is cool! I need to up my printf game.
Also, I don't think shell test comparisons will be very robust in version checks.
Yes, I was betting on this from "help test"
STRING1 > STRING2
True if STRING1 sorts after STRING2 lexicographically.
I'm aware that this is a hard gamble, because [[ ]] isn't "test". I think I'll burn that particular bridge when I cross it. I probably should write this in Python, but bash is my pocket knife on Linux, so I try to do as much in bash as possible, even if it ends up eating the cat.
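The gamble loses as soon as any version component reaches two digits, because the comparison is character by character; GNU sort -V (or pacman's vercmp) orders versions properly:

```shell
#!/bin/bash
remote="1.10.0"
local_ver="1.9.0"
# Lexicographically '1' < '9', so 1.10.0 sorts *before* 1.9.0:
if [[ $remote > $local_ver ]]; then
    echo "string compare says: update available"
else
    echo "string compare says: up to date"    # wrongly taken
fi
# Version-aware ordering:
newest=$(printf '%s\n' "$remote" "$local_ver" | sort -V | tail -n 1)
echo "newest by sort -V: $newest"             # 1.10.0
```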
Script to change the backlight level:
#!/bin/sh
echo()
{
  printf %s\\n "$*"
}
usage()
{
  echo "Use '${0##*/} up' & '${0##*/} down' to set the backlight level"
}
get_increment()
{
  IFS= read -r max < "${file%/*}"/max_brightness
  increment=$((max/10))
}
get_brightness()
{
  IFS= read -r brightness < "$file"
}
brighter()
{
  for file in /sys/class/backlight/*/brightness; do
    get_increment
    get_brightness
    echo $((brightness+increment)) > "$file"
  done
}
dimmer()
{
  for file in /sys/class/backlight/*/brightness; do
    get_increment
    get_brightness
    echo $((brightness-increment)) > "$file"
  done
}
main()
{
  case "$1" in
    up) brighter ;;
    down) dimmer ;;
    *) usage ;;
  esac
}
main "$1"
Add a line to /etc/sudoers to allow any user to run it from a keybind with sudo (add 'up' & 'down' as arguments to the script):
ALL = (root) NOPASSWD: /full/path/to/script
Any corrections, suggestions or insults are gratefully welcomed
Para todos todo, para nosotros nada
Hi, can you explain this assignment syntax to me?
IFS= read -r brightness < "$file"
Help me to improve ssh-rdp !
Retroarch User? Try my koko-aio shader !
^ It reads the first line of "$file" into the "brightness" variable. The IFS= prefix prevents leading/trailing whitespace from being trimmed, and -r stops backslashes being interpreted as escape characters.
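A small demonstration (the file contents are made up):

```shell
#!/bin/sh
f=$(mktemp)
printf '  955 \\raw  \n' > "$f"
IFS= read -r line < "$f"
printf '[%s]\n' "$line"
# prints: [  955 \raw  ]
# Without IFS= the surrounding blanks would be stripped; without -r
# the backslash would be treated as an escape character.
```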
Is that meant as a joke? You turned what should be about a dozen lines into 50 lines just due to painful abstractions and a redefinition of the shell built-in "echo" ... and you include IFS settings and read's -r when you know the data read will not have separators or escape sequences? Why? This would have the same result:
#!/bin/sh
case "$1" in
  up) sign=1 ;;
  down) sign=-1 ;;
  *) echo "Use '$0 up' & '$0 down' to set the backlight level"; exit ;;
esac
for file in /sys/class/backlight/*; do
  read max < $file/max_brightness
  read cur < $file/brightness
  echo $((cur + sign * max / 10)) > $file/brightness
done
My own version allows for incrementing / decrementing, specifying a percentage, or just checking the current value, with not much more code (and it also checks bounds, which allows incrementing/decrementing all the way to 100% or 0, while your code will fail to do so):
#!/bin/sh
read max < /sys/class/backlight/intel_backlight/max_brightness
read cur < /sys/class/backlight/intel_backlight/brightness
step=80
case $1 in
  +|up) let cur+=$step ;;
  -|down) let cur-=$step ;;
  [0-9]*) cur=$((max * $1 / 100)) ;;
  *) echo $cur; exit ;;
esac
[ $cur -le 0 ] && cur=0
[ $cur -ge $max ] && cur=$max
echo $cur >> /sys/class/backlight/intel_backlight/brightness
I only have one backlight device, so there is no need to loop.
Then there is no reason to require sudo. Add your user to the video group as recommended in the wiki.
Last edited by Trilby (2020-09-20 13:03:52)
Is that meant as a joke?
Well no but I'm glad it amused you :-)
You turned what should be about a dozen lines into 50 lines just due to painful abstractions and a redefinition of the shell built-in "echo"? Why?
Because I'm crap at scripting
Thanks for the superior alternative, it is very much appreciated.
Oh, and I use a function for echo because I don't think it should have options. It's a religious thing.
Oh, and I use a function for echo because I don't think it should have options. It's a religious thing.
Then don't pass options to echo, or just use printf from the start. There's no need for several lines to redefine echo as printf just so you can use echo when you really wanted printf.
And sorry HoaS, my tone was inappropriate. I was really just surprised, as you are an experienced archer and I thought I'd seen content from you before that displayed a capacity and approach to scripting at odds with this script.
Last edited by Trilby (2020-09-20 16:57:10)
I'm strongly in favor of just forgetting "echo" exists. Use printf "%s\n" "message" everywhere. Or define
msg() { printf "%s\n" "$*"; }
which is noteworthy for not calling itself "echo" and thus not having terrible baggage.
I'm not fundamentally against the use of functions, though I don't believe it helps here, but the leaky use of globals doesn't really make a compelling argument in favor of it. Functions should be standalone abstractions. Your get_*() functions should print the values and be used like this:
increment=$(get_increment)
Of course then you realize you don't gain much *and* incur the cost of a subshell, so you might as well not bother.
(makepkg includes some abstraction functions that take, as an argument, the name of a global variable to write to. If you absolutely must work without subshells, this strikes me as a lot conceptually safer than baking in variable names like that.)
inb4 "I know this script and it's really short and won't ever reuse variable names".
Counter: then you don't need convenience functions either.
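In bash specifically, printf -v is a convenient way to get that "write into a named variable" behaviour without a subshell (a sketch only; this is not how makepkg spells it, and get_increment/955 are made-up stand-ins):

```shell
#!/bin/bash
get_increment() {
    # $1 names the variable to store the result in; printf -v writes
    # directly to it, so no command substitution subshell is needed.
    local max=955
    printf -v "$1" '%s' $(( max / 10 ))
}
get_increment step
echo "$step"    # 95
```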