If you haven't seen it before, this is a great sed tutorial. And while it's not the only website that suggests an inverse correlation between the quality of a website's content and its aesthetics, it's certainly a prime example.
Though I've learned that sed just suits me well - it works the way I think.
But in this context, this is arguably an abuse of sed. The whole approach makes some pretty big assumptions about the consistency of the Nat Geo HTML's format. Ideally you should use an HTML, or at least XML, parser to extract tags/properties from the content. Or you could likely use w3m / links to do much of this for you (and these would also take care of text encoding themselves too).
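For comparison, here is a minimal sketch of the parser-based approach using only Python's stdlib html.parser. The Open Graph og:image meta tag and the sample HTML are illustrative assumptions, not taken from the actual Nat Geo page:

```python
from html.parser import HTMLParser

class MetaImageParser(HTMLParser):
    """Collect the content of <meta property="og:image" ...> tags."""

    def __init__(self):
        super().__init__()
        self.image_url = None

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; turn it into a dict
        if tag == "meta":
            attributes = dict(attrs)
            if attributes.get("property") == "og:image":
                self.image_url = attributes.get("content")

parser = MetaImageParser()
parser.feed('<html><head>'
            '<meta property="og:image" content="https://example.com/pic.jpg"/>'
            '</head></html>')
print(parser.image_url)  # https://example.com/pic.jpg
```

Unlike a regex or sed pipeline, this survives attribute reordering, whitespace changes, and self-closing vs. open tags.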
Last edited by Trilby (2022-08-14 02:29:21)
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Got tired of manually performing the same steps on my AUR packages again and again:
0 ✓ rne@thinkpad ~/Projekte/aur/python-rcon $ cat /usr/local/bin/versionbump
#! /bin/bash
test -n "$1" || exit 1
test -f PKGBUILD || exit 2
RELEASE="${2:-1}"
sed -i -e "s/pkgver=.*/pkgver=$1/" PKGBUILD
sed -i -e "s/pkgrel=.*/pkgrel=$RELEASE/" PKGBUILD
makepkg --printsrcinfo > .SRCINFO
git add PKGBUILD .SRCINFO
git commit -m "Update to $1-$RELEASE"
git push
Inofficial first vice president of the Rust Evangelism Strike Force
This script takes two IPv4 addresses as arguments and returns their smallest common subnet, along with some additional information. The output is supposed to resemble that of ipcalc. Criticism and suggestions are most welcome.
#!/bin/bash
USAGE () {
echo "Invalid argument(s)."
echo "Note: This script takes exactly two IPv4 addresses as arguments."
exit 1
}
[[ $# -ne 2 ]] && USAGE
VALIDATE_ARGUMENTS () {
local VALID_PATTERN='^([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+)$'
[[ $SPECIMEN =~ $VALID_PATTERN ]] || USAGE
for OCTET in {1..4} ; do
local OCTET_DEC
OCTET_DEC=$(echo "$SPECIMEN" | cut -d '.' -f "$OCTET")
[[ $OCTET_DEC -gt 255 || $OCTET_DEC -ne 0 && $OCTET_DEC =~ ^0 ]] && USAGE
done
}
SPECIMEN="$1"
VALIDATE_ARGUMENTS "$SPECIMEN"
SPECIMEN="$2"
VALIDATE_ARGUMENTS "$SPECIMEN"
# sort -V orders dotted quads octet by octet (plain sort -n would misorder e.g. 1.10.0.0 vs 1.9.0.0)
ADDRESS_A=$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)
ADDRESS_B=$(printf '%s\n' "$1" "$2" | sort -V | tail -n 1)
OCTET_1A_DEC=$(echo "$ADDRESS_A" | cut -d '.' -f 1)
OCTET_1B_DEC=$(echo "$ADDRESS_A" | cut -d '.' -f 2)
OCTET_1C_DEC=$(echo "$ADDRESS_A" | cut -d '.' -f 3)
OCTET_1D_DEC=$(echo "$ADDRESS_A" | cut -d '.' -f 4)
OCTET_2A_DEC=$(echo "$ADDRESS_B" | cut -d '.' -f 1)
OCTET_2B_DEC=$(echo "$ADDRESS_B" | cut -d '.' -f 2)
OCTET_2C_DEC=$(echo "$ADDRESS_B" | cut -d '.' -f 3)
OCTET_2D_DEC=$(echo "$ADDRESS_B" | cut -d '.' -f 4)
# D2B maps a decimal octet (0-255) to its 8-bit binary string via brace expansion
D2B=({0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1})
OCTET_1A_BIN=${D2B[$OCTET_1A_DEC]}
OCTET_1B_BIN=${D2B[$OCTET_1B_DEC]}
OCTET_1C_BIN=${D2B[$OCTET_1C_DEC]}
OCTET_1D_BIN=${D2B[$OCTET_1D_DEC]}
OCTET_2A_BIN=${D2B[$OCTET_2A_DEC]}
OCTET_2B_BIN=${D2B[$OCTET_2B_DEC]}
OCTET_2C_BIN=${D2B[$OCTET_2C_DEC]}
OCTET_2D_BIN=${D2B[$OCTET_2D_DEC]}
ADDRESS_A_BIN_DOTS="${OCTET_1A_BIN}.${OCTET_1B_BIN}.${OCTET_1C_BIN}.${OCTET_1D_BIN}"
ADDRESS_B_BIN_DOTS="${OCTET_2A_BIN}.${OCTET_2B_BIN}.${OCTET_2C_BIN}.${OCTET_2D_BIN}"
ADDRESS_A_BIN=$(echo "$ADDRESS_A_BIN_DOTS" | tr -d .)
ADDRESS_B_BIN=$(echo "$ADDRESS_B_BIN_DOTS" | tr -d .)
for BIT in {1..32} ; do
CIDR=$((32-BIT))
[ "${ADDRESS_A_BIN::-$BIT}" == "${ADDRESS_B_BIN::-$BIT}" ] && break
done
case $CIDR in
0) NETMASK=("0.0.0.0" "255.255.255.255" "4,294,967,294") ;;
1) NETMASK=("128.0.0.0" "127.255.255.255" "2,147,483,646") ;;
2) NETMASK=("192.0.0.0" "63.255.255.255" "1,073,741,822") ;;
3) NETMASK=("224.0.0.0" "31.255.255.255" "536,870,910") ;;
4) NETMASK=("240.0.0.0" "15.255.255.255" "268,435,454") ;;
5) NETMASK=("248.0.0.0" "7.255.255.255" "134,217,726") ;;
6) NETMASK=("252.0.0.0" "3.255.255.255" "67,108,862") ;;
7) NETMASK=("254.0.0.0" "1.255.255.255" "33,554,430") ;;
8) NETMASK=("255.0.0.0" "0.255.255.255" "16,777,214") ;;
9) NETMASK=("255.128.0.0" "0.127.255.255" "8,388,606") ;;
10) NETMASK=("255.192.0.0" "0.63.255.255" "4,194,302") ;;
11) NETMASK=("255.224.0.0" "0.31.255.255" "2,097,150") ;;
12) NETMASK=("255.240.0.0" "0.15.255.255" "1,048,574") ;;
13) NETMASK=("255.248.0.0" "0.7.255.255" "524,286") ;;
14) NETMASK=("255.252.0.0" "0.3.255.255" "262,142") ;;
15) NETMASK=("255.254.0.0" "0.1.255.255" "131,070") ;;
16) NETMASK=("255.255.0.0" "0.0.255.255" "65,534") ;;
17) NETMASK=("255.255.128.0" "0.0.127.255" "32,766") ;;
18) NETMASK=("255.255.192.0" "0.0.63.255" "16,382") ;;
19) NETMASK=("255.255.224.0" "0.0.31.255" "8,190") ;;
20) NETMASK=("255.255.240.0" "0.0.15.255" "4,094") ;;
21) NETMASK=("255.255.248.0" "0.0.7.255" "2,046") ;;
22) NETMASK=("255.255.252.0" "0.0.3.255" "1,022") ;;
23) NETMASK=("255.255.254.0" "0.0.1.255" "510") ;;
24) NETMASK=("255.255.255.0" "0.0.0.255" "254") ;;
25) NETMASK=("255.255.255.128" "0.0.0.127" "126") ;;
26) NETMASK=("255.255.255.192" "0.0.0.63" "62") ;;
27) NETMASK=("255.255.255.224" "0.0.0.31" "30") ;;
28) NETMASK=("255.255.255.240" "0.0.0.15" "14") ;;
29) NETMASK=("255.255.255.248" "0.0.0.7" "6") ;;
30) NETMASK=("255.255.255.252" "0.0.0.3" "2") ;;
31) CIDR=30
NETMASK=("255.255.255.252" "0.0.0.3" "2") ;;
esac
NETMASK_OCTET_1=$(echo "${NETMASK[0]}" | cut -d '.' -f 1)
NETMASK_OCTET_2=$(echo "${NETMASK[0]}" | cut -d '.' -f 2)
NETMASK_OCTET_3=$(echo "${NETMASK[0]}" | cut -d '.' -f 3)
NETMASK_OCTET_4=$(echo "${NETMASK[0]}" | cut -d '.' -f 4)
NETMASK_OCTET_1_BIN=${D2B[$NETMASK_OCTET_1]}
NETMASK_OCTET_2_BIN=${D2B[$NETMASK_OCTET_2]}
NETMASK_OCTET_3_BIN=${D2B[$NETMASK_OCTET_3]}
NETMASK_OCTET_4_BIN=${D2B[$NETMASK_OCTET_4]}
NETMASK_BIN_DOTS="$NETMASK_OCTET_1_BIN.$NETMASK_OCTET_2_BIN.$NETMASK_OCTET_3_BIN.$NETMASK_OCTET_4_BIN"
WILDCARD_OCTET_1=$(echo "${NETMASK[1]}" | cut -d '.' -f 1)
WILDCARD_OCTET_2=$(echo "${NETMASK[1]}" | cut -d '.' -f 2)
WILDCARD_OCTET_3=$(echo "${NETMASK[1]}" | cut -d '.' -f 3)
WILDCARD_OCTET_4=$(echo "${NETMASK[1]}" | cut -d '.' -f 4)
WILDCARD_OCTET_1_BIN=${D2B[$WILDCARD_OCTET_1]}
WILDCARD_OCTET_2_BIN=${D2B[$WILDCARD_OCTET_2]}
WILDCARD_OCTET_3_BIN=${D2B[$WILDCARD_OCTET_3]}
WILDCARD_OCTET_4_BIN=${D2B[$WILDCARD_OCTET_4]}
WILDCARD_BIN_DOTS="$WILDCARD_OCTET_1_BIN.$WILDCARD_OCTET_2_BIN.$WILDCARD_OCTET_3_BIN.$WILDCARD_OCTET_4_BIN"
NETWORK_ADDRESS=$(printf "%d.%d.%d.%d" "$((OCTET_1A_DEC & NETMASK_OCTET_1))" "$((OCTET_1B_DEC & NETMASK_OCTET_2))" "$((OCTET_1C_DEC & NETMASK_OCTET_3))" "$((OCTET_1D_DEC & NETMASK_OCTET_4))")
BROADCAST_ADDRESS=$(printf "%d.%d.%d.%d" "$((OCTET_1A_DEC & NETMASK_OCTET_1 | 255-NETMASK_OCTET_1))" "$((OCTET_1B_DEC & NETMASK_OCTET_2 | 255-NETMASK_OCTET_2))" "$((OCTET_1C_DEC & NETMASK_OCTET_3 | 255-NETMASK_OCTET_3))" "$((OCTET_1D_DEC & NETMASK_OCTET_4 | 255-NETMASK_OCTET_4))")
NETWORK_OCTET_1=$(echo "$NETWORK_ADDRESS" | cut -d '.' -f 1)
NETWORK_OCTET_2=$(echo "$NETWORK_ADDRESS" | cut -d '.' -f 2)
NETWORK_OCTET_3=$(echo "$NETWORK_ADDRESS" | cut -d '.' -f 3)
NETWORK_OCTET_4=$(echo "$NETWORK_ADDRESS" | cut -d '.' -f 4)
NETWORK_OCTET_1_BIN=${D2B[$NETWORK_OCTET_1]}
NETWORK_OCTET_2_BIN=${D2B[$NETWORK_OCTET_2]}
NETWORK_OCTET_3_BIN=${D2B[$NETWORK_OCTET_3]}
NETWORK_OCTET_4_BIN=${D2B[$NETWORK_OCTET_4]}
NETWORK_BIN_DOTS="$NETWORK_OCTET_1_BIN.$NETWORK_OCTET_2_BIN.$NETWORK_OCTET_3_BIN.$NETWORK_OCTET_4_BIN"
BROADCAST_OCTET_1=$(echo "$BROADCAST_ADDRESS" | cut -d '.' -f 1)
BROADCAST_OCTET_2=$(echo "$BROADCAST_ADDRESS" | cut -d '.' -f 2)
BROADCAST_OCTET_3=$(echo "$BROADCAST_ADDRESS" | cut -d '.' -f 3)
BROADCAST_OCTET_4=$(echo "$BROADCAST_ADDRESS" | cut -d '.' -f 4)
BROADCAST_OCTET_1_BIN=${D2B[$BROADCAST_OCTET_1]}
BROADCAST_OCTET_2_BIN=${D2B[$BROADCAST_OCTET_2]}
BROADCAST_OCTET_3_BIN=${D2B[$BROADCAST_OCTET_3]}
BROADCAST_OCTET_4_BIN=${D2B[$BROADCAST_OCTET_4]}
BROADCAST_BIN_DOTS="$BROADCAST_OCTET_1_BIN.$BROADCAST_OCTET_2_BIN.$BROADCAST_OCTET_3_BIN.$BROADCAST_OCTET_4_BIN"
ADDRESS_A_CLASS="Public internet"
[[ $OCTET_1A_DEC -eq 0 ]] && ADDRESS_A_CLASS="IANA special use (RFC 791)"
[[ $OCTET_1A_DEC -eq 10 ]] && ADDRESS_A_CLASS="Private internet, Class A (RFC 1918)"
[[ $OCTET_1A_DEC -eq 100 && $OCTET_1B_DEC -ge 64 && $OCTET_1B_DEC -le 127 ]] && ADDRESS_A_CLASS="IANA special use (RFC 6598)"
[[ $OCTET_1A_DEC -eq 127 ]] && ADDRESS_A_CLASS="Loopback address (RFC 1122)"
[[ $OCTET_1A_DEC -eq 172 && $OCTET_1B_DEC -ge 16 && $OCTET_1B_DEC -le 31 ]] && ADDRESS_A_CLASS="Private internet, Class B (RFC 1918)"
[[ $OCTET_1A_DEC -eq 169 && $OCTET_1B_DEC -eq 254 ]] && ADDRESS_A_CLASS="Link local address (RFC 3927)"
[[ $OCTET_1A_DEC -eq 192 && $OCTET_1B_DEC -eq 0 && $OCTET_1C_DEC -eq 0 ]] && ADDRESS_A_CLASS="IANA special use (RFC 6890)"
[[ $OCTET_1A_DEC -eq 192 && $OCTET_1B_DEC -eq 168 ]] && ADDRESS_A_CLASS="Private internet, Class C (RFC 1918)"
[[ $OCTET_1A_DEC -eq 192 && $OCTET_1B_DEC -eq 0 && $OCTET_1C_DEC -eq 0 && $OCTET_1D_DEC -le 7 ]] && ADDRESS_A_CLASS="IANA special use (RFC 7335)"
[[ $OCTET_1A_DEC -eq 192 && $OCTET_1B_DEC -eq 0 && $OCTET_1C_DEC -eq 2 ]] && ADDRESS_A_CLASS="IANA special use (RFC 5737)"
[[ $OCTET_1A_DEC -eq 192 && $OCTET_1B_DEC -eq 31 && $OCTET_1C_DEC -eq 196 ]] && ADDRESS_A_CLASS="IANA special use (RFC 7535)"
[[ $OCTET_1A_DEC -eq 192 && $OCTET_1B_DEC -eq 52 && $OCTET_1C_DEC -eq 193 ]] && ADDRESS_A_CLASS="IANA special use (RFC 7450)"
[[ $OCTET_1A_DEC -eq 192 && $OCTET_1B_DEC -eq 88 && $OCTET_1C_DEC -eq 99 ]] && ADDRESS_A_CLASS="6to4 relay anycast (RFC 7450) [Deprecated]"
[[ $OCTET_1A_DEC -eq 192 && $OCTET_1B_DEC -eq 175 && $OCTET_1C_DEC -eq 48 ]] && ADDRESS_A_CLASS="IANA special use (RFC 7534)"
[[ $OCTET_1A_DEC -eq 198 && $OCTET_1B_DEC -ge 18 && $OCTET_1B_DEC -le 19 ]] && ADDRESS_A_CLASS="IANA special use (RFC 2544)"
[[ $OCTET_1A_DEC -eq 198 && $OCTET_1B_DEC -eq 51 && $OCTET_1C_DEC -eq 100 ]] && ADDRESS_A_CLASS="IANA special use (RFC 5737)"
[[ $OCTET_1A_DEC -eq 203 && $OCTET_1B_DEC -eq 0 && $OCTET_1C_DEC -eq 113 ]] && ADDRESS_A_CLASS="IANA special use (RFC 5737)"
[[ $OCTET_1A_DEC -ge 240 && $OCTET_1A_DEC -le 255 ]] && ADDRESS_A_CLASS="IANA reserved (RFC 1112)"
[[ $ADDRESS_A == "192.0.0.8" ]] && ADDRESS_A_CLASS="IANA special use (RFC 7600)"
[[ $ADDRESS_A == "192.0.0.9" ]] && ADDRESS_A_CLASS="IANA special use (RFC 7723)"
[[ $ADDRESS_A == "192.0.0.10" ]] && ADDRESS_A_CLASS="IANA special use (RFC 8155)"
[[ $ADDRESS_A == "192.0.0.170" ]] && ADDRESS_A_CLASS="IANA special use (RFC 8880)"
[[ $ADDRESS_A == "192.0.0.171" ]] && ADDRESS_A_CLASS="IANA special use (RFC 7050)"
[[ $ADDRESS_A == "255.255.255.255" ]] && ADDRESS_A_CLASS="Limited broadcast (RFC 919 / RFC 8190)"
ADDRESS_B_CLASS="Public internet"
[[ $OCTET_2A_DEC -eq 0 ]] && ADDRESS_B_CLASS="IANA special use (RFC 791)"
[[ $OCTET_2A_DEC -eq 10 ]] && ADDRESS_B_CLASS="Private internet, Class A (RFC 1918)"
[[ $OCTET_2A_DEC -eq 100 && $OCTET_2B_DEC -ge 64 && $OCTET_2B_DEC -le 127 ]] && ADDRESS_B_CLASS="IANA special use (RFC 6598)"
[[ $OCTET_2A_DEC -eq 127 ]] && ADDRESS_B_CLASS="Loopback address (RFC 1122)"
[[ $OCTET_2A_DEC -eq 172 && $OCTET_2B_DEC -ge 16 && $OCTET_2B_DEC -le 31 ]] && ADDRESS_B_CLASS="Private internet, Class B (RFC 1918)"
[[ $OCTET_2A_DEC -eq 169 && $OCTET_2B_DEC -eq 254 ]] && ADDRESS_B_CLASS="Link local address (RFC 3927)"
[[ $OCTET_2A_DEC -eq 192 && $OCTET_2B_DEC -eq 0 && $OCTET_2C_DEC -eq 0 ]] && ADDRESS_B_CLASS="IANA special use (RFC 6890)"
[[ $OCTET_2A_DEC -eq 192 && $OCTET_2B_DEC -eq 168 ]] && ADDRESS_B_CLASS="Private internet, Class C (RFC 1918)"
[[ $OCTET_2A_DEC -eq 192 && $OCTET_2B_DEC -eq 0 && $OCTET_2C_DEC -eq 0 && $OCTET_2D_DEC -le 7 ]] && ADDRESS_B_CLASS="IANA special use (RFC 7335)"
[[ $OCTET_2A_DEC -eq 192 && $OCTET_2B_DEC -eq 0 && $OCTET_2C_DEC -eq 2 ]] && ADDRESS_B_CLASS="IANA special use (RFC 5737)"
[[ $OCTET_2A_DEC -eq 192 && $OCTET_2B_DEC -eq 31 && $OCTET_2C_DEC -eq 196 ]] && ADDRESS_B_CLASS="IANA special use (RFC 7535)"
[[ $OCTET_2A_DEC -eq 192 && $OCTET_2B_DEC -eq 52 && $OCTET_2C_DEC -eq 193 ]] && ADDRESS_B_CLASS="IANA special use (RFC 7450)"
[[ $OCTET_2A_DEC -eq 192 && $OCTET_2B_DEC -eq 88 && $OCTET_2C_DEC -eq 99 ]] && ADDRESS_B_CLASS="6to4 relay anycast (RFC 7450) [Deprecated]"
[[ $OCTET_2A_DEC -eq 192 && $OCTET_2B_DEC -eq 175 && $OCTET_2C_DEC -eq 48 ]] && ADDRESS_B_CLASS="IANA special use (RFC 7534)"
[[ $OCTET_2A_DEC -eq 198 && $OCTET_2B_DEC -ge 18 && $OCTET_2B_DEC -le 19 ]] && ADDRESS_B_CLASS="IANA special use (RFC 2544)"
[[ $OCTET_2A_DEC -eq 198 && $OCTET_2B_DEC -eq 51 && $OCTET_2C_DEC -eq 100 ]] && ADDRESS_B_CLASS="IANA special use (RFC 5737)"
[[ $OCTET_2A_DEC -eq 203 && $OCTET_2B_DEC -eq 0 && $OCTET_2C_DEC -eq 113 ]] && ADDRESS_B_CLASS="IANA special use (RFC 5737)"
[[ $OCTET_2A_DEC -ge 240 && $OCTET_2A_DEC -le 255 ]] && ADDRESS_B_CLASS="IANA reserved (RFC 1112)"
[[ $ADDRESS_B == "192.0.0.8" ]] && ADDRESS_B_CLASS="IANA special use (RFC 7600)"
[[ $ADDRESS_B == "192.0.0.9" ]] && ADDRESS_B_CLASS="IANA special use (RFC 7723)"
[[ $ADDRESS_B == "192.0.0.10" ]] && ADDRESS_B_CLASS="IANA special use (RFC 8155)"
[[ $ADDRESS_B == "192.0.0.170" ]] && ADDRESS_B_CLASS="IANA special use (RFC 8880)"
[[ $ADDRESS_B == "192.0.0.171" ]] && ADDRESS_B_CLASS="IANA special use (RFC 7050)"
[[ $ADDRESS_B == "255.255.255.255" ]] && ADDRESS_B_CLASS="Limited broadcast (RFC 919 / RFC 8190)"
if [ "$1" == "$2" ] ; then
echo -e '\n'
printf "%12s %-20s %38s\n" "Address:" "$ADDRESS_A" "$ADDRESS_A_BIN_DOTS"
printf "%12s %s\n" "" "--> $ADDRESS_A_CLASS"
echo -e '\n'
printf "%12s %-20s %38s\n" "Netmask:" "255.255.255.255 = 32" "11111111.11111111.11111111.11111111"
printf "%12s %-20s %38s\n" "Wildcard:" "0.0.0.0" "00000000.00000000.00000000.00000000"
printf "%12s %-20s %38s\n" "Network:" "$1" "$NETWORK_BIN_DOTS"
printf "%12s %-20s\n" "Max hosts:" "1"
exit
fi
echo -e '\n'
printf "%12s %-20s %38s\n" "Address A:" "$ADDRESS_A" "$ADDRESS_A_BIN_DOTS"
printf "%12s %s\n" "" "--> $ADDRESS_A_CLASS"
echo -e '\n'
printf "%12s %-20s %38s\n" "Address B:" "$ADDRESS_B" "$ADDRESS_B_BIN_DOTS"
printf "%12s %s\n" "" "--> $ADDRESS_B_CLASS"
echo -e '\n'
printf "%12s %-20s %38s\n" "Netmask:" "${NETMASK[0]} = $CIDR" "$NETMASK_BIN_DOTS"
printf "%12s %-20s %38s\n" "Wildcard:" "${NETMASK[1]}" "$WILDCARD_BIN_DOTS"
printf "%12s %-20s %38s\n" "Network:" "$NETWORK_ADDRESS" "$NETWORK_BIN_DOTS"
printf "%12s %-20s %38s\n" "Broadcast:" "$BROADCAST_ADDRESS" "$BROADCAST_BIN_DOTS"
printf "%12s %-20s\n" "Max hosts:" "${NETMASK[2]}"
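Not a replacement for the script, just a cross-check: the stdlib ipaddress module can compute the same smallest common subnet in a few lines of Python (a sketch; the sample addresses are made up):

```python
import ipaddress

def smallest_common_subnet(a: str, b: str) -> ipaddress.IPv4Network:
    """Return the smallest subnet that contains both addresses."""
    x = int(ipaddress.IPv4Address(a))
    y = int(ipaddress.IPv4Address(b))
    prefix = 32
    while x != y:          # drop host bits until both share the same prefix
        x >>= 1
        y >>= 1
        prefix -= 1
    # shift back to get the network address with host bits zeroed
    return ipaddress.IPv4Network((x << (32 - prefix), prefix))

net = smallest_common_subnet("192.168.1.17", "192.168.2.4")
print(net, net.netmask, net.num_addresses - 2)  # 192.168.0.0/22 255.255.252.0 1022
```

The netmask, wildcard and host-count table then falls out of the IPv4Network object for free.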
Edit: Renamed one variable more sensibly and heeded some shellcheck advice.
Last edited by salonkommunist (2022-08-22 16:55:04)
Need to play a sound in your bash script? Why not embed the sound as text in the script itself?
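One way to do that, sketched in Python: base64-encode the audio into the source and decode it at runtime. The placeholder data and the aplay player are assumptions for illustration, not from the original post:

```python
import base64
import subprocess
import tempfile

# A tiny stand-in for real audio data; in practice you would paste the
# output of `base64 sound.wav` into the script as one long string.
SOUND_B64 = base64.b64encode(b"RIFF....fake-wav-data").decode()

def play() -> None:
    """Decode the embedded data and hand it to an external player."""
    data = base64.b64decode(SOUND_B64)
    with tempfile.NamedTemporaryFile(suffix=".wav") as f:
        f.write(data)
        f.flush()
        # aplay is an assumption; any player that takes a file path works
        subprocess.run(["aplay", f.name], check=False)
```

The same trick works in bash with `base64 -d <<< "$SOUND_B64" | aplay -`.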
Pacman wrapper to search & install / remove packages, using fzf.
To install
#!/bin/bash
if [[ -n "$1" ]]; then
doas pacman -S "$@" && exit
else
echo -e '\e[1;37m[PACMAN] \e[1;32mInstall new packages (TAB to select, ENTER to install, PREVIEW-WINDOW: ?- toggle, shift+up/down- movement)\e[0m';\
# apk list | sed 's/-[0-9].*//' |\
pacman -Ssq |\
fzf -e --multi --preview='pacman -Si {1}' --reverse --info=inline --height='80%' \
--color='hl:148,hl+:154,pointer:032,marker:010,bg+:237,gutter:008' \
--prompt='> ' --pointer='▶' --marker='✓' \
--bind '?:toggle-preview' \
--bind 'shift-up:preview-up' \
--bind 'shift-down:preview-down' \
--bind 'ctrl-a:select-all' |\
xargs -ro doas pacman -S
fi
To remove
#!/bin/bash
if [[ -n "$1" ]]; then
doas pacman -Rns "$@" && exit
else
echo -e '\e[1;37m[PACMAN] \e[1;35mRemove packages (TAB to select, ENTER to remove, PREVIEW-WINDOW: ?- toggle, shift+up/down- movement)\e[0m';\
pacman -Qq |\
fzf -e --multi --preview='pacman -Qi {1}' --reverse --info=inline --height='80%' \
--color='hl:148,hl+:154,pointer:032,marker:010,bg+:237,gutter:008' \
--prompt='> ' --pointer='▶' --marker='✓' \
--bind '?:toggle-preview' \
--bind 'shift-up:preview-up' \
--bind 'shift-down:preview-down' \
--bind 'ctrl-a:select-all' |\
xargs -ro doas pacman -Rns
fi
Note: the original idea is already in the wiki; I have added some colors and keybindings.
Last edited by Docbroke (2022-09-18 19:15:17)
Arch is home!
https://github.com/Docbroke
@docbroke made a similar thing in https://git.jfischer.org:444/xeruf/dotf … shell/arch called `yas`/`yar` using `yay`.
As for AUR maintenance helpers (@schard): https://git.jfischer.org:444/xeruf/dotf … ts/git-aur
My dotfiles have thousands of lines full of gems, I should really repackage parts of them...
As for AUR maintenance helpers (@schard):
Which runs:
find ... -print -exec sudo rm -rI {} +;;
Just run git clean -xf.
Last edited by Alad (2022-09-27 05:44:37)
Mods are just community members who have the occasionally necessary option to move threads around and edit posts. -- Trilby
As for AUR maintenance helpers (@schard): https://git.jfischer.org:444/xeruf/dotf … ts/git-aur
This website is expired!
My dotfiles have thousands of lines full of gems, I should really repackage parts of them...
Please do!
Change directory in Krusader using fzf
Script:
#!/bin/bash
# prerequisites: xdotool, fzf, fd, tree (used by the fzf preview)
cd /
DIR=$(fd -t d -a | fzf --preview="tree -C -L 1 {}" --bind="space:toggle-preview")
[[ -z $DIR ]] && exit
WIN=$(xdotool search --limit 1 --onlyvisible --class Krusader)
xdotool windowactivate "$WIN" && xdotool key --window "$WIN" ctrl+l
xdotool windowactivate "$WIN" && xdotool type --window "$WIN" "$DIR"
xdotool windowactivate "$WIN" && xdotool key --window "$WIN" KP_Enter
exit
Define a UserAction in Krusader to call the script.
Last edited by jaywk (2022-10-27 15:52:05)
Inspired by this discussion:
#! /usr/bin/env python3
"""Encode arbitrary strings with ANSI background coloring."""
from argparse import ArgumentParser, Namespace
from sys import stdin, stdout
def get_args(description: str = __doc__) -> Namespace:
    """Parse command line arguments."""
    parser = ArgumentParser(description=description)
    parser.add_argument('-d', '--decode', action='store_true')
    return parser.parse_args()

def color_code(data: bytes, *, end: str = '\x1b[0m') -> str:
    """Color code the bytes."""
    return ''.join(
        f'\x1b[{int(octet) + 40}m ' for octet
        in reversed(oct(int.from_bytes(data, 'little'))[2:])
    ) + end

def decode_colors(text: str) -> bytes:
    """Decode a color code."""
    return (i := int(''.join(
        str(int(code) - 40) for code
        in reversed(text.replace('\x1b[', '').replace('m', '').split()[:-1])
    ), 8)).to_bytes(i.bit_length() // 8 + bool(i.bit_length() % 8), 'little')

def main() -> None:
    """Run the script."""
    args = get_args()
    if args.decode:
        for line in stdin:
            stdout.buffer.write(decode_colors(line))
        stdout.flush()
    else:
        print(color_code(stdin.buffer.read()))

if __name__ == '__main__':
    main()
Update
Now also available in Rust: https://github.com/conqp/color-code
Last edited by schard (2022-11-03 20:27:16)
Inofficial first vice president of the Rust Evangelism Strike Force
Inspired by this discussion...
Nice. How about a hexdump / sed version that's as ugly as it is tiny:
#!/bin/sh
hexdump -bve '/1 "|%0.3o|\n"' | sed -n '
/|[0-9]\+|/ { s/|\(.\)\(.\)\(.\).*/^[[4\1m ^[[4\2m ^[[4\3m /;H; }
$ { g;s/\n//g;s/$/^[[0m/;p; }
'
If anyone knows how to get hexdump to not print offsets this could be simplified a fair bit.
EDIT: escape codes don't copy-paste cleanly ... standby. EDIT 2: I added their representation back in, but this can't just be copied and used as is. The "^[" sequences need to be replaced by actual escapes (e.g., in vim, type "ctrl-v esc" to insert these). As an interesting irony, I could feed this script to itself and post a screenshot of the resulting image, which would represent the actual content of the script, escape codes and all, less ambiguously than the text above does! New forum rule: never post an image of text ... unless it's an image of octal-color-coded blocks.
And also a C version:
#include <stdio.h>
int main(void) {
    int c;
    while ((c = getchar()) != EOF)
        printf("\033[4%hhum \033[4%hhum \033[4%hhum ", c & 7, (c >> 3) & 7, (c >> 6) & 7);
    printf("\033[0m\n");
    return 0;
}
Last edited by Trilby (2022-11-03 14:59:33)
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Finally got around to trying to improve on my fumbling about in bash to download Nat Geo picture of the day.
Took Trilby's advice to use an actual parser, and did it in python which I'm currently learning:
#!/usr/bin/env python3
from bs4 import BeautifulSoup
from datetime import date
from pathlib import Path
from requests import get

def get_image_url(soup: BeautifulSoup) -> str:
    """Find image link tag and pull out url contained in "content" attribute"""
    image_tag = soup.find("meta", property="og:image")
    return image_tag["content"]

def get_image_name(url: str) -> str:
    """Split image url into list of strings by path separator and return last element"""
    return url.split("/")[-1]

def get_description(soup: BeautifulSoup) -> str:
    """Find description tag and pull out description text in "content" attribute"""
    description_tag = soup.find("meta", attrs={"name": "description"})
    return description_tag["content"]

def newest_file(path: Path, pattern: str = "*.jpg") -> Path:
    """Return newest jpg in passed path; if no files match, return passed path"""
    files = path.glob(pattern)
    return max(files, key=lambda x: x.stat().st_ctime, default=path)

BASE_PATH = Path("/home/ghost/Pictures/NGWallpapers/")
INDEX_FILE = "image-details.txt"
URL = "https://www.nationalgeographic.com/photo-of-the-day"

soup = BeautifulSoup(get(URL).text, "html.parser")
today = date.today()
image_url = get_image_url(soup)
remote_ngpotd = get_image_name(image_url)
newest_local_ngpotd = newest_file(BASE_PATH)
description_text = get_description(soup)

if newest_local_ngpotd is BASE_PATH or newest_local_ngpotd.name != remote_ngpotd:
    response = get(image_url)
    open(BASE_PATH.joinpath(remote_ngpotd), "wb").write(response.content)
    print(f"{remote_ngpotd} downloaded successfully")
    f = open(BASE_PATH.joinpath(INDEX_FILE), "a")
    f.write(f"{today}: {description_text}\n")
    f.close()
else:
    print(f"{remote_ngpotd} already exists!")
Any feedback is certainly welcomed.
Last edited by CarbonChauvinist (2022-11-10 05:33:49)
"the wind-blown way, wanna win? don't play"
Finally got around to trying to improve on my fumbling about in bash to download Nat Geo picture of the day.
Took Trilby's advice to use an actual parser, and did it in python which I'm currently learning:
...
Any feedback is certainly welcomed.
Cool script, thanks for sharing! I noticed that the script will fail if `BASE_PATH` does not contain any `.jpg` files. The issue is in the function `newest_file()`:
def newest_file(path: Path, pattern: str = "*.jpg") -> Path:
    """Return newest jpg in passed path"""
    files = path.glob(pattern)
    return max(files, key=lambda x: x.stat().st_ctime)
The error:
❯ ./sss-Nat-Geo-POD
Traceback (most recent call last):
File "sss-Nat-Geo-POD", line 45, in <module>
newest_local_ngpotd = newest_file(BASE_PATH)
File "sss-Nat-Geo-POD", line 33, in newest_file
return max(files, key=lambda x: x.stat().st_ctime)
ValueError: max() arg is an empty sequence
The fix would be to give a default argument to max():
return max(files, key=lambda x: x.stat().st_ctime, default=None)
Note that "None" may not be a good default; perhaps a newly-constructed empty Path would be better - but I don't have much experience with pathlib, e.g.:
return max(files, key=lambda x: x.stat().st_ctime, default=Path())
Last edited by Trilby (2022-11-09 13:51:10)
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Wow, thanks for the feedback! I didn't really contemplate someone else actually using the script, tbh, so I didn't make it that robust with exception handling etc. On my system the path would never be without a jpg, and on any new system that would only be true the first time the script runs.
Trilby's suggestion I think is best -- use a default value in max() (Python 3.4+ only, but this is Arch after all, so that's not a problem). I decided to just use the same path that was passed to the function rather than a newly constructed path, though.
def newest_file(path: Path, pattern: str = "*.jpg") -> Path:
    """Return newest jpg in passed path"""
    files = path.glob(pattern)
    return max(files, key=lambda x: x.stat().st_ctime, default=path)
and then added as another test in the if clause:
if newest_local_ngpotd is BASE_PATH or newest_local_ngpotd.name != remote_ngpotd:
I've edited my earlier post to reflect these changes.
"the wind-blown way, wanna win? don't play"
AUR helpers are often mentioned, but how they resolve dependencies is usually left in the dark. Here's a basic example which, despite the bulk of the functionality being under 40 lines, handles split packages and nested dependencies. It also caches dependencies to reduce load on the AUR.
#!/usr/bin/python3
import json
import requests
import re
import logging
import sys
from urllib.parse import quote
from copy import copy
def aurjson(pkgs):
    endpoint = 'https://aur.archlinux.org/rpc/v5/info'
    payload = {'arg[]': [quote(x) for x in pkgs]}
    response = requests.post(endpoint, data=payload)
    logging.debug(pkgs)
    if len(pkgs) == 0:
        return []
    if response.status_code == 200:
        response_dict = json.loads(response.text)
        if response_dict['type'] == 'error':
            logging.error(response_dict['error'])
            raise RuntimeError('response error')
        return response_dict['results']
    response.raise_for_status()

def depends(pkgs, types, max_req=30):
    depends = copy(pkgs)
    cache = {}  # cache to avoid duplicate entries
    reqby = {}  # reverse depends (parent nodes in dependency graph)
    for a in range(0, max_req):
        level = aurjson(depends)
        if not len(level):
            break  # no results
        depends.clear()
        for node in level:  # iterate over array of dicts
            cache[node['Name']] = copy(node)
            for dtype in types:
                if dtype not in node:
                    continue  # no dependency of this type
                for spec in node[dtype]:
                    # split versioned dependency
                    nver = re.split(r'<=|>=|<|=|>', spec)
                    dep = nver[0]
                    ver = nver[1] if len(nver) == 2 else None
                    # populate reverse depends
                    if dep in reqby:
                        reqby[dep].append(node['Name'])
                    else:
                        reqby[dep] = [node['Name']]
                    # check for cache hits
                    if dep in cache:
                        continue
                    depends.append(dep)
                    # mark as incomplete (dep retrieved in next step or repo package)
                    cache[dep] = None
    return cache, reqby

if __name__ == "__main__":
    logging.basicConfig(stream=sys.stderr, level=logging.ERROR)
    types = ['Depends', 'MakeDepends']  # modify to suit
    results, reqby = depends(sys.argv[1:], types)  # sys.argv[1:] skips the script name itself
    # From here onwards, you can do anything with the results.
    # The below gives dependency <-> package pairs suitable for tsort(1)
    for dep in reqby:
        for pkg in reqby[dep]:
            # print(dep, pkg)  # to print everything
            if results[dep] is not None:
                # print(dep, pkg)  # to only print AUR targets
                print(results[dep]['PackageBase'], results[pkg]['PackageBase'])  # as above, but AUR pkgbase instead of pkgname
The "required by" dictionary is optional, but makes it possible to e.g. filter out AUR dependencies that are already provided by some other repository package.
Side-note: AUR helpers tend to use much more complicated logic than the above, because they are trying to make things work with pacman -U. With pacman -S (local repositories), most dependency tasks can be left directly to pacman, and the above suffices.
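If you'd rather stay in Python than pipe to tsort(1), the stdlib graphlib module can order such dependency pairs; the package names below are made up for illustration:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency -> dependents pairs, shaped like the reqby
# dictionary above (names invented for this example).
reqby = {
    "python-foo": ["python-bar"],
    "python-bar": ["python-app"],
}

ts = TopologicalSorter()
for dep, pkgs in reqby.items():
    for pkg in pkgs:
        ts.add(pkg, dep)  # pkg depends on dep

# static_order() yields nodes with satisfied dependencies first
order = list(ts.static_order())
print(order)  # ['python-foo', 'python-bar', 'python-app']
```

That gives the same build order tsort would, and raises CycleError on circular dependencies instead of silently misbehaving.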
Last edited by Alad (2022-11-15 11:16:57)
Mods are just community members who have the occasionally necessary option to move threads around and edit posts. -- Trilby
I needed to analyze some systemd journal entries, so I wrote a script that searches a journal JSON dump for keywords and counts their occurrences.
Additionally, the script can log the boots.
The aim is to use the resulting JSON file with canvas.js or the like to plot the data on a web page:
#! /usr/bin/env python3
"""Parse journal timestamps for event plotting."""
from argparse import ArgumentParser, Namespace
from collections import defaultdict
from datetime import datetime
from functools import partial
from json import dumps, loads
from pathlib import Path
from typing import Iterable, Iterator
def iter_journal(filename: Path) -> Iterator[dict[str, str]]:
    """Yield journal entries as JSON objects."""
    with filename.open('r', encoding='utf-8') as file:
        for line in file:
            yield loads(line)

def count(
    keywords: Iterable[str],
    entries: Iterable[dict[str, str]],
    *,
    boots: bool = False,
    normalize: bool = False
) -> dict[str, dict]:
    """Count the entries for the respective keywords
    at the respective timestamps.

    If boots is True, also accumulate the boot timestamps.
    """
    events = defaultdict(partial(defaultdict, int))
    for entry in entries:
        if isinstance(message := entry.get('MESSAGE'), str):
            for keyword in keywords:
                if keyword in message:
                    events[keyword][
                        get_timestamp(entry, normalize=normalize).isoformat()
                    ] += 1
        if boots and (bootid := entry.get('_BOOT_ID')) not in events['boots']:
            events['boots'][bootid] = get_timestamp(
                entry, normalize=normalize
            ).isoformat()
    return events

def get_timestamp(
    entry: dict[str, str],
    *,
    normalize: bool = False
) -> datetime:
    """Parse a datetime timestamp from a journal entry."""
    timestamp = datetime.fromtimestamp(
        int(entry['__REALTIME_TIMESTAMP']) / 1000000
    )
    if normalize:
        return timestamp.replace(microsecond=0)
    return timestamp

def get_args(description: str = __doc__) -> Namespace:
    """Parse the command line arguments."""
    parser = ArgumentParser(description=description)
    parser.add_argument(
        'file', type=Path, help='systemd journal file in JSON format'
    )
    parser.add_argument(
        'keyword', nargs='*', help='keywords to filter for (REGEX)'
    )
    parser.add_argument(
        '-b', '--boots', action='store_true', help='add boots to plot'
    )
    parser.add_argument(
        '-i', '--indent', type=int, help='indentation for JSON output'
    )
    parser.add_argument(
        '-n', '--normalize', action='store_true',
        help='normalize timestamps to seconds'
    )
    return parser.parse_args()

def main() -> None:
    """Run the script."""
    args = get_args()
    result = count(
        args.keyword,
        iter_journal(args.file),
        boots=args.boots,
        normalize=args.normalize
    )
    print(dumps(result, indent=args.indent))

if __name__ == '__main__':
    main()
Last edited by schard (2022-11-18 12:21:47)
Inofficial first vice president of the Rust Evangelism Strike Force
Video compressor using ffmpeg, ffprobe and fdkaac, meant to fit videos under Discord's 8 MB file upload limit, or whatever size you desire.
// MIT License | ugjka@proton.me
// https://github.com/ugjka/X/blob/main/8mb.video/main.go
//
// https://8mb.video was down, so...
// Fit a video into a 8mb file (Discord nitro pls?)
//
// Needs ffmpeg ffprobe fdkaac
// Tested only on Linux/Termux
//
// To build this you need the Go compiler:
// go build -o 8mb.video main.go
package main
import (
"bytes"
"flag"
"fmt"
"os"
"os/exec"
"os/signal"
"path"
"strconv"
"strings"
"syscall"
)
const USAGE = `Usage: %s [OPTIONS] [FILE]
Compress a video to target size
(default audio: 32kbps stereo he-aac v2)
Options:
-down float
resolution downscale multiplier (default 1)
values above 100 scale by the width in pixels
-music
64kbps stereo audio (he-aac v1)
-voice
16kbps mono audio (he-aac v1)
-mute
no audio
-preset string
h264 encode preset (default "slow")
-size float
target size in MB (default 8)
`
func main() {
// check for dependencies
exes := []string{"ffmpeg", "ffprobe", "fdkaac"}
for _, exe := range exes {
if _, err := exec.LookPath(exe); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}
size := flag.Float64("size", 8, "target size in MB")
preset := flag.String("preset", "slow", "h264 encode preset")
down := flag.Float64("down", 1, "resolution downscale multiplier, "+
"values above 100 scale by the width in pixels")
music := flag.Bool("music", false, "64kbps stereo audio (he-aac v1)")
voice := flag.Bool("voice", false, "16kbps mono audio (he-aac v1)")
mute := flag.Bool("mute", false, "no audio")
flag.Usage = func() {
fmt.Fprintf(os.Stderr, USAGE, path.Base(os.Args[0]))
}
flag.Parse()
if len(flag.Args()) == 0 {
fmt.Fprintln(os.Stderr, "error: no filename given")
os.Exit(1)
}
if *down < 1 {
fmt.Fprintln(os.Stderr, "downscale multiplier cannot be less than 1")
os.Exit(1)
}
file := flag.Args()[0]
// get video length in seconds
probe := exec.Command(
"ffprobe",
"-i", file,
"-show_entries", "format=duration",
"-v", "quiet",
"-of", "csv=p=0",
)
secbytes, err := probe.Output()
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
secbytes = bytes.TrimSpace(secbytes)
seconds, err := strconv.ParseFloat(string(secbytes), 64)
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
const MEG = 8388.608 // https://trac.ffmpeg.org/wiki/Encode/H.264#twopass
bitfloat := *size * MEG / seconds
// ffmpeg encodes stuff in chunks
// we need to deal with possible bitrate overshoot
// guessed values
switch {
case bitfloat > 800:
// 256KB overshoot
bitfloat -= 0.25 * MEG / seconds
case bitfloat > 400:
// 64KB overshoot
bitfloat -= 0.0625 * MEG / seconds
default:
// 32KB overshoot
bitfloat -= 0.03125 * MEG / seconds
}
// muxing overhead (not exact science)
// based on observed values
// depends on fps and who knows...
overhead := 86.8 / bitfloat * 0.05785312
bitfloat -= bitfloat * overhead
// allocate cmds
wavfile := &exec.Cmd{}
aacfile := &exec.Cmd{}
pass1 := &exec.Cmd{}
pass2 := &exec.Cmd{}
// trap ctrl+c and kill
sig := make(chan os.Signal, 1)
signal.Notify(sig, os.Interrupt, syscall.SIGTERM)
go func() {
<-sig
if pass1.Process != nil {
pass1.Process.Kill()
}
if wavfile.Process != nil {
wavfile.Process.Kill()
}
if aacfile.Process != nil {
aacfile.Process.Kill()
}
if pass2.Process != nil {
pass2.Process.Kill()
}
}()
// remove tmp files
// we don't use /tmp because Termux and Android
cleanup := func() {
os.Remove(file + "-0.log")
os.Remove(file + "-0.log.mbtree")
os.Remove(file + "-0.log.temp")
os.Remove(file + "-0.log.mbtree.temp")
os.Remove(file + ".wav")
os.Remove(file + ".m4a")
}
abitrate := 32
audioch := 2
profile := "29" // HE-AACv2
if *music {
abitrate = 64
profile = "5" // HE-AACv1
}
if *voice {
abitrate = 16
audioch = 1
profile = "5" // HE-AACv1
}
// we need to do this mumbo jumbo because fdk_aac encoder is disabled
// on 99.99% of ffmpeg installations (even on Arch)
// fdkaac standalone encoder is fine though
//
// wav decode
wavfile = exec.Command(
"ffmpeg", "-y",
"-i", file,
"-ar", "44100",
"-ac", fmt.Sprintf("%d", audioch),
file+".wav",
)
wavfile.Stderr = os.Stderr
wavfile.Stdout = os.Stdout
if !*mute {
err = wavfile.Run()
if err != nil {
// catch kill signals
if strings.Contains(err.Error(), "signal") {
cleanup()
os.Exit(1)
} else {
// if wav decode fails, there's no audio
*mute = true
}
}
}
// aac encode
aacfile = exec.Command(
"fdkaac",
"-p", profile,
"-b", fmt.Sprintf("%d000", abitrate),
file+".wav",
)
aacfile.Stderr = os.Stderr
aacfile.Stdout = os.Stdout
if !*mute {
err = aacfile.Run()
if err != nil {
cleanup()
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}
// video bitrate
vbitrate := int(bitfloat) - abitrate
if *mute {
vbitrate = int(bitfloat)
}
// construct output filename
arr := strings.Split(file, ".")
output := strings.Join(arr[0:len(arr)-1], ".")
output = fmt.Sprintf("%gmb.%s.mp4", *size, output)
// beware: changing this changes the muxing overhead
const FPS = 24
// resolution scale/crop filter and FPS
// because h264 wants to be multiples of 2
vfparams := ":force_original_aspect_ratio=increase," +
"setsar=1," +
"crop=trunc(iw/2)*2:trunc(ih/2)*2," +
"fps=%d"
vfopt := fmt.Sprintf(
"scale=iw/%f:-1"+vfparams, *down, FPS,
)
if *down >= 100 {
vfopt = fmt.Sprintf(
"scale=%f:-1"+vfparams, *down, FPS,
)
}
// pass 1
pass1 = exec.Command(
"ffmpeg", "-y",
"-i", file,
"-vf", vfopt,
"-c:v", "libx264",
"-preset", *preset,
"-b:v", fmt.Sprintf("%dk", vbitrate),
"-pass", "1",
"-passlogfile", file,
"-movflags", "+faststart",
"-an",
"-f", "null",
"/dev/null",
)
pass1.Stderr = os.Stderr
pass1.Stdout = os.Stdout
err = pass1.Run()
if err != nil {
cleanup()
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
// pass 2
if *mute {
pass2 = exec.Command(
"ffmpeg", "-y",
"-i", file,
"-vf", vfopt,
"-c:v", "libx264",
"-preset", *preset,
"-b:v", fmt.Sprintf("%dk", vbitrate),
"-pass", "2",
"-passlogfile", file,
"-movflags", "+faststart",
"-c:a", "copy",
"-an",
output,
)
} else {
pass2 = exec.Command(
"ffmpeg", "-y",
"-i", file,
"-i", file+".m4a",
"-vf", vfopt,
"-c:v", "libx264",
"-preset", *preset,
"-b:v", fmt.Sprintf("%dk", vbitrate),
"-pass", "2",
"-passlogfile", file,
"-movflags", "+faststart",
"-c:a", "copy",
"-map", "0:v:0",
"-map", "1:a:0",
output,
)
}
pass2.Stderr = os.Stderr
pass2.Stdout = os.Stdout
err = pass2.Run()
if err != nil {
cleanup()
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
// remove the mess
cleanup()
}
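To sanity-check the bitrate math above: the MEG constant converts megabytes to kilobits (1 MB = 8388.608 kbit, per the linked ffmpeg two-pass wiki page), so an 8 MB target over a 60-second clip budgets roughly 1118 kbit/s total before the overshoot and muxing corrections. A quick illustration with made-up numbers:

```shell
# total bitrate budget in kbit/s: size_MB * 8388.608 / duration_s
awk 'BEGIN { printf "%.1f\n", 8 * 8388.608 / 60 }'
# prints 1118.5
```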
Last edited by ugjka (2023-02-04 16:58:34)
https://ugjka.net
paru > yay | vesktop > discord
pacman -S spotify-launcher
mount /dev/disk/by-...
Offline
youtoday
========
Edit: "2.0" uses fzf instead of dialog, can handle channel @aliases, and uses feh as a makeshift thumbnail preview
requires: xmlstarlet, fzf, xargs, mpv, yt-dlp, feh
Update:
- replace xargs w/ read -a
- indicate age w/ colors
- load $HOME/.local/share/youtoday.jpg as first file ("loading screen")
#!/bin/bash
# requires: xmlstarlet, fzf, xargs, mpv, yt-dlp, feh
# for mpv, you can use something like
# -ytdl-raw-options='format="mp4,best[height<720]"'
# to limit bandwidth and prefer an h264 video format
# "<720" will usually pick 480p, but is robust enough for pivoted videos
# see "FORMAT SELECTION" in the yt-dlp manpage
ytdlp_format='format="mp4,best[height<720]"'
fullscreen="--fullscreen"
if [ "$1" = "help" ]; then
exe=$(basename $0)
cat << EOF
Usage:
------
· $exe help
· $exe channelid 'https://www.youtube.com/watch?v=AbCDefG-HiJKlmn'
· $exe [goodnight] [group [group [...]]]
- group:
select one or multiple groups of channels (news, entertainment, sports, nerd stuff, …)
- goodnight:
quit after all videos have played instead of looping infinitely (and updating the list every 30 mins)
- channelid:
query the channel ID from any video on the channel, or from its @alias (either of which you can copy and paste from your browser ;)
EOF
exit
fi
FEH_PID=0
youtoday_tumbs="/tmp/youtoday.tumbs"
if [ "$1" = "preview" ]; then
TMB="${2##* }"; TMB="${TMB##*.com/vi/}"; TMB="${TMB%%/hqdefault.jpg}.jpg"
mkdir -p "$youtoday_tumbs"
[ -e "$youtoday_tumbs/$TMB" ] || curl -so "$youtoday_tumbs/$TMB" "${2##* }"
ln -sf "$youtoday_tumbs/$TMB" "$youtoday_tumbs/tmb.jpg"
if ! xdotool search --name youtoday_preview; then
setsid feh --title youtoday_preview -x "$youtoday_tumbs/tmb.jpg" >&/dev/null &
fi
exit
fi
if [ "$1" = "channelid" ]; then
echo "Hold on…"
if [[ "$2" = "@"* ]]; then
source="https://www.youtube.com/$2"
else
source="$2"
fi
basename "$(yt-dlp -q --print channel_url --playlist-items 1 "$source" 2>/dev/null)"
exit
fi
goodnight=false
if [ "$1" = "goodnight" ]; then
goodnight=true
shift
fi
if (( $# < 1)); then
groups=(all)
else
groups=("$@")
fi
. $HOME/.config/youtoday.channels
is_known_group() {
[[ "$1" =~ ^[a-zA-Z_][a-zA-Z0-9_]*$ ]] || return 1 # invalid name
[[ -z ${!1+x} ]] && return 1 # doesn't exist
return 0 # \o/
}
channels=()
for group in "${groups[@]}"; do
if is_known_group "$group"; then
array="${group}[@]"
channels+=(${!array})
elif [[ "$group" = "@"* ]]; then
if [ -e $HOME/.config/youtoday.aliases ]; then
while read alias channelid; do
[ "$alias" = "$group" ] && break
channelid=""
done < $HOME/.config/youtoday.aliases
fi
if [ -z "$channelid" ]; then
channelid="$(basename "$(yt-dlp -q --print channel_url --playlist-items 1 "https://www.youtube.com/$group" 2>/dev/null)")"
echo "$group $channelid" >> $HOME/.config/youtoday.aliases
fi
channels+=( "$channelid" )
fi
done
#https://www.youtube.com/feeds/videos.xml?playlist_id=PLQl70RwNJA1SNzUqsRcDH0QWd-m_zCDBm
youtoday_cache="/tmp/.youtoday.$$.cache"
video_list() {
if $1; then
(for channel in ${channels[@]}; do
curl -s "https://www.youtube.com/feeds/videos.xml?channel_id=$channel" | \
xmlstarlet sel -T -N atom="http://www.w3.org/2005/Atom" -t -m '/atom:feed/atom:entry' \
-v atom:published -o " " -v atom:link/@href -o " \e[0;34m" -v atom:author/atom:name \
-o "\e[0m " -v atom:title -o " " -v media:group/media:thumbnail/@url -n
done) | sed "/$(date +%F)/s/\\\e\[0\;34m/\\\e\[1\;33m/g; /$(date +%F -d yesterday)/s/\\\e\[0\;34m/\\\e\[1\;34m/g" | sort -r > "$youtoday_cache"
fi
echo -e 'reload reload Reload \e[1;37;41m Tubes \e[0m·'
echo -e "$(< $youtoday_cache)"
}
cleanup() {
rm -f "$youtoday_cache"
rm -rf "$youtoday_tumbs"
xdotool search --name youtoday_preview windowkill
}
trap "cleanup; exit 1" 10
PROC="$$"
last_update=0
while true; do
now=$(date +%s)
((now - last_update > 30*60)) && update=true || update=false
last_update=$now
shopt -s lastpipe
declare -a vids
video_list $update | (fzf --ansi -m --layout=reverse-list --preview-window=right,0 --with-nth 3 -d " " \
--no-sort --preview="$0 preview {}" || kill -10 $PROC) | cut -f2 | read -d '' -r -a vids
shopt -u lastpipe
if (( ${#vids[@]} == 1 )) && [ "${vids[0]}" = "reload" ]; then
last_update=0 # force update
continue
fi
xdotool search --name youtoday_preview windowkill
(( ${#vids[@]} > 0 )) && mpv --force-window=immediate $fullscreen --no-terminal -ytdl-raw-options="$ytdlp_format" $HOME/.local/share/youtoday.jpg "${vids[@]}"
$goodnight && break
done
cleanup
requires: xmlstarlet, dialog, xargs, mpv, yt-dlp
This fetches the rss from configurable channels and merges all uploads of the last two days into a sorted list.
You can select them in a dialog (it's possible to make dialog look not shit) and play them in a row.
Similar to youtube-viewer.
Pros:
- faster (because of the RSS approach)
- merged channel list (because of sort)
- UI mouse capable (because of dialog)
Cons:
- no video duration hint (not in the RSS afaict)
- takes up more horizontal space (because of the dialog buildlist, checklist would not allow to sort the playlist)
- you cannot use it to search videos etc. the sole purpose is to feed you your daily tubecasts
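The merged channel list falls out of the fact that ISO-8601 timestamps sort lexically, so the per-channel feeds only need to be concatenated and piped through a plain sort -r to come out newest-first. A minimal illustration with made-up entries:

```shell
# three feed entries from different channels, prefixed with their publish timestamp;
# sort -r merges them newest-first without parsing any dates
printf '%s\n' \
  '2023-02-17T22:10 chanB vid2' \
  '2023-02-18T14:30 chanC vid3' \
  '2023-02-18T10:00 chanA vid1' | sort -r
```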
#!/bin/bash
# requires: xmlstarlet, dialog, xargs, mpv, yt-dlp
# for mpv, you can use something like
# -ytdl-raw-options='format="mp4,best[height<720]"'
# to limit bandwidth and prefer an h264 video format
# "<720" will usually pick 480p, but is robust enough for pivoted videos
# see "FORMAT SELECTION" in the yt-dlp manpage
ytdlp_format='format="mp4,best[height<720]"'
fullscreen="--fullscreen"
if [ "$1" = "help" ]; then
exe=$(basename $0)
cat << EOF
Usage:
------
· $exe help
· $exe channelid 'https://www.youtube.com/watch?v=AbCDefG-HiJKlmn'
· $exe [goodnight] [group [group [...]]]
- group:
select one or multiple groups of channels (news, entertainment, sports, nerd stuff, …)
- goodnight:
quit after all videos have played instead of looping infinitely (and updating the list every 30 mins)
- channelid:
query the channel ID from any video on that channel (that you can copy and paste from your browser ;)
EOF
exit
fi
if [ "$1" = "channelid" ]; then
echo "Hold on…"
basename "$(yt-dlp -q --print channel_url --playlist-items 1 "$2" 2>/dev/null)"
exit
fi
goodnight=false
if [ "$1" = "goodnight" ]; then
goodnight=true
shift
fi
if (( $# < 1)); then
groups=(all)
else
groups=("$@")
fi
. $HOME/.config/youtoday.channels
is_known_group() {
[[ "$1" =~ ^[a-zA-Z_][a-zA-Z0-9_]*$ ]] || return 1 # invalid name
[[ -z ${!1+x} ]] && return 1 # doesn't exist
return 0 # \o/
}
channels=()
for group in "${groups[@]}"; do
if is_known_group "$group"; then
array="${group}[@]"
channels+=(${!array})
fi
done
previous=0
update_list() {
now=$(date +%s)
if [ "$1" == "force" ] || ((now - previous > 30*60)); then
echo "Hold on, fetching videos…"
items="$((for channel in ${channels[@]}; do
curl -s "https://www.youtube.com/feeds/videos.xml?channel_id=$channel" | \
xmlstarlet sel -T -N atom="http://www.w3.org/2005/Atom" -t -m \
'/atom:feed/atom:entry[starts-with(atom:published,"'$(date +%Y-%m-%d)'") or
starts-with(atom:published,"'$(date +%Y-%m-%d -d yesterday)'")]' \
-v atom:published -o " %'" -v atom:link/@href -o "%' %'\Z1" -v atom:author/atom:name -o "\Zn " -v atom:title -o "%' off" -n
done) | sort -r | cut -d " " -f 2- | sed "s/\([^%]\)'/\1'\\\''/g; s/%'/'/g")"
IFS=$'\n' items=( $(xargs -n1 <<<"$items") )
fi
previous=$now
}
trap "exit 1" 10
PROC="$$"
while true; do
update_list
exec 3< <(dialog --no-hot-list --stderr --colors --no-lines --reorder --no-tags --no-shadow --ok-label Play --single-quoted --buildlist '\ZbYou\Zn \Z1\Zr Today \Zn' 0 0 0 ${items[@]} 3>&1 1>&2 2>&3 || kill -10 $PROC)
read -u3 vids
exec 3<&-
IFS=$'\n' vids=( $(xargs -n1 <<<"$vids") )
(( ${#vids[@]} > 0 )) && mpv $fullscreen -ytdl-raw-options="$ytdlp_format" --no-terminal ${vids[@]}
$goodnight && break
(( ${#vids[@]} > 0 )) || update_list force
done
You need ~/.config/youtoday.channels which will look like
news=(
UCxxxxxxxxxxxxxxxxx
UCyyyyyyyyyyyyyyyyy
UCzzzzzzzzzzzzzzzzz
)
tech=(
UCxxxxxxxxxxxxxxxxx
UCyyyyyyyyyyyyyyyyy
UCzzzzzzzzzzzzzzzzz
)
science=(
UCxxxxxxxxxxxxxxxxx
UCyyyyyyyyyyyyyyyyy
UCzzzzzzzzzzzzzzzzz
)
games=(
UCxxxxxxxxxxxxxxxxx
UCyyyyyyyyyyyyyyyyy
UCzzzzzzzzzzzzzzzzz
)
sports=(
UCxxxxxxxxxxxxxxxxx
UCyyyyyyyyyyyyyyyyy
UCzzzzzzzzzzzzzzzzz
)
nerdstuff=(${tech[@]} ${games[@]} ${science[@]})
And then can run
youtoday sports
or
youtoday nerdstuff
to filter the channels.
To query the channel id from any video run
youtoday channelid 'https://www.youtube.com/watch?v=AbCDefG-HiJKlmn'
w/ the videos url.
Last edited by seth (2023-02-18 17:07:25)
Online
LinAmp / mpv.a
--------------
An attempt to turn mpv into a usable minimal mp3 player
Supports remote control:
- mpv.a play|pause|playpause|next|prev
- mpv.a file.mp3 # will replace the current file
- mpv.a directory # works likewise (but you get a playlist out of it)
#!/bin/bash
IPC_SOCK=/tmp/.$USER.mpv.a.socket
case "$1" in
play)
COMMAND='{ "command": ["set_property", "pause", false] }'
;;
pause)
COMMAND='{ "command": ["set_property", "pause", true] }'
;;
playpause)
COMMAND='cycle pause'
;;
next)
COMMAND='playlist-next'
;;
prev)
COMMAND='playlist-prev'
;;
esac
if [ -n "$COMMAND" ]; then
echo "$COMMAND" | socat - $IPC_SOCK > /dev/null 2>&1
exit $?
fi
real_path="$1"
[ -e "$real_path" ] && real_path="$(realpath "$real_path")"
if socat -u OPEN:/dev/null UNIX-CONNECT:$IPC_SOCK >/dev/null 2>&1 && [ -e "$real_path" ] || [[ "$real_path" = *"://"* ]]; then
echo -e 'loadfile "'$real_path'"\n{ "command": ["set_property", "pause", false] }' | socat - $IPC_SOCK > /dev/null && exit 0
fi
mpv --stop-screensaver=no --input-ipc-server=$IPC_SOCK --ontop --no-border --geometry=360x120-64+32 \
--title='${metadata/by-key/title:${filename/no-ext}}' \
--video-unscaled=yes --force-window=yes --osd-on-seek=no --idle=yes --keep-open=yes --x11-name="linAMP" \
--script-opts=osc-layout=box,osc-scaleforcedwindow=4.0,osc-visibility=always,osc-vidscale=no,osc-windowcontrols_alignment=left,osc-title='${metadata/by-key/title:${filename/no-ext}}' \
"$@" >/dev/null 2>&1 &
disown
Online
A script to convert WireGuard connection configuration from wireguard-indicator style to native NetworkManager style:
#! /usr/bin/env python3
"""Convert NetworkManager profiles for WireGuard."""
from argparse import ArgumentParser, Namespace
from configparser import ConfigParser
from pathlib import Path
from sys import stdout
from typing import Sequence
__all__ = ['wireguard_indicator_to_network_manager']
def wireguard_indicator_to_network_manager(
profile: ConfigParser,
interface_name: str | None
) -> ConfigParser:
"""Convert a wireguard-indicator profile
into a native NetworkManager profile.
"""
nm_profile = ConfigParser()
copy_sections(profile, nm_profile, exclude={'wireguard', 'vpn-secrets'})
nm_profile.set('connection', 'type', 'wireguard')
nm_profile.set(
'connection',
'interface-name',
interface_name or profile.get('connection', 'id')
)
nm_profile.add_section('wireguard')
nm_profile.set(
'wireguard',
'private-key',
profile.get('wireguard', 'local-private-key')
)
peer_public_key = profile.get('wireguard', 'peer-public-key')
peer_section = f'wireguard-peer.{peer_public_key}'
nm_profile.add_section(peer_section)
nm_profile.set(
peer_section,
'endpoint',
profile.get('wireguard', 'peer-endpoint')
)
allowed_ips = filter(None, map(
str.strip,
profile.get('wireguard', 'peer-allowed-ips').split(',')
))
nm_profile.set(peer_section, 'allowed-ips', ';'.join(allowed_ips) + ';')
nm_profile.set(
peer_section,
'preshared-key',
profile.get('wireguard', 'peer-preshared-key')
)
nm_profile.set(peer_section, 'preshared-key-flags', '0')
return nm_profile
def copy_sections(
src: ConfigParser,
dst: ConfigParser,
*,
exclude: Sequence[str] = ()
) -> None:
"""Copy sections from one config parser to another."""
for section in filter(
lambda section: section not in exclude,
src.sections()
):
dst.add_section(section)
for key, value in src.items(section):
dst.set(section, key, value)
def parse_args(description: str = __doc__) -> Namespace:
"""Return the parsed command line arguments."""
parser = ArgumentParser(description=description)
parser.add_argument(
'src',
type=Path,
help='wireguard-indicator-style source file'
)
parser.add_argument(
'-i',
'--interface-name',
help='the name of the network interface'
)
return parser.parse_args()
def main():
"""Run the script."""
args = parse_args()
src = ConfigParser()
src.read(args.src)
out = wireguard_indicator_to_network_manager(src, args.interface_name)
out.write(stdout, space_around_delimiters=False)
if __name__ == '__main__':
main()
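For illustration, the allowed-ips rewrite the script performs, turning the comma-separated wireguard-indicator form into NetworkManager's semicolon-terminated form, boils down to the following (a shell sketch with sample addresses; the script does the same with str.strip and str.join):

```shell
# comma-separated list as stored by wireguard-indicator
raw='10.0.0.0/24, 192.168.1.0/24'
# strip spaces, swap commas for semicolons, terminate with a semicolon
printf '%s;\n' "$(printf '%s' "$raw" | tr -d ' ' | tr ',' ';')"
# prints 10.0.0.0/24;192.168.1.0/24;
```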
Last edited by schard (2023-03-23 09:31:22)
Inofficial first vice president of the Rust Evangelism Strike Force
Offline
yt transcript downloader
// MIT+NoAI License
//
// Copyright (c) 2023 ugjka <ugjka@proton.me>
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
//
// This code may not be used to train artificial intelligence computer models
// or retrieved by artificial intelligence software or hardware.
//
// *******************
// https://github.com/ugjka/X/blob/main/ytext/main.go
//
// About:
// Get transcriptions of youtube videos by IDs or URLs
//
// Needs yt-dlp
// Tested only on Linux
//
// To build this you need the Go compiler:
// go build -o ytext main.go
//
package main
import (
"bytes"
"encoding/json"
"flag"
"fmt"
"io"
"os"
"os/exec"
"os/signal"
"path"
"strings"
"syscall"
)
const USAGE = `Usage: %s [OPTIONS] [Youtube IDs or URLs]
Get transcriptions of youtube videos by IDs or URLs
(default language: 'en')
Options:
-lang string
language iso code (default "en")
`
const YTCMD = "yt-dlp -4 -i --skip-download --write-auto-subs --sub-format json3 --sub-langs %s --"
func main() {
// check for yt-dlp
if _, err := exec.LookPath("yt-dlp"); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
exe := path.Base(os.Args[0])
// trap ctrl+c and kill
cmd := &exec.Cmd{}
tmp, err := os.MkdirTemp("", exe)
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
sig := make(chan os.Signal, 1)
signal.Notify(sig, os.Interrupt, syscall.SIGTERM)
go func() {
<-sig
if cmd.Process != nil {
cmd.Process.Kill()
}
os.RemoveAll(tmp)
fmt.Fprintf(os.Stderr, "error: %s aborted\n", exe)
os.Exit(1)
}()
lang := flag.String("lang", "en", "language (iso code)")
flag.Usage = func() {
fmt.Fprintf(os.Stderr, USAGE, exe)
}
flag.Parse()
if len(flag.Args()) == 0 {
fmt.Fprintln(os.Stderr, "error: no youtube IDs or URLs given")
os.Exit(1)
}
ids := flag.Args()
curdir, err := os.Getwd()
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
cmdarr := strings.Split(fmt.Sprintf(YTCMD, *lang), " ")
err = os.Chdir(tmp)
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
cmdarr = append(cmdarr, ids...)
cmd = exec.Command(cmdarr[0], cmdarr[1:]...)
cmd.Stderr = os.Stderr
cmd.Stdout = os.Stdout
err = cmd.Run()
if err != nil {
fmt.Fprintln(os.Stderr, "error: there were some errors")
}
dir, err := os.ReadDir("./")
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
if len(dir) == 0 {
fmt.Fprintln(os.Stderr, "error: transcriptions not found")
os.RemoveAll(tmp)
os.Exit(1)
}
for _, file := range dir {
name := file.Name()
jsonfile, err := os.Open(name)
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.RemoveAll(tmp)
os.Exit(1)
}
err = os.Chdir(curdir)
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.RemoveAll(tmp)
os.Exit(1)
}
newname := name[:len(name)-(4+len(*lang))] + ".txt"
err = os.WriteFile(newname, json3toText(jsonfile), 0644)
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.RemoveAll(tmp)
jsonfile.Close()
os.Exit(1)
}
jsonfile.Close()
fmt.Fprintln(os.Stderr, newname+" written...")
err = os.Chdir(tmp)
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.RemoveAll(tmp)
os.Exit(1)
}
}
err = os.RemoveAll(tmp)
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}
func json3toText(r io.Reader) []byte {
var data JSON3
var b = bytes.NewBuffer(nil)
json.NewDecoder(r).Decode(&data)
for _, v := range data.Events {
for _, v := range v.Segs {
b.WriteString(v.UTF8)
}
}
b.WriteString("\n")
return b.Bytes()
}
type JSON3 struct {
Events []struct {
Segs []struct {
UTF8 string `json:"utf8"`
}
}
}
https://ugjka.net
paru > yay | vesktop > discord
pacman -S spotify-launcher
mount /dev/disk/by-...
Offline
Uninstalling a package does not remove the systemd symlinks created by systemctl enable.
for target in getty multi-user sysinit timers; do
for link in $(readlink -v /etc/systemd/system/$target.target.wants/*); do
printf "$target: "; pacman -Qo $link || pacman -F $link
done
done
The pacman -F part helps to identify remains of formerly installed packages; a manually added service or timer will not spit out anything. However, pacman -F will produce false negatives if the sync db is not up to date, so I personally replace pacman -F with pkgfile. pkgfile is also faster: the whole thing runs in 9 seconds on my machine with pkgfile, versus 12 seconds with pacman -F.
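The pkgfile substitution mentioned above would look like the following sketch (assumes pkgfile is installed and its database is current via pkgfile -u; the trailing echo is just a fallback so unmatched units are still reported):

```shell
for target in getty multi-user sysinit timers; do
    # resolve every enabled symlink in this target's wants directory
    for unit in $(readlink -v /etc/systemd/system/"$target".target.wants/* 2>/dev/null); do
        printf '%s: ' "$target"
        # owned by an installed package? otherwise ask pkgfile's file database
        pacman -Qo "$unit" 2>/dev/null || pkgfile "$unit" 2>/dev/null \
            || echo "$unit: no owner found (is the pkgfile db up to date?)"
    done
done
```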
Offline
Couldn't you achieve the same goal by just looking for broken links:
find -L /etc/systemd/ -type l
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
Script to find Wi-Fi passwords from NetworkManager
#!/bin/sh
# Works with networkmanager only
if [ "$(id -u)" -eq 0 ]; then
grep -e ssid -e "psk=" /etc/NetworkManager/system-connections/* | sed 's/\/etc.*://g; s/ssid=//g; s/psk=/ /g'
else
nmcli d wifi show-password
echo -e "$(tput setaf 7; tput setab 6; tput bold) For all SSID, run as superuser $(tput sgr 0)" && exit
fi
Arch is home!
https://github.com/Docbroke
Offline