I was fed up with navigating snownews, so I made my own RSS aggregator: rainss. It puts all new entries in one file, opens that file in an editor so you can delete the lines of entries you don't want, and then opens each remaining link in your browser.
It requires community/xml2.
The feed file is ~/.rainss/url and each line should look like: Feedname<TAB (or any whitespace)>URL
All feeds are checked concurrently. New items are stored in ~/.rainss/news (opened automatically), and a history is kept in ~/.rainss/history so old items are not redisplayed.
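For example, a minimal url file with a single feed (the feed name is arbitrary and only used for display) could be created like this:

```shell
# Create the feed list; each line is "Feedname<whitespace>URL".
mkdir -p "$HOME/.rainss"
printf '%s\t%s\n' ArchLinux https://planet.archlinux.org/rss20.xml > "$HOME/.rainss/url"
```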
#! /bin/bash
#Configure:
BROWSER=firefox
EDITOR='vim -c /@@@/'
#EDITOR opens the 'news' file which separates date+title from link with @@@ so this highlights @@@ by searching for it.
#All the lines left after you close the editor get opened in BROWSER, one at a time.
cd ~/.rainss || { echo "No ~/.rainss dir"; exit 1; }
[[ -f url ]] || { echo 'No url file. Syntax: "FEEDNAME <TAB> URL"'; exit 1; }
#FEEDNAME is only used for quicker identification when you open 'news' in your EDITOR.
date=
title=
link=
latest=$(while read feed url; do
curl -s "$url" | \
xml2 | \
grep -o -e 'item/pubDate=.*' -e 'item/title=.*' -e 'item/link=.*' -e 'item/dc:date.*' | \
while read; do
#We're only interested in the date, title and link. Once all three are found for an entry, print them.
if [[ "${REPLY%%=*}" == item/pubDate ]]; then
date=${REPLY#*=}
fi
if [[ "${REPLY%%=*}" == item/dc:date ]]; then
#RDF format uses dc:date
date=${REPLY#*=}
fi
if [[ "${REPLY%%=*}" == item/title ]]; then
#In rare cases plaintext titles are supplemented by HTML titles
title=${title:+$title,}${REPLY#*=}
fi
if [[ "${REPLY%%=*}" == item/link ]]; then
#If a tag is missing, then the link should not be overwritten with a new link, but be printed anyway until you can find out why a tag was missing.
[[ $link ]] && echo "ERROR: duplicate link on feed $feed: $link" >&2
link=${REPLY#*=}
fi
if [[ -n "$date" && -n "$title" && -n "$link" ]]; then
printf "%s %s【%s】@@@%s\n" "$(date -d "$date" +"%Y-%m-%d %H:%M:%S")" "$feed" "$title" "$link"
date=
title=
link=
fi
#Every feed check is backgrounded; the command substitution collects all their output before the script continues
done &
done < url)
wait
#The 'history' file keeps all seen links. Only check the URL for changes (the text after @@@). Don't re-report an entry just because the title was changed.
#The 'news' file contains the new entries.
[[ -s history ]] &&
echo "$latest" | grep -Fv "$(sed 's/.*@@@//' history 2>/dev/null)" | tee -a history | tee news >/dev/null ||
echo "$latest" | tee -a history | tee news >/dev/null
#Tell stdout how many items were found and what time it is.
echo -ne "\r$(date): "
wc -l news
#Show today's date as 'TODAY' and yesterday's date as 'YESTERDAY' for easier identification.
sed -i "s/$(date +%Y-%m-%d)/TODAY/; s/$(date +%Y-%m-%d -d yesterday)/YESTERDAY/" news
#Open news in EDITOR so you can choose which items you want to open by deleting the lines of the entries you don't want to open, saving and quitting.
[[ -s news ]] && $EDITOR news
#Open the links to all those entries with BROWSER, one at a time, not too quickly.
[[ -s news ]] && for url in $(sed 's/.*@@@//' news); do $BROWSER "$url"; sleep 0.6; done &>/dev/null
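The key/value split used throughout the script relies on bash parameter expansion; in isolation, on a made-up line in xml2's path=value format:

```shell
# Hypothetical xml2 output line, just to show the expansions:
REPLY='item/link=https://example.com/post?id=1'
key=${REPLY%%=*}   # drop everything from the first '=' on  -> item/link
val=${REPLY#*=}    # drop everything up to the first '='    -> https://example.com/post?id=1
echo "$key"
echo "$val"
```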
Last edited by Procyon (2013-09-19 16:44:13)
Good idea.
But the implementation will need some more work.
Running the script for the first time - no existing news and history files - results in empty news and history files.
I am using 'ArchLinux https://planet.archlinux.org/rss20.xml' as a single entry in ~/.rainss/url.
I was expecting all entries to show up in the history and news files.
I suppose that's because grep errors because there is no history file. Does it work the second time you run it now that the history file exists?
No.
After a second run, both news and history files exist, but are empty.
The problem is with 'grep -Fv ...'. Since the history file is empty, "$(sed 's/.*@@@//' history)" results in the empty string and grep -Fv '' will yield nothing.
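The failure mode is easy to reproduce in isolation: an empty fixed-string pattern matches every line, so the inverted match drops everything.

```shell
# An empty pattern matches every line, so -v (invert) selects nothing;
# grep also exits non-zero here, hence the || true.
out=$(printf 'a\nb\n' | grep -Fv '' || true)
printf 'selected: %s\n' "${out:-<none>}"
```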
This patch gets me a little further.
--- rainss.orig 2013-09-18 18:15:16.022304312 +0200
+++ rainss 2013-09-18 19:00:45.814189513 +0200
@@ -1,7 +1,10 @@
#! /bin/bash
+debug=yes
+set -e
#Configure:
BROWSER=firefox
-EDITOR='vim -c /@@@/'
+# Use EDITOR from the environment - I am using emacs and dont have vim installed :)
+# EDITOR='vim -c /@@@/'
#EDITOR opens the 'news' file which separates date+title from link with @@@ so this highlights @@@ by searching for it.
#All the lines left after you close the editor get opened in BROWSER, one at a time.
@@ -35,7 +38,7 @@
link=${REPLY#*=}
fi
if [[ -n "$date" && -n "$title" && -n "$link" ]]; then
- printf "%s %s【%s】@@@%s\n" "$(date -d "$date" +"%Y-%m-%d %H:%M:%S")" "$feed" "$title" "$link"
+ printf "%s %s[ %s ] @@@%s\n" "$(date -d "$date" +"%Y-%m-%d %H:%M:%S")" "$feed" "$title" "$link"
date=
title=
link=
@@ -47,8 +50,14 @@
wait
#The 'history' file keeps all seen links. Only check the URL for changes (the text after @@@). Don't re-report an entry just because the title was changed.
#The 'news' file contains the new entries.
-echo "$latest" | grep -Fv "$(sed 's/.*@@@//' history)" | tee -a history | tee news >/dev/null
-
+[[ -f history && -n "$(sed -n ' /.*@@@/ {p;q}' history)" ]] &&
+echo "$latest" | grep -Fv "$(sed 's/.*@@@//' history)" | tee -a history | tee news >/dev/null ||
+echo "$latest" | tee -a history | tee news >/dev/null
+news_lines=$(wc -l news | awk '{print $1}')
+hist_lines=$(wc -l history | awk '{print $1}')
+[[ debug ]] && {
+ printf 'News lines: %d - history lines: %d\n' $news_lines $hist_lines
+}
#Tell stdout how many items where found and what time it is.
echo -ne "\r$(date): "
wc -l news
@@ -57,6 +66,7 @@
sed -i "s/$(date +%Y-%m-%d)/TODAY/; s/$(date +%Y-%m-%d -d yesterday)/YESTERDAY/" news
#Open news in EDITOR so you can choose which items you want to open by deleting the lines of the entries you don't want to open, saving and quitting.
-[[ -s news ]] && $EDITOR news
+(( $news_lines )) && $EDITOR news
#Open the links to all those entries with BROWSER, one at a time, not too quickly.
-[[ -s news ]] && for url in $(sed 's/.*@@@//' news); do $BROWSER "$url"; sleep 0.6; done &>/dev/null
\ No newline at end of file
+[[ debug ]] && exit 0
+(( $news_lines )) && for url in $(sed 's/.*@@@//' news); do $BROWSER "$url"; sleep 0.6; done &>/dev/null
Now the initial run stores all items in both news and history.
And a subsequent run sets news to empty when no new items are added, and retains history.
Remains to test if new feeds are added correctly.
OK
Added 'PlanetDebian http://planet.debian.org/rss20.xml' to ~/.rainss/url, and the new Debian entries appear in the news file and are added to the history file. Seems OK.
Last edited by jds1307 (2013-09-18 19:05:11)
All right, thanks. I updated the above script with the " || echo "$latest" | tee -a history | tee news >/dev/null" line.
Note that the following 3 lines go together:
+[[ -f history && -n "$(sed -n ' /.*@@@/ {p;q}' history)" ]] &&
+echo "$latest" | grep -Fv "$(sed 's/.*@@@//' history)" | tee -a history | tee news >/dev/null ||
+echo "$latest" | tee -a history | tee news >/dev/null
You left out the first line.
I fixed it now and tested it.
dehdr() {
CFLAGS="-Werror -Wfatal-errors" deheader "$@" | sed -n 's/.*: *remove *\(.*\) *from *\(.*\)/\2: \1/p'
}
In a C project source directory, it will list the unneeded header inclusions (dehdr -r will actually remove the related lines).
It requires deheader and a sensible Makefile.
Last edited by bloom (2013-09-20 09:55:38)
Ooh, nice!
My little alias for using feh to randomly assign a wallpaper (needs to be all on one line):
alias randwp='find /home/jdarnold/data/wallpapers -type f \( -name "*.jpg" -or -name "*.png" -or -name "*.JPG" \) -print0 | shuf -z -n2 |xargs -0 feh --bg-scale'
I have a dual monitor setup so I grab the first two (-n2).
@ jdarnold, are the name options necessary? I suppose you only put valid filetypes in the wallpapers directory.
In case you have gifs in there, something like
! -iname "*.gif"
would look cleaner.
Or just
find /path/to/imgs -iregex '.*\.\(jpg\|png\)' -print0 ...
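A quick way to convince yourself of the matching (temporary files; GNU find's default emacs-flavoured regex syntax, where alternation is \|):

```shell
dir=$(mktemp -d)
touch "$dir/a.JPG" "$dir/b.png" "$dir/c.gif"
# -iregex matches the full path, case-insensitively
n=$(find "$dir" -iregex '.*\.\(jpg\|png\)' | wc -l)
echo "$n"   # 2
rm -r "$dir"
```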
Thanks for that Trilby, didn't even know about that usage. And here I thought I knew a bit about "find".
Persistent installation of Archlinux to a USB Drive - Arch 2 Go
Dependencies: arch-install-scripts
You will need an internet connection.
The following script allows you to easily make a persistent installation of Archlinux on a USB drive.
By default, the script is configured to install the base system with multiple graphics card drivers,
X11, Gnome, Chromium and some more handy tools. You can configure the package selection
and setup as well as other parameters in the "Settings" paragraph.
The script must be run as root. It will display a list of devices and allow you to pick a drive.
It will then partition the drive (THIS WILL ERASE ALL DATA ON THE SELECTED DRIVE), download and install the packages, configure the system, and finally, if everything went fine, ask you to choose a password for the root account and for the user account.
Duration: ~30 minutes
You should try different USB drives: some are fast enough that you can't tell the difference from an HDD installation, while others are horribly slow, regardless of USB 3.0 vs. 2.0 or their rated speeds.
#!/bin/bash
######################
###### Settings ######
######################
NAME="archusb" # Label as well as hostname
TIMEZONE=/usr/share/zoneinfo/Europe/Berlin
LOCALE="de_DE.UTF-8 UTF-8"
USERNAME="usbuser"
USER_FULLNAME="USB User"
# CHANGE THESE PACKAGES ACCORDING TO YOUR NEEDS
ADD_PACKAGES_XORG="xorg-server xorg-xinit xorg-utils xorg-server-utils"
ADD_PACKAGES_DRIVERS="xf86-input-synaptics"
ADD_PACKAGES_VID="xf86-video-nouveau xf86-video-vesa mesa xf86-video-ati xf86-video-intel xf86-video-nv"
ADD_PACKAGES_ENV="gnome gdm networkmanager"
ADD_PACAKGES_APP="chromium make gedit bash-completion file-roller p7zip openssh"
SUDO_CONFIG="%wheel ALL=(ALL) NOPASSWD: ALL"
enable_desktopmanager="systemctl enable gdm.service"
enable_networkmanager="systemctl enable NetworkManager.service"
#######################
###### Resources ######
#######################
fstab="# \n
# /etc/fstab: static file system information\n
#\n
# <file system> <dir> <type> <options> <dump> <pass>\n
LABEL=$NAME / ext4 rw,relatime,data=ordered 0 1"
# $arch must stay literal in pacman.conf, so escape it here
archlinuxfr="[archlinuxfr]
Server = http://repo.archlinux.fr/\$arch"
ADD_PACKAGES_DEF="base-devel syslinux sudo wget"
PACKAGES="base $ADD_PACKAGES_DEF $ADD_PACKAGES_XORG $ADD_PACKAGES_DRIVERS $ADD_PACKAGES_VID $ADD_PACKAGES_ENV $ADD_PACAKGES_APP"
chr="arch-chroot /mnt "
#########################
###### Preparation ######
#########################
# Abort if not run as root
if [ $(id -u) -ne 0 ]
then
echo "This script must be run as root. Aborting."
exit 1
fi
# Show available disks and let the user choose one
fdisk -l
echo "Enter disk /dev/sdX and press [ENTER]: "
read DISK
# Check if user input is sane
if ! [ -b "$DISK" ]
then
echo "$DISK is no valid block device."
exit 1
fi
echo "This script will delete everything on $DISK. Are you sure?"
echo "Press [ENTER] to continue or abort with [Ctrl]+[C]"
read -p "$*"
########################
###### Formatting ######
########################
# Unmount and partition the drive
for n in $DISK* ; do umount $n ; done
echo "Partitioning...";
(echo o; echo n; echo p; echo 1; echo ; echo; echo a; echo w) | sudo fdisk $DISK
# Format /dev/sdX1
PARTNUM="1"
PARTITION="$DISK$PARTNUM"
echo "Formatting $PARTITION"
mkfs.ext4 $PARTITION
e2label $PARTITION $NAME
##########################
###### Installation ######
##########################
# Mount /dev/sdX1 at /mnt
umount /mnt
mount $PARTITION /mnt
# Install base system
pacstrap /mnt $PACKAGES
###########################
###### Configuration ######
###########################
# Write /etc/fstab
echo -e "$fstab" > /mnt/etc/fstab # quoted to preserve the embedded newlines
echo $NAME > /mnt/etc/hostname
sed -i "/${LOCALE}/ s/#*//" /mnt/etc/locale.gen
# Chroot into the system
$chr ln -s $TIMEZONE /etc/localtime
$chr locale-gen
$chr mkinitcpio -p linux
$chr syslinux-install_update -i -a -m
umount /mnt/dev
sed -i "s/APPEND root=.*/APPEND root=LABEL=$NAME rw/g" /mnt/boot/syslinux/syslinux.cfg
$chr $enable_desktopmanager
$chr $enable_networkmanager
####################
###### Yaourt ######
####################
echo "$archlinuxfr" >> /mnt/etc/pacman.conf
$chr pacman -S yaourt # unquoted, so arch-chroot sees separate arguments
# THE FOLLOWING IS A DEMO ON HOW TO MANUALLY ADD
# COMPILED SOFTWARE TO THE DRIVE
######################
###### DVSwitch ######
######################
#$chr wget http://mesecons.net/dvswitch_bundle.tar.gz -P /opt
#$chr tar -xvf /opt/dvswitch_bundle.tar.gz -C /opt
#$chr /opt/dvswitch_bundle/build.sh
#$chr /opt/dvswitch_bundle/build.sh
#$chr make install -C /opt/dvswitch_bundle/dvswitch/
#############################
###### Users/Passwords ######
#############################
sed -i "/${SUDO_CONFIG}/ s/# *//" /mnt/etc/sudoers
$chr useradd -G "wheel" -m -k /etc/skel $USERNAME
$chr chfn -f "$USER_FULLNAME" $USERNAME
echo "Root Password:"
$chr passwd
echo "User Password:"
$chr passwd $USERNAME
##############################
###### Unmount / Finish ######
##############################
umount /mnt
echo "USB Archlinux Creation script finished."
I hope this works for you and helps you!
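A note on the `$chr` trick the script uses: storing a command prefix in an unquoted variable relies on word splitting, so the chrooted command must not be passed as a single quoted string. The pattern in isolation, with echo standing in for arch-chroot:

```shell
# echo stands in for "arch-chroot /mnt" just to show the word splitting:
chr="echo CHROOT:"
out=$($chr pacman -S yaourt)   # unquoted $chr splits into command + first arg
echo "$out"   # CHROOT: pacman -S yaourt
```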
Last edited by Jeija (2013-09-21 17:02:24)
YELLOW="\e[0;33m"
WHITE="\e[0;37m"
echo -e "\${YELLOW}_____${WHITE}/"
How can I get this to show a white backslash, yellow underscores, and a white slash?
The backslash before the color keeps screwing up the code...
This works:
b='\\'; echo -e "$b${YELLOW}___$WHITE/"
Last edited by steve___ (2013-09-22 12:23:27)
This works:
b='\\'; echo -e "$b${YELLOW}___$WHITE/"
Doing that made this happen:
\e[0;33m____/
all white.
yellow=$'\e[0;33m'
white=$'\e[0;37m'
end=$'\e[0m'
printf '%s\n' "${white}\\${yellow}_____${white}/${end}"
Thank you, I got it to work now. Appreciate it.
It's true I could just be more rigorous about what files end up in the wallpapers folder, but sometimes files just show up in there (thanks, Windows), so I figure better safe than sorry. And thanks for the heads-up on -iregex, Trilby.
It could be me getting old, but I never seem to be able to locate RSS feeds on homepages, so I made a script for it:
#!/bin/bash
# Locate rss-feed of blogs/sites
# Usage: rss_finder <url>
#-------------------------------------------------------------------------------
# Variables & arrays
#-------------------------------------------------------------------------------
# Temporary local location of downloaded feed
rss_holder=/tmp/.rss_holder
# Array of feed-names {
feed[0]="feed"
feed[1]="feed.xml"
feed[2]="atom.xml"
feed[3]="rss"
feed[4]="rss.xml"
feed[5]="updates.xml"
# Wordpress
feed[6]="wp-rss.php"
feed[7]="wp-rss2.php"
feed[8]="wp-rdf.php"
feed[9]="wp-atom.php"
# Blogger
feed[10]="feeds/posts"
feed[11]="feeds/posts/default"
#}
check_if_twitter(){
if [ -n "$(echo "$1" | grep -oE "twitter.com/.*")" ]; then
# Yep, it's twitter
# Get twitter name
twitter_user=$(echo $1 | cut -d/ -f4)
# Print url
echo "http://www.twitter-rss.com/user_timeline.php?screen_name=${twitter_user}"
exit 0 # a feed URL was printed, so this is success
fi
}
check_for_feed(){
# See if page exists
curl --fail --silent "${url}/${1}" > "${rss_holder}"
# If exit code is 0, page exists
if [ "$?" = "0" ]; then
# Check if page is indeed rss/atom feed
egrep -q "xml|rss version" "${rss_holder}"
if [ $? = 0 ]; then
# Is a feed, print url
echo "${url}/${1}"
# Set variable to avoid sad error
feed_found="yes"
fi
fi
}
#-------------------------------------------------------------------------------
# Script
#-------------------------------------------------------------------------------
# Print usage if no URL was given
[[ -z "$1" ]] && echo -e "Usage:\t$(basename $0) <url>" && exit 1
# Strip "/" if given url has that at the end
url="$(echo "$1" | sed 's_/$__g')"
# Special function to check for twitter
# (Needs to be before brute-forcing)
check_if_twitter "$1"
# "Bruteforce" rss-location
for feed_url in "${feed[@]}"; do
# See if page exists
check_for_feed "${feed_url}"
done
# Print sad error if no feeds were found
if [ -z "${feed_found}" ]; then
echo "No feeds were found"
exit 1
else
rm "${rss_holder}"
exit 0
fi
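The trailing-slash normalisation near the top of the script can be checked on its own:

```shell
# Strip a single trailing slash, as the script does before appending paths:
url=$(printf '%s\n' 'https://example.com/' | sed 's_/$__')
echo "$url"   # https://example.com
```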
Last edited by graph (2013-10-06 18:40:40)
One liner to remind yourself which packages you have not voted on in the AUR:
while read -r pkg _; do printf "%s\t" "${pkg}:" && aurvote -c "$pkg"; done < <(pacman -Qm)
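The `< <( … )` process substitution (bash-only) feeds the loop without a pipe, so variables set inside the loop survive; the same shape with dummy data standing in for pacman -Qm:

```shell
# list() fakes `pacman -Qm` output: one "name version" pair per line
list() { printf '%s\n' 'foo 1.0-1' 'bar 2.0-1'; }
count=0
while read -r pkg _; do
    count=$((count + 1))
    last=$pkg
done < <(list)   # process substitution keeps the loop in the current shell
echo "$count $last"   # 2 bar
```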
This thread makes me realise just how specific most of my scripts are. I just don't think anybody else would be at all interested.
Don't know if anybody would find this useful:
$ cat bin/antiwordx
#!/bin/bash -
PATH=/usr/local/bin:/bin:/usr/bin; export PATH
allan=0
for i
do
j="${i/\.docx/.zip}"
d="${i/\.docx/}"
cp -p "$i" "$j"
if [ $? != 0 ]
then
((allan++))
continue
else
mkdir "$d"
if [ $? != 0 ]
then
((allan++))
continue
else
unzip -d "$d" "$j" > /dev/null
if [ $? != 0 ]
then
((allan++))
continue
else
sed 's/<w:p[^>]*>/\n/g' "$d/word/document.xml" | sed -e 's/<[^>]*>//g' -e '/^$/d' -e 's/$/\n/'
if [ $? != 0 ]
then
((allan++))
continue
else
rm -r "$d" "$j"
if [ $? != 0 ]
then
((allan++))
continue
fi
fi
fi
fi
fi
done
exit $allan
# vim: set nospell:
This was written before OpenOffice could read .docx files and I just desperately needed to be able to read the damn things somehow. It is no prettier than the output of antiword but it is useful for extracting information quickly when that's all that's required.
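The heavy lifting is the sed pipeline; on a toy fragment in the same shape as word/document.xml (tags invented for the example, GNU sed assumed for \n in the replacement, and the final blank-line-inserting step omitted):

```shell
# Hypothetical fragment mimicking word/document.xml paragraph markup:
xml='<w:p w:rsidR="1">Hello</w:p><w:p>World</w:p>'
# Break at each opening <w:p ...> tag, then strip remaining tags and blanks:
out=$(printf '%s' "$xml" | sed 's/<w:p[^>]*>/\n/g' | sed -e 's/<[^>]*>//g' -e '/^$/d')
echo "$out"
```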