I wonder, is it legal to have Googlebot as a user agent?
Never gave it much thought. Report yourself as a browser if you wish. The user agent isn't everything anyway: you can still be fingerprinted down to what you probably are even with a fake user agent. What it will get you is a page made for a specific platform, if the website serves pages that way. If you want a page with .m3u8 info, the iPhone page will have it; the Windows page may not.
Got my browser user agent set to Win 10 Firefox 56
https://panopticlick.eff.org
Knows that I am a Linux box.
Since you ask about user agents, and this is a scripting thread:
If you want to check what your scripts/browsers are reporting, then:
netcat -l -p 8000 -v
or
ncat -k -l -p 8000 -v
Then open web browser or script to http://127.0.0.1:8000
And read your UA in the netcat terminal.
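With the netcat listener above running, you can poke it with curl's -A flag to watch the User-Agent line change. `fetch_as` is just a made-up helper name for illustration:

```shell
# Hypothetical helper: fetch a URL pretending to be the given User-Agent
# and print only the HTTP status code. curl's -A sets the User-Agent header.
fetch_as() {
    ua=$1
    url=$2
    curl -s -o /dev/null -w '%{http_code}' -A "$ua" "$url"
}

# e.g. against the listener above (watch the netcat terminal):
#   fetch_as 'Googlebot/2.1 (+http://www.google.com/bot.html)' http://127.0.0.1:8000/
#   fetch_as 'Mozilla/5.0 (Windows NT 10.0; rv:56.0) Gecko/20100101 Firefox/56.0' http://127.0.0.1:8000/
```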
Or just go to http://ipchicken.com
And here is a good example. Set your user agent to Googlebot and:
Sorry, you have been blocked
You are unable to access ipchicken.com
Offline
Output from ncat (the same is also shown by ipchicken):
Host: localhost:13000
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:45.0) Gecko/20100101 Firefox/45.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
And I'm running links2. Also, panopticlick doesn't work without JavaScript.
Last edited by Leonid.I (2018-02-05 02:31:16)
Arch Linux is more than just GNU/Linux -- it's an adventure
pkill -9 systemd
Offline
Read-only mode for Vim:
vim -RM <filename>
Vim is my favorite text editor. I love the keyboard controls and the syntax highlighting, so naturally I also want to use it as a pager somewhat, to read text files. At home I have "vimpager" installed and it's great. But there are many Linux computers I deal with on a daily basis where vimpager is just not available.
First I learned about "-R". Read-only mode! Perfect! ...until I accidentally bumped a key and found out it's not truly read-only: I could still make changes to the text file, and even save them with ":w!". Lame.
So then I learned about "-M". No-modify mode! That's what I was looking for. ...until I opened a file in a new instance of Vim that I was already viewing in another terminal. Turns out it still created a hidden swap ("dot") file to guard against concurrent edits, even though the file couldn't be modified anyway.
I was in despair. Was there really a feature that Vim DIDN'T have??
But THEN, I just recently realized that I can combine the two!
So now I get the "no hidden lock dot file" feature from "-R" and the "no modify" feature from "-M". Just what I needed.
Offline
At home I have "vimpager" installed and it's great...
I liked vimpager too. Until I read its code.
Then I learned I could do the same with a trivial custom vimrc, so now I have this:
export MANPAGER='col -b | vim -u ~/.vimmanrc --not-a-term -'
(note to BASH users, this will not work for you without an extra subshell: e.g. '/bin/bash -c "col -b | ..."')
And the meat of .vimmanrc (minus some personal touches)
noremap q :q!
syntax on
set ft=man
set ts=8
set nomod
set noma
And you could have a general (non man) pager with just `vim -u ~/.vimmanrc`.
Last edited by Trilby (2018-02-07 21:49:40)
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
In case any of you needs to validate an e-mail address some day:
I made a library for that. (So far it's mostly a command-line tool.)
Offline
I wanted to do something like this...
[ugjka@archee dumb-mp3-streamer]$ arecord - -f cd | lame - - | dumb-mp3-streamer
Recording WAVE '-' : Signed 16 bit Little Endian, Rate 44100 Hz, Stereo
2018/02/25 22:02:51 Starting Streaming on http://127.0.0.1:8080/stream
2018/02/25 22:02:51 Starting Streaming on http://192.168.123.100:8080/stream
2018/02/25 22:02:51 Starting Streaming on http://192.168.1.3:8080/stream
2018/02/25 22:02:51 Starting Streaming on http://[::1]:8080/stream
2018/02/25 22:03:01 Buffer created...
So I made this https://github.com/ugjka/dumb-mp3-streamer
https://ugjka.net
paru > yay | vesktop > discord
pacman -S spotify-launcher
mount /dev/disk/by-...
Offline
I've been using this script that, combined with a crontab job, checks the battery level and notifies the user through libnotify. I tried to improve it so that it doesn't do dumb things such as yelling "Battery is low!" even when the AC is plugged in. It also warns you when the battery is full.
This is my first piece of code ever, and it was more an excuse to get acquainted with vim, so it's not going to make you go "wow".
#!/bin/sh
# crontab job: */5 * * * * /home/angelo/.bin/battery
export XAUTHORITY=/home/angelo/.Xauthority
export DISPLAY=":0"

battery_level=$(acpi -b | cut -d ' ' -f 4 | grep -o '[0-9]*')
battery_charge=$(acpi -b | cut -d ' ' -f 3 | grep -o '[A-Za-z]*')
battery_time=$(acpi -b | awk -F '[ :]' '{print $7}')

if [ "$battery_charge" = "Full" ]; then
    notify-send "Charged" "Battery level is 100%"
fi

if [ "$battery_charge" != "Charging" ] && [ "$battery_level" -le 15 ]; then
    notify-send "Battery level is low" "${battery_level}%, ${battery_time} minutes left"
fi

if [ "$battery_level" -le 6 ]; then
    notify-send "Critical battery level" "Suspending in 10 seconds..."
    sleep 10
    # Re-check in case the charger was plugged in during the sleep
    if [ "$(cat /sys/class/power_supply/BAT1/status)" = "Discharging" ]; then
        systemctl suspend
    fi
fi
EDIT: it was still doing one dumb thing (not allowing you to plug the charger during the sleep), now it just does the job.
EDIT 2: by reusing the same variable in different conditions, this script was still dumb enough to suspend without re-checking the second 'if/then' statement (suspending even if you plugged in the charger within the 10-second window).
Lot of road ahead before I learn this stuff.
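Not a correction, just an option: the same numbers acpi(1) parses are exposed directly in sysfs, so a small helper (hypothetical name; the battery directory varies by machine) can drop the acpi dependency:

```shell
# Read charge level and status straight from sysfs instead of acpi(1).
# $1 is a power_supply directory, e.g. /sys/class/power_supply/BAT1
battery_report() {
    [ -d "$1" ] || return 1
    printf '%s at %s%%\n' "$(cat "$1/status")" "$(cat "$1/capacity")"
}
```

So `battery_report /sys/class/power_supply/BAT1` prints something like `Discharging at 42%`.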
Last edited by lo1 (2018-02-28 18:02:15)
Offline
Bash Multi Tool v1.1 - a Bash shell library of commonly used functions to import into scripts.
I write a lot of bash scripts, so I made a library of frequently used functions to import, rather than keep copying the same code snippets into each script.
Not sure if there is much demand for this type of project.
This is just the first version, v1.1.
sample test output
Last edited by glyons (2018-02-28 23:35:32)
Offline
https://github.com/ugjka/zipper
Messing around in Go
https://ugjka.net
paru > yay | vesktop > discord
pacman -S spotify-launcher
mount /dev/disk/by-...
Offline
This script downloads the pages of the current issue of my local newspaper and merges them.
#!/bin/sh
news_date=$(date --date=yesterday +%d | sed 's/^0//')
curl --fail http://digitalimages.bhaskar.com/gujarat/epaperpdf/$(date +%d%m%Y)/${news_date}VAPI%20CITY-PG[1-25]-0.PDF --create-dirs --output $HOME/db/$(date +%d%m%Y)/Vapi#1.pdf
cd $HOME/db/$(date +%d%m%Y)
parts="$(find * | sort -V)"
pdfunite $parts $HOME/db/$(date +%d%m%Y).pdf && /usr/bin/rm -r $HOME/db/$(date +%d%m%Y)
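The `sort -V` matters here: pdfunite gets its page list from `find * | sort -V`, and version sort is what keeps multi-digit page numbers in numeric order (plain `sort` would put page 10 before page 2). With made-up file names:

```shell
# Version sort orders numbered files numerically rather than lexically.
printf '%s\n' Vapi10.pdf Vapi2.pdf Vapi1.pdf | sort -V
# → Vapi1.pdf
#   Vapi2.pdf
#   Vapi10.pdf
```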
Last edited by Docbroke (2018-03-14 11:29:55)
Arch is home!
https://github.com/Docbroke
Offline
I made a script to manage config files. It moves files into a repository and symlinks them back to their original location.
It currently has 7 commands: install, uninstall, add, rm, mv, cp, ln. For usage info run "conf --help". It uses stow for installing/uninstalling, and like stow it supports multiple "packages", so you can install only the relevant packages on each system.
The script was created because I was tired of using stow manually. In particular, adding local files to the repository, moving files out of the repository, and deleting files are much easier with this tool.
Just run "conf add [-p PACKAGE] FILE1 FILE2" to move FILE1 and FILE2 into PACKAGE in the repository and symlink them back into place.
Not all options currently work 100%: -d, -v, -q, -b and -f are only partly implemented as of now, so don't rely on them.
Here it is if anyone wants to give it a try: https://github.com/rstnd/config-manager. Feedback is greatly appreciated!
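For anyone curious what an "add" boils down to, the core is just move-then-symlink. A minimal sketch with made-up paths (the real script does this through stow packages):

```shell
# Demonstrate the move-then-symlink idea in a throwaway directory.
demo=$(mktemp -d)
repo="$demo/repo/mypkg"                  # stow-style package dir (made up)
target="$demo/home/.config/foo.conf"     # the file being adopted (made up)
mkdir -p "$repo/.config" "${target%/*}"
echo 'option=1' > "$target"
mv "$target" "$repo/.config/"            # the real file now lives in the repo
ln -s "$repo/.config/foo.conf" "$target" # symlink points back to its old home
cat "$target"                            # prints "option=1" through the link
```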
Last edited by rstnd (2018-03-16 16:40:15)
Offline
Usage: systemcreate.py [OPTIONS]
Simple program for making systemd units:
* Starts after network
* Restarts on failure
Options:
-n, --name TEXT Unit name.
-t, --type TEXT Application/Script type (bash/python2/python3).
-p, --path TEXT Full path to Application/Script.
-a, --args TEXT Application/Script arguments "one two three".
--version Show the version and exit.
--help Show this message and exit.
#!/usr/bin/python
#python 3
import click
import os
import sys
global bashUnit
global pythonUnit
bashUnit = '''[Unit]
Description=%s
Documentation=Generated by systemcreate.py
After=network.target
[Service]
Type=oneshot
ExecStart=%s %s
Restart=on-failure
RestartSec=15
[Install]
WantedBy=multi-user.target
'''
pythonUnit = '''[Unit]
Description=%s
Documentation=Generated by systemcreate.py
After=network.target
[Service]
Type=simple
ExecStart=%s %s %s
Restart=on-failure
RestartSec=15
[Install]
WantedBy=multi-user.target'''
@click.command()
@click.option('-n', '--name', help='Unit name.')
@click.option('-t', '--type', default='bash', help='Application/Script type (bash/python2/python3).')
@click.option('-p', '--path', help='Full path to Application/Script.')
@click.option('-a', '--args', help='Application/Script arguments "one two three".')
@click.version_option('0.1')
def systemcreate(name, type, path, args):
    '''Simple program for making systemd units:
    * Starts after network\n
    * Restarts on failure'''
    invalid = 'You have entered an invalid '
    success = 'Unit creation successful.'
    if name is None:
        print(invalid + '--name.')
    elif type not in ('bash', 'python2', 'python3'):
        print(invalid + '--type.')
    elif path is None or not os.path.isfile(path):
        print(invalid + '--path.')
    else:
        file = open('/etc/systemd/system/' + name + '.service', 'w+')
        python2 = '/usr/bin/python2'
        python3 = '/usr/bin/python'
        if args is None:
            args = ''
        if type == 'bash':
            file.write(bashUnit % (name, path, args))
        elif type == 'python2':
            file.write(pythonUnit % (name, python2, path, args))
        elif type == 'python3':
            file.write(pythonUnit % (name, python3, path, args))
        file.close()
        print(success)

if __name__ == '__main__':
    if os.geteuid() != 0:
        print('You need root privileges to run systemcreate.py')
        sys.exit(0)
    systemcreate()
Scan for ftp servers with a known password:
#!/usr/bin/python
#python3
from ftplib import FTP, error_perm

# gather information
subnet = input('IP subnet <x.x.x>.x: ')
ipStart = input('IP Range start x.x.x.<x>: ')
ipStop = input('IP Range stop x.x.x.<x>: ')
username = input('Username to try: ')
passwd = input('Password to try: ')

# clean output file
f = open('iplist.txt', 'w')
f.close()

# execute login attempts
for ip in range(int(ipStart), int(ipStop) + 1):
    try:
        host = subnet + '.' + str(ip)
        ftp = FTP(host)
        feedback = ftp.login(username, passwd)
        ftp.quit()
        if feedback == '230 User logged in, proceed.':
            f = open('iplist.txt', 'a')
            f.write(host + '\n')
            f.close()
            print('Successful login.')
    except TimeoutError:
        print('Timeout: moving on.')
    except ConnectionRefusedError:
        print('Connection refused.')
    except error_perm:
        # login() raises error_perm on a wrong password; don't crash the scan
        print('Login refused.')
Last edited by izzno (2018-03-17 09:47:52)
Offline
Here's a handy little thing that gives a simplified overview of the system load situation. Interpreting (and troubleshooting) system load can be a bit cumbersome at times, and there are many moving parts to consider.
The script tries to summarize the system load, taking the most common scenarios into account, so you have a better idea of where to start looking for possible culprits when your system starts to slow down. It also features an intuitive colour scheme.
# Show most memory intensive processes
function psmem () {
    echo "MSIZE: PID: USER: COMMAND:"
    echo
    ps -eo size,pid,user,command | sort -rn | head -10 | awk '
    {
        hr[1024**2]="GB"; hr[1024]="MB";
        for (x=1024**3; x>=1024; x/=1024) {
            if ($1>=x) {
                printf ("%-6.2f %s ", $1/x, hr[x]);
                break
            }
        }
    }
    { printf ("%-6s %-10s ", $2, $3) }
    {
        for (x=4; x<=NF; x++) { printf ("%s ", $x) }
        print ("\n")
    }
    '
}

# Show most CPU intensive processes
function pscpu () {
    ps -eo pid,comm,user,%cpu --sort=-%cpu | head -n 10
}

# Show stale mount points
function mounthealth () {
    stale=0
    out=$(mount | awk '{print $3}' |
        while read mount_point; do
            timeout 10 ls "$mount_point" >& /dev/null || echo "Stale mount point at $mount_point"
        done | grep Stale)
    echo "$out"
    if [[ "$out" != "" ]]; then
        stale=1
    fi
    if [[ "$stale" == "0" ]]; then
        echo "All mountpoints are okay."
    fi
}

# Show processes in a state of uninterruptible sleep (D) or zombies (Z).
# The trailing "grep ." keeps the nonzero exit status when nothing matches,
# which the caller relies on.
function dps () {
    ps aux | awk '$8 ~ /^[DZ]/' | grep .
}
# I'm not sure about the final verdict scheme. If you have something to add/change/subtract, feel free to do so.
function get_verdict() {
    if [[ $m5 -le $m10 ]]; then
        if [[ $m1 -le $m5 ]]; then
            verdictNo=0
        else
            verdictNo=1
        fi
    else
        if [[ $m1 -le $m5 ]]; then
            verdictNo=1
        else
            verdictNo=2
        fi
    fi
    if [[ $m1 -ge $vcpus ]]; then
        verdictNo=2
    fi
    if [[ $m1 -lt $vcpus ]] && [[ $m5 -lt $vcpus ]]; then
        verdictNo=10
    fi
    if [[ "$iowaitnumber" -gt "0" ]]; then
        if [[ $m1 -ge $m5 ]]; then
            verdictNo=2
        fi
    fi
}
# Not really all that useful without some persistent metrics, but it does the job in the short term.
function get_trend() {
    trend=$(expr $2 - $1)
    if [[ "$trend" -gt "0" ]]; then
        trendmsg="$Blue Falling$Color_Off"
    else
        trend=$((-1 * trend))
        if [[ "$trend" -lt "100" ]]; then
            trendmsg="$Green Rising$Color_Off"
        elif [[ "$trend" -lt "500" ]]; then
            trendmsg="$Yellow Rising$Color_Off"
        else
            trendmsg="$Red Rising$Color_Off"
        fi
    fi
    echo -e "$trendmsg"
}
function verdict() {
    Color_Off="\033[0m"
    Red="\033[0;31m"
    Green="\033[0;32m"
    Yellow="\033[0;33m"
    Blue="\033[0;36m"
    # Drop the decimal point and leading zero so the load averages
    # can be compared as integers (e.g. "8.07" becomes "807").
    m1=${m1/.}
    m1=${m1#0}
    m5=${m5/.}
    m5=${m5#0}
    m10=${m10/.}
    m10=${m10#0}
    vcpus=$((cpus * 100))
    verdictNo=1
    get_verdict
    if [[ "$verdictNo" == "0" ]]; then
        verdictmsg="$Green Good$Color_Off"
    fi
    if [[ "$verdictNo" == "1" ]]; then
        verdictmsg="$Yellow Mixed$Color_Off"
    fi
    if [[ "$verdictNo" == "2" ]]; then
        verdictmsg="$Red Warning$Color_Off"
    fi
    if [[ "$verdictNo" == "10" ]]; then
        verdictmsg="$Blue Relaxed$Color_Off"
    fi
    echo -e "Verdict: $verdictmsg"
    echo -e "Trend: $(get_trend $m1 $m5)"
}

# Free memory percentage
memory_percentage() {
    free | grep Mem | awk '{print $4/$2 * 100.0}'
}

# Function name is probably wrong. It checks for D processes; I just assume those are waiting on IO.
get_iowait() {
    iowait=$(top -b -n 1 | awk '{if (NR <= 7) print; else if ($8 == "D") {print; count++}} END {print "Total status D (I/O wait, probably): " count}' | grep Total)
    iowaitnumber=$(echo "$iowait" | awk '{print $7}')
}
function load () {
    echo
    echo "System load:"
    cpus=$(nproc --all)
    echo "CPUs: $cpus"
    m10=$(awk '{print $3}' /proc/loadavg)
    m5=$(awk '{print $2}' /proc/loadavg)
    m1=$(awk '{print $1}' /proc/loadavg)
    echo -e "Last 10m: $m10"
    echo -e "Last 5m: $m5"
    echo -e "Last 1m: $m1"
    echo -e "Running processes: $(awk '{print $4}' /proc/loadavg)"
    echo -e "Total processes: $(awk '{print $5}' /proc/loadavg)"
    get_iowait
    if [[ "$iowaitnumber" -gt "0" ]]; then
        echo -e "$Red\b$iowait$Color_Off"
    fi
    cpuline=$(top -b -n 1 | grep ^Cpu)
    echo -e "Time spent idle: $(echo $cpuline | awk '{print $5}' | sed 's/%.*//g')%"
    echo -e "Time spent in wait on IO: $(echo $cpuline | awk '{print $6}' | sed 's/%.*//g')%"
    echo -e "Time spent in user space: $(echo $cpuline | awk '{print $2}' | sed 's/%.*//g')%"
    echo -e "Time spent in kernel space: $(echo $cpuline | awk '{print $3}' | sed 's/%.*//g')%"
    echo -e "Time spent on low priority processes: $(echo $cpuline | awk '{print $4}' | sed 's/%.*//g')%"
    echo -e "Time spent servicing hardware interrupts: $(echo $cpuline | awk '{print $7}' | sed 's/%.*//g')%"
    echo -e "Time spent servicing software interrupts: $(echo $cpuline | awk '{print $8}' | sed 's/%.*//g')%"
    echo
    verdict
    get_iowait
    if [[ "$iowaitnumber" -gt 0 ]]; then
        echo
        echo "--------------------------------------------------------------------------------------------"
        echo -e "$Yellow Processes in a state of uninterruptible sleep have been detected. Possible culprits:$Color_Off"
        echo
        dps
        if [[ "$?" != "0" ]]; then
            echo -e "$Green All gone now. $Color_Off"
        fi
        echo
        mounthealth
        echo
        echo "--------------------------------------------------------------------------------------------"
        echo
    fi
    mem=$(memory_percentage)
    mem=${mem/.}
    mem=${mem#0}
    if [[ $mem -gt 800000 ]]; then
        echo
        echo "--------------------------------------------------------------------------------------------"
        echo -e "$Yellow High memory usage of $(memory_percentage)% has been detected. Possible culprits:$Color_Off"
        echo
        psmem
        echo "--------------------------------------------------------------------------------------------"
        echo
    fi
    if [[ $m1 -gt $vcpus ]]; then
        echo
        echo "--------------------------------------------------------------------------------------------"
        echo -e "$Yellow Most CPU intensive processes:$Color_Off"
        echo
        pscpu
        echo "--------------------------------------------------------------------------------------------"
        echo
    fi
}
I'm not the original author of some of these functions and I can't remember who wrote them in the first place, so if you see your work in this post - thank you! All due credit goes to the original authors.
Hopefully someone will find it useful.
Example usage:
[~] ≽ load
System load:
CPUs: 32
Last 10m: 8.07
Last 5m: 8.50
Last 1m: 14.55
Running processes: 10/2986
Total processes: 21697
Time spent idle: 78.6%
Time spent in wait on IO: 0.0%
Time spent in user space: 9.7%
Time spent in kernel space: 11.0%
Time spent on low priority processes: 0.0%
Time spent servicing hardware interrupts: 0.0%
Time spent servicing software interrupts: 0.7%
Verdict: Relaxed
Trend: Rising
Cheers!
0x29a
Offline
Simple web page to pdf printer with PyQt5/QtWebEngine
Prints <url> to <file.pdf> and every link that you click in the
browser window to separate pdf files after the page loads. Or
you can print just one page and quit by closing the browser
window. Settings to change page layout/margins/paper size/font/
user agent/scripts-images on/off.
You can open a wiki for example and click on the links related
to the first page and every page loaded will be printed to
file1.pdf, file2.pdf file3.pdf etc. Logs to terminal which
titles go to what file and keeps a WebToPdf_log file.
Needs python3, python-pyqt5, qt5-webengine
#! /usr/bin/env python
# Python QtWebEngine Web page to pdf printer.
# Adjustable for content, paper size, font size, layout, user agent
# Makes a pdf from <url> and every page load (clicked links)
# Outputs log to terminal and WebToPdf_log
# Usage: script.py <url> <pdfname> or script.py and answer prompts
import sys, os
from PyQt5.QtGui import QPageLayout, QPageSize
from PyQt5.QtCore import QUrl, QSizeF, QMarginsF, pyqtSignal
from PyQt5.QtWidgets import QApplication
from PyQt5.QtWebEngineWidgets import (QWebEngineView,
QWebEngineSettings, QWebEngineProfile)
agent = ('Mozilla/5.0 (Windows NT 10.0; WOW64; rv:57.0)'
' Gecko/20100101 Firefox/57.0')
class PdfPrint():
    def __init__(self, url, out_file):
        super(PdfPrint, self).__init__()
        # Set a browser user agent
        self.agent = QWebEngineProfile()
        self.agent.defaultProfile().setHttpUserAgent(agent)
        # Set scripts or images on/off, font size for .pdf
        self.printer = QWebEngineView()
        self.printer.settings().setAttribute(
            QWebEngineSettings.JavascriptEnabled, False)
        self.printer.settings().setAttribute(
            QWebEngineSettings.AutoLoadImages, False)
        self.printer.settings().globalSettings().setFontSize(
            QWebEngineSettings.MinimumFontSize, 18)
        # Set page layout/margins/paper size for .pdf in mm
        # Letter 216×279, Legal 216×356, Ledger 279×432, Tabloid 432×279
        # A4 210x297, A3 297x420, A2 420x594, A1 594x841
        margins = QMarginsF(5, 5, 5, 5)
        layout = QPageLayout(QPageSize(QSizeF(216, 279),
            QPageSize.Millimeter), QPageLayout.Portrait,
            margins, QPageLayout.Millimeter, margins)

        # Check if filename exists; if so add +1 to the name, no overwrite
        def pdf_name():
            num = 1
            while True:
                f_name = out_file + str(num) + '.pdf'
                if os.path.exists(f_name):
                    num += 1
                else:
                    break
            return f_name

        def print_pdf():
            self.printer.show()
            self.printer.page().printToPdf(pdf_name(), layout)
            print(self.printer.title())
            print('Saved to: ' + pdf_name() + '\n')
            # Keep a log file
            with open('WebToPdf_log', 'a') as f:
                f.write('\nTitle: %s\nSaved to: %s\n'
                        % (self.printer.title(), pdf_name()))

        self.printer.load(QUrl(url))
        self.printer.loadFinished.connect(print_pdf)
        self.printer.setZoomFactor(1.2)  # page zoom

if __name__ == '__main__':
    app = QApplication([])
    # Open with arguments or prompt for input
    if len(sys.argv) > 2:
        url = sys.argv[1]
        out_file = sys.argv[2]
    else:
        url = input('Enter/Paste url: ')
        out_file = input('Enter pdf name: ')
    PdfPrint(url, out_file)
    sys.exit(app.exec_())
Offline
For quite a while I've been using my own tiny address book program for mutt. It replaces what many online examples have 'abook' doing: you can store the address(es) from the current message into an address book, and call out to that address book to complete addresses in the To field of new mail. This worked well, but I continually had to add addresses. I've now found a much better and simpler way courtesy of 'notmuch', which I've been using for ages, just never for this purpose. Now in my muttrc I have this:
set query_command = "rolo %s"
And I removed my binding for "saving" addresses as 'rolo' is the following script:
#!/bin/sh
notmuch address --deduplicate=address --output=count \
$(printf ' and from:%s' $* | sed 's/^ and //') | sort -nr | \
sed '
1 iSearch results for "'"$*"'"
s/^[0-9]*\s*\(.*\) <\([^>]*\)>/\2\t\1/
'
This takes any number of search terms and passes them to 'notmuch' for an address search of the senders of all of my email and formats it for display in mutt's listing. The 'sort' ensures most active addresses are at the top as these are most likely what I'd want.
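To see what the sed stage does, here is one made-up line in the `--output=count` shape (count, name, address) pushed through the same substitution:

```shell
# The substitution swaps "count Name <addr>" into "addr<TAB>Name",
# which is the listing format mutt's query_command expects.
echo '42   John Doe <jdoe@example.com>' |
    sed 's/^[0-9]*\s*\(.*\) <\([^>]*\)>/\2\t\1/'
```

That emits `jdoe@example.com`, a tab, then `John Doe`, with the leading count stripped.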
Now I never have to worry about adding addresses to an address book: my maildir is my address book.
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
alias '?'='time sleep inf'
Offline
alias '?'='time sleep inf'
What do you use it for?
Offline
This is my bash function to download AUR packages from git. It grew organically; maybe I should clean it up a bit.
aurgitget() {
    local AUR4GIT_STORAGE="$HOME/build/aur4get"
    local AUR4GIT_TEMP="/tmp/aur4git"
    local AUR4GIT_EDIT="EDIT"
    mkdir -p $AUR4GIT_STORAGE $AUR4GIT_TEMP
    cd $AUR4GIT_STORAGE
    if [[ -e $1 ]]; then
        if [[ $1/.git ]]; then
            cd $AUR4GIT_STORAGE/$1
            if [[ $(git pull) ]]; then
                cd $AUR4GIT_STORAGE
            else
                echo "$AUR4GIT_STORAGE/$1 exists, but pull failed"
                return 1
            fi
        else
            echo "$AUR4GIT_STORAGE/$1 exists, but .git is missing"
            return 1
        fi
    else
        git clone $AUR/$1.git
    fi
    rsync -az $AUR4GIT_STORAGE/$1 $AUR4GIT_TEMP
    cd $AUR4GIT_TEMP/$1
    if [[ $AUR4GIT_EDIT == "EDIT" ]]; then
        if [[ -e PKGBUILD ]]; then
            $EDITOR PKGBUILD
            pwd
        else
            echo "No PKGBUILD in $(pwd)!"
            return 1
        fi
    fi
}
For some reason, I'm a sucker for rsync and distrust cp. I have no idea where this comes from.
This is my AUR update preparation script based on aurgitget and cower:
aurupdate() {
    local AUR4GIT_EDIT="FALSE"
    for i in $(cower -qu); do
        aurgitget $i
    done
    cd $AUR4GIT_TEMP
}
After I call aurupdate, I check the PKGBUILD files and manually type a for loop to build and install all packages with makepkg. I did not automate this because a) split packages need to be handled with care, b) automation makes people careless, and c) I don't do much admin work these days, so remembering how to write a simple for loop is a way of keeping the bash brain cells active.
EDIT: I forgot, I have
export AUR="https://aur.archlinux.org"
set in my .bashrc; the function requires it. You could also replace the $AUR call with the AUR address directly, but I set it up this way in case I really MUST play around with aurweb myself one day.
EDIT: Cleaned up some code, see Trilby's post below.
Last edited by Awebb (2018-04-11 12:01:21)
Offline
Awebb, this is rather clumsy:
if [[ ! -e $AUR4GIT_STORAGE ]]; then
    mkdir -p $AUR4GIT_STORAGE
fi
if [[ ! -e $AUR4GIT_TEMP ]]; then
    mkdir -p $AUR4GIT_TEMP
fi
Why not just this:
mkdir -p $AUR4GIT_STORAGE $AUR4GIT_TEMP
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
Awebb, this is rather clumsy:
if [[ ! -e $AUR4GIT_STORAGE ]]; then
    mkdir -p $AUR4GIT_STORAGE
fi
if [[ ! -e $AUR4GIT_TEMP ]]; then
    mkdir -p $AUR4GIT_TEMP
fi
Why not just this:
mkdir -p $AUR4GIT_STORAGE $AUR4GIT_TEMP
It's an artifact. I initially did more than just mkdir in case the folder did not exist, but the rest of the code was purged.
Anything else caught your eye?
Offline
Not much, though I think the following can be trimmed quite a bit:
if [[ $1/.git ]]; then
    cd $AUR4GIT_STORAGE/$1
    if [[ $(git pull) ]]; then
        cd $AUR4GIT_STORAGE
    else
        echo "$AUR4GIT_STORAGE/$1 exists, but pull failed"
        return 1
    fi
else
    echo "$AUR4GIT_STORAGE/$1 exists, but .git is missing"
    return 1
fi
There should be no need to return to the upper directory upon a successful pull, as the next steps all use full/absolute paths anyway (so the cwd doesn't matter). So if I'm reading it right, this would do the same:
if [[ ! -e $1/.git ]]; then
    echo "$AUR4GIT_STORAGE/$1 exists, but .git is missing"
    return 1
fi
cd $AUR4GIT_STORAGE/$1
if [[ ! $(git pull) ]]; then
    echo "$AUR4GIT_STORAGE/$1 exists, but pull failed"
    return 1
fi
First I got rid of one branch of the conditional at the deepest nesting (no need to cd if git succeeds; just exit if it fails). Then I flipped the outer conditional to reduce the level of nesting (instead of 'if A (if B ...) else quit', use 'if ! A quit; if B ...'). This is somewhat a stylistic choice, but I think the reduction in lines of code, the removal of a level of nesting, and the ability to keep each action closer to the conditional test it depends on are concrete criteria in favor of this change.
Also, I think you may have been missing the "-e" or other test flag in the conditional for the $1/.git.
Last edited by Trilby (2018-04-11 12:15:39)
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
Indeed! That's what you get for growing code organically. Thank you.
Also, I think you may have been missing the "-e" or other test flag in the conditional for the $1/.git.
Copying must have swallowed it. It's in my file. Strange.
Offline
What is either
if [[ $(git pull) ]]; then
*or*
if [[ ! $(git pull) ]]; then
doing?
Looking at the length of the stdout for this is, well, rather clumsy (and as a user I would prefer to see that output anyway). Use proper git plumbing for this:
if ! git fetch; then
    echo "something went horribly wrong, check your network"
    return 1
fi
if (( $(git rev-list --count ..@{u}) > 0 )); then
    git merge --ff-only
else
    echo "already up-to-date"
    return 1
fi
Last edited by eschwartz (2018-04-11 21:37:16)
Managing AUR repos The Right Way -- aurpublish (now a standalone tool)
Offline
Ah! Yes. I was doing so many tests for variables and folders, so the bash tests came automatically.
I'm not sure about "proper git plumbing", though. Why do you think clone and pull are improper and why is this fixed with fetch and merge?
Last edited by Awebb (2018-04-12 09:58:26)
Offline
It's not improper, unless you actually want to tell whether the last pull succeeded because there was nothing to pull, or because it successfully pulled something. Currently you're checking that by seeing whether anything is emitted to stdout. -_-
git pull is porcelain designed to combine the fetch and merge commands into one, for no purpose other than to save the user some typing and be a bit more intuitive. It works *great* at this... but it does so by throwing away the scriptable status of the separate commands it runs internally.
Unless you'd like to propose some other way of telling how many commits were merged in the last merge, when a merge that doesn't do anything does not (by design) leave any traces? I guess you could save `git rev-parse HEAD` to a variable beforehand, then compare it to the result after a git pull to see if anything changed. This (separate fetch && merge) seems a bit more natural to me, though.
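For what it's worth, the rev-parse comparison mentioned above sketched as a function (hypothetical name; run inside a clone that has an upstream configured):

```shell
# Returns success only if the pull actually moved HEAD,
# i.e. at least one commit was merged.
pulled_anything() {
    local before after
    before=$(git rev-parse HEAD) || return 2
    git pull --ff-only --quiet || return 2
    after=$(git rev-parse HEAD)
    [ "$before" != "$after" ]
}
```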
Last edited by eschwartz (2018-04-12 13:19:02)
Managing AUR repos The Right Way -- aurpublish (now a standalone tool)
Offline