I call this pmls
#!/usr/bin/zsh
echo "\n\n"
echo "Would you like to see (b)inaries, (d)ocs, e(x)amples, or Enter for all?"
read CHOICE
echo "\n\n"
if [ $CHOICE = "b" ] ; then
pacman -Ql $1 | grep --color=auto 'bin'
else
if [ $CHOICE = "d" ] ; then
pacman -Ql $1 | grep --color=auto 'doc'
else
if [ $CHOICE = "x" ] ; then
pacman -Ql $1 | grep --color=auto -i 'sample\|example'
else
pacman -Ql $1 | more
fi
fi
fi
usage: pmls <package_name>
I use it to see only the binaries, docs, or examples. Pressing enter/return at the prompt displays everything.
Offline
Any particular reason to use nested 'if's? Wouldn't 'case' do? getopt/s?
Edit: typo.
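For instance, a getopts version could look something like this (untested sketch; the option letters just mirror the prompt, and using -i for every pattern is my own shortcut):
#!/usr/bin/zsh
# sketch: pmls [-b|-d|-x] <package_name>
while getopts "bdx" opt; do
    case $opt in
        b) filter='bin' ;;
        d) filter='doc' ;;
        x) filter='sample\|example' ;;
    esac
done
shift $((OPTIND - 1))
if [ -n "$filter" ]; then
    pacman -Ql "$1" | grep --color=auto -i "$filter"
else
    pacman -Ql "$1" | more
fi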
Last edited by karol (2014-12-06 20:28:42)
Offline
or at least "elif" if you are going to stick with ifs.
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
"Self-made"
If someone wants to "make it better" then I won't criticize.
Offline
I'd go with 3 separate shell functions, e.g.
$ type pmlsb
pmlsb is a function
pmlsb ()
{
pacman -Ql $1 | grep --color=auto 'bin'
}
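The other two would presumably follow the same pattern (sketch; pmlsd and pmlsx are just names made up to match pmlsb, using the grep patterns from the script above):
pmlsd ()
{
    pacman -Ql "$1" | grep --color=auto 'doc'
}
pmlsx ()
{
    pacman -Ql "$1" | grep --color=auto -i 'sample\|example'
}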
Offline
Made a basic pacman CLI. Probably not going to use it often.
#!/bin/bash
clear
echo "pacSH"
echo "----------"
echo " "
echo "What do you want to do?"
echo " "
echo " [1] Install A Package"
echo " [2] Remove A Package"
echo " [3] Update System"
echo " [4] Install An AUR Package (requires yaourt)"
echo " [5] Remove An AUR Package (requires yaourt)"
echo " [6] Update AUR Packages (requires yaourt)"
echo " "
read sel
if [[ $sel == "1" ]]; then
    echo "Please enter a package name:"
    read pkgn
    sudo pacman -S "$pkgn"
elif [[ $sel == "2" ]]; then
    echo "Please enter a package name:"
    read pkgn
    sudo pacman -Rs "$pkgn"
elif [[ $sel == "3" ]]; then
    sudo pacman -Syu
elif [[ $sel == "4" ]]; then
    echo "Please enter a package name:"
    read pkgn
    yaourt -S "$pkgn"
elif [[ $sel == "5" ]]; then
    echo "Please enter a package name:"
    read pkgn
    yaourt -Rs "$pkgn"
elif [[ $sel == "6" ]]; then
    yaourt -Syu
fi
echo "pacSH is done. Press Enter to exit"
read
Offline
Note that there is no need to use yaourt to remove AUR packages. Pacman doesn't care where a package originally came from; once it is installed, it is in pacman's database.
Also, unless yaourt has changed, the 'update AUR packages' option doesn't actually update AUR packages. You need to use -Syua to update all packages, including AUR packages.
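Applied to a script like pacSH above, options 5 and 6 could become something like this (untested fragment):
elif [[ $sel == "5" ]]; then
    echo "Please enter a package name:"
    read pkgn
    sudo pacman -Rs "$pkgn"    # pacman can remove AUR packages too
elif [[ $sel == "6" ]]; then
    yaourt -Syua               # -Syua also updates AUR packages
fi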
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
I appreciate the feedback. I also just want one script rather than several. Here's my update:
#!/usr/bin/zsh
echo "Would you like to see (b)inaries, (d)ocs, e(x)amples, or Enter for all?"
read CHOICE
if [ $CHOICE = "b" ] ; then
pacman -Ql $1 | grep --color=auto 'bin'
elif [ $CHOICE = "d" ] ; then
pacman -Ql $1 | grep --color=auto 'doc'
elif [ $CHOICE = "x" ] ; then
pacman -Ql $1 | grep --color=auto -i 'sample\|example'
else
pacman -Ql $1 | more
fi
Offline
Look at using a case statement...
Offline
OK, a case version
#!/usr/bin/zsh
echo "Would you like to see (b)inaries, (d)ocs, e(x)amples, or Enter for all?"
read CHOICE
case "$CHOICE" in
    b)
        pacman -Ql "$1" | grep --color=auto 'bin'
        ;;
    d)
        pacman -Ql "$1" | grep --color=auto 'doc'
        ;;
    x)
        pacman -Ql "$1" | grep --color=auto -i 'sample\|example'
        ;;
    *)
        pacman -Ql "$1" | more
        ;;
esac
Offline
Note that there is no need to use yaourt to remove AUR packages. Pacman doesn't care where a package originally came from; once it is installed, it is in pacman's database.
Also, unless yaourt has changed, the 'update AUR packages' option doesn't actually update AUR packages. You need to use -Syua to update all packages, including AUR packages.
Thanks for this. Gonna update my script when I'm at my computer.
Offline
I use this when I'm not sure of the name of a program...
#!/usr/bin/env perl
use strict;
use warnings;
use feature 'say';
use File::Find;
find sub{ say $File::Find::name if /$ARGV[0]/ }, $_ for split ':', $ENV{PATH};
I call it fip (find in path)...
example:
fip python
=>
/usr/bin/python2
/usr/bin/python3-config
/usr/bin/python2.7-config
/usr/bin/ipython
/usr/bin/python3
/usr/bin/python2.7
/usr/bin/ipython2
/usr/bin/python2-config
/usr/bin/python3.4m
/usr/bin/python
/usr/bin/python-config
/usr/bin/python3.4-config
/usr/bin/python3.4m-config
/usr/bin/ipython3
/usr/bin/python3.4
Offline
Is it different from
$ type python
python is /usr/bin/python
$ which python
/usr/bin/python
? Do you want auto-completion?
Depending on the shell you're using and the settings you have enabled, you can e.g. press <tab> twice:
$ python <tab> <tab>
python python2-config python3 python3.4-config
python-config python2.7 python3-config python3.4m
python2 python2.7-config python3.4 python3.4m-config
Offline
Yeah, I guess my script is mostly a bad copy of autocompletion.
It will give more results than autocompletion though; for example, with python as the search string it will show ipython, while autocompletion will only show strings that match /^python/.
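For what it's worth, in bash you can get a similar substring match out of the completion machinery with compgen, e.g.:
compgen -c | grep -i python | sort -u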
Last edited by juiko (2014-12-26 05:31:22)
Offline
Take a ride on a ttycycle
#!/bin/bash
[[ -n "${1}" ]] && INTERVAL="${1}" || INTERVAL="10"
while true; do
for (( i=2; i<7; i++ )) do sleep "${INTERVAL}"; chvt "$i" & done
done
Note: this ttycycle has no brakes (while true); you might want to set a timeout:
timeout "${TIMEOUT}" ttycycle
EDIT: Added an INTERVAL setting (number of seconds)
Last edited by quequotion (2017-12-22 02:23:32)
makepkg-optimize · indicator-powersave · pantheon-{3d,lite} · {pantheon,higan}-qq
Offline
$ for (( i=2; <i<7; i++ )) do sleep 10; chvt "$i" & done
-bash: ((: <i<7: syntax error: operand expected (error token is "<i<7")
Even when I fixed it, I get
Couldn't get a file descriptor referring to the console
Offline
I've still to test it, but this is my poor man's Dropbox against my brand new Raspberry Pi, after I've set up SSH key authentication.
Beware that the first sync cannot be done in batch mode; it probably has to be done manually (see the note after the script).
#!/bin/bash
LDIR="/home/koko/Dropbox/"
RDIR="/root/unison"
RHOST="pi"
RPORT="40022"
while true ; do
unison -batch -prefer newer -times $LDIR ssh://root@$RHOST/$RDIR -sshargs "-p $RPORT -o cipher=arcfour"
timeout 60 inotifywait -r -e modify,move,create,delete $LDIR
done
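Presumably the first run would be the same command without -batch, so unison can ask about each path interactively (sketch based on the loop above):
unison -prefer newer -times $LDIR ssh://root@$RHOST/$RDIR -sshargs "-p $RPORT -o cipher=arcfour"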
Another thing comes to mind: any reason why the unison version in Arch is so outdated?
The new 2.48 unison version has a wonderful copyonconflict option that is missing in the Arch package.
Help me to improve ssh-rdp !
Retroarch User? Try my koko-aio shader !
Offline
<i<7
typo; fixed.
Perhaps you have to actually initiate the ttys (ctrl+alt+f2~f7) before you can cycle between them?
I'm using this in a live session on a laptop which is monitoring several aspects of a very large backup to a desktop.
On tty1 I initiated a large backup (30GB disk image) from my old winblow$ laptop (arch live session) to my arch desktop over ssh (over wifi).
On tty6 I started sensorsweep (over ssh) for the desktop.
On tty5 I started powertop (over ssh) for the desktop.
On tty4 I started top (over ssh) for the desktop.
On tty3 I started powertop for the laptop (no sensors available).
On tty2 I started top for the laptop and ttycycle in the background:
ttycycle & top
for (( i=2; i<7; i++ )) specifies to cycle tty2 through tty6; tty1 is skipped intentionally (nothing to see there).
Last edited by quequotion (2014-12-29 13:20:52)
makepkg-optimize · indicator-powersave · pantheon-{3d,lite} · {pantheon,higan}-qq
Offline
I'm using this in a live session on a laptop which is monitoring several aspects of a very large backup to a desktop.
Subsequently, sshbeam
#!/bin/bash
if [ -n "$3" ]; then
if [ ! -d ~/sshfs-beam.in ]; then
mkdir ~/sshfs-beam.in
sshfs "$2"@"$1":/home/"$2"/Public ~/sshfs-beam.in
cp "$3" ~/sshfs-beam.in/
sleep 1
fusermount -u ~/sshfs-beam.in
rm -rf ~/sshfs-beam.in
else
if [ -n "$(mount | grep ~/sshfs-beam.in)" ]; then
cp "$3" ~/sshfs-beam.in/
else
sshfs "$2"@"$1":/home/"$2"/Public ~/sshfs-beam.in # sshfs "$1":/home/"$2"/Public ~/sshfs-beam.in
cp "$3" ~/sshfs-beam.in/
fi
fi
else
printf "\n #$0 ip user file\n"
fi
For example, I tested how long it takes to sshbeam a large file over a zero-distance connection.
time sshbeam 127.0.0.1 quequotion ~/path\ to\ dvd.iso
quequotion@127.0.0.1's password:
real 0m34.718s
user 0m4.149s
sys 0m0.480s
Minus the password entry (~4sec), sshbeam was only about two seconds slower than an ordinary copy.
time cp ~/path\ to\ dvd.iso ./
real 0m28.997s
user 0m0.723s
sys 0m1.845s
::UPDATE:: Cleanup and help, tested over a short hop (local wifi).
time sshbeam 192.168.1.8 someone /path/to/file/with/size/699986824.mp4
someone@192.168.1.8's password:
real 5m11.709s
user 0m0.631s
sys 0m0.066s
::edit:: the Chinese character was probably confusing; even I can't really read that, but I think it was the "downloads" folder.
Last edited by quequotion (2015-01-12 01:08:04)
makepkg-optimize · indicator-powersave · pantheon-{3d,lite} · {pantheon,higan}-qq
Offline
Perhaps you have to actually initiate the ttys (ctrl+alt+f2~f7) before you can cycle between them?
They were initialized, some of them had an app or two running <shrugs>
Offline
KDE "control panel" without systemsettings:
#!/usr/bin/env bash
_kcm=($(kcmshell4 --list | cut -d'-' -f1 | grep -v "available"))
select _mod in "${_kcm[@]}"; do
kcmshell4 "$_mod" &> /dev/null
break
done
Last edited by Alad (2015-01-04 20:25:59)
Mods are just community members who have the occasionally necessary option to move threads around and edit posts. -- Trilby
Offline
Link manager for ptpb.pw and transfer.sh (https://github.com/ShadowKyogre/pb_manager <- decided to put it here in case I actually need to make it a full-fledged project)
It works, but I'm a little stuck on where I should be storing the *.tsv files, since I'm using this to help manage public links for SyncThing. So I'm torn between specifying the location via command-line args or via a config file.
import requests
import csv
import atexit
import sys
import argparse
import os
from datetime import datetime
#import dateutil.parser as dtparser
#eventually we'll need this for informing user of date expiry?
TSH_URL="https://transfer.sh"
FMT_URL="https://ptpb.pw/{}"
ALIAS_URL=FMT_URL.format("u")
PASTE_URL=FMT_URL.format("")
DB={}
TDB={}
PTPB_DB_STORE='ptpb.tsv'
TSH_DB_STORE='tsh.tsv'
def tsh_paste(*args, same_link=False):
    mfiles = []
    for f in args:
        if same_link:
            mfiles.append(('filedata', (os.path.basename(f), open(f, 'rb'))))
        else:
            fpayload={'filedata': open(f, 'rb')}
            r = requests.post(TSH_URL, files=fpayload)
            url = r.content.decode('utf-8').strip()
            dt = datetime.now().isoformat()
            TDB[f]=[url, dt]
    if len(mfiles) > 0:
        r = requests.post(TSH_URL, files=mfiles)
        data = r.content.decode('utf-8').splitlines()
        dt = datetime.now().isoformat()
        for i in range(len(data)):
            url = data[i].strip()
            TDB[args[i]]=[url, dt]

def pb_paste(*args, alias=False, private=False):
    for f in args:
        if alias:
            payload = {'c':f}
            r = requests.post(ALIAS_URL, data=payload, allow_redirects=False)
        else:
            fpayload = {'c':open(f, 'rb')}
            if private:
                payload = {'p': '1'}
            else:
                payload={}
            r = requests.post(PASTE_URL, data=payload, files=fpayload, allow_redirects=False)
        data = r.content.decode('utf-8').splitlines()
        url = data[0].replace('url: ','').strip()
        if alias:
            uuid = "redacted"
        else:
            uuid = data[1].replace('uuid: ', '').strip()
        DB[f]=[url, uuid, int(private)]
        print('{}\t{}'.format(f, url))

def pb_update(*args):
    for f in args:
        if f not in DB.keys():
            print("Huh, we don't have one...")
            continue
        if DB[f][1] != 'redacted':
            fpayload = {'c':open(f, 'rb')}
            r = requests.put(FMT_URL.format(DB[f][1]), files=fpayload)
            if DB[f][2] == 1:
                data = r.content.decode('utf-8').splitlines()
                url = data[0].replace(' updated.','').strip()
                DB[f][1] = url
                print('{} updated, new url {}'.format(DB[f][0], DB[f][1]))
            else:
                print('{} updated'.format(DB[f][0]))

def pb_delete(*args):
    for f in args:
        if f not in DB.keys():
            print("Huh, we don't have one...")
            continue
        if DB[f][1] != 'redacted':
            requests.delete(FMT_URL.format(DB[f][1]))
        print('{} deleted, removing obsolete data'.format(DB[f][0]))
        del DB[f]

def pb_db_write():
    with open(PTPB_DB_STORE, 'w') as tsvfile:
        writer = csv.writer(tsvfile, delimiter='\t')
        for k,v in DB.items():
            writer.writerow([k, v[0], v[1], v[2]])

def tsh_db_write():
    with open(TSH_DB_STORE, 'w') as tsvfile:
        writer = csv.writer(tsvfile, delimiter='\t')
        for k,v in TDB.items():
            writer.writerow([k, v[0], v[1]])

if os.path.exists(PTPB_DB_STORE):
    with open(PTPB_DB_STORE) as tsvfile:
        reader = csv.reader(tsvfile, delimiter='\t')
        for row in reader:
            DB[row[0]]=[row[1], row[2], int(row[3])]

if os.path.exists(TSH_DB_STORE):
    with open(TSH_DB_STORE) as tsvfile:
        reader = csv.reader(tsvfile, delimiter='\t')
        for row in reader:
            TDB[row[0]]=[row[1], row[2]]

if __name__ == "__main__":
    def upload(args):
        pb_paste(*args.fnames, alias=args.alias, private=args.private)
    def tupload(args):
        tsh_paste(*args.fnames, same_link=args.same_link)
    def update(args):
        pb_update(*args.fnames)
    def delete(args):
        pb_delete(*args.fnames)
    def urls(args):
        for f in args.fnames:
            if f in DB.keys():
                print(DB[f][0])
            elif f in TDB.keys():
                print(TDB[f][0])
            else:
                print("Huh, we don't have one...")

    parser = argparse.ArgumentParser(description='Manage your ptpb.pw and transfer.sh pastes')
    sparsers = parser.add_subparsers()

    parser_upload = sparsers.add_parser('upload')
    parser_upload.add_argument('--alias', action='store_true')
    parser_upload.add_argument('--private', action='store_true')
    parser_upload.set_defaults(func=upload)
    parser_upload.add_argument('fnames', metavar='N', type=str, nargs='+',
                               help='Files to put on ptpb.pw')

    parser_tupload = sparsers.add_parser('tupload')
    parser_tupload.add_argument('--same-link', action='store_true')
    parser_tupload.add_argument('fnames', metavar='N', type=str, nargs='+',
                                help='Files to put on ptpb.pw')
    parser_tupload.set_defaults(func=tupload)

    parser_update = sparsers.add_parser('update')
    parser_update.add_argument('fnames', metavar='N', type=str, nargs='+',
                               help='Files to put on ptpb.pw')
    parser_update.set_defaults(func=update)

    parser_delete = sparsers.add_parser('delete')
    parser_delete.add_argument('fnames', metavar='N', type=str, nargs='+',
                               help='Files to put on ptpb.pw')
    parser_delete.set_defaults(func=delete)

    parser_urls = sparsers.add_parser('urls')
    parser_urls.add_argument('fnames', metavar='N', type=str, nargs='+',
                             help='Files to put on ptpb.pw')
    parser_urls.set_defaults(func=urls)

    #parser.add_argument('--action', type=str, choices=['upload', 'tupload',
    #                    'update','delete','url'], default='upload')

    atexit.register(pb_db_write)
    atexit.register(tsh_db_write)
    args = parser.parse_args(sys.argv[1:])
    args.func(args)
For every problem, there is a solution that is:
Clean
Simple and most of all...wrong!
Github page
Offline
Albumdetails, a tool to generate details from a music album.
Can be found on GitHub: https://github.com/causes-/albumdetails
Example:
Artist: VA
Album: Neuro Squees
Genre: Psytrance
Year: 2014
Quality: 320kbps / 44.1kHz / 2 channels
1. Gojja - Tentacles Towards The Sun (05:34)
2. Gubbology - Freestyle (08:14)
3. Katastrof - Beatle 25 (07:12)
4. Oliveira & Myzgon - Off Topic (07:46)
5. Scum Unit - Anti Offline Part 2 (08:34)
6. Calamar Audio - Inverter (07:20)
7. Squees vs Gubbology - Coffeespider (07:29)
8. Bechamel Boyz - Liikaa Loylyy (06:32)
9. Toxic Anger Syndrome - Bergaporten (06:07)
10. Gubbology - Sekt (08:11)
11. Trance-Ingvars - Anuminaki (05:15)
Playing time: 01:18:14
Total size: 180.95 MiB
Offline
https://scontent-a-dfw.xx.fbcdn.net/hph … e=5522A2EF
#!/bin/bash
# mirrup.sh
# MIRRorlist UPgrade for Archlinux
# Backup, uncomment, rank, and set
# Reassign these as necessary
PATH_BACKUP="~/mirrup"
PATH_SYSTEM="/etc/pacman.d"
# But not this o.0
function ekko { echo ; echo ${0##*/}": "$@ ; }
echo "=== "$0" ==="
echo ${0##*/}": PATH_BACKUP="$PATH_BACKUP
echo ${0##*/}": PATH_SYSTEM="$PATH_SYSTEM
# Check for paths
for p in ${!PATH_*} ; do
    eval $p=$(echo ${!p%/})
    if ! [[ -d ${!p} ]] ; then
        ekko ${!p}" does not exist" ; exit 1
    fi
done
# Check for update
if [[ -e $PATH_SYSTEM/mirrorlist.pacnew ]] ; then
    NEW=$PATH_SYSTEM/mirrorlist.pacnew
    if [[ -e $PATH_BACKUP/mirrorlist.pacnew ]] ; then
        OLD=$PATH_BACKUP/mirrorlist.pacnew
        if [[ $NEW -ot $OLD ]] ; then
            ekko $PATH_SYSTEM"/mirrorlist.pacnew not updated" ; exit 1
        fi
    fi
else
    ekko $PATH_SYSTEM"/mirrorlist.pacnew does not exist" ; exit 1
fi
# Clear temp files
ekko "Clearing temp files..."
rm -v $PATH_BACKUP/mirrorlist*
# Backup files
ekko "Backing up files..."
cp -v $PATH_SYSTEM/mirrorlist $PATH_BACKUP/mirrorlist.bak
cp -v $PATH_SYSTEM/mirrorlist.pacnew $PATH_BACKUP
# Uncomment servers
ekko "Uncommenting .pacnew entries..."
cat $PATH_SYSTEM/mirrorlist.pacnew |\
sed 's/^#S/S/g' > $PATH_BACKUP/mirrorlist.sed
# Rank servers
ekko "Asking 'rankmirrors' to do its magic..."
time rankmirrors $PATH_BACKUP/mirrorlist.sed > $PATH_BACKUP/mirrorlist
# Might work?
ekko "Moving files (requires 'sudo')..."
sudo rm -v $PATH_SYSTEM/mirrorlist
sudo cp -v $PATH_BACKUP/mirrorlist $PATH_SYSTEM
# Syncing mirror database
ekko "Finally syncing mirror database..."
sudo pacman -Syu
exit 0
-- mod edit: converted image to url. Please see forum rules for image sizes. Or just post the text rather than a huge image of text. Trilby --
Offline
Merging with command line utilities
Offline