Regarding the topic about searching for files:
ogion@Gont ~/programming % time ll **/**(.) | wc -l
2432
ls -lh --color=auto **/**(.) 0,21s user 0,09s system 87% cpu 0,338 total
wc -l 0,01s user 0,00s system 4% cpu 0,328 total
This one is powered by zsh: **/** matches all files recursively, and (.) specifies that I only want regular files.
ogion@Gont ~/programming % time find . -type f -exec ls -lh --color=auto {} \; | wc -l
2607
find . -type f -exec ls -lh --color=auto {} \; 20,36s user 7,45s system 80% cpu 34,336 total
wc -l 0,02s user 0,10s system 0% cpu 34,335 total
This is GNU find…
GNU find 'found more' because the zsh glob skipped all hidden files (in this case the .hg / .git directories and .cvsignore files), while GNU find included them. (They weren't that many, though, compared to the number of non-hidden files.)
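For a comparable count, something like this should make find skip the hidden stuff too (a rough sketch; the -path filter is just what I'd exclude):
find . -type f -not -path '*/.*' | wc -l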
The numbers tell me to use zsh
Ogion
EDIT: I just noticed, though, that doing this in, for example, my home dir results in an argument list that is too long for ls.
Last edited by Ogion (2010-09-28 09:03:35)
(my-dotfiles)
"People willing to trade their freedom for temporary security deserve neither and will lose both." - Benjamin Franklin
"Enlightenment is man's leaving his self-caused immaturity." - Immanuel Kant
Offline
The numbers tell me to use zsh
...
I just noticed, though, that doing this in, for example, my home dir results in an argument list that is too long for ls.
And that's exactly why find is slower.
The zsh example is a scaled up version of:
ls -lh /foo /bar /baz /bat
Whereas the find example is a scaled up version of:
ls -lh /foo
ls -lh /bar
ls -lh /baz
ls -lh /bat
which do you think will be slower?
The find command is built to handle an arbitrarily deep directory tree, but that comes with a performance hit. You only get the performance of shell globbing or xargs if the directory is small enough for the tool you're handing the arguments off to.
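To illustrate (a sketch, not taken from the posts above): find can batch the arguments itself with '-exec ... +', or hand them to xargs, so ls runs only a few times instead of once per file:
find . -type f -exec ls -lh --color=auto {} + | wc -l
find . -type f -print0 | xargs -0 ls -lh --color=auto | wc -l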
Anyway, a fun little example of "the right tool for the right job" I suppose.
//github/
Offline
A looping clock for the terminal, I got the redraw string from someone here on the forums (sorry I forgot the name right now):
#!/bin/bash
# loop time from the command line
while true; do
echo -n "$(date "+%T") "
sleep 1
echo -ne "\x0d\E[2K"
done
Setting Up a Scripting Environment | Proud donor to wikipedia - link
Offline
A looping clock for the terminal, I got the redraw string from someone here on the forums (sorry I forgot the name right now):
That's nice. I've been using urxvt's built-in perl clock, but there are times when that covers some application's display as you can't seem to move it.
Having that instead could be useful.
"...one cannot be angry when one looks at a penguin." - John Ruskin
"Life in general is a bit shit, and so too is the internet. And that's all there is." - scepticisle
Offline
while true; do
printf "\r$(date "+%T")"
sleep 1
done
Offline
Was about to recommend karol's version. Gives less flicker. (I think it entirely eliminates flicker as we don't "clear" the screen but just write the new time onto it.)
Shortest version; might wanna make it a function by the way:
while sleep 1
do printf '\r%s ' "$(date +%T)"
done
``Common sense is nothing more than a deposit of prejudices laid down by the mind before you reach eighteen.''
~ Albert Einstein
Offline
Was about to recommend karol's version. Gives less flicker. (I think it entirely eliminates flicker as we don't "clear" the screen but just write the new time onto it.)
Shortest version; might wanna make it a function by the way:
while sleep 1; do printf '\r%s ' "$(date +%T)"; done
The echo approach has its benefits. You can follow Mega-G33k's script evolution here.
Offline
I'm not used to programming in bash, so this code looks ugly as hell... I just made it because I got tired of looking for possible dependencies when creating AUR packages.
Assuming that you already compiled the program and that it is working:
ldd the executable file
pacman -Qo to find all packages needed
remove the packages required by other needed packages
I don't handle libraries that aren't owned by any package because the error message is good enough for me.
#!/bin/bash
p=`pacman -Qo \`ldd $1 | cut -d' ' -f3\` | cut -d' ' -f5 | sort | uniq`
for i in $p; do
req=0;
for j in $p; do
if [ $i != $j -a `pacman -Qi $i | grep Required | grep -c $j` -eq 1 ]; then
req=1;
fi
done
if [ $req -eq 0 ]; then
echo NEED $i
fi
done
Last edited by jlcordeiro (2010-10-14 21:01:00)
Offline
Replace those backticks with $(); they make eyes bleed. Also try to avoid [; use [[ or (( instead, depending on the context. If the program is part of a package, just run namcap on it and it will do all this for you. You shouldn't be installing anything unpackaged anyway.
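A quick before/after sketch of that advice (hypothetical snippets, not the actual script):
# backticks need escaping when nested:
p=`pacman -Qo \`ldd "$1" | cut -d' ' -f3\``
# $() nests cleanly:
p=$(pacman -Qo $(ldd "$1" | cut -d' ' -f3))
# old-style test:
[ "$i" != "$j" -a "$req" -eq 0 ]
# bash keywords: no word-splitting surprises, real arithmetic:
[[ $i != "$j" ]] && (( req == 0 ))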
[git] | [AURpkgs] | [arch-games]
Offline
It's rather large: http://pastebin.org/194774
Lets me check my dA messages without opening my browser, whether my menu is splintered or not. I definitely need to make the output more customizable for conky and/or a zenity notification icon.
For every problem, there is a solution that is:
Clean
Simple and most of all...wrong!
Github page
Offline
Assuming that you already compiled the program and that it is working:
ldd the executable file
pacman -Qo to find all packages needed
remove the packages required by other needed packages
I would go to the trouble of using "readelf -d" as "ldd" sometimes gives false dependencies.
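Roughly like this (readelf comes with binutils; the awk filter is just a sketch):
# print only the DT_NEEDED entries, i.e. the libraries the binary is linked against
readelf -d /usr/bin/ls | awk '/NEEDED/ { gsub(/\[|\]/, ""); print $NF }'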
6EA3 F3F3 B908 2632 A9CB E931 D53A 0445 B47A 0DAB
Great things come in tar.xz packages.
Offline
Extract contents of rar files in a directory with recursive directory walk.
Useful for extracting multiple TV shows at once:
- Looks for *.rar files WITHOUT partXY.rar extension and extracts ONLY that one, thus extracting the archives that are named *.rar, *.r01, *.r02...
- Looks for part1.rar, part01.rar, part001.rar files and extracts ONLY those, thus extracting partX.rar - partY.rar files.
- Skips subs archives.
- Defined output directory for extracted files.
#!/usr/bin/python2
import os
import sys
import re
import signal
import subprocess
#signal handler function
def handler(signum, frame):
sys.stdout.write (" \033[1;31mBye!\033[0m\n")
sys.exit(0)
sys.stdout.write ("* Recursive unrar script 1.1 - by karabaja4\n")
argValid = False
#define signal
signal.signal(signal.SIGINT, handler)
#if argument path is defined
if (len(sys.argv) == 2):
argPath = os.path.normpath(sys.argv[1])
if os.path.exists(argPath): argValid = True
#dir is defined and valid
if (argValid):
sys.stdout.write ("\n\033[1;32m==>\033[0m Root directory successfully read from command line!\n")
rootDir = argPath + "/"
extractDir = argPath + "/../"
#or else use default dirs
else:
sys.stdout.write ("\n\033[1;31m==>\033[0m Warning: Arguments invalid or not defined, defaulting to hardcoded!\n")
rootDir = "/home/igor/Downloads/"
extractDir = "/home/igor/"
#print dirs
sys.stdout.write ("\033[1;33m==>\033[0m Root directory : \033[0;32m%s\033[0m\n" % rootDir)
sys.stdout.write ("\033[1;33m==>\033[0m Extract directory : \033[0;32m%s\033[0m\n\n" % extractDir)
#walk the dir
for root, subFolders, files in os.walk(rootDir):
for filename in files:
#join filename with path
filePath = os.path.join(root, filename)
#format the string with backslashes
filePath = filePath.replace(" ", "\ ")
filePath = filePath.replace("!", "\!")
filePath = filePath.replace(r"'", r"\'") #fix the goddamn quotation mark
#filter all except part1.rar, part01.rar, part001.rar, and rar without part substring
if re.match("^.*\.rar$", filePath):
if not re.search("part[2-9]\.rar|part0[2-9]\.rar|part[1-9][0-9]\.rar|part00[2-9]\.rar|part0[1-9][0-9]\.rar|part[1-9][0-9][0-9]\.rar|Subs|subs|subpack", filePath):
sys.stdout.write ("\033[1;33m==>\033[0m Extracting: \033[0;33m%s\033[0m ... " % (filename))
sys.stdout.flush()
#run unrar
p = subprocess.Popen("unrar " + "x " + filePath + " " + extractDir + " > /dev/null", shell=True)
sts = os.waitpid(p.pid, 0)[1]
sys.stdout.write ("\033[1;32mDone!\033[0m\n")
#end
sys.stdout.write ("\033[1;32m==>\033[0m Nothing else to do!\n")
Improvement suggestions welcome!
EDIT: updated #1
Last edited by karabaja4 (2010-10-18 22:10:13)
Offline
I made something like that in bash a while back, but I don't try to look for part1.rar or part001.rar because I never got episodes with more than 99 parts.
#!/bin/bash
dir='/dados/series'
if [ $# -eq 0 ]; then
dir='.'
else
if [ $# -eq 1 ]; then
dir=$1
else
echo "Too many input arguments."
exit 1
fi
fi
c1=`find $dir -name '*.part01.rar' 2>/dev/null`
c2=`find $dir -name '*.rar' 2>/dev/null | grep -v .part\.\..rar$`
for i in $c1 $c2; do
unrar e $i
done
exit
And I'm looking for suggestions as well. I wanted to be able to check/uncheck files as a way to learn ncurses, but I don't have the time right now.
Last edited by jlcordeiro (2010-10-18 10:24:09)
Offline
Move the cursor with the mouse in the bash prompt.
Inside the code you have to change ps=4 to how long your PS1 is. If it's multi-line, just the last line.
The y-axis and button are ignored, so clicking anywhere in the terminal will move the cursor to the respective x-axis coordinate.
#! /bin/false
#! Source with bash
# bind -x seems to only work indirectly in this case.
# http://unix.derkeiler.com/Newsgroups/comp.unix.shell/2003-11/0926.html
bind -x $'"\201":mousemove.fn'
bind '"\e[M":'$'"\201"'
# Turn on mouse reporting
# Hold shift to get normal behavior
echo -ne '\e[?9h'
# Turn it off with
# echo -ne '\e[?9l'
function mousemove.fn() {
local b x y ps
# Only works if you know how long your PS1 is.
# bind -x clears the line so it can't be derived from the current cursor position.
ps=4
# http://myfreebsd.homeunix.net/freebsd/mouse_events_shell.html
read -s -n 3 -t 0.01
printf -v b "%d" "'${REPLY:0:1}"
printf -v x "%d" "'${REPLY:1:1}"
printf -v y "%d" "'${REPLY:2:1}"
case $b in
96) b=scrollup;;
97) b=scrolldown;;
32) b=left;;
34) b=right;;
33) b=middle;;
*) b=unknown;;
esac
[[ $x < 0 ]] && x=$((x + 255))
[[ $y < 0 ]] && y=$((y + 255))
x=$((x - 32))
y=$((y - 32))
READLINE_POINT=$(( x - ps - 1 ))
}
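In case it isn't obvious, a usage sketch (the file name is hypothetical):
# source it in an interactive bash session (it is not meant to be executed)
source ./mousemove.bash
# clicking in the terminal now moves the readline cursor on the current line;
# hold shift for normal selection, and turn reporting back off with:
echo -ne '\e[?9l'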
Offline
Extract contents of rar files in a directory with recursive directory walk.
Useful for extracting multiple TV shows at once:
- Looks for *.rar files WITHOUT partXY.rar extension and extracts ONLY that one, thus extracting the archives that are named *.rar, *.r01, *.r02...
- Looks for part1.rar, part01.rar, part001.rar files and extracts ONLY those, thus extracting partX.rar - partY.rar files.
- Skips subs archives.
- Defined output directory for extracted files.
Improvement suggestions welcome!
EDIT: updated #1
I use this on zsh instead:
for file in */*.rar; do x $file; done;
where x is a function defined in my zshrc:
# {{{ Unpack-Script cuz me is tooo layzaaaaay
x () {
if [ -f $1 ] ; then
case $1 in
*.tar.bz2) tar xvjf $1 ;;
*.tar.gz) tar xvzf $1 ;;
*.bz2) bunzip2 $1 ;;
*.rar) unrar x $1 ;;
*.gz) gunzip $1 ;;
*.tar) tar xvf $1 ;;
*.tbz2) tar xvjf $1 ;;
*.tgz) tar xvzf $1 ;;
*.zip) unzip $1 ;;
*.Z) uncompress $1 ;;
*.7z) 7z x $1 ;;
*.xz) tar -xvf $1 ;;
*) echo "don't know how to extract '$1'..." ;;
esac
else
echo "'$1' is not a valid file!"
fi
}
I didn't write that; I got it from the forums here, too.
Offline
@ portwolf
http://wiki.archlinux.org/index.php/Cor … es#extract
http://wiki.archlinux.org/index.php/Bashrc_helpers
Last edited by karol (2010-10-18 23:01:29)
Offline
Read man pages for software that might not be installed, courtesy of http://man.cx.
#!/bin/bash
#
# http://man.cx manpage extractor
# requires: xmllint (libxml2)
#
BASEURL='http://man.cx'
XMLLINT='xmllint --html'
XPATH_EXPR='--xpath //*[@id="manpage"]/pre'
usage() {
echo "Usage: ${0##*/} [section] manpage" >&2
}
case $# in
1) PAGE=$1 ;;
2) SECTION="($1)"
PAGE=$2 ;;
*) usage; exit 1 ;;
esac
curl -s $BASEURL/$PAGE$SECTION | $XMLLINT $XPATH_EXPR - 2>/dev/null | sed 's|</\?[^>]\+>||g;s|&lt;|<|g;s|&gt;|>|g' | ${PAGER:-less}
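Usage would look something like this (assuming the script is saved as mancx):
mancx rsync      # default section
mancx 5 crontab  # explicit section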
Offline
Read man pages for software that might not be installed, courtesy of http://man.cx.
aw....sum
< Daenyth> and he works prolifically
4 8 15 16 23 42
Offline
This script switches between /usr/bin/python2 and /usr/bin/python (python3) in the "transition-phase", for example if you need to build chromium-dev (it needs python2).
#!/bin/bash
if [ `whoami` != "root" ]
then
echo "Please run as root!"
exit 1
fi
if ! [ -f /opt/pyswitch/pp ]
then
# Create python backup directory
mkdir -p /opt/pyswitch/
# Get current Python location
PP=`which python`
# Save old destination to file pp in backup dir
echo -ne $PP > /opt/pyswitch/pp
# Move python to backup destination
mv $PP /opt/pyswitch/python.bak
# Make symlink to python2
ln -s `which python2` $PP
echo "$PP is now python2"
else
# Get old Path
PP=`cat /opt/pyswitch/pp`
# Remove the file
rm /opt/pyswitch/pp
# Remove symlink
rm $PP
# mv python3 back
mv /opt/pyswitch/python.bak $PP
echo "$PP is now python3"
fi
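A hypothetical usage sketch (the script name is assumed):
sudo pyswitch   # /usr/bin/python now points at python2
# ... build chromium-dev or whatever else still needs python2 ...
sudo pyswitch   # the backup is restored, /usr/bin/python is python3 again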
Offline
Read man pages for software that might not be installed, courtesy of http://man.cx.
You could use lynx -dump, but the output of this one is a nicer format.
"...one cannot be angry when one looks at a penguin." - John Ruskin
"Life in general is a bit shit, and so too is the internet. And that's all there is." - scepticisle
Offline
I wrote a little script to render math formulas using xelatex. The original idea and prototype come from some guy on IRC; he wrote a PHP script doing the same thing with latex, and I just simplified the concept.
#!/bin/zsh
function quiterr {
cat eq.log | tail -n20
exit 1
}
mkdir -p /tmp/rendereq
cd /tmp/rendereq
cat >eq.tex << EOF
\documentclass{scrartcl}
\pagestyle{empty}
\usepackage{amsmath}
\begin{document}
\begin{equation*}
\begin{split}
$@
\end{split}
\end{equation*}
\end{document}
EOF
xelatex -halt-on-error eq.tex >/dev/null || quiterr
pdfcrop --margins "2 2 2 2" eq.pdf eq1.pdf >/dev/null
filename="${$(md5 eq.tex | tr '[:upper:]' '[:lower:]')%% *}.png"
convert -density 300 eq1.pdf /srv/http/tmp/$filename
echo loc: /srv/http/tmp/$filename
echo url: http://$yoururl/tmp/$filename
Something annoying is that the shell tends to strip all your backslashes before they reach the tex file, so you have to write the arguments in single quotes. But it does the trick for now.
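For example (script name assumed; the single quotes keep the backslashes intact):
rendereq '\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}'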
Last edited by laochailan (2010-10-23 18:02:25)
Offline
I wrote a little script to render math formulas using xelatex. The original idea and prototype come from some guy on IRC; he wrote a PHP script doing the same thing with latex, and I just simplified the concept.
Something annoying is that the shell tends to strip all your backslashes before they reach the tex file, so you have to write the arguments in single quotes. But it does the trick for now.
Here's a similar thing I made. I have a separate script to upload to imgur (it also records the links in a history file).
#!/bin/bash
##
# tex2img -- quickly get png output from latex code (for math)
#
# usage -- tex2img [SIZE] STRING
# -SIZE is standard LaTeX size command name
# without any substring 'size'
# -Few measures preventing stupidity
# -Single-quoting is recommended
#
# notes -- requires tclip
# -- uses $$ environment only
#
# todo -- add safeguards
#
# written -- 25 August, 2010 by Egan McComb
#
# revised --
##
size="\LARGE"
usage()
{
echo "Usage: $(basename $0) [SIZE] STRING" >&2
echo -e "\t-SIZE is standard LaTeX size command name" >&2
echo -e "\t without any substring 'size'" >&2
echo -e "\t-Few measures preventing stupidity" >&2
echo -e "\t are in place" >&2
echo -e "\t-Single-quoting is recommended" >&2
}
compile()
{
latex -interaction=batchmode "$@"
}
convert()
{
dvipng -q -T tight -o "$@"
}
chkargs()
{
if (( ! $# ))
then
echo "Error: Too few arguments" >&2
usage
exit $ERR_NARGS
elif (( $# == 2 ))
then
case "$1" in
tiny)
size="\tiny";;
script)
size="\scriptsize";;
footnote)
size="\footnotesize";;
small)
size="\small";;
normal)
size="\normal";;
large)
size="\large";;
Large)
size="\Large";;
LARGE)
size="\LARGE";;
huge)
size="\huge";;
Huge)
size="\Huge";;
*)
echo "Error: Invalid argument '$1'" >&2
usage
exit $ERR_VARGS
esac
text="$2"
elif (( $# > 2 ))
then
echo "Error: Too many arguments" >&2
usage
exit $ERR_NARGS
else
text="$@"
fi
}
clean()
{
rm -f ${file/%tex/aux} ${file/%tex/dvi} ${file/%tex/log} $file
}
##----MAIN----##
chkargs "$@"
file=$(mktemp $(basename $0).XXX.tex)
cat > $file <<- EOF
\documentclass{article}
\usepackage[american]{babel}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsfonts}
\pagestyle{empty}
\newcommand{\abs}[1]{\lvert#1\rvert}
\newcommand{\s}{\Rightarrow}
\begin{document}
$size
\$\$${text}\$\$
\end{document}
EOF
compile $file > /dev/null || { echo "Error: Unable to compile LaTeX code" >&2; clean; exit 2; }
convert ${file/%tex/png} ${file/%tex/dvi} > /dev/null || { echo "Errors occurred in conversion" >&2; clean; exit 2; }
clean
echo "${file/%tex/png}" | tclip
exit 0
#!/bin/bash
##
# tclip -- tee stdin to clipboard and stdout
#
# usage -- tclip
#
# notes -- not safe for large data
#
# todo -- remove intermediate variable
#
# written -- 7 July, 2010 by Egan McComb
#
# revised --
##
if [[ -t 0 ]]
then
echo "Error: No input given" >&2
exit
fi
read -r input
echo -n $input | xclip
echo $input
exit 0
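An invocation would look roughly like this (per the usage text: size first, math single-quoted):
tex2img Large '\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}'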
Last edited by egan (2012-01-05 23:53:45)
Offline
Imgur script. It pops the URL onto STDOUT and to the X clipboard, as well as recording it with the delete URL in the file denoted by $hfile. You need an API key.
#!/bin/bash
##
# imgur -- upload image to imgur.com
#
# usage -- imgur FILE
#
# notes -- requires tclip
# -- stores history in $hfile
#
# todo -- thumbnail link?
#
# written -- 25 August, 2010 by Egan McComb
#
# revised --
##
apikey="XXXAPIKEYXXX"
hfile="$HOME/bin/.imgur_history"
usage()
{
echo "Usage: $(basename $0) FILE" >&2
}
chkargs()
{
if (( $# != 1 ))
then
if (( ! $# ))
then
echo "Error: Too few arguments" >&2
usage
exit $ERR_NARGS
else
echo "Error: Too many arguments" >&2
usage
exit $ERR_NARGS
fi
elif [[ ! -f "$1" ]]
then
echo "Error: Invalid file '$1'" >&2
usage
exit $ERR_VARGS
else
image="$1"
fi
}
##----MAIN----##
if ! netbool.sh
then
echo "Error: Internet connectivity poor" >&2
exit 1
fi
chkargs "$@"
response=$(curl -sF "key=$apikey" -F "image=@$image" http://imgur.com/api/upload.xml)
if (( $? != 0 ))
then
echo "Error: Upload failed" >&2
exit 1
elif (( $(grep -c "<error_msg>" <<< $response) > 0 ))
then
echo "Error: imgur says:" >&2
echo $response | sed -r 's/.*<error_msg>(.*)<\/error_msg>.*/\1/' >&2
exit 2
fi
iurl=$(sed -r 's/.*<original_image>(.*)<\/original_image>.*/\1/' <<< $response)
durl=$(sed -r 's/.*<delete_page>(.*)<\/delete_page>.*/\1/' <<< $response)
date +%T\ on\ %D >> $hfile
echo "$iurl" | tee -a $hfile | tclip
echo "$durl" >> $hfile
exit 0
#!/bin/bash
##
# netbool.sh -- check for proper Internet connectivity
#
# usage -- netbool.sh
#
# notes -- designed for use in other scripts
#
# todo -- more elegant way to do this?
#
# written -- 29 December, 2011 by Egan McComb
#
# revised --
##
if ping -c 1 example.com &> /dev/null
then
exit 0
else
exit 1
fi
#!/bin/bash
##
# tclip -- tee stdin to clipboard and stdout
#
# usage -- tclip
#
# notes -- not safe for large data
#
# todo -- remove intermediate variable
#
# written -- 7 July, 2010 by Egan McComb
#
# revised --
##
if [[ -t 0 ]]
then
echo "Error: No input given" >&2
exit
fi
read -r input
echo -n $input | xclip
echo $input
exit 0
Last edited by egan (2012-01-05 23:52:06)
Offline
$$ … $$ isn't that good; it's the old plain TeX notation for math and not recommended in LaTeX, afaik. \[ \] or the equation/equation* environments look a bit better sometimes, and you have the advantage that you can align your equations using an environment like align or split.
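A small sketch of what that looks like:
% display math without alignment:
\[ e^{i\pi} + 1 = 0 \]
% aligned steps with amsmath:
\begin{align*}
  (x+1)^2 &= x^2 + 2x + 1 \\
          &\ge 4x
\end{align*}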
Offline
You've probably seen much of it before, but I've posted my DeiMenu scripts online at Apps To Go. These provide a dynamic application menu framework based on the great Suckless dmenu program. DeiMenu has been designed to make it easier to administer menus for several users and/or those in some sort of a hierarchy, be it in an office or in the family!
Comments most welcome.
Offline