function addpkg()
{
    pacbool=0
    aurbool=0
    foreach i ($@)
        sudo pacman -S "$i"
        if [ $? -eq 0 ]; then
            pacbool=1
        else
            if [ ! -d ~/.pkgbuilds ]; then
                mkdir ~/.pkgbuilds
            fi
            # build in a subshell so the function doesn't change the caller's cwd
            (cd ~/.pkgbuilds && aurget --deps -S "$i")
            if [ $? -eq 0 ]; then
                aurbool=1
            fi
        fi
    end
    # cleaning up
    if [ $pacbool -eq 1 ]; then
        sudo pacman -Scc
    fi
    if [ $aurbool -eq 1 ]; then
        rm -rf ~/.pkgbuilds/*
    fi
}
Here's my little function for .zshrc.
If a package is available for installation via pacman, it'll do that, and then it'll clear out /var/cache/pacman/pkg/ (I like to do this automatically after each installation, since that folder ate up something like 40% of my / partition a long time ago).
If, however, it is not, then it'll create a nice hidden directory in ~/ where it builds packages from the AUR, and then wipe it out.
Depending on your ZSH config, clearing those directories may be optional.
Hope someone will enjoy it
Cheers!
edit:
it occurred to me that this one can play along pretty well with the one above:
function searchpkg()
{
    foreach i ($@)
        pacman -Ss $i
        if [ $? -eq 1 ]; then
            aurget -Ss $i
        fi
    end
}
//Thanks, Karol, -Scc is way nicer
Last edited by 0x29a (2014-02-10 13:57:16)
Offline
If the new version of some package has some pesky bugs, you're making it harder for yourself to downgrade.
There are scripts that can manage pacman's cache or you can run 'pacman -Sc' once in a while (before updating, not just after updating).
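For example, with the paccache helper script (if you have it; the version counts below are just a suggestion):
# keep the three most recent cached versions of every package, delete the rest
paccache -r
# keep only the two most recent versions
paccache -rk2
# remove all cached versions of packages that are no longer installed
paccache -ruk0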
Offline
As someone once said:
It's not recommended to clean the pacman cache if you wish to downgrade packages, or re-install them, since then they must be re-downloaded again. But this is true for APT, YUM, Zypper, and all other package managers. So it's up to the user to decide if it's worth it.
And I totally agree. For me, it works. Arch is, paradoxically, the most stable distribution I've ever used ;-) It has been running on my desktop for the last two years and no serious problems have ever emerged...
Last edited by 0x29a (2014-02-09 18:41:57)
Offline
Hi,
If you use the zx2c4 pass utility to store passwords (like me), you may be annoyed (like me too) at having to open a terminal and type:
pass -c someLong/NameOfWe<TAB>
each time you need a password when navigating on the web.
I made a pretty short one-line bash solution based on dmenu that can easily be mapped to a key combination (e.g. in i3, xmonad, ...):
(cd ~/.password-store && find * -type f | while read f; do echo ${f%.gpg}; done) | dmenu -l 10 | xargs pass -c
Then, when browsing a website, if you need a password you just press the key combination (e.g. WIN+P), dmenu shows up with the list of stored passwords, you type a few letters identifying the website, press Enter, and the password is copied to the clipboard for 45 seconds.
No more terminal, no more "pass -c" and no more annoying misspelled website names.
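For instance, a minimal sketch of wiring this up in i3 (the script path ~/bin/passdmenu and the key binding are just examples): save the one-liner above as an executable script, e.g. ~/bin/passdmenu, and then add to ~/.config/i3/config:
# launch the pass/dmenu picker with $mod+p
bindsym $mod+p exec --no-startup-id ~/bin/passdmenu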
PS: I am assuming you use gpg-agent or something like this which will prompt your GPG password if needed.
Hope you enjoy it.
Fab
Offline
With 93 pages I'm not sure if anyone has put a solution for this problem up here, but here we go.
Print the file with the most recent mtime:
#!/usr/bin/python
import sys
import os.path
import time
from docopt import docopt

__doc__ = """{} - print the most recently updated file

Usage:
    {} [options] <file>...

Options:
    -h --help       Show this message.
    -r --recursive  Recursively search through subdirectories.
    -q --quiet      Print the file path only.""".format(
    os.path.basename(sys.argv[0]), os.path.basename(sys.argv[0]))


def get_mtimes(path, recursive):
    """Return a list of (path, mtime) pairs, or None for a directory
    when not searching recursively."""
    file_list = []
    if os.path.isdir(path):
        if recursive:
            for root, dirs, files in os.walk(path):
                for f in files:
                    joined_path = os.path.join(root, f)
                    try:
                        file_list.append((joined_path, os.path.getmtime(joined_path)))
                    except FileNotFoundError:
                        # Probably a broken symlink
                        file_list.append((joined_path,
                                          os.stat(joined_path, follow_symlinks=False).st_mtime))
                for d in dirs:
                    # descend into each subdirectory as well
                    joined_path = os.path.join(root, d)
                    for pair in get_mtimes(joined_path, True):
                        file_list.append(pair)
        else:
            return None
    else:
        try:
            file_list.append((path, os.path.getmtime(path)))
        except FileNotFoundError:
            # Probably a broken symlink
            file_list.append((path, os.stat(path, follow_symlinks=False).st_mtime))
    return file_list


if __name__ == "__main__":
    args = docopt(__doc__)
    recursive = args["--recursive"]
    file_list = []
    biggest = None
    sys.setrecursionlimit(100000)
    for path in args['<file>']:
        entries = get_mtimes(path, recursive)
        if entries is not None:
            file_list.extend(entries)
    for entry in file_list:
        # each entry is a (path, mtime) pair; keep the most recent one
        if biggest is None or entry[1] > biggest[1]:
            biggest = entry
    if biggest is not None:
        if args["--quiet"]:
            print(biggest[0])
        else:
            print("{} - {}".format(
                biggest[0],
                time.strftime("%c", time.localtime(biggest[1]))))
This script depends on docopt. Also, it will probably crash with a sufficiently deep subdirectory tree.
Offline
Have you tried using 'find'?
Offline
Do you know fi...
Have you tried using 'find'?
YCH
Offline
I don't know find forwards and back, but -mtime did come up when I was first searching for a solution for this problem. It's not quite what I was looking for.
Now that you mention it, though, I could probably drop find down in there and simplify things quite a bit...
Offline
I don't know find forwards and back, but -mtime did come up when I was first searching for a solution for this problem. It's not quite what I was looking for.
Now that you mention it, though, I could probably drop find down in there and simplify things quite a bit...
find /tmp -printf "%T+ %p\n"| sort | tail -1
Or an ad-hoc approach for recently modified files:
find /tmp -mmin -10
You can add some additional tests, like file type or anything else that checks relevant file attributes.
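For example (the path and thresholds are just examples):
# regular files under /tmp modified in the last 10 minutes and larger than 1 MiB
find /tmp -type f -mmin -10 -size +1M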
YCH
Offline
here are the functions I use in my zsh:
###functions
#
function sprunge() {
if (($#)); then
if [[ -f $1 && -r $1 ]]; then
curl -F 'sprunge=<-' http://sprunge.us < "$1"
else
printf 'file %s does not exist or is not readable\n' "$1" >&2
return 1
fi
else
curl -F 'sprunge=<-' http://sprunge.us
fi
}
#
#web search tool
function google {
Q="$@";
GOOG_URL='https://www.google.de/search?tbs=li:1&q=';
AGENT="Mozilla/4.0";
stream=$(curl -A "$AGENT" -skLm 10 "${GOOG_URL}${Q//\ /+}" | grep -oP '\/url\?q=.+?&' | sed 's|/url?q=||; s|&||');
echo -e "${stream//\%/\x}";
}
#
function ii() # get current host related info
{
echo -e "\n${RED}Kernel Information:$NC " ; uname -a
echo -e "\n${RED}Users logged on:$NC " ; w -h
echo -e "\n${RED}Current date :$NC " ; date
echo -e "\n${RED}Machine stats :$NC " ; uptime
echo -e "\n${RED}Memory stats :$NC " ; free
echo -e "\n${RED}Disk Usage :$NC " ; df -Th
echo -e "\n${RED}LAN Information :$NC" ; netinfoLAN
echo
}
#netinfo - shows LAN network information for your system (part of ii)
function netinfoLAN (){
echo "---------------------------------------------------"
/sbin/ifconfig eth0 | awk /'inet/ {print $2}'
#/sbin/ifconfig eth0 | awk /'bcast/ {print $3}'
#/sbin/ifconfig eth0 | awk /'inet6 addr/ {print $1,$2,$3}'
#/sbin/ifconfig eth0 | awk /'HWaddr/ {print $4,$5}'
echo "---------------------------------------------------"
}
sorry if these have been posted before.
Hope they help someone
ROG Strix (GD30CI) - Intel Core i5-7400 CPU - 32Gb 2400Mhz - GTX1070 8GB - AwesomeWM (occasionally XFCE, i3)
If everything in life was easy, we would learn nothing!
Linux User: 401820 Steam-HearThis.at-Last FM-Reddit
Offline
AaronBP wrote:I don't know find forwards and back, but -mtime did come up when I was first searching for a solution for this problem. It's not quite what I was looking for.
Now that you mention it, though, I could probably drop find down in there and simplify things quite a bit...
find /tmp -printf "%T+ %p\n"| sort | tail -1
Or an ad-hoc approach for recently modified files:
find /tmp -mmin -10
You can add some additional tests, like file type or anything else that checks relevant file attributes.
Hey yeah, that works well enough for me, thanks.
Offline
For Archers in London: this prints the status of the Tube* to the console, with pretty colours. It requires ruby, and ruby-rainbow from the AUR to colourise the tube lines.
*The London Underground---what Londoners call the metro/subway system.
#!/usr/bin/ruby
#
require 'rexml/document'
require 'rainbow/ext/string'
class Line
attr_reader :r
attr_reader :g
attr_reader :b
def initialize r, g, b
@r = r
@g = g
@b = b
end
end
lines = {}
lines["Bakerloo"] = Line.new 171, 102, 18
lines["Central"] = Line.new 223, 0, 44
lines["Circle"] = Line.new 247, 220, 0
lines["District"] = Line.new 100, 91, 27
lines["Hammersmith"] = Line.new 245, 166, 179
lines["Jubilee"] = Line.new 118, 123, 127
lines["Metropolitan"] = Line.new 139, 0, 76
lines["Northern"] = Line.new 0, 0, 0
lines["Piccadilly"] = Line.new 0, 45, 115
lines["Victoria"] = Line.new 0, 118, 189
lines["Waterloo"] = Line.new 137, 203, 193
lines["DLR"] = Line.new 137, 203, 193
lines["Overground"] = Line.new 243, 186, 34
url = "http://cloud.tfl.gov.uk/TrackerNet/LineStatus"
xml_file = "/tmp/tube.xml"
title = " Tube Status"
puts
puts title.bright
puts "`" * title.length
system "wget -qO #{xml_file} #{url}"
doc = REXML::Document.new File.open xml_file
doc.elements.each "//LineStatus" do |status|
output = ""
line_name = status.elements["Line"].attributes["Name"]
line_name = line_name.sub /(\w+).*/, '\1'
longest_line = "Metropolitan".length
padding = longest_line - line_name.length
r = lines[line_name].r
g = lines[line_name].g
b = lines[line_name].b
output += "#{" " * padding}#{line_name.color r, g, b}"
unless status.elements["Status"].attributes["ID"] == "GS"
output += " #{status.elements["Status"].attributes['Description']}: "
output += " #{status.attributes['StatusDetails']}"
else
output += " Good Service"
end
puts output
end
puts
Output looks like this. (Well, it hardly ever looks like this; there's usually something wrong with one of the lines...)
Tube Status
````````````
Bakerloo Good Service
Central Good Service
Circle Good Service
District Good Service
Hammersmith Good Service
Jubilee Good Service
Metropolitan Good Service
Northern Good Service
Piccadilly Good Service
Victoria Good Service
Waterloo Good Service
Overground Good Service
DLR Good Service
The colours look bad if you're not using a 256-colour terminal. In that case, you could write "Rainbow.enabled = false" at the beginning of the script.
Offline
This is a folder that contains three C source files and a makefile to compile them. The first is "lsip" which just queries icanhazip.com for your external IP address. Next is "isitup", which lets you specify a domain name and TLD (e.g., google.com); it will query isitup.org and return the status (and related info if you'd like). Finally, there is "qurl", which allows you to create shortened URLs using qurl.org (a service similar to bit.ly et al).
These are all incredibly basic utilities that use libcurl (and the upstream APIs) to do the heavy lifting, but I find them very helpful. Maybe someone else will too. The makefile correctly uses DESTDIR and PREFIX so you can install them wherever you want (though it does so in the build targets, not in an install target), but the target directory defaults to ".."
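For example, a hypothetical invocation (DESTDIR and PREFIX are as described above; check the makefile itself for the actual targets and defaults):
make DESTDIR="$HOME/pkgroot" PREFIX=/usr/local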
Disclaimer: I obviously have no ownership over the names of the services and do not mean to abuse them. I would be happy to change them if it bothers anyone; I kept the names like this only because they're short, sweet, and to the point.
All the best,
-HG
Last edited by HalosGhost (2014-02-19 01:00:06)
Offline
The first is "lsip" which just queries icanhazip.com for your external IP address.
This is not meant to discourage you from making neat and useful little C programs, but I just want to point out that one can simply do:
$ curl ifconfig.me
ifconfig.me can be queried for a bunch of other useful information as well.
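For example (these endpoint names are from memory and may have changed, so double-check against the site):
curl ifconfig.me/ua    # your client's user agent
curl ifconfig.me/host  # reverse DNS of your IP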
Offline
+1
I've seen a bunch of programs that do the same as 'curl ifconfig.me' and apart from a learning experience or satisfaction, I don't get their appeal. If I am missing something, please do tell me.
Offline
This is not meant to discourage you from making neat and useful little C programs, but I just want to point out that one can simply do:
$ curl ifconfig.me
ifconfig.me can be queried for a bunch of other useful information as well.
I am wholly aware of ifconfig.me, and I don't mind your pointing it out. "lsip" is essentially just a native C version of an alias to `curl http://icanhazip.com`.
For the moment, I'm not terribly interested in the other information that ifconfig.me offers over icanhazip, but I would not be against using it. The real reason I made it was as an exercise, a precursor to including an IP address, fetched natively, in a C statusbar program. It would not be so hard to recraft isitup or qurl (both of which use very similar structures) to make "lsip" more functional.
All the best,
-HG
Offline
The real reason I made it was as an exercise, a precursor to including an IP address, fetched natively, in a C statusbar program.
This is what I suspected since I know you take your lightweight dwm configuration very seriously. I just thought I would point out how us common folk do it.
Offline
This is what I suspected since I know you take your lightweight dwm configuration very seriously.
You know, I really do
I just thought I would point out how us common folk do it.
One of the other reasons I hadn't looked into ifconfig.me was that I didn't want to have to deal with heavy parsing. However, it appears that it actually provides a JSON string as a return option (which is lightyears easier to parse than HTML, even if done manually).
I will contemplate updating lsip to use ifconfig.me (though it appears significantly slower than icanhazip). We'll see how it goes.
All the best,
-HG
Last edited by HalosGhost (2014-02-19 03:42:00)
Offline
Whoa! icanhazip.com does indeed provide a much faster response from here in CA as well. It might be worth sticking with that...
Offline
After I received a lot of help from two users in the Newbie Corner (slithery and brebs, who both helped me script the if check for whether my external hard drive is attached or not), I ended up with a script that backs up my system to my external hard drive. I have also added output so I know on which days the script runs (although it is automated daily, my laptop may not be powered on at the correct time) and whether it completes (second output message) or gets interrupted (no second output message).
#!/bin/bash
echo "Backup started on $(date)" >> /home/storage/Scripts/Output/backup.txt
if [ ! -e /dev/disk/by-uuid/UUID Here ];
then echo "Hard drive not connected on $(date)" >> /home/storage/Scripts/Output/backup.txt ; exit ; fi
mount -o compress=lzo UUID="UUID Here" /mnt/backup
rsync -aAXv --delete /* /mnt/backup/backup --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found}
umount UUID="UUID Here"
echo "Backup complete on $(date)" >> /home/storage/Scripts/Output/backup.txt
exit
It's not much, but it is my own solution to a task. After around five years of using Linux this is my first attempt at anything like this, and it has really opened my eyes to what is possible. I'm currently trying to learn more about Bash and hope to produce better scripts in the future. I came to this thread for inspiration about what is possible; there is some really great stuff here, and I hope to post more in the future.
Last edited by BradPJ (2014-02-23 20:25:40)
Offline
You might want to define variables at the beginning of the script, like $UUID, so you can change the target of the script with a single sed call or one step in the editor.
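For example, a minimal sketch of that refactor (the variable names and placeholder values are just illustrative, not from the original script):
#!/bin/bash
# keep the changeable bits in one place
UUID="UUID Here"                                    # fill in the real UUID
BACKUP_MNT=/mnt/backup
LOG=/home/storage/Scripts/Output/backup.txt

echo "Backup started on $(date)" >> "$LOG"
if [ ! -e "/dev/disk/by-uuid/$UUID" ]; then
    echo "Hard drive not connected on $(date)" >> "$LOG"
    exit
fi
mount -o compress=lzo UUID="$UUID" "$BACKUP_MNT"
rsync -aAXv --delete /* "$BACKUP_MNT"/backup --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found}
umount "$BACKUP_MNT"
echo "Backup complete on $(date)" >> "$LOG"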
Offline
Thanks, I'll look into that. That would be a lot more convenient.
Offline
Thank you
Offline
This script allows VirtualBox to boot from a USB drive.
#!/bin/bash
# This script allows VirtualBox to boot from usb.
# - put the drive in, say it's /dev/sdd
# - have the vm ready, say it's "Win7"
# - call the script like VBoxUsbBoot Win7 /dev/sdd
#
# it will auto guess if you want to start or stop,
# so once you're done just call it again like before
# PARAMS ===
CTL_TYPE=SATA # "IDE"/"SATA"
CTL_PORT=9 # 0-9
USBFILE=usb.vmdk
VM="$1" # e.g. Win7
DRIVE=$2 # /dev/sdd
# ==========
# SCRIPT =====
if [ "$1" == "" ] || [ "$2" == "" ];then
echo "usage: $0 vmname usbdevice"
echo "e.g. $0 win7 /dev/sdd"
exit 1
fi
function start {
# Creates a file "usb.vmdk" corresponding to the selected drive.
# set it as VirtualBox machine hard disk
VBoxManage internalcommands createrawvmdk -filename $USBFILE -rawdisk $DRIVE
# Attach to some controller/port
VBoxManage storageattach "$VM" --storagectl $CTL_TYPE --port $CTL_PORT --type hdd --medium $USBFILE
}
function stop {
# Detach a previous attached disk from this controller
VBoxManage storageattach $VM --storagectl $CTL_TYPE --port $CTL_PORT --medium none
# Release the disk
VBoxManage closemedium disk $USBFILE
# Delete the file
rm $USBFILE
}
# autoguessing: if the file isn't there you're wanting to start, else you want to quit the whole shebang
[ -f $USBFILE ] && stop || start
Last edited by dummyano (2014-02-27 17:21:55)
Offline
Script to Check Kernel Config Files for Compliance with systemd Requirements
There are a number of kernel settings that systemd requires, either generally or for particular use cases. In addition there are a number of optional but recommended settings. All of this is documented in the /usr/share/doc/systemd/README file.
This script takes as its sole parameter the name of the kernel's .config file. Additionally, you can comment and uncomment one of the definitions of "lennart_needs" to alter the level of review.
#! /bin/bash
systemd_check( ) {
local config=$1
local readme=/usr/share/doc/systemd/README
# The below checks for configurations absolutely necessary
# local lennart_needs=( $(sed -n -e '/REQUIREMENTS:/,/^$/ {s/^[[:blank:]]*\(CONFIG_[^ ]*\).*/\1/ p}' \
# ${readme} | sed -n -e '/=.*$/p' -e '/=.*$/ !s/.*/&=y/p') )
# The below checks for configurations absolutely necessary plus those necessary in some use cases
# local lennart_needs=( $(sed -n -e '/REQUIREMENTS:/,/Optional but strongly recommended:/ {s/^[[:blank:]]*\(CONFIG_[^ ]*\).*/\1/ p}' \
# ${readme} | sed -n -e '/=.*$/p' -e '/=.*$/ !s/.*/&=y/p') )
# The below checks for configurations absolutely necessary, those necessary in some use cases plus those which optional but recommended
local lennart_needs=( $(sed -n -e '/REQUIREMENTS:/,/dbus >= 1.4.0/ {s/^[[:blank:]]*\(CONFIG_[^ ]*\).*/\1/ p}' \
${readme} | sed -n -e '/=.*$/p' -e '/=.*$/ !s/.*/&=y/p') )
local error_total=0
echo ""
echo "Checking ${config}'s systemd-readiness"
echo ""
for i in "${lennart_needs[@]}";
do
if [[ "$(grep -c "$i" "${config}")" -eq 0 ]]; then
echo "ERROR: ${i%=*} must be set to \"${i#*=}\""
if [[ ${error_total} -eq 0 ]]; then
echo ""
echo "ERRORS--CORRECT SETTINGS ARE BELOW" > error.txt
fi
echo "${i}" >> error.txt
echo "Required settings are saved to error.txt"
echo ""
error_total=$((error_total+1))
fi
done
if [[ ${error_total} -gt 0 ]]; then
if [[ ${error_total} -eq 1 ]]; then
echo "There was ${error_total} error."
else
echo "There were ${error_total} errors."
fi
echo ""
echo "The correct settings are saved in error.txt"
echo "Additional information available in the systemd README:"
echo "${readme}"
else
echo "Congratulations! Your ${config} is Lennart-Approved."
fi
}
if [[ $# -ne 1 ]]; then
echo ""
echo "USAGE: Input the path to your kernel configuration file"
echo ""
else
systemd_check $1
fi
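For example, a quick usage sketch (the file name systemd_check.sh and the .config path are just examples):
chmod +x systemd_check.sh
./systemd_check.sh /usr/src/linux/.config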
Offline