Subtitlers make subtitle files (.srt) in Windows-1254, and some players don't recognise this encoding and assume ISO-8859-1 instead. So I made this:
#!/bin/bash
input_file=$1
input_encoding=$2
output_encoding=$3

if [ -z "$input_file" ] || [ "$input_file" = "-h" ]; then
cat <<EOF
Usage: ${0##*/} file_name.srt [input_encoding output_encoding]
EOF
    exit
fi

if [ -z "$input_encoding" ]; then
    input_encoding=iso-8859-1
fi
if [ -z "$output_encoding" ]; then
    output_encoding=UTF-8
fi

output_file="$(sed 's/.\{4\}$//' <<< "$input_file")-$output_encoding.srt"

# AND FINALLY, THE REAL THING:
iconv -f "$input_encoding" -t "$output_encoding" "$input_file" > "$output_file"
sed -i -e 's/Ý/İ/g' -e 's/ý/ı/g' -e 's/ð/ğ/g' -e 's/þ/ş/g' -e 's/Þ/Ş/g' "$output_file"
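For example (assuming the script is saved as srt-recode.sh; the name is arbitrary):
./srt-recode.sh movie.srt windows-1254 UTF-8   # explicit encodings
./srt-recode.sh movie.srt                      # falls back to the iso-8859-1 -> UTF-8 defaults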
Last edited by betseg (2016-02-09 07:23:06)
Offline
This has probably been done many times, but I needed a way to strip characters (the hashtags) from a new mirrorlist file on a brand new arch install. So I came up with this:
[dmb@dmb-production-box ~]$ cat cli_tools/stripHashtag.py
#!/usr/bin/python
def main():
    with open("mirrorlist", "r") as ifile:
        contents = ifile.read()
    # start from the original text so data is defined even if there is no '#'
    data = contents
    for offendingChar in contents:
        if offendingChar == "#":
            data = contents.replace(offendingChar, "")
    # "w" (not "r+") so mirrorlist_new is created if it does not exist yet
    with open("mirrorlist_new", "w") as ofile:
        ofile.write(data)

if __name__ == '__main__':
    main()
[dmb@dmb-production-box ~]$
It may not be pretty, or extensible, or whatever the word is, but it got the job done for me.
I am diagnosed with bipolar disorder. As it turns out, what I thought was my greatest weakness is now my greatest strength.
Every day, I make a conscious choice to overcome my challenges and my problems. It's not easy, but it's better than the alternative...
Offline
This has probably been done many times, but I needed a way to strip characters (the hashtags) from a new mirrorlist file on a brand new arch install. So I came up with this:
...
It may not be pretty, or extensible, or whatever the word is, but it got the job done for me.
So it is.
sed -i 's/#//' mirrorlist
Waiting for someone to claim (and prove) that it can be done even better with ed...
EDIT:
sed -i 's/^#//' mirrorlist
Though it does not make any difference with the default mirrorlist.
Last edited by respiranto (2015-12-13 21:00:51)
Offline
Haha, wow. That is awesome. I have no experience with sed/awk/anything along those lines, but you've just piqued my interest.
I am diagnosed with bipolar disorder. As it turns out, what I thought was my greatest weakness is now my greatest strength.
Every day, I make a conscious choice to overcome my challenges and my problems. It's not easy, but it's better than the alternative...
Offline
ED:
ed /path/to/default_mirrorlist
,s/#//
wq
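The same edit can also be piped into ed non-interactively, for instance:
printf '%s\n' ',s/#//' w q | ed -s /path/to/default_mirrorlist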
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
ex:
ex -sc '%s/^#//g|xit' /path/to/mirrorlist
The point of using ed/ex over sed is that it processes the file in place instead of creating a temp file and then overwriting the original file. Less I/O...
Offline
sed will do that as well with the -i flag.
The shortest might be with tr:
tr -d '#' < /path/to/default_mirrorlist > /etc/pacman.d/mirrorlist
Of course, I don't know that there is any way to do an "in place" tr without scripting it.
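Short of the obvious temp-file-and-rename, that is:
tr -d '#' < mirrorlist > mirrorlist.tmp && mv mirrorlist.tmp mirrorlist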
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
I think tr won't work: it strips the '#' from the comment text too, which leaves you with errors like:
Unknown host: Arch Linux Mirrorlist
Offline
@Trilby: The '-i' flag of sed is non-POSIX, and depending on the implementation it might create a temp file anyway... sed is a Stream EDitor, not a file editor.
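At least with GNU sed you can watch the inode change, showing that the "in place" edit is really a new file renamed over the old one:
ls -i mirrorlist      # note the inode number
sed -i 's/^#//' mirrorlist
ls -i mirrorlist      # different inode: sed wrote a temp file and renamed it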
Offline
Simple way to transfer files between hosts -- without encryption (when speed matters!)
Just type "recv" on the receiving computer
And of course "send destination files..." on the sending computer
send()
{
    # need a host and at least one file, as in the usage line below
    if [[ -z "$2" ]]; then
        echo "Usage: send host stuff [...]"
        return
    fi
    DEST=$1
    shift
    BYTES=$(du -csb "${@}" | tail -1 | cut -f1)
    tar -cpf - "${@}" | pv -epbrs "$BYTES" | nc -4q 3 -T throughput "$DEST" 6502
}
alias recv='nc -4l 6502 | pv -ebr | tar -xvvpf -'
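Example (host address made up), on the receiver first, then the sender:
recv
send 192.168.1.42 movie.mkv photos/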
--
George Shearer
doc at lame dot org
Linux Nerd since the MCC Interim Days
Offline
Trying the Xfce Whisker menu, I found it is able to use custom actions, so I made a little calculator using bc and zenity:
#!/bin/bash
exp=$1
res=$(echo "$exp" | bc -l -q)
while true; do
    exp=$(zenity --entry \
        --title="BC Calc" \
        --text="$res" \
        --entry-text "$exp")
    if [[ $? == 1 ]]; then
        exit
    fi
    res=$(echo "$exp" | bc -l -q)
done
It is started like this, e.g.: zbc.sh "(50+1*3/100)^2"
And I've integrated it into the Xfce Whisker menu by adding a custom action (trying to translate):
- Name: Calculator
- Trigger:.*.*.*(\+|\-|\*|\/).*.*.*
(yes, multiple .*, probably because of a bug in Whisker that otherwise fails to match things like sqrt(1+1))
- Command: /tmp/zbc.sh \0
- [x] regular expression
The regular expression could probably be written better, and there is at least one problem with the script: you cannot copy the result.
Last edited by kokoko3k (2015-12-22 16:59:14)
Help me to improve ssh-rdp !
Retroarch User? Try my koko-aio shader !
Offline
Hi guys, this is one of my first posts, so hello! I decided to put all my dotfiles in a git repository, and pretty soon I also wanted to be able to easily keep all the files up to date. Having multiple machines running Arch (Raspberry Pi, laptop and desktop), there were some files that I wanted to be the same on multiple machines, as well as unique versions of some files. And so I spawned this. It consists of a filelist looking like this (cube-pc is my desktop, lazarus my laptop and cube-pi my RPi):
.bashrc
.prompt:cube-pc
.prompt:cube-pi
.vimrc:cube-pc,lazarus
Files without a suffix like :cube-pc are common to all hosts, files appearing twice with different suffixes are unique to each host, and files with a comma-separated host list are copied to multiple hosts. This resulted in two separate scripts, update and restore. Running update will copy all the specified files from the host to the repo, and running restore will copy all files from the repo to the host.
Update:
#! /bin/bash
REPO="$(cd $(dirname $BASH_SOURCE) && pwd)";
OLD="$OLDPWD"
cd $HOME;
echo "Commencing backup...";
for i in $(grep -v "\:" "$REPO/filelist"); do
    echo "Copying $i to common files..."
    cp --parents $i $REPO/dotfiles/common;
done;
for i in $(grep $HOSTNAME "$REPO/filelist"); do
    for k in $(echo $i | rev | sed s/\:.*// | sed s/\,/\ / | rev); do
        a=$(echo $i | sed s/\:.*//);
        echo "Copying $a to $k files...";
        cp --parents "$a" "$REPO/dotfiles/$k"
    done;
done;
echo "Commencing cleanups..."
cd "$REPO/dotfiles"
for i in *; do
    echo "Entering $i"
    cd $i;
    if [ $i == "common" ]; then
        for k in $(find . -not -type d); do
            k=$(echo $k | cut -c 3-)
            if [[ ! $(grep -v "\:" "$REPO/filelist" | grep $k) ]]; then
                echo "Cleaning up common/$k";
                rm "$k";
            fi;
        done;
    else
        for k in $(find . -not -type d); do
            k=$(echo $k | cut -c 3-)
            if [[ ! $(grep $i "$REPO/filelist" | grep $k) ]]; then
                echo "Cleaning up $i/$k";
                rm "$k";
            fi;
        done;
    fi;
    cd ..;
done;
cd $OLD
And restore:
#! /bin/bash
REPO="$(cd $(dirname $BASH_SOURCE) && pwd)";
echo "Restoring dotfiles..."
OLD="$OLDPWD"
cd "$REPO/dotfiles"
cd common;
for i in $(grep -v "\:" "$REPO/filelist"); do
    echo "Copying $i from common files...";
    cp --parents "$i" $HOME;
done;
cd ..;
cd $HOSTNAME;
for i in $(grep $HOSTNAME "$REPO/filelist" | sed s/\:.*//); do
    echo "Copying $i from $HOSTNAME files...";
    cp --parents "$i" $HOME;
done;
cd "$OLD"
echo "Finished restoring..."
All the files as well as a practical example are available on my github repo here. The only requirement is that you create a "dotfiles" folder in the same folder as the script, and that the dotfiles folder contains a "common" folder as well as a folder for every host you intend to use. Hope somebody can use this as often as I do. Happy holidays, btw! This is what I did on Christmas.
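A typical round trip then looks something like this (the git commands are only an illustration):
./update                       # host -> repo
git add -A && git commit -m 'sync dotfiles' && git push
git pull && ./restore          # repo -> host, on another machine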
Last edited by Cube777 (2015-12-26 10:39:28)
dotgit - A comprehensive solution to managing your dotfiles
Offline
I use udiskie to automount removable media. While it is generally simple and flexible, it does insist on mounting drives in /run/media/<username>. Similarly, gvfsd insists on mounting CIFS shares in /run/user/<UID number>. I've configured my other oddball mounting tools (google-drive-ocamlfuse, simple-mtpfs) to mount under ~/mnt and wanted to be able to access my udiskie and gvfs mounted files in the same place.
To solve this for udiskie I created a systemd user .path unit to monitor /run/media/<username> and create (and delete) symlinks when changes occur in that directory.
First I create the following bash script:
#!/bin/bash
AM_DIR=/run/media/$2
LOC_DIR=$1/<location in your home directory for mounts>

ls $AM_DIR | while read i; do
# for i in $(ls $AM_DIR); do
    if [[ ! -e $LOC_DIR/$i ]]; then
        ln -s $AM_DIR/$i $LOC_DIR/$i
        /usr/bin/logger "Adding symlink $i to $LOC_DIR"
    fi
done

ls $LOC_DIR | while read i; do
# for i in $(ls $LOC_DIR); do
    if [[ -L $LOC_DIR/$i ]] && [[ ! -a $LOC_DIR/$i ]]; then
        rm $LOC_DIR/$i
        /usr/bin/logger "Deleting symlink $i from $LOC_DIR"
    fi
done
Then I create the following systemd .path file in /etc/systemd/user:
[Unit]
Description="Path for udiskie user mounts for %u"
[Path]
PathExists=/run/media/%u/
PathChanged=/run/media/%u/
[Install]
WantedBy=paths.target
Next I create the following systemd .service file in /etc/systemd/user:
[Unit]
Description="Symlink udiskie service for %u"
[Service]
Type=simple
ExecStart=<absolute path to script> %h %u
Then start and enable the systemd path unit:
systemctl --user --now enable <name of your .path file>
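For example, if the units were saved as udiskie-symlink.path and udiskie-symlink.service (names purely illustrative; a .path unit activates the .service of the same name by default), that would be:
systemctl --user --now enable udiskie-symlink.path    # illustrative unit name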
Edit: Corrected [Install] from multi-user.target to paths.target.
Last edited by snakeroot (2015-12-27 13:11:31)
Online
Learning or working on a computer involves the use of certain resources; often it's a combination of ebooks, websites and applications we use recurrently.
I wanted to make a tool that would associate those resources with a given subject/profile and automatically prepare the work/learning environment.
This is what I came up with:
#!/usr/bin/env python
import subprocess as sp
import sys
import os
import argparse
import configparser
import getpass
def main():
    cfg_parser = configparser.SafeConfigParser()

    # Establish configuration file path - use working directory if there's no config at user's home directory.
    usr_home_dir = os.path.join('/home', getpass.getuser())
    program_name = os.path.basename(sys.argv[0])
    config_name = '.' + program_name + '.ini'
    config_file = os.path.join(usr_home_dir, config_name)
    if not os.path.exists(config_file) or not os.path.isfile(config_file):
        config_file = sys.argv[0]
    cfg_parser.read(config_file)

    known_subjects = []
    # create representation of configuration file in memory using python built-in data type dict
    subject = {}
    for name in cfg_parser.sections():
        subject[name] = {}
        if name != 'settings':
            known_subjects.append(name)
        for option in cfg_parser.options(name):
            subject[name][option] = cfg_parser.get(name, option)

    args_parser = argparse.ArgumentParser(description='Automatically launch applications for given subject')
    args_parser.add_argument('subject', type=str, choices=known_subjects, action='store')
    args_parser.add_argument(
        '-k', '--kill',
        help='kill all programs associated with given subject',
        action='store_true', default=False
    )
    args_parser.add_argument('-s', '--switch', action='store_true', help='switch to given subject')
    args = args_parser.parse_args()

    current_subject_name = args.subject
    kill = args.kill
    switch = args.switch

    if switch:
        original_subject_name = current_subject_name
        current_subject_name = subject['settings']['recent_subject']
        kill = True

    # abbreviations
    ebook_reader = subject['settings']['ebook_reader'].split()
    browser = subject['settings']['browser'].split()
    application_killer = subject['settings']['application_killer'].split()

    urls = subject[current_subject_name]['urls'].splitlines()
    ebook_path = subject[current_subject_name]['ebook_path']
    ebooks = subject[current_subject_name]['ebooks']
    ebooks = [os.path.join(ebook_path, ebook) for ebook in ebooks.splitlines()]
    auxiliary_applications = subject[current_subject_name]['auxiliary_applications'].splitlines()

    if kill:
        # separating application name from its potential arguments and forwarding it to the application_killer
        sp.Popen(application_killer + [ebook_reader[0]])
        sp.Popen(application_killer + [browser[0]])
        apps = []
        for app in auxiliary_applications:
            apps.append(app.split(' '))
        for app in apps:
            sp.Popen(application_killer + [app[0]])
        if switch:
            sp.Popen([program_name] + [original_subject_name])
        sys.exit()

    # launch browser
    if urls:
        sp.Popen(browser + urls)
    # launch auxiliary applications
    if auxiliary_applications:
        for application in auxiliary_applications:
            sp.Popen(application.split(' '))
    # launch ebook reader
    if ebooks:
        for ebook in ebooks:
            sp.Popen(ebook_reader + [ebook])

    cfg_parser.set('settings', 'recent_subject', current_subject_name)
    cfg_parser.write(open(config_file, 'w'))


if __name__ == '__main__':
    main()
New subjects are added through a config file that must have the following name (where "program_name" should match the chosen main program's filename):
.program_name.ini
and it should be placed in the user's home directory. Config .ini file content:
; applications of preference - must be in $PATH
; because of the way foxitreader handles its arguments (not as straightforward as with a browser) ...
; I was forced to spawn a separate process for every single ebook for it to work
[settings]
ebook_reader = foxitreader
browser = chromium
; tested 'killall --ignore-case' and 'pkill -i' both work just fine
application_killer = killall --ignore-case
; example subject entry named 'python'
; IMPORTANT: the urls, ebooks and auxiliary_applications properties MUST contain newline-separated values; I am forcing this for the sake of readability, as shown below.
[python]
urls = pypi.python.org/pypi
https://docs.python.org/3/
ebooks = Think Python.pdf
learnPythontheHardWay.pdf
ebook_path = /home/archer/doc/python
auxiliary_applications = subl3
ipython qtconsole --style="monokai" --ConsoleWidget.font_family="Terminus" --ConsoleWidget.font_size=11
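Assuming the main script is saved in $PATH as, say, study (so the config becomes ~/.study.ini; the name is only an example), usage looks like:
study python        # launch everything listed under [python]
study -k python     # kill those applications again
study -s python     # kill the previous subject's apps, then relaunch for python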
Offline
Some time ago, I wrote a Bash script that runs another script for each file in a directory hierarchy using GNU parallel. I wrote it for ffmpeg conversions, but it can be used for other tasks, too.
Source and example: https://github.com/Martchus/diriterator
Offline
Infinity, how is that different from a find command with the -exec flag running a script that backgrounds ffmpeg?
find -name '*.mp3' -exec myscript '{}' \;
and myscript:
#!/bin/bash
ffmpeg $SOME_FLAGS $1 &
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
- convenience: simple syntax; defines environment variables usable from the script (e.g. $ITERATOR_TARGET_DIR, $ITERATOR_FILE_NAME_WITHOUT_EXTENSION)
- allows putting output files in a separate directory hierarchy
- use of GNU Parallel (of course this makes no sense when the encoder already uses multiple cores)
Of course any of this can be easily achieved using find/parallel directly. This script just automates the task.
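For reference, the bare find/parallel equivalent for a simple conversion could look like this (wav to ogg, purely as an example):
find . -name '*.wav' -print0 | parallel -0 ffmpeg -i {} {.}.ogg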
Offline
This one is a GIMP batch script. It applies the colorize filter to all PNG files in a directory.
I wrote it when I was working on some themes.
; colorize all PNG files in current directory, original file will be overwritten!
; save this script as ~/.gimp-2.8/scripts/colorize-png.scm
; from the directory with the PNG files run:
; gimp -i -b '(colorize-png "*.png" <hue> <saturation> <lightness>)' -b '(gimp-quit 0)'
(define (colorize-png pattern hue saturation lightness)
  (let* ((filelist (cadr (file-glob pattern 1))))
    (while (not (null? filelist))
      (let* ((filename (car filelist))
             (image (car (file-png-load 1 filename filename)))
             (drawable (car (gimp-image-active-drawable image))))
        (gimp-colorize drawable hue saturation lightness)
        (gimp-file-save RUN-NONINTERACTIVE image drawable filename filename)
        (gimp-image-delete image))
      (set! filelist (cdr filelist)))))
There are some warnings about missing libs / files, but it does the job...
Offline
@michis: nice!
Offline
Audio Album Builder
There's the setup.py all right, but it is not rigorously tested.
# ln -s .../py-procr/procr/core/pcp.py /usr/bin/pcp
will do.
Offline
esa selfishly blanked his original post, so I have split out all of the subsequent discussion. What a waste...
Offline
Inspired by Treferwynd's runtime script, I ended up writing a little script called timestats. It will rerun a command multiple times and print out basic statistics about the runs. It can also plot the data.
My Arch Linux Stuff • Forum Etiquette • Community Ethos - Arch is not for everyone
Offline
script to mount USBs...
#! /bin/bash
echo "Check Devices:"
lsblk -r | grep '^sd[b-z][1-9]' | awk '{ print $1, "\033[33;1m"$7"\033[0m" }' | nl
echo -n "Type the number to mount (or) Enter to exit: "
read ans1
if [[ "$ans1" == "" ]]; then
    exit 0
elif [ $ans1 -eq $ans1 2> /dev/null ]; then
    count=$(lsblk -r | grep -c '^sd[b-z][1-9]')
    vol=$(lsblk -r | grep '^sd[b-z][1-9]' | awk '{ print $1 }' | nl | grep -w "$ans1" | awk '{ print $2 }')
    if [[ "$ans1" -ge 0 && "$ans1" -le "$count" ]]; then
        mtest=$(mount | grep -w "/dev/$vol")
        if [[ $mtest == "" ]]; then
            mkdir /home/msh/$vol
            sleep 1
            mount /dev/$vol /home/msh/$vol
            echo "Vol is mounted"
            sleep 3
        elif [[ $mtest != "" ]]; then
            echo "Vol is already mounted"
            echo -n "Want to unmount Vol? [y/n]: "
            read ans2
            if [[ "$ans2" == "y" ]]; then
                umount /dev/$vol
                sleep 1
                rmdir /home/msh/$vol
                echo "Vol is unmounted"
                sleep 3
            elif [[ "$ans2" == "n" ]]; then
                exit 0
            else
                echo "Invalid Input."
                sleep 1
                $0
            fi
        else
            echo "Error Occurs."
            sleep 1
            $0
        fi
    else
        echo "Invalid Input"
        sleep 1
        $0
    fi
else
    echo "Invalid Input"
    sleep 1
    $0
fi
Offline
lsblk -r | grep ^sd[b-z][1-9] | awk '{ print $1 }' | nl | grep -w "$ans1" | awk '{ print $2 }'
You know awk does pattern matching?
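For instance, that whole selection pipeline collapses into a single awk call (a sketch, untested against your exact lsblk output):
vol=$(lsblk -r | awk -v n="$ans1" '/^sd[b-z][1-9]/ { if (++i == n) print $1 }')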
Offline
It's a shame; I know that it does, just not how to use it.
Please suggest how that line should look.
Offline