Hey,
I've recently taken up bash programming, and here is my first real script. Just pass it a text file of RapidShare links and it will download all of them. It supports resuming downloads, cookie storage, and some sanity checks. I have a lot of features planned for the future, but I would like some feedback on my first version.
Thanks
Ben
UPDATES:
6/19/08
LTSmash has motivated me to add additional features. Hopefully multiple simultaneous downloads and the ability to check progress from other programs via log files will be implemented soon.
2/10/08
Added a rate limit and a few other features.
?????
Fixed a problem where the filesize of the local file was being calculated incorrectly, which caused completed files not to be detected as complete.
Usage:
areget linksfile
#!/bin/bash
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
VERSION=.02
LOGINURL=https://ssl.rapidshare.com/cgi-bin/premiumzone.cgi
USERNAME=underpenguin
PASSWORD=`cat pw`
RATE=450k
if [[ ! -z $3 ]]; then
echo "Invalid Arguments"
exit
fi
if [[ ! -z $2 ]]; then
FILELIST=$2
else
FILELIST=$1
fi
NAME=$0
trap "rm $TEMP_FILE; exit" SIGHUP SIGINT SIGTERM
getfile(){
wget --limit-rate=$RATE --load-cookies cookie $2 $1
}
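# testfile: spider the link to find the remote file size, then start, resume, or skip the download depending on what is already on disk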
testfile(){
wget $1 --load-cookies cookie --spider -o wget_test.log
filelen=`grep "Length: [0-9,]* " wget_test.log -o | tr -d "," | grep "[0-9]*" -o`
filename=`echo $1 | grep '\/[^\/]*$' -o | tr -d "/"`
echo Current File: $filename
if [[ -f $filename ]]; then
current=`du -b $filename | cut -f1 `
echo $current of $filelen bytes completed
if [[ $current -lt $filelen ]]; then
echo $filename incomplete, finishing download...
getfile $1 -c
elif [[ $current -gt $filelen ]]; then
echo local $filename is larger than on server....
echo something is wrong....
echo exiting....
exit
elif [[ $current -eq $filelen ]]; then
echo $filename is complete
# Do nothing, move on to next file
else
echo something is b0rken.
exit
fi
else
echo starting download...
getfile $1
fi
rm wget_test.log
}
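# testfiles: report how many links are in the list file and run testfile on each of them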
testfiles(){
count= cat ${FILELIST} | grep 'http://.*' | wc -l | tr "\n" " "
echo $count links found in $FILELIST
for link in `cat ${FILELIST}`; do
echo "Getting ${link}"
testfile $link
done
}
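# getcookie: log in to the RapidShare premium zone over SSL and save the session cookie to ./cookie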
getcookie(){
if [[ -e cookie ]] && [[ ! -r cookie ]]; then
echo "cookie file error"
exit
fi
echo "Getting Cookie..."
wget \
--no-check-certificate -q \
--save-cookies cookie \
--post-data "login=$USERNAME&password=$PASSWORD" \
-O login.log \
$LOGINURL \
> /dev/null
echo "Cookie Saved..."
}
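# startDown: print the banner, fetch a cookie if none is readable, then work through the link list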
startDown(){
echo "AreGet ${VERSION} By Underpenguin"
if [[ ! -r cookie ]]; then
echo "no cookie found, getting cookie..."
getcookie
fi
testfiles
}
startDown
Last edited by underpenguin (2008-06-20 00:19:50)
Offline
hi underpenguin
Recently I've been having problems downloading from rapidshare.com with wget; it doesn't work anymore. Right now it just downloads the links, not the files. What I did for the past 5 months was:
wget --http-user="username" --http-password="pw" -c -l 'urslist.txt'
But after the end of January it stopped working, and I have no idea why. Do you?
Offline
The problem is in wget: at the end of January a new version of wget was released, and the new wget doesn't send the password and username anymore. You can use aria2 instead: aria2c -s4 --http-user=name --http-passwd=password -i urllist.txt
Linux is like the musketeers: "one for all, all for one"
Offline
You need to get a cookie first; you can use (modified from my script):
wget --no-check-certificate -q --save-cookies cookieFile --post-data "login=$USERNAME&password=$PASSWORD" https://ssl.rapidshare.com/cgi-bin/premiumzone.cgi
and then
wget --load-cookies cookieFile -c -i file
I've made some changes recently, so I'm updating the script in the first post. Also, try it out and let me know if it works well.
I added a rate limiter, so you can define the max bandwidth that the script uses to download.
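For reference, the limit is just the RATE variable at the top of the script, passed straight through to wget's --limit-rate option. A rough sketch of the idea (the 200k figure is only an example, and $link stands for one URL from your list file):
# wget accepts suffixes like k and m; the script ships with RATE=450k
RATE=200k
wget --limit-rate=$RATE --load-cookies cookie -c "$link"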
Offline
The problem is in wget: at the end of January a new version of wget was released, and the new wget doesn't send the password and username anymore. You can use aria2 instead: aria2c -s4 --http-user=name --http-passwd=password -i urllist.txt
Works great, thanks.
I have to try underpenguin's script too. Tomorrow... I have to go to bed now.
Offline
That's a great script, thank you!
EDIT: auto unrar/unzip would be brilliant.
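Not part of the script, but a rough idea of what such a hook could look like, assuming unrar and unzip are installed (the extract name and where it gets called are made up for illustration):
# hypothetical post-download hook: could be called on $filename once getfile finishes
extract(){
    case "$1" in
        *.rar) unrar x -o+ "$1" ;;
        *.zip) unzip -o "$1" ;;
    esac
}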
Last edited by Raisuli (2008-02-23 12:56:31)
Offline
Maybe you'll like this:
http://sourceforge.net/projects/rrapid
Offline
When I try to use this, all I get is:
areget: 13: [[: not found
areget: 19: [[: not found
trap: 23: SIGHUP: bad trap
Any ideas?
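Those messages look like the output of a plain POSIX sh (such as dash) rather than bash, since [[ ]] and signal names like SIGHUP in trap are bash features. If that's the case, running the script through bash explicitly should help:
# run it with bash directly instead of sh
bash areget linksfile
# or make it executable so the #!/bin/bash line is honoured
chmod +x areget
./areget linksfile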
Last edited by Lazer (2008-03-01 13:46:05)
Offline
did you develop it or did you pay 65 dollars...:lol:
Offline
+1 koch, premium account requirement + 65 dollar surcharge + linux/bsd operating system doesn't equate to a big market, good luck
Archlinux on Compaq Presario v5000 laptop
Offline
Well, this is VERY KISS, but what if we add some "fixes"?
#!/bin/bash
#areget, a nice script which logs in to your RapidShare account and downloads the specified files to the current folder#
#Loading some variables
VERSION=.02-1
LOGINURL=https://ssl.rapidshare.com/cgi-bin/premiumzone.cgi
RATE=450k
## Check whether this is the first run; if it is, run the first-run "wizard", otherwise load the saved settings
if [ -f ~/.areget ]; then
. ~/.areget
else
echo "Enter your RapidShare username:"
read USERNAME
echo USERNAME=$USERNAME | tee ~/.areget > /dev/null
echo "Now enter your password:"
read PASSWORD
echo PASSWORD=$PASSWORD | tee -a ~/.areget > /dev/null
chmod 400 ~/.areget
fi
##Finished with asking/loading user data
if [[ ! -z $3 ]]; then
echo "Invalid Arguments"
exit
fi
if [[ ! -z $2 ]]; then
FILELIST=$2
else
FILELIST=$1
fi
NAME=$0
trap "rm $TEMP_FILE; exit" SIGHUP SIGINT SIGTERM
##Declaring functions
getfile(){
wget --limit-rate=$RATE --load-cookies cookie $2 $1
}
testfile(){
wget $1 --load-cookies cookie --spider -o wget_test.log
filelen=`grep "Length: [0-9,]* " wget_test.log -o | tr -d "," | grep "[0-9]*" -o`
filename=`echo $1 | grep '\/[^\/]*$' -o | tr -d "/"`
echo Current File: $filename
if [[ -f $filename ]]; then
current=`du -b $filename | cut -f1 `
echo $current of $filelen bytes completed
if [[ $current -lt $filelen ]]; then
echo $filename incomplete, finishing download...
getfile $1 -c
elif [[ $current -gt $filelen ]]; then
echo local $filename is larger than on server....
echo something is wrong....
echo exiting....
exit
elif [[ $current -eq $filelen ]]; then
echo $filename is complete
# Do nothing, move on to next file
else
echo something is b0rken.
exit
fi
else
echo starting download...
getfile $1
fi
rm wget_test.log
}
testfiles(){
count= cat ${FILELIST} | grep 'http://.*' | wc -l | tr "\n" " "
echo $count links found in $FILELIST
for link in `cat ${FILELIST}`; do
echo "Getting ${link}"
testfile $link
done
}
getcookie(){
if [[ -e cookie ]] && [[ ! -r cookie ]]; then
echo "cookie file error"
exit
fi
echo "Getting Cookie..."
wget \
--no-check-certificate -q \
--save-cookies cookie \
--post-data "login=$USERNAME&password=$PASSWORD" \
-O login.log \
$LOGINURL \
> /dev/null
echo "Cookie Saved..."
}
startDown(){
echo "AreGet ${VERSION} By Underpenguin"
if [[ ! -r cookie ]]; then
echo "no cookie found, getting cookie..."
getcookie
fi
testfiles
}
##Functions declared
##Starting program
startDown
I haven't read all of the code because I got lost, but this seems to be working fine. You should add a --usage option, because I couldn't understand at all how it works, but now I'll try it.
Anyway, since it's not my program I can't change its licence; my mods are under the GPL, and if you want to licence your program like that it would be great.
Last edited by LTSmash (2008-06-17 00:04:26)
Proud Ex-Arch user.
Still an ArchLinux lover though.
Currently on Kubuntu 9.10
Offline
Well, now I understand the whole program; it needs some serious commenting. Anyway, it doesn't work: when I used it, it downloaded the HTML download page and not the file I wanted. Maybe a problem with the cookie?
bash-3.2$ ./areget list
AreGet .02-1 By Underpenguin
1 links found in list
Getting http://rapidshare.com/files/121040432/RR_-_NE.rar
Current File: RR_-_NE.rar
starting download...
--2008-06-16 19:33:39-- http://rapidshare.com/files/121040432/RR_-_NE.rar
Resolving rapidshare.com... 195.122.131.9, 195.122.131.10, 195.122.131.11, ...
Connecting to rapidshare.com|195.122.131.9|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8325 (8.1K) [text/html]
Saving to: `RR_-_NE.rar'
100%[======================================>] 8,325 16.1K/s in 0.5s
2008-06-16 19:33:42 (16.1 KB/s) - `RR_-_NE.rar' saved [8325/8325]
bash-3.2$
Weird... I couldn't find a fix for this.
Last edited by LTSmash (2008-06-17 00:45:17)
Proud Ex-Arch user.
Still an ArchLinux lover though.
Currently on Kubuntu 9.10
Offline
Thanks for your editing, I like your wizard.
I haven't had that problem, are you sure your account hasn't expired / you haven't exceeded your download limit?
Anyway, I ran into a problem where RapidShare terminates the download and wget can't figure out how to correctly resume it, so it leaves me with two files, neither of which is complete.
I have in mind a new project:
A python daemon that monitors a sqlite db for updates (submitted via a web interface) and downloads files from rapidshare or other http downloads in the background. I will also make a gui to interface with the daemon.
I think the reliability of the bash areget is as good as it's going to get, so I thought I might take it in another direction. What do you think, and would anyone out there want to help?
Offline
I haven't had that problem, are you sure your account hasn't expired / you haven't exceeded your download limit?
Well, actually it's still working in the web browser, so that's not the problem. I guess it has something to do with the cookie instead... maybe add a reset method for getting a new one?
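Since the script only calls getcookie when no readable cookie file exists, a reset could be as simple as deleting the stale cookie before startDown runs. A rough sketch of what that might look like (the --newcookie option name is made up for illustration):
# hypothetical flag: "areget --newcookie linksfile" throws away the old cookie
# so getcookie is called again on the next run
if [[ $1 = "--newcookie" ]]; then
    rm -f cookie
    shift
fi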
If you licence this under the GPL I will gladly make it more usable and feature-rich.
About the Python project, I really dunno, I just know bash...
Proud Ex-Arch user.
Still an ArchLinux lover though.
Currently on Kubuntu 9.10
Offline
GPL'd!
Offline
You forgot the copyright stuff, add this:
one line to give the program's name and an idea of what it does.
Copyright (C) yyyy name of author
Let me begin working on it.
Proud Ex-Arch user.
Still an ArchLinux lover though.
Currently on Kubuntu 9.10
Offline
About this function:
testfiles(){
count= cat ${FILELIST} | grep 'http://.*' | wc -l | tr "\n" " "
echo $count links found in $FILELIST
for link in `cat ${FILELIST}`; do
echo "Getting ${link}"
testfile $link
done
}
How does it change from the first link to the second one? I couldn't understand it at all... I mean, does the "for link" statement work for choosing every link in the file?
Proud Ex-Arch user.
Still an ArchLinux lover though.
Currently on Kubuntu 9.10
Offline
`cat ${FILELIST}`
of course produces a list of newline-delimited links.
When a for..in statement is fed output like this, the unquoted substitution is split on whitespace, so each iteration acts on a different link; the statements in the loop are executed once for each link in $FILELIST, with $link set to the current one.
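A tiny self-contained illustration of that behaviour (links.txt and the echo line are just placeholders):
# links.txt holds one URL per line; the unquoted substitution is split on
# whitespace, so each pass of the loop sees exactly one link
for link in `cat links.txt`; do
    echo "Would fetch: $link"
done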
Offline
`cat ${FILELIST}`
of course produces a list of newline-delimited links.
When a for..in statement is fed output like this, the unquoted substitution is split on whitespace, so each iteration acts on a different link; the statements in the loop are executed once for each link in $FILELIST, with $link set to the current one.
Oh, I didn't know. Guess I need to add that to my almost-deprecated bash manual for n00bs xD
I have already improved your script a bit, adding comments, cleaning up the code and working on arguments for it. I'll post my progress in a little bit.
Proud Ex-Arch user.
Still an ArchLinux lover though.
Currently on Kubuntu 9.10
Offline
I love "Gambiarras", but where`s the Captcha in the code? The letters and numbers for pass?!?!
Offline
I love "Gambiarras", but where`s the Captcha in the code? The letters and numbers for pass?!?!
You must have a premium account to use this script to retrieve anything from RapidShare; otherwise you have to use their web interface.
Proud Ex-Arch user.
Still an ArchLinux lover though.
Currently on Kubuntu 9.10
Offline
testfiles(){
count= cat ${FILELIST} | grep 'http://.*' | wc -l | tr "\n" " "
echo $count links found in $FILELIST
for link in `cat ${FILELIST}`; do
echo "Getting ${link}"
testfile $link
done
}
I'd redo this as..
testfiles(){
count=$(grep -c 'http://.*' ${FILELIST} | tr -d "\n")
echo "$count links found in $FILELIST"
while read link; do
echo "Getting $link"
testfile "$link"
done < "$FILELIST"
}
Also where you have
if [ -z $3 ];
You need to have
if [ -z "$3" ];
or else bash will see
if [ -z ];
when the variable is blank (which only gives the right answer by accident), and if $3 has a space in it the test gets too many arguments and errors out.
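A quick way to see the difference (the variable content is just an example):
arg="two words"
[ -z $arg ]     # expands to [ -z two words ] and fails with "too many arguments"
[ -z "$arg" ]   # expands to [ -z "two words" ] and tests the whole string (false here)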
[git] | [AURpkgs] | [arch-games]
Offline
You must have a premium account to use this script to retrieve anything from RapidShare; otherwise you have to use their web interface.
There is RDown for Firefox.
Sorry for my bad english :-)
Offline
I'll second RDown, I've used it in the past and it actually works really well (I was doubtful at first).
Offline
Hello there!
"areget.sh" does not work anymore because RapidShare changed some things (http://www.rapidshare.com/news.html), so "https://ssl.rapidshare.com/cgi-bin/premiumzone.cgi" is not available anymore either. Are there any solutions to this issue?
Thanks in advance, ciaccom
Offline