Welp, I couldn't find a thread for this, just a post in the conky thread. Anyhow, some of you may have noticed that albumart.org is down, which causes a problem for the existing mpd-cover script.
I fixed it to work with coverhunt.com instead.
Credit to the original author, whoever that is.
This script fetches and stores album art from coverhunt.com using your MPD NowPlaying information and then creates/updates /tmp/cover so you can display album art in conky.
Save it as mpd-cover somewhere in your $PATH and make it executable; the conky config below calls it by that name.
#!/usr/bin/python2
import os
import shutil
import commands
import urllib

def copycover(currentalbum, src, dest, defaultfile):
    # Look the album up on coverhunt, save the image to src and copy it to dest.
    searchstring = currentalbum.replace(" ", "+")
    if not os.path.exists(src):
        url = "http://www.coverhunt.com/index.php?query=" + searchstring + "&action=Find+my+CD+Covers"
        cover = urllib.urlopen(url).read()
        image = ""
        for line in cover.split("\n"):
            if "amazon.com" in line:
                # Grab the Amazon image URL and strip the thumbnail size suffix
                image = line.partition('src="')[2].partition('"')[0]
                image = image.replace("._SL75_", '')
                break
        if image:
            urllib.urlretrieve(image, src)
    if os.path.exists(src):
        shutil.copy(src, dest)
    elif os.path.exists(defaultfile):
        shutil.copy(defaultfile, dest)
    else:
        print("Image not found")

# Path where the images are saved
imgpath = os.getenv("HOME") + "/.covers/"
# Image displayed when no cover is found
noimg = imgpath + "nocover.png"
# Cover displayed by conky
cover = "/tmp/cover"
# Artist and album of the current track
album = commands.getoutput("mpc --format %artist%+%album% | head -n 1")

# If tags are empty, use noimg.
if album == "":
    if os.path.exists(cover):
        os.remove(cover)
    if os.path.exists(noimg):
        shutil.copy(noimg, cover)
    else:
        print("Image not found!")
else:
    filename = imgpath + album + ".jpg"
    if os.path.exists("/tmp/nowplaying") and os.path.exists("/tmp/cover"):
        # Only fetch again when the album has changed since the last run
        nowplaying = open("/tmp/nowplaying").read()
        if nowplaying != album:
            copycover(album, filename, cover, noimg)
            open("/tmp/nowplaying", "w").write(album)
    else:
        copycover(album, filename, cover, noimg)
        open("/tmp/nowplaying", "w").write(album)
And the relevant .conkyrc parts to add
imlib_cache_flush_interval 30s
imlib_cache_size 0
TEXT
${scroll 36 5 ${mpd_artist} }
${scroll 36 5 ${mpd_title} }
${scroll 36 5 ${mpd_album} }
${mpd_status} $alignr ${mpd_percent}%
${mpd_bar}
${execi 5 mpd-cover}
${image /tmp/cover -s 75x75}
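For completeness, here's roughly how the pieces fit together; a rough sketch, assuming you keep the script in ~/bin and that ~/bin is on your $PATH (the nocover.png fallback is optional and the paths are just examples):

# save the python script above as ~/bin/mpd-cover, then:
chmod +x ~/bin/mpd-cover
# cache directory the script expects
mkdir -p ~/.covers
# optional: a fallback image shown when no cover is found
cp /path/to/some/nocover.png ~/.covers/nocover.png
# run once by hand; it should create /tmp/cover
mpd-cover

After that, the ${execi 5 mpd-cover} line above keeps /tmp/cover up to date while conky is running.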
It's shoddily done, just a temporary fix, but I figured I'd put it up here in case anyone wants it. Coverhunt gets its art from Amazon, and the artwork is 500x500 as far as I know.
Edit: Now Coverhunt appears to be down.
=/
Last edited by sloth (2011-01-31 10:16:42)
cat ~/ > /dev/THEFUTURE
Offline
Conky: desktop window (15a) is root window
Conky: drawing to desktop window
Conky: drawing to double buffer
File "/bin/cover", line 18
break
^
IndentationError: unexpected indent
Offline
Forum software messed up the indentation. Fixed it in the OP.
cat ~/ > /dev/THEFUTURE
Offline
Works great, I love it, thanks
Offline
albumart.org appears to be up again. Is there a link to the mpd-cover script that supports that site? Up until now I only had one from amazon that worked, and it could be improved massively.
Offline
Here's what I use to get the album cover from amazon.com:
#!/bin/bash -e
wget='wget -qO -'
xmllint='xmllint --html --xpath'
# URL-encode the search terms given on the command line
encoded="$(perl -MURI::Escape -e 'print uri_escape($ARGV[0]);' "$1")"
# Search amazon.com and pull the first product image URL out of the results
results=$($wget "http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Dpopular&field-keywords=$encoded&x=0&y=0")
$xmllint 'string(//img[@class="" and starts-with(@src, "http://ecx.images-amazon")]/@src)' <(echo "$results")
Works like this:
$ coverart "radiohead pablo honey" 2>/dev/null
http://ecx.images-amazon.com/images/I/61ddvFn%2BwRL._SL160_AA115_.jpg
Then wget that url or do whatever you like.
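If you want to hook that into the conky setup from the first post, something along these lines should do it; a sketch, assuming the script above is saved as coverart and is executable (the paths are just examples):

# grab the first matching Amazon image URL for a search string
url=$(./coverart "radiohead pablo honey" 2>/dev/null)
# only overwrite /tmp/cover if we actually got a URL back
[ -n "$url" ] && wget -q "$url" -O /tmp/cover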
Offline
Welp
That's me
“Great art is horseshit, buy tacos.” - Charles Bukowski
freenode/archlinux: nl-trisk
Offline
Since that simple script from r6 was so helpful for me, I thought it only fair to share my tweak of it.
Mods:
* Fetches from albumart.org instead
* Uses the directory path as the search information
* Downloads the larger image instead of the thumbnail
* Works with find -exec {} to auto-populate all directories
Here is a link to my simple write-up
I just run the following in the root of my music directory:
find . -type d -exec ./get_coverart {} \;
And here's a copy of the script:
#!/bin/bash -e
# get_coverart.sh
#
# This simple script fetches the cover art for the album information provided
# on the command line, downloads that cover image, and places it into the
# given directory. The "album information" is really the relative path of the
# final directory.
#
# get_coverart <relative-path>
#
# get_coverart "Tonic/Lemon Parade"
#
# get_coverart Tonic/Lemon\ Parade
#
# get_coverart Tonic/Lemon_Parade
#
# To auto-populate all directories in the current directory, run the following command:
#
# find . -type d -exec ./get_coverart "{}" \;

dpath="$1"
# URL-encode the directory path to use as the search terms
encoded="$(perl -MURI::Escape -e 'print uri_escape($ARGV[0]);' "$dpath")"

# Skip already processed ones
if [ -f "$dpath/cover.jpg" ]
then
    echo "$dpath/cover.jpg already exists"
    exit
fi

echo ""
echo "Searching for: [$dpath]"
url="http://www.albumart.org/index.php?srchkey=$encoded&itempage=1&newsearch=1&searchindex=Music"
echo "Searching ... [$url]"
# Pull the "View larger image" link (the full-size Amazon image) out of the results
coverurl=$(wget -qO - "$url" | xmllint --html --xpath 'string(//a[@title="View larger image" and starts-with(@href, "http://ecx.images-amazon")]/@href)' - 2>/dev/null)
echo "Cover URL: [$coverurl]"
wget "$coverurl" -O "$dpath/cover.jpg"
Last edited by Skidd (2011-03-10 03:53:35)
Offline
#Skidd
Thanks, it's really useful.
Offline