
#301 2015-12-30 21:46:05

Inspector Parrot
Registered: 2011-11-29
Posts: 27,270

Re: dzen & xmobar Hacking Thread

At the very least and easiest to implement:

curl $url | $longish_pipeline1
curl $url | $longish_pipeline2
curl $url | $longish_pipeline3
curl $url | $longish_pipeline4

could become:

curl $url > /tmp/some_tempfile
$longish_pipeline1 /tmp/some_tempfile
$longish_pipeline2 /tmp/some_tempfile
$longish_pipeline3 /tmp/some_tempfile
$longish_pipeline4 /tmp/some_tempfile
rm /tmp/some_tempfile

This would save a lot of wasted network bandwidth and downloading time.
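A slightly safer, concrete version of that pattern uses mktemp instead of a fixed /tmp name and a trap for cleanup. This is just a sketch; printf stands in for the actual curl download, and the two awk calls are placeholder "pipelines":

```shell
#!/bin/sh
# Fetch once, process many times. mktemp avoids clobbering a
# predictable /tmp filename; the trap removes the file on exit.
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT

# In the real script this line would be: curl -s "$url" > "$tmpfile"
printf 'one\ntwo\nthree\n' > "$tmpfile"

# Each former pipeline now reads the local copy instead of re-downloading:
lines=$(awk 'END { print NR }' "$tmpfile")
words=$(awk '{ w += NF } END { print w + 0 }' "$tmpfile")
echo "$lines lines, $words words"
# → 3 lines, 3 words
```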

But my comment that grep + tr + awk + tr = awk means that your entire "longish
pipeline" of commands could be replaced by a single call to awk.  This would
require learning a bit more about how to use awk well - and for practical
purposes the efficiency benefit would not be huge.
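As an illustration with made-up sample data (a hypothetical "Temp: 23C" weather line), a grep | awk | tr chain collapses into one awk call like this:

```shell
# Three processes:
printf 'Humidity: 60%%\nTemp: 23C\n' | grep Temp | awk '{print $2}' | tr -d C

# One process - awk selects the line, strips the unit, prints the field:
printf 'Humidity: 60%%\nTemp: 23C\n' | awk '/Temp/ { gsub(/C/, "", $2); print $2 }'

# Both print: 23
```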

So replacing the multiple calls to curl seems like the more important optimization.

"UNIX is simple and coherent..." - Dennis Ritchie, "GNU's Not UNIX" -  Richard Stallman


#302 2015-12-30 23:18:10

cameo
Registered: 2012-08-18
Posts: 118

Re: dzen & xmobar Hacking Thread

Thanks a lot for the hints, much appreciated!

Extracting strings from websites with curl is actually no big deal:

mycitytemp=$(curl -s "${myweatherservice}" | grep foo | awk -F bar '{print $7}' | tr -d baz)

Depending on the website, it might occasionally be more convenient to use a text web browser though.
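Applying the single-awk suggestion from above to that one-liner would look like this. The foo/bar/baz placeholders are kept from the original, and the sample input line is invented just to make the snippet runnable; note that tr -d baz deletes the *characters* b, a and z, so the faithful awk equivalent is a character class, not the literal string "baz":

```shell
# In the real command this would be:
# mycitytemp=$(curl -s "${myweatherservice}" \
#     | awk -F bar '/foo/ { gsub(/[baz]/, "", $7); print $7 }')

# Demo with a made-up input line instead of the live service:
printf 'xfoobar1bar2bar3bar4bar5barbaz42\n' \
    | awk -F bar '/foo/ { gsub(/[baz]/, "", $7); print $7 }'
# → 42
```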

Last edited by cameo (2018-02-14 19:42:32)

