cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq
You can also get the information straight from /proc instead of those commands, e.g. /proc/meminfo and /proc/acpi/ac_adapter/ACAD/state. As for conserving PIDs, that doesn't really combine with the above method, because you still have to cat the file, I think; it is a bit easier on resources, though: time free -m &>/dev/null; time cat /proc/meminfo &>/dev/null
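To push the process-conservation idea further, bash can parse the /proc files itself, with no external commands at all. A minimal sketch, assuming the usual Linux /proc/meminfo field layout:

```shell
# Read memory figures with zero forks: bash's read does the parsing.
while read -r key value _; do
    case $key in
        MemTotal:) memtotal=$value ;;  # in kB
        MemFree:)  memfree=$value ;;   # in kB
    esac
done < /proc/meminfo
echo "mem: $((memfree / 1024))M free of $((memtotal / 1024))M"
```

No fork, no exec; the only cost is the read loop itself.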
Here is an alternative for the CPU percentage script using fewer PIDs and no eval:
read cpu a b c previdle rest < /proc/stat
prevtotal=$((a+b+c+previdle))
sleep 0.5
read cpu a b c idle rest < /proc/stat
total=$((a+b+c+idle))
CPU=$((100*( (total-prevtotal) - (idle-previdle) ) / (total-prevtotal) ))
The same in awk; maybe this is usable with the all-in-one method too:
CPU=$(awk 'BEGIN{i=0}
{sum[i]=$2+$3+$4+$5; idle[i++]=$5}
END {printf "%d\n", 100*( (sum[1]-sum[0]) - (idle[1]-idle[0]) ) / (sum[1]-sum[0])}
' <( head -n 1 /proc/stat; sleep 0.5; head -n 1 /proc/stat))
@jasonwryan: in your case, it's fine. Unless the kernel decides to start writing something like '523489;rm -rf /' to /proc/stat you'll be alright.
I suspect I'll be fine even if that does happen.
Thanks falconindy.
http://mywiki.wooledge.org/BashFAQ/048
I use it. It's extremely useful in a select number of cases, but one should be aware of the pitfalls associated with it.
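For anyone wondering what those pitfalls look like in practice, here is a small self-contained illustration (the strings are made up):

```shell
# eval re-parses its argument as shell code, so data can smuggle in commands.
data='x; echo INJECTED'
evaled=$(eval "echo $data")   # the semicolon splits this into two commands
quoted=$(echo "$data")        # quoting keeps the data inert
echo "eval produced: $evaled"
echo "quoted produced: $quoted"
```

The eval version happily executes the second command hidden in the data; the quoted version just echoes the string.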
I have made quite a few changes based on everyone's input. Thank you all.
The finished product? At least for now:
#!/bin/bash
# Status script for wmfs
RED="\\#BF4D80\\"
YEL="\\#C4A1E6\\"
BLU="\\#477AB3\\"
GRN="\\#53A6A6\\"
CYN="\\#6096BF\\"
MAG="\\#7E62B3\\"
GRY="\\#666666\\"
WHT="\\#C0C0C0\\"
# Collect system information
CHG=$(acpi -V | awk '{ gsub(/,/, "");} NR==1 {print $4}')
BAT=$(grep -q "on-line" <(acpi -V) && echo $BLU"AC" || echo $RED$CHG)
MEM=$(awk '/Mem/ {print $3}' <(free -m))
# CPU line courtesy Procyon: https://bbs.archlinux.org/viewtopic.php?pid=661592
CPU=$(eval $(awk '/^cpu /{print "previdle=" $5 "; prevtotal=" $2+$3+$4+$5 }' /proc/stat); sleep 0.4;
eval $(awk '/^cpu /{print "idle=" $5 "; total=" $2+$3+$4+$5 }' /proc/stat);
intervaltotal=$((total-${prevtotal:-0}));
echo "$((100*( (intervaltotal) - ($idle-${previdle:-0}) ) / (intervaltotal) ))")
HD=$(awk '/^\/dev/{print $5}' <(df -P))
PCM=$("$HOME/Scripts/pacman-up.pl")
INT=$(host google.com >/dev/null && echo $GRN"ON" || echo $RED"NO")
DTE=$(date "+%I:%M")
# Pipe to status bar
wmfs -s "$GRY[BAT $BAT$GRY] [CPU $YEL$CPU%$GRY MEM $CYN$MEM$GRY] [HDD $MAG$HD$GRY] [PAC $BLU$PCM$GRY] [NET $INT$GRY] • $WHT$DTE"
I think that the "CPU=`eval ..." stuff is awful. You should avoid using eval.
As an alternative, take a look at the following script. It's an infinite loop, you can C-c to stop it.
#!/bin/bash
previous_total=0
previous_stats=(0 0 0 0)
while :
do
current_total=0
read -a current_stats < <(awk '/^cpu /{print $2" "$3" "$4" "$5}' /proc/stat)
for n in ${current_stats[@]}; do ((current_total+=$n)); done
sleep 2
((idle=${current_stats[3]}-${previous_stats[3]}))
((total=current_total-previous_total))
((cpu=100*(total-idle)/total))
previous_stats=(${current_stats[@]})
previous_total=$current_total
echo "cpu $cpu%"
done
Chippeur: the process substitution pointer is exactly the type of thing I am after - cheers.
hellomynameisphil: apparently the only way to learn bash is to actually write bash scripts, and as tinkering around on my laptop is the only opportunity I have to do this, ditching conky seemed as good a place as any. The efficiency I was asking about was within bash, not necessarily the overhead of bash vs. conky or some other language (as dmz suggested).
Ashren: I'm tempted to go with your awk line, just to try and make it a clean sweep
falconindy: your bash-fu is formidable. I am in awk.
steve___: again, those pointers are the sorts of things I hoped for when I posted: thank you.
Using process substitution is no more or less efficient than the pipe. Using the bash method:
1) bash forks, exec's a process (acpi) and writes the data to /dev/fd/XX
2) bash forks, exec's a process (grep) with stdin tied to the FD opened in step 1
Using the pipe:
1) bash forks, exec's a process (acpi) with stdout tied to an unnamed pipe
2) bash forks, exec's a process (grep) with stdin tied to the unnamed pipe
Process substitution is a convenience method of getting around the subshell created in a 'foo | while read...' loop. You could just as easily write the output of foo to a file, redirect the file to the while loop and then discard the file.
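The subshell point is easy to demonstrate; a quick sketch of the three variants under bash, counting lines of toy data:

```shell
# 1) pipe: the while loop runs in a subshell, so count is lost afterwards
count=0
printf '%s\n' a b c | while read -r line; do count=$((count+1)); done
pipe_count=$count      # still 0 in the parent shell

# 2) process substitution: the loop runs in the current shell
count=0
while read -r line; do count=$((count+1)); done < <(printf '%s\n' a b c)
procsub_count=$count   # 3

# 3) temp file: same result, just with a file to clean up
count=0
tmp=$(mktemp)
printf '%s\n' a b c > "$tmp"
while read -r line; do count=$((count+1)); done < "$tmp"
rm -f "$tmp"
file_count=$count      # 3

echo "pipe=$pipe_count procsub=$procsub_count file=$file_count"
```

Only the pipe variant loses the variable; the other two keep it in the current shell, which is exactly why process substitution is handy for the `foo | while read` case.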
temp=$(</sys/class/thermal/thermal_zone0/temp)
temp=$(($temp/1000))
Wouldn't using the fd be more efficient? It would save the pipe.
Instead of looping every x seconds, one could use inotifywait or incron to write the values to a named pipe.
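A hedged sketch of the named-pipe half of that idea (the fifo path and the "BAT 95%" value are made up; the real trigger would be something like `inotifywait -m -e close_write FILE`, which is only stood in for here by a background writer):

```shell
# Writer and reader around a named pipe: the reader blocks until an
# update arrives, so nothing polls on a timer.
fifo=$(mktemp -u)            # hypothetical fifo path
mkfifo "$fifo"
# Stand-in for an inotifywait/incron handler delivering a fresh value:
{ sleep 0.2; echo "BAT 95%" > "$fifo"; } &
read -r update < "$fifo"     # blocks here until the writer fires
echo "got: $update"
wait
rm -f "$fifo"
```

The status-bar side would just sit in `read -r update < "$fifo"` and redraw whenever a line shows up.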
Well, here's an awk version of the $BAT line:
BAT=`acpi -V | awk '/Adapter/ {if ($3 == "on-line") {print "'"$BLU"'""AC"} else {print "'"$RED$CHG"'"}}'`
I know, I know ... awk isn't bash.
Thanks falconindy!
Does that mean you are giving me a pass on the $BAT line?
It didn't stand out to me as being quite as hideous, but you could condense it as well, either by chippeur's bash-ier method or a more portable solution:
BAT=$({ acpi -V | grep -q "on-line"; } && echo $BLU"AC" || echo $RED$CHG)
Which isn't any better or worse, per se.
First line of this thread:
Yes, thank you, I saw that, but my question still stands. One of his questions was on how to make this more efficient, and while I am no expert on these matters, I am led to understand that using bash scripts for this sort of thing is not the most efficient use of resources. It seems to me that if you want to learn (more about) bash, there are more suitable applications. I don't mean this as a criticism, I'm just curious what the OP's thoughts are on the choice of language/tool. Was conky/higher-level language considered and rejected? Is it simply that this is a pet project to learn bash and the OP doesn't mind wasting a few cpu cycles using (what I understand to be) a less efficient tool? Does the OP not find the difference in resource usage significant? These are the kinds of questions I am driving at.