Hello, I wrote this script to clean folders of hidden files:
#!/bin/bash
files=$(find . \( -iname '*~' -o -iname '.*' \) -type f)
#check whether any hidden file was found
if [ -z "$files" ]
then
    echo "No hidden file found"
    exit 0
fi
#show the user the files that were found
echo "Hidden files found:"
echo "$files"
printf "Remove? [y/N] "
read -r input
input=$( echo "$input" | tr '[:upper:]' '[:lower:]' )
#take action if confirmed
if [ "$input" == "y" ] || [ "$input" == "yes" ]
then
    find . \( -iname '*~' -o -iname '.*' \) -type f -exec rm '{}' \;
    echo "Hidden files removed"
else
    echo "Nothing was done"
fi
exit 0
As you can see, I haven't managed to delete the files by reading from $files; instead I have to call the find command a second time. So the first improvement would be to get rid of that second call (while still handling spaces in file names correctly, of course).
Other suggestions are welcome!
Offline
You could probably run a for loop:
for i in $files;do rm $i;done
Just out of interest though, why would you want this?
All the best,
-HG
Offline
Unfortunately, handling whitespace is not so easy.
PS: the interest is in cleaning working folders of old/hidden files before syncing, copying, tarring, or anything else you can imagine.
Offline
I feel like, if you redid the first `find` and the `files` declaration so that it produced a proper array, the for loop would be the cleanest option.
All the best,
-HG
Offline
Or use an array. This example fills an array with all the PDF files under the current directory:
while read file; do
files[i++]="$file"
done < <(find . -type f -name '*.pdf')
Then you can loop through that array quite easily; you can get the count with ${#files[@]} or print the array's contents.
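Put together, the whole pattern might look like the sketch below (assuming bash, since it uses process substitution; the mktemp scratch directory is only there to make the example self-contained):

```shell
# Scratch directory so the example is self-contained.
dir=$(mktemp -d)
touch "$dir/a.pdf" "$dir/b b.pdf" "$dir/c.txt"

# Fill an array, one element per line that find prints;
# spaces inside a name survive, unlike plain word splitting.
files=()
while read -r file; do
    files[i++]="$file"
done < <(find "$dir" -type f -name '*.pdf')

echo "count: ${#files[@]}"      # prints "count: 2"
printf '%s\n' "${files[@]}"     # one file name per line

rm -r "$dir"
```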
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
If you want a 'prompt for every removal'
find . \( -iname '*~' -o -iname '.*' \) -type f -exec rm -i '{}' \;
An array and two while loops? Very expensive \s
Offline
The array would remove the need for the while loops.
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
Ok then he should use quotes
files="$(find . -name '.*' -o -name '~*' -type f)"
for f in "$files"
do
rm -i "$f"
done
But the other find command would do it too.
Offline
Ok then he should use quotes
files="$(find . -name '.*' -o -name '~*' -type f)"
for f in "$files"
do
rm -i "$f"
done
But the other find command would do it too.
No, this simply doesn't work because the entirety of the find result will be quoted as a single filename. You really should just do this all in find...
find . -name '.*' ! -type d -exec rm -i {} +
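Applied to the original post's expression, and without the per-file prompt, that might look like the sketch below (the scratch directory only makes the example self-contained):

```shell
# Scratch directory so the example is self-contained.
dir=$(mktemp -d)
touch "$dir/keep.txt" "$dir/.hidden" "$dir/old~"

# One pass over the tree: match backup (*~) and dot files, skip
# directories; '+' batches arguments so rm runs only a few times.
find "$dir" \( -iname '*~' -o -iname '.*' \) ! -type d -exec rm -- {} +

ls "$dir"   # prints only keep.txt
```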
Offline
I tried with echo...
Offline
If you want a 'prompt for every removal'
This would be very annoying. Imagine you have hundreds of files. What my script does now is print the list, so you can glance over it, and then ask for a single OK to remove ALL of them.
I still have to try the arrays; I'm not very familiar with them in bash.
Offline
If you need to defend against uncontrolled filenames, for example
touch $'very\nnasty\\filename'
, you should use -print0 in find, then you can fill the array with
while IFS= read -r -d '' file; do
array+=("$file")
done < <(find . -print0)
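Combining that with the one-confirmation flow of the original script could look like the sketch below (the helper name clean_hidden is made up; bash is assumed, and the final lines just demo it on a scratch directory, answering "y" non-interactively):

```shell
# Hypothetical helper: list the matching files, show them once,
# and delete them all after a single confirmation.
clean_hidden() {
    local dir=$1 input file
    local files=()
    while IFS= read -r -d '' file; do
        files+=("$file")
    done < <(find "$dir" \( -iname '*~' -o -iname '.*' \) -type f -print0)

    if [ "${#files[@]}" -eq 0 ]; then
        echo "No hidden file found"
        return 0
    fi

    echo "Hidden files found:"
    printf '%s\n' "${files[@]}"
    printf 'Remove? [y/N] '
    read -r input
    case $input in
        [yY]|[yY][eE][sS]) rm -- "${files[@]}"; echo "Hidden files removed" ;;
        *)                 echo "Nothing was done" ;;
    esac
}

# Demo on a scratch directory, piping in the answer "y".
dir=$(mktemp -d)
touch "$dir/visible" "$dir/.hidden" "$dir/notes~"
clean_hidden "$dir" <<< y
```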
Offline
Calling rm once per file is a bad idea, as it generates too much process overhead. The following code takes advantage of the Internal Field Separator:
files=$(IFS=$'\n';find . -type f)
IFS=$'\n';for i in $files;do echo "$i";done|xargs -d$'\n' rm
The code is safe for all file names except those which contain a \n character; hopefully you won't need to cover such insanity.
Remember to replace the find command with your expression before you merge it into your code.
Offline
The code is safe for all file names, except those which contain a \n character
It's also unsafe for those which contain globbing chars such as *, ?, or [...].
The amount of bad/uninformed advice in this thread is rather staggering.
Last edited by falconindy (2013-04-25 12:03:38)
Offline
It's also unsafe for those which contain globbing chars such as *, ?, or [...].
Absolutely not, I've tried it on my system; for those who seek material evidence:
Script started on Thu Apr 25 09:56:55 2013
sh-4.2$ mkdir a && cd a
sh-4.2$ pwd
/home/vitor/a
sh-4.2$ touch b c\*c2 d\?d2 "e e2" "f f2"
sh-4.2$ ls -la
total 8
drwxr-x--- 2 vitor vitor 4096 Apr 25 09:57 .
drwx------ 28 vitor vitor 4096 Apr 25 09:57 ..
-rw-r----- 1 vitor vitor 0 Apr 25 09:57 b
-rw-r----- 1 vitor vitor 0 Apr 25 09:57 c*c2
-rw-r----- 1 vitor vitor 0 Apr 25 09:57 d?d2
-rw-r----- 1 vitor vitor 0 Apr 25 09:57 e e2
-rw-r----- 1 vitor vitor 0 Apr 25 09:57 f?f2
sh-4.2$ ls -la|cat -A
total 8$
drwxr-x--- 2 vitor vitor 4096 Apr 25 09:57 .$
drwx------ 28 vitor vitor 4096 Apr 25 09:57 ..$
-rw-r----- 1 vitor vitor 0 Apr 25 09:57 b$
-rw-r----- 1 vitor vitor 0 Apr 25 09:57 c*c2$
-rw-r----- 1 vitor vitor 0 Apr 25 09:57 d?d2$
-rw-r----- 1 vitor vitor 0 Apr 25 09:57 e e2$
-rw-r----- 1 vitor vitor 0 Apr 25 09:57 f^If2$
sh-4.2$ files=$(IFS=$'\n';find . -type f)
sh-4.2$ IFS=$'\n';for i in $files;do echo "$i";done|xargs -d$'\n' rm
sh-4.2$ ls -la
total 8
drwxr-x--- 2 vitor vitor 4096 Apr 25 09:58 .
drwx------ 28 vitor vitor 4096 Apr 25 09:57 ..
sh-4.2$ ls -la|cat -A
total 8$
drwxr-x--- 2 vitor vitor 4096 Apr 25 09:58 .$
drwx------ 28 vitor vitor 4096 Apr 25 09:57 ..$
sh-4.2$ cd ..
sh-4.2$ rmdir a
sh-4.2$ echo $?
0
sh-4.2$ exit
Script done on Thu Apr 25 09:59:23 2013
As you must know /bin/sh is symlinked to /usr/bin/bash, it just doesn't read initialization files.
Offline
falconindy wrote:It's also unsafe for those which contain globbing chars such as *, ?, or [...].
Absolutely not, I've tried it in my system, for those who seek material evidence:
Sure it is. Your for loop "list" is unquoted, and is therefore subject to glob expansion before iteration. It simply doesn't matter what the IFS is. Your contrived example fails to properly illustrate the pitfalls.
http://mywiki.wooledge.org/DontReadLinesWithFor
http://mywiki.wooledge.org/BashFAQ/001
Offline
Hmm, I hadn't considered that; indeed my example isn't generic enough. The following should fix that:
IFS=$'\n';for i in $files;do echo "$i";done|sort|uniq|xargs -d$'\n' rm
Now properly tested with files "a a2" "a?a3" "a*a3" "a*" "b".
Or using read, as suggested by the pages you pointed to:
IFS=$'\n';echo "$files"|while read -r i;do echo "$i";done|xargs -d$'\n' rm
Not sure which would be faster, but probably not much of a difference compared to calling rm arbitrarily many times.
Offline
Or really... just use one of the several well-formed examples in this thread. Manipulating IFS is entirely wrong for the purposes of reading a list out of a simple string variable (which is wrong on its own -- this is why arrays exist).
Last edited by falconindy (2013-04-25 15:34:41)
Offline
Or really... just use one of the several well-formed examples in this thread.
Yeah there's one solution using arrays formulated by Trilby, and improved by aesiris. Kind of reminds me of the episode Sheldon Cooper says, "Notify the editors of the Oxford English Dictionary: the word 'plenty' has been redefined to mean 'two'."
As for your earlier post:
No, this simply doesn't work because the entirety of the find result will be quoted as a single filename. You really should just do this all in find...
find . -name '.*' ! -type d -exec rm -i {} +
The + thing is a good example of saving resources; however, the OP doesn't want to prompt the user for authorization for every single file (rm -i). Maybe two invocations of find? By the time the user has finished reading the file list, other files may have been created, and they would be removed without authorization.
Manipulating IFS is entirely wrong for the purposes of reading a list out of a simple string variable (which is wrong on its own -- this is why arrays exist).
I believe that calling a functioning, standards-compliant solution "entirely wrong" really means calling it "not as clean as you would desire". So, back to the drawing board:
files=$(find . -type f)
echo "$files"|xargs -d$'\n' rm
Although my earlier post proved one can work with IFS, you were right after all: it is not necessary. Hope you don't spot any further problems, or my code will end up being a single word!
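For what it's worth, a NUL-delimited variant would close even the remaining newline hole, at the cost of requiring GNU find and xargs (a sketch; note it removes every regular file under the scratch directory):

```shell
# Scratch directory with a deliberately nasty newline name.
dir=$(mktemp -d)
touch "$dir/plain" "$dir/spa ced" "$dir"/$'new\nline'

# NUL separators survive every byte a filename may contain,
# including newlines; -r skips rm entirely when the list is empty.
find "$dir" -type f -print0 | xargs -0 -r rm --

ls -A "$dir"   # prints nothing: the directory is now empty
```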
Offline
The find command also has a "-delete" option that makes it possible to solve the original problem with the following alias:
alias findhidden="find . \( -iname '*~' -o -iname '.*' \) -type f"
One would call findhidden once to see what would be deleted. To actually delete the files, one could then retrieve the findhidden command from the history and append the "-delete" option. If the find process itself does the deleting, you don't have to work out how to pass weird file names to the shell safely. Two drawbacks I can think of: "-delete" is longer to type than "y", and another process could create more hidden files between the two runs of findhidden, which would then also be deleted (unlike with the shell script solution).
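A shell-function variant of the same two-step idea might look like the sketch below (findhidden as a function rather than an alias, so extra find options such as -delete can be passed straight through; the scratch directory just makes the demo self-contained):

```shell
# Hypothetical function instead of an alias: any extra arguments,
# e.g. -delete, are appended after the expression.
findhidden() {
    find . \( -iname '*~' -o -iname '.*' \) -type f "$@"
}

# Demo in a scratch directory.
dir=$(mktemp -d)
cd "$dir" || exit 1
touch visible .hidden trash~

findhidden            # first run: review what would be deleted
findhidden -delete    # second run: same expression, plus -delete
```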
Hope you don't spot any further problems or my code will end up being a single word!
Now we're already down to a single line!
Officer, I had to drive home - I was way too drunk to teleport!
Offline
Nice, I didn't know about that option! find is one of those all-in-one programs, every day a surprise (not much to the joy of the UNIX philosophy). But as I said before,
Maybe two invocations of find? By the time the user has finished reading the file list, other files may have been created and they will be removed without authorization.
Though it's an unlikely scenario, it is a good idea to take it into account.
Offline