How to loop through a directory recursively to delete files with certain extensions
Asked Answered
H

16

230

I need to loop through a directory recursively and remove all files with the extensions .pdf and .doc. I manage to loop through the directory recursively, but not to filter the files by the above-mentioned extensions.

My code so far

#!/bin/sh

SEARCH_FOLDER="/tmp/*"

for f in $SEARCH_FOLDER
do
    if [ -d "$f" ]
    then
        for ff in $f/*
        do      
            echo "Processing $ff"
        done
    else
        echo "Processing file $f"
    fi
done

I need help to complete the code, since I'm not getting anywhere.

Hilliard answered 9/1, 2011 at 11:23 Comment(4)
I know it's bad form to execute code without understanding it, but a lot of people come to this site to learn bash scripting. I got here by googling "bash scripting files recursively", and almost ran one of these answers (just to test the recursion) without realizing it would delete files. I know rm is a part of OP's code, but it's not actually relevant to the question asked. I think it'd be safer if answers were phrased using a harmless command like echo.Ankney
Similar question here: #41800438Walleyed
@Keith had similar experience, completely agree and changed the titleStockjobber
Warning for noobs like me, wasting hours: in most of the answers, you need to change "/tmp" to the directory you want to operate on, e.g. "/home/my folder".Infirmity
Z
183

find is just made for that.

find /tmp -name '*.pdf' -or -name '*.doc' | xargs rm
Zenia answered 9/1, 2011 at 11:33 Comment(7)
Or find's -delete option.Riddance
One should always use find ... -print0 | xargs -0 ..., not raw find | xargs to avoid problems with filenames containing newlines.Androus
Using xargs with no options is almost always bad advice and this is no exception. Use find … -exec instead.Contemporaneous
@Gilles'SO-stopbeingevil': Why is that bad advice?Gorham
@CarlWinbäck Because the syntax of the input to xargs is not the syntax that find (or any other common command) prints. xargs expects a particular kind of quote-delimited input.Contemporaneous
Can I sed on the found files? I think not because I need a file from find..Intine
Using -delete -print with find will cause it to delete the files and print filenames as it does so. That's the behavior I was looking for so figured I'd post.Largeminded
E
292

As a follow-up to mouviciel's answer, you could also do this as a for loop instead of using xargs. I often find xargs cumbersome, especially if I need to do something more complicated in each iteration.

for f in $(find /tmp -name '*.pdf' -or -name '*.doc'); do rm $f; done

As a number of people have commented, this will fail if there are spaces in filenames. You can work around this by temporarily setting the IFS (internal field separator) to the newline character. It also fails if there are wildcard characters [?* in the file names. You can work around that by temporarily disabling wildcard expansion (globbing).

IFS=$'\n'; set -f
for f in $(find /tmp -name '*.pdf' -or -name '*.doc'); do rm "$f"; done
unset IFS; set +f

If you have newlines in your filenames, then that won't work either. You're better off with an xargs-based solution:

find /tmp \( -name '*.pdf' -or -name '*.doc' \) -print0 | xargs -0 rm

(The escaped parentheses are required here so that -print0 applies to both -name clauses.)

GNU and *BSD find also has a -delete action, which would look like this:

find /tmp \( -name '*.pdf' -or -name '*.doc' \) -delete
Edging answered 9/3, 2011 at 15:21 Comment(9)
This does not work as expected if there is a space in the file name (the for loop splits the results of find on whitespace).Perfecto
How do you avoid splitting on whitespace? I'm trying a similar thing and I have a lot of directories with whitespaces that screw up this loop.Cipango
because it's a very helpful answer?Clere
@Cipango Fix the whitespace splitting by using quotes like this: "$(find...)". I've edited James' answer to show.Hebert
@Hebert your edit didn't fix anything at all: it actually made the command only work if there's a unique found file. At least this version works if there are no spaces, tabs, etc. in filenames. I rolled back to the old version. Nothing sensible can really fix a for f in $(find ...). Just don't use this method.Fu
@Fu I tested using echo and not rm, which hid the '\n' characters so it looked like it worked. Thanks for checking and fixing.Hebert
@DrewDormann my testing also shows that "$(find...)" makes things worse. I've undone your edit, along with making a long-overdue update of my own.Edging
find has a -delete flag. Why aren't we using it?Depart
See this answer on why this does not work and what to use instead: askubuntu.com/a/343753. Details, why this will fail are given here (also linked from the answer on askubuntu): mywiki.wooledge.org/BashPitfalls#for_f_in_.24.28ls_.2A.mp3.29Pesthouse
M
108

Without find:

for f in /tmp/* /tmp/**/* ; do
  ...
done;

/tmp/* matches files directly in the directory and /tmp/**/* matches files in subfolders. You may need to enable the globstar option first (shopt -s globstar). So for the question the code should look like this:

shopt -s globstar
for f in /tmp/*.pdf /tmp/*.doc /tmp/**/*.pdf /tmp/**/*.doc ; do
  rm "$f"
done

Note that this requires bash ≥4.0 (or zsh without shopt -s globstar, or ksh with set -o globstar instead of shopt -s globstar). Furthermore, in bash <4.3, this traverses symbolic links to directories as well as directories, which is usually not desirable.

Mephistopheles answered 26/2, 2013 at 11:54 Comment(8)
This method worked for me, even with filenames containing spaces on OSXEdgeways
Worth noting that globstar is only available in Bash 4.0 or newer.. which is not the default version on many machines.Opposition
I dont think you need to specify the first argument. (At least as of today,) for f in /tmp/** will be enough. Includes the files from /tmp dir.Jubilee
Wouldn't it be better like this ? for f in /tmp/*.{pdf,doc} tmp/**/*.{,pdf,doc} ; doWartow
Also, shopt -s dotglob if hidden files should be found; and if you have an empty directory with one of the extensions, your command would delete that, too - maybe test if f is a file, [[ -f $f ]] && rm "$f"?Ironbound
** is a nice extension but not portable to POSIX sh. (This question is tagged bash but it would be nice to point out that unlike several of the solutions here, this really is Bash-only. Or, well, it works in several other extended shells, too.)Acetophenetidin
misses hidden foldersNonsense
It seems to miss double-nested folders also.Timberland
N
38

If you want to do something recursively, I suggest you use recursion (yes, you can do it using stacks and so on, but hey).

recursiverm() {
  for d in *; do
    if [ -d "$d" ]; then
      (cd -- "$d" && recursiverm)  # subshell, so the cwd is restored afterwards
    fi
  done
  rm -f -- *.pdf *.doc
}

(cd /tmp; recursiverm)

That said, find is probably a better choice as has already been suggested.

Nasa answered 9/1, 2011 at 11:35 Comment(0)
S
22

Here is an example using shell (bash):

#!/bin/bash

# loop & print a folder recursively,
print_folder_recurse() {
    for i in "$1"/*;do
        if [ -d "$i" ];then
            echo "dir: $i"
            print_folder_recurse "$i"
        elif [ -f "$i" ]; then
            echo "file: $i"
        fi
    done
}


# try get path from param
path=""
if [ -d "$1" ]; then
    path=$1;
else
    path="/tmp"
fi

echo "base path: $path"
print_folder_recurse "$path"
Spindrift answered 8/3, 2014 at 9:28 Comment(0)
E
19

This doesn't answer your question directly, but you can solve your problem with a one-liner:

find /tmp \( -name "*.pdf" -o -name "*.doc" \) -type f -exec rm {} +

Some versions of find (GNU, BSD) have a -delete action which you can use instead of calling rm:

find /tmp \( -name "*.pdf" -o -name "*.doc" \) -type f -delete
Enwomb answered 9/1, 2011 at 11:32 Comment(0)
C
11

For bash (since version 4.0):

shopt -s globstar nullglob dotglob
echo **/*".ext"

That's all.
The trailing ".ext" selects files (or dirs) with that extension.

Option globstar activates ** (search recursively).
Option nullglob makes * expand to nothing when it matches no file/dir.
Option dotglob includes files that start with a dot (hidden files).

Beware that before bash 4.3, **/ also traverses symbolic links to directories which is not desirable.
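
Applied to the question, a minimal sketch (assuming bash ≥ 4.3; keep dotglob only if hidden files should be removed too):

shopt -s globstar nullglob dotglob
for f in /tmp/**/*.pdf /tmp/**/*.doc; do
    rm -- "$f"    # -- guards against names starting with a dash
done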

Charged answered 19/2, 2016 at 9:48 Comment(0)
C
9

This method handles spaces well.

files="$(find -L "$dir" -type f)"
echo "Count: $(echo -n "$files" | wc -l)"
echo "$files" | while read file; do
  echo "$file"
done

Edit, fixes off-by-one

function count() {
    files="$(find -L "$1" -type f)";
    if [[ "$files" == "" ]]; then
        echo "No files";
        return 0;
    fi
    file_count=$(echo "$files" | wc -l)
    echo "Count: $file_count"
    echo "$files" | while read file; do
        echo "$file"
    done
}
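
As the comments below note, read without options mangles backslashes and trailing spaces, and newlines in names break the line-based count entirely. A more robust null-delimited sketch (count0 is just an illustrative name; assumes bash and a find with -print0, e.g. GNU or BSD):

count0() {
    local n=0 file
    # -d '' makes read consume the NUL-delimited records produced by -print0
    while IFS= read -r -d '' file; do
        printf '%s\n' "$file"
        n=$((n + 1))
    done < <(find -L "$1" -type f -print0)
    echo "Count: $n"
}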
Cassirer answered 9/11, 2012 at 4:9 Comment(2)
I think "-n" flag after echo not needed. Just test it yourself: with "-n" your script gives wrong number of files. For exactly one file in directory it outputs "Count: 0"Blaineblainey
This doesn't work with all file names: it fails with spaces at the end of the name, with file names containing newlines and with some file names containing backslashes. These defects could be fixed but the whole approach is needlessly complex so it isn't worth bothering.Contemporaneous
P
2

This is the simplest way I know to do this: rm **/@(*.doc|*.pdf)

** makes this work recursively

@(*.doc|*.pdf) looks for a file ending in pdf OR doc

Easy to safely test by replacing rm with ls
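
Note that in plain bash the two glob features have to be enabled first (ksh has such patterns by default); a sketch:

shopt -s globstar extglob   # ** recursion and @(...) alternation
ls **/@(*.doc|*.pdf)        # preview what would be removed
rm **/@(*.doc|*.pdf)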

Presentable answered 5/2, 2020 at 1:40 Comment(0)
B
1

The following function recursively iterates through all the directories under /home/ubuntu (the whole directory structure under ubuntu) and applies the necessary checks in the else block.

function check {
    for file in "$1"/*
    do
        if [ -d "$file" ]
        then
            check "$file"
        else
            ## check the file: delete it if the first four bytes are the PDF magic number
            if [ "$(head -c 4 "$file")" = "%PDF" ]; then
                rm "$file"
            fi
        fi
    done
}
domain=/home/ubuntu
check "$domain"
Bertold answered 8/10, 2016 at 18:9 Comment(0)
D
1

There is no reason to pipe the output of find into another utility. find has a -delete flag built into it. (The escaped parentheses are needed so that -delete applies to both name patterns; without them, -delete would bind only to the second -name.)

find /tmp \( -name '*.pdf' -or -name '*.doc' \) -delete
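
To also see what is being removed as it happens (mentioned in a comment above), GNU find lets you chain -print after -delete; a sketch:

find /tmp \( -name '*.pdf' -or -name '*.doc' \) -delete -print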
Depart answered 20/2, 2019 at 22:37 Comment(0)
O
0

The other answers provided will not include files or directories whose names start with a dot. The following worked for me:

#!/bin/sh
getAll()
{
  # globs are not expanded in assignments, so fl1..fl3 keep the literal patterns
  local fl1="$1"/*;
  local fl2="$1"/.[!.]*;
  local fl3="$1"/..?*;
  for inpath in "$1"/* "$1"/.[!.]* "$1"/..?*; do
    # a pattern that matched nothing stays literal, so skip anything equal to one
    if [ "$inpath" != "$fl1" -a "$inpath" != "$fl2" -a "$inpath" != "$fl3" ]; then
      stat --printf="%F\0%n\0\n" -- "$inpath";
      if [ -d "$inpath" ]; then
        getAll "$inpath"
      #elif [ -f $inpath ]; then
      fi;
    fi;
  done;
}
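
The function only prints (via stat); to run it over the question's directory:

getAll "/tmp"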
Octogenarian answered 25/3, 2019 at 7:13 Comment(0)
P
0

I think the most straightforward solution is to use recursion. In the following example, I print all the file names in a directory and its subdirectories.

You can modify it according to your needs.

#!/bin/bash    
printAll() {
    for i in "$1"/*;do # for all in the root 
        if [ -f "$i" ]; then # if a file exists
            echo "$i" # print the file name
        elif [ -d "$i" ];then # if a directory exists
            printAll "$i" # call printAll inside it (recursion)
        fi
    done 
}
printAll "$1" # e.g.: ./printAll.sh .

OUTPUT:

> ./printAll.sh .
./demoDir/4
./demoDir/mo st/1
./demoDir/m2/1557/5
./demoDir/Me/nna/7
./TEST

It works fine with spaces as well!

Note: To print the file name without its path, you can use echo "$(basename "$i")".

Or use echo "${i##*/}", which runs much faster because it avoids calling the external basename.

Presumable answered 21/12, 2021 at 17:32 Comment(0)
B
0

Lots of answers here, but I was surprised that I couldn't find this very simple one:

rm -v **/*.pdf **/*.doc

Or add the -i option and rm will prompt you for each file.

Tested in fish; in bash you must enable recursive globbing first (shopt -s globstar), and plain POSIX sh has no ** at all.

Update: Also tested in zsh 5.9.

Beezer answered 23/7, 2023 at 17:44 Comment(0)
H
-2

Just do

find . -name '*.pdf' | xargs rm
Halvorson answered 9/1, 2011 at 11:32 Comment(1)
No, don't do this. This breaks if you have filenames with spaces or other funny symbols.Fu
T
-2

If you can change the shell used to run the command, you can use ZSH to do the job.

#!/usr/bin/zsh

for file in /tmp/**/*
do
    echo $file
done

This will recursively loop through all files/folders.
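
To match only the extensions from the question, zsh's glob alternation and qualifiers can be used directly; a sketch, where the (.N) qualifier restricts matches to plain files and avoids an error when nothing matches:

for file in /tmp/**/*.(pdf|doc)(.N)
do
    rm -- $file
done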

Taradiddle answered 27/1, 2020 at 13:16 Comment(0)
