How can I find all of the distinct file extensions in a folder hierarchy?
On a Linux machine I would like to traverse a folder hierarchy and get a list of all of the distinct file extensions within it.

What would be the best way to achieve this from a shell?

Hess answered 3/12, 2009 at 19:18 Comment(0)
485

Try this (not sure if it's the best way, but it works):

find . -type f | perl -ne 'print $1 if m/\.([^.\/]+)$/' | sort -u

It works as follows:

  • Finds all files under the current folder
  • Prints the extension of each file, if any
  • Makes a unique sorted list
Dilution answered 3/12, 2009 at 19:21 Comment(11)
just for reference: if you want to exclude some directories from searching (e.g. .svn), use find . -type f -path '*/.svn*' -prune -o -print | perl -ne 'print $1 if m/\.([^.\/]+)$/' | sort -u sourceMoan
Spaces will not make any difference: each file name is on a separate line, so the list delimiter is "\n", not a space.Dilution
On Windows, this works better and is much faster than find: dir /s /b | perl -ne 'print $1 if m/\.([^.\\]+)$/' | sort -uTegan
Note: if you want to make it an alias in .bashrc, you have to escape $1 as \$1. In fact it seems escaping $1 doesn't do harm for console usage either.Agency
git variation of the answer: use git ls-tree -r HEAD --name-only instead of findAgency
This seems to show the string after the first dot, e.g. theme from page_manager.theme.inc.Ci
A variation, this shows the list with counts per extension: find . -type f | perl -ne 'print $1 if m/\.([^.\/]+)$/' | sort | uniq -c | sort -nYpres
As a drawback it also finds files like configs-0.1.6 which don't have an extension but do have dots in their names.Amimia
how to ignore hidden files coming into the list?Douala
This is black magic, people. It works incredibly fast. Incredibly!Reginaldreginauld
The perl bit can be shortened perl -ne 's/.+\.// && print'Vandavandal
92

No need for the pipe to sort, awk can do it all:

find . -type f | awk -F. '!a[$NF]++{print $NF}'
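The `!a[$NF]++` idiom prints a line's last `.`-separated field only the first time it is seen; a minimal sketch on made-up input:

```shell
# Each extension is printed once, in order of first appearance:
printf 'a.txt\nb.txt\nc.png\n' | awk -F. '!a[$NF]++{print $NF}'
# txt
# png
```

The post-increment means `a[$NF]` is 0 (so `!a[$NF]` is true) only on the first occurrence of each extension.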
Sturgeon answered 24/8, 2011 at 5:21 Comment(5)
I am not getting this to work as an alias, I am getting awk: syntax error at source line 1 context is >>> !a[] <<< awk: bailing out at source line 1. What am I doing wrong? My alias is defined like this: alias file_ext="find . -type f -name '*.*' | awk -F. '!a[$NF]++{print $NF}'"Hames
@Hames the problem is that you are trying to surround the entire one-liner with quotes for the alias command but the command itself already uses quotes in the find command. To fix this I would use bash's literal string syntax as so: alias file_ext=$'find . -type f -name "*.*" | awk -F. \'!a[$NF]++{print $NF}\''Sturgeon
this doesn't work if a subdir has a . in its name and the file doesn't have a file extension. Example: when we run from maindir it will fail for maindir/test.dir/myfileEudemonics
@NelsonTeixeira Add -printf "%f\n" to the end of the 'find' command and re-run your test.Sturgeon
I found what I was looking for. Your command helped me list the file types, but I wanted a count next to each type. Googled and found this: find . -type f | sed -n 's/..*\.//p' | sort | uniq -c Thanks for the helpAmr
75

My awk-less, sed-less, Perl-less, Python-less POSIX-compliant alternative:

find . -name '*.?*' -type f | rev | cut -d. -f1 | rev  | tr '[:upper:]' '[:lower:]' | sort | uniq --count | sort -rn

The trick is that it reverses the line and cuts the extension at the beginning.
It also converts the extensions to lower case.

Example output:

   3689 jpg
   1036 png
    610 mp4
     90 webm
     90 mkv
     57 mov
     12 avi
     10 txt
      3 zip
      2 ogv
      1 xcf
      1 trashinfo
      1 sh
      1 m4v
      1 jpeg
      1 ini
      1 gqv
      1 gcs
      1 dv
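The reverse-cut-reverse trick on a single name, for illustration (the file name is made up):

```shell
# rev flips the string, cut takes the first dot-delimited field (the old
# last one), rev flips it back, tr lowercases it:
echo 'Archive.tar.GZ' | rev | cut -d. -f1 | rev | tr '[:upper:]' '[:lower:]'
# gz
```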
Gain answered 23/3, 2019 at 18:37 Comment(6)
on mac, uniq doesn't have the full flag --count, but -c works just fineGuck
Very cool, would be nice if this didn't include files that don't have extensions though. Running this at the base of a repo produces a crap load of git files that are extensionless.Needless
@ChrisHayes, easy help: find . -type f -name '*.?* .... ', not fully tested but should work.Rosalynrosalynd
busybox's uniq also lacks --count, but it does have -cHershel
Superb! I've added it to an alias. Thank you!Persistence
find . -name '*.*' -type f | rev | cut -d. -f1 | rev | tr '[:upper:]' '[:lower:]' | sort | uniq -c | sort -rn (to limit results to files that have an extension)Unknowable
61

Recursive version:

find . -type f | sed -e 's/.*\.//' | sed -e 's/.*\///' | sort -u

If you want totals (how many times each extension was seen):

find . -type f | sed -e 's/.*\.//' | sed -e 's/.*\///' | sort | uniq -c | sort -rn

Non-recursive (single folder):

for f in *.*; do printf "%s\n" "${f##*.}"; done | sort -u

I've based this upon this forum post, credit should go there.
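For reference, `${f##*.}` strips the longest prefix ending in a dot, leaving only the last extension (a small sketch with a made-up name):

```shell
f='photo.tar.gz'
echo "${f##*.}"   # gz  (longest prefix match: everything up to the last dot is removed)
echo "${f%.*}"    # photo.tar  (the complementary expansion strips the shortest suffix)
```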

Celenacelene answered 3/12, 2009 at 19:38 Comment(1)
Great! also works for my git scenario, was trying to figure out which type of files I have touched in the last commit: git show --name-only --pretty="" | sed -e 's/.*\.//' | sed -e 's/.*\///' | sort -uHammer
42

Powershell:

dir -recurse | select-object extension -unique

Thanks to http://kevin-berridge.blogspot.com/2007/11/windows-powershell.html

Kilt answered 23/4, 2010 at 14:18 Comment(5)
The OP said "On a Linux machine"Aron
actually there is PowerShell for Linux out now: github.com/Microsoft/PowerShell-DSC-for-LinuxZephaniah
As written, this will also pick up directories that have a . in them (e.g. jquery-1.3.4 will show up as .4 in the output). Change to dir -file -recurse | select-object extension -unique to get only file extensions.Aguilera
@Forbesmyester: People with Windows (like me) will find this question too, so this is useful.Kathyrnkati
Thanks for the PowerShell answer. You can't assume how users search; lots of people upvoted for a reasonFisherman
19

Adding my own variation to the mix. I think it's the simplest of the lot and can be useful when efficiency is not a big concern.

find . -type f | grep -oE '\.(\w+)$' | sort -u
Erythrocyte answered 15/7, 2013 at 5:59 Comment(5)
+1 for portability, although the regex is quite limited, as it only matches extensions consisting of a single letter. Using the regex from the accepted answer seems better: $ find . -type f | grep -o -E '\.[^.\/]+$' | sort -uCoom
Agreed. I slacked off a bit there. Editing my answer to fix the mistake you spotted.Erythrocyte
cool. I changed the quotes to double quotes, updated the grep binaries and dependencies (because the one provided with Git is outdated), and now this works under Windows. I feel like a Linux user.Kaiserism
I like this approach. Just would change the regex a bit $ find . -type f | grep -Eo '\.(\w+)$' | sort -u. The original one shows files without extension in my case that was not what I needed.Aiden
Nr1, thanks alot for this minimal and elegant exampleEradicate
13

Find everything with a dot and show only the suffix:

find . -type f -name "*.*" | awk -F. '{print $NF}' | sort -u

If you know all suffixes have 3 characters, then:

find . -type f -name "*.???" | awk -F. '{print $NF}' | sort -u

Or, with sed, show all suffixes with one to four characters. Change {1,4} to the range of characters you are expecting in the suffix.

find . -type f | sed -n 's/.*\.\(.\{1,4\}\)$/\1/p'| sort -u
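For example, the `{1,4}` bound means suffixes longer than four characters are silently dropped (illustrative names):

```shell
echo 'dir/report.html' | sed -n 's/.*\.\(.\{1,4\}\)$/\1/p'   # prints html
echo 'dir/a.tarball'   | sed -n 's/.*\.\(.\{1,4\}\)$/\1/p'   # prints nothing: 7 chars > 4
```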
Cistaceous answered 3/12, 2009 at 21:47 Comment(5)
No need for the pipe to 'sort', awk can do it all: find . -type f -name "*.*" | awk -F. '!a[$NF]++{print $NF}'Sturgeon
@Sturgeon Yours should be a separate answer. I found that command to work the best for large folders, as it prints the extensions as it finds them. But note that it should be: -name "*.*"Perspicuous
@Perspicuous done, posted answer here. Not quite sure about what you mean by the -name "." thing because that's what it already isSturgeon
I meant it should be -name "*.*", but StackOverflow removes the * characters, which probably happened in your comment as well.Perspicuous
It seems like this should be the accepted answer, awk is preferable to perl as a command-line tool and it embraces the unix philosophy of piping small interoperable programs into cohesive and readable procedures.Kilter
9

I tried a bunch of the answers here, even the "best" answer. They all came up short of what I was specifically after. So, after the past 12 hours of sitting in regex code for multiple programs and reading and testing these answers, this is what I came up with, and it works EXACTLY like I want.

 find . -type f -name "*.*" | grep -o -E "\.[^\.]+$" | grep -o -E "[[:alpha:]]{2,16}" | awk '{print tolower($0)}' | sort -u
  • Finds all files which may have an extension.
  • Greps only the extension
  • Greps for file extensions between 2 and 16 characters (just adjust the numbers if they don't fit your need). This helps avoid cache files and system files (system file bit is to search jail).
  • Awk to print the extensions in lower case.
  • Sort and bring in only unique values. Originally I had attempted to try the awk answer but it would double print items that varied in case sensitivity.

If you need a count of the file extensions then use the below code

find . -type f -name "*.*" | grep -o -E "\.[^\.]+$" | grep -o -E "[[:alpha:]]{2,16}" | awk '{print tolower($0)}' | sort | uniq -c | sort -rn

While these methods will take some time to complete and probably aren't the best ways to go about the problem, they work.

Update: Per @alpha_989 long file extensions will cause an issue. That's due to the original regex "[[:alpha:]]{3,6}". I have updated the answer to include the regex "[[:alpha:]]{2,16}". However anyone using this code should be aware that those numbers are the min and max of how long the extension is allowed for the final output. Anything outside that range will be split into multiple lines in the output.

Note: Original post did read "- Greps for file extensions between 3 and 6 characters (just adjust the numbers if they don't fit your need). This helps avoid cache files and system files (system file bit is to search jail)."

Idea: Could be used to find file extensions over a specific length via:

 find . -type f -name "*.*" | grep -o -E "\.[^\.]+$" | grep -o -E "[[:alpha:]]{4,}" | awk '{print tolower($0)}' | sort -u

Where 4 is the minimum extension length to include; any extension of that length or longer is reported.

Unblown answered 26/5, 2014 at 18:45 Comment(5)
Is the count version recursive?Cutanddried
@Shinrai, In general it works well, but if you have some random file extension which is really long, such as .download, it will break ".download" into 2 parts and report 2 files, one "downlo" and another "ad"Pyridine
@alpha_989, That's due to the regex "[[:alpha:]]{3,6}", which will also cause an issue with extensions shorter than 3 characters. Adjust it to what you need. Personally I'd say 2,16 should work in most cases.Unblown
Thanks for replying.. Yeah.. thats what I realized later on. It worked well after I modified it similar to what you mentioned.Pyridine
find . -type f -name "*.*" | grep -o -E "\.[^\.]+$" | grep -o -E "[[:alpha:]]{2,16}" | awk '{print tolower($0)}' | sort | uniq -c | sort -rn - this works well - but is there a way to get the total file size of each php extension ?Castorena
5

In Python using generators for very large directories, including blank extensions, and getting the number of times each extension shows up:

import json
import collections
import itertools
import os

root = '/home/andres'
files = itertools.chain.from_iterable((
    files for _,_,files in os.walk(root)
    ))
counter = collections.Counter(
    (os.path.splitext(file_)[1] for file_ in files)
)
print(json.dumps(counter, indent=2))
Resolved answered 24/8, 2012 at 19:17 Comment(0)
4

Since there's already another solution which uses Perl:

If you have Python installed you could also do (from the shell):

python3 -c "import os;e=set();[[e.add(os.path.splitext(f)[-1]) for f in fn]for _,_,fn in os.walk('/home')];print('\n'.join(e))"
Celenacelene answered 4/12, 2009 at 8:27 Comment(0)
4

Another way:

find . -type f -name "*.*" -printf "%f\n" | while IFS= read -r; do echo "${REPLY##*.}"; done | sort -u

You can drop the -name "*.*" but this ensures we are dealing only with files that do have an extension of some sort.

The -printf is find's print, not bash. -printf "%f\n" prints only the filename, stripping the path (and adds a newline).

Then we use string substitution to remove up to the last dot using ${REPLY##*.}.

Note that $REPLY is simply read's built-in variable. We could just as well use our own, in the form while IFS= read -r file, and then $file would be the variable.

Accidental answered 31/5, 2021 at 15:18 Comment(0)
2

None of the replies so far deal with filenames with newlines properly (except for ChristopheD's, which just came in as I was typing this). The following is not a shell one-liner, but works, and is reasonably fast.

import os, sys

def names(roots):
    for root in roots:
        for a, b, basenames in os.walk(root):
            for basename in basenames:
                yield basename

sufs = set(os.path.splitext(x)[1] for x in names(sys.argv[1:]))
for suf in sufs:
    if suf:
        print(suf)
Haymaker answered 4/12, 2009 at 8:35 Comment(0)
2

I think the simplest and most straightforward way is:

for f in *.*; do echo "${f##*.}"; done | sort -u

It's modified from ChristopheD's third variant.

Somehow answered 13/2, 2018 at 8:21 Comment(0)
2

I don't think this one was mentioned yet:

find . -type f -exec sh -c 'echo "${0##*.}"' {} \; | sort | uniq -c
Baier answered 21/5, 2018 at 23:1 Comment(1)
This would probably be quite slow due to spawning a new process for each file.Rosalynrosalynd
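A batched variant (a sketch, not from the original answer) passes many paths to a single `sh` invocation via `-exec … +`, which avoids spawning one process per file:

```shell
# One shell handles a whole batch of file names; the trailing "sh" fills
# $0 so that "$@" (iterated by the bare "for f") contains only the paths.
find . -type f -exec sh -c 'for f; do printf "%s\n" "${f##*.}"; done' sh {} + | sort | uniq -c
```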
2

The accepted answer uses a regex, and you cannot easily create an alias command containing one; you have to put it into a shell script instead. I'm using Amazon Linux 2 and did the following:

  1. Put the accepted answer's code into a file:

    sudo vim find.sh

add this code:

find ./ -type f | perl -ne 'print $1 if m/\.([^.\/]+)$/' | sort -u

save the file by typing: :wq!

  2. sudo vim ~/.bash_profile

  3. alias getext=". /path/to/your/find.sh"

  4. :wq!

  5. . ~/.bash_profile

Autotomy answered 4/4, 2020 at 13:2 Comment(0)
0

You could also do this:

find . -type f -name "*.php" -exec PATHTOAPP {} +
Waziristan answered 25/3, 2013 at 16:12 Comment(0)
0

I've found it simple and fast...

   # find . -type f -exec basename {} \; | awk -F"." '{print $NF}' > /tmp/outfile.txt
   # cat /tmp/outfile.txt | sort | uniq -c | sort -n > /tmp/outfile_sorted.txt
Aardwolf answered 20/2, 2020 at 14:28 Comment(0)
0

If you are looking for an answer that respects .gitignore, use git ls-tree instead of find:

git ls-tree -r HEAD --name-only | perl -ne 'print $1 if m/\.([^.\/]+)$/' | sort -u 
Grabble answered 16/7, 2023 at 21:20 Comment(0)
0

Another version of Ondra Žižka's one:

find . -name '*.?*' -type f | rev | cut -d. -f1 | rev | sort | uniq

On case-sensitive file systems, different cases should IMHO not be treated as the same extension. Also, I don't think counting files is necessary to answer the OP's question.

Ebbarta answered 27/3 at 9:54 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.