What is the maximum allowed depth of sub-folders?
Asked Answered
C

4

11

At first I wanted to ask "What is the maximum allowed sub-folder depth for a Windows OS?"

But then I figured maybe my web hosting provider isn't on Windows but on Linux or something else. So I'm asking what the maximum allowed sub-folder depth is on all the major OSes that a web hosting provider would usually use. (Would it be safe to say Linux, Mac, or Windows?)

Then again, based on your experience, do web hosting sites put a limit on the number of subfolders we can make?

(Why this? Because I want each user to have their very own folder for easy access to their images. Would that be ok? Or is it bad practice? Still new to programming.)

Cockleboat answered 20/3, 2013 at 6:33 Comment(6)
I believe there is no sane hard upper limit. Expect a value on the order of 2^32. Even 2^16 is more than sane. You'll have other problems sooner than that.Scandinavian
@JanDvorak can you pls. explain?Cockleboat
I mean, long before you encounter the "maximum number of sub-folders per folder reached", listing the directory will take insanely long (trust me, listing a folder with hundreds of subfolders takes insanely long over SSH), or you'll run out of disk space.Scandinavian
That's clearer. Thanks!Cockleboat
BTW, the folder terminology is not usual on Unix. Please speak of directoriesArielle
I see. I'll use that the next timeCockleboat
A
9

The limit is not on the depth of nested subdirectories (you could have dozens of them, even more), but on the file system and its quotas.

Also, having very long file paths is inconvenient (and can be slightly inefficient). Programmatically, a file path of several hundred or even thousands of characters is possible, but the human brain is not able to remember such long file paths.

Most file systems (on Linux) have a fixed limit on their number of inodes.

Some file systems behave poorly with directories containing ten thousand entries (e.g. because the lookup is linear, not dichotomic). And you will have a hard time dealing with them (e.g. even ls * gives too long an output). Hence, it could be wise to have /somepath/a/0001 ... /somepath/z/9999 instead of /somepath/a0001 ... /somepath/z9999

If you have many thousands of users, each with their own directory, you might want to group the users by their initials, e.g. have /some/path/A/userAaron/images/foobar and /some/path/B/userBasile/images/barfoo etc. Then /some/path/A/ would have only hundreds of subdirectories, etc.

A convenient rule of thumb might be: avoid having more than a few hundred entries (either subdirectories or files) in each directory.

Some web applications store small data chunks in individual rows of an SQL database and use files (whose names might be generated) for larger data chunks, storing the file path in the database. Having millions of files of only a few dozen bytes each is probably not efficient.

Some sysadmins also use quotas on their filesystems.

Arielle answered 20/3, 2013 at 6:35 Comment(3)
Thanks @basilestarynkevitch! Your answer explained a lot! And thank you for the grouping tip. I'll try to create a system with that procedure in mind.Cockleboat
Good answer, the word 'dichotomic' does not mean what you think though.Epiglottis
In French it does.Arielle
B
5

In Windows, there is a limit of 260 characters on any path. This includes the filename, so a filename cannot be longer than 260 minus the directory path length.

This means you can have quite a lot of subdirectories, but as you go deeper, the maximum filename length gets shorter.

Br answered 20/3, 2013 at 6:55 Comment(4)
Most web hosting is not Windows based... On Linux, the limit is much bigger (at least 1024, and probably much more).Arielle
@Basile: That is true. Yet, the question was "all major OS". While Linux is probably much more common, I know of dozens of hosting companies that provide WindowsBr
@BasileStarynkevitch: On Linux, there are far more available file systems, and it is the filesystem that dictates this. For instance, you could use NTFS on your Linux installation (could, not should). In that case, you'd be back to max 260-character paths... For more common filesystems, such as ext2 or ext3, the limit is actually 32kBr
Funny enough I remember back in the Windows XP days (maybe this is still the case?) the number of Fonts you could install was limited to the combined filename lengths of the font files. Pretty ridiculous way to limit something.Graphite
F
1

Windows has a limit of 260 chars for the length of a path, which, for example, could look like:

C:\dir name 1st nested level\sub dir name 2nd nested level\...deepest nested level\filename.ext

You can use that length at your discretion; for example, you can give a very descriptive directory name of 250 chars total and have just a few chars left for your filename.ext. In fact, you could do a single level of custom directory with a very long name like this:

C:\Your very long and descriptive folder name of 260 char is this one here and you don't have much chars left to be used for the file names that you need to give to the files contained inside this directory with a lot of useful informations you named\file.ext

As you can see above, you have just enough chars left to give an 8-char name to files like file.ext, including the filename itself, the dot, and the extension of the file.

Now, considering that a dir name is at least 1 char long, you can also do something like:

C:\Y\o\u\r\v\e\r\y\l\o\n\g\a\n\d\d\e\s\c\r\i\p\t\i\v\e\f\o\l\d\e\r\n\a\m\e\o\f\2\6\0\c\h\a\r\i\s\t\h\i\s\o\n\e\h\e\r\e\a\n\d\y\o\u\d\o\n\t\h\a\v\e\m\u\c\h\c\h\a\r\s\l\e\f\t\t\o\b\e\u\s\e\d\f\o\r\t\h\e\f\i\l\e\n\a\m\e\s\t\h\a\t\y\o\u\n\e\e\d\t\o\g\a\Filena.ext

If you count the "\" char occurrences in the above path, you'll find there are 124 in total: I couldn't go any deeper, because Windows shows a warning that the path is too long and blocks the creation of further sub-dirs. But you still have space left for a short filename and its extension, though.
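Counting separators like this is easy to do programmatically; a minimal Python sketch (the example path is a short stand-in, not the full 260-character path above):

```python
def path_depth(path):
    """Count directory separators in a Windows-style path,
    i.e. how deeply nested the final component is."""
    return path.count("\\")

p = r"C:\Y\o\u\r\Filena.ext"
print(path_depth(p))  # 5 separators -> 4 single-letter directory levels
```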

I have also noticed, though, that Windows becomes very slow managing any content inside a very long and deeply nested path. Even deleting that long path took about 7 or 8 seconds, when it usually takes just a blink of an eye. The slowness is probably due to the recursive algorithm that has to update the file system, which slows down creating, deleting, and managing content in such a deep path.

Another message the OS throws when a path is too long appears on deletion: the OS warns you that the path is too long for the recycle bin and will be deleted completely without passing through the recycle bin. This could be a bit of a problem if you later need to recover content from the deleted directory, since it is not in the bin.

So IMHO (at least on my system) it is usually better not to use such deeply nested paths: better to stop at 10-15 levels or so. The longest paths stored on my hard drive are 23 levels deep, though. They still work pretty well speed-wise, but it is a hassle to find anything in them just by following the path. And for documents I rarely nest more than 5-6 directory levels.

Forrestforrester answered 16/7, 2023 at 17:11 Comment(0)
X
0

Something else that's very important is performance. On Windows, once a directory gets over 5k files it starts to get slow, at 10k it is crawling, and at 50k it becomes totally unusable!

Xenon answered 20/2, 2016 at 3:41 Comment(0)
