Looking for an answer to the same question, I just found this:
http://www.ltr-data.se/opencode.html/
The sizeof tool does just that.
sizeof, freeware by Olof Lagerkvist. http://www.ltr-data.se e-mail:
[email protected] More info, including distribution permissions and
source code is available on the website.
This tool displays the total allocation size on disk for files and
directories.
Command line syntax: sizeof [-xo] [-presentation] file1 [file2 ...]
-x Do not follow junctions or other reparse points. This makes sure that
sizeof stays within the filesystem where it starts searching.
-o Include sizes of offline files.
-c Count and display the number of files in each found directory, instead of
counting and displaying their sizes.
The presentation determines in which format to display the sizes of
files and directories. The argument can be any of the following:
-b Number of 512-byte blocks.
-h Human readable format.
-k Kilobytes.
-m Megabytes.
-g Gigabytes.
-t Terabytes.
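For example, assuming sizeof.exe is on the PATH and that D:\Downloads is just a placeholder folder, the following two calls would report human-readable allocated sizes without following reparse points, and then the number of files per directory:
sizeof -x -h "D:\Downloads"
sizeof -c "D:\Downloads"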
It's better than some of the other solutions proposed on this page because it reports the correct "size on disk" (which, by the way, should rather be called "size on partition" or "size on storage device", since nowadays many storage devices are not "disks") for compressed or sparse files. I specifically wanted to compare the effect of NTFS compression versus "sparseness" on a bunch of large files which are partial downloads from a file-sharing program. So my plan is to first run sizeof to get the actual allocated size of those files while they are still compressed, then uncompress them, then convert them to "sparse", then run sizeof again.
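A rough sketch of that workflow in a batch script might look like this (the file name is only an example, compact /u is the built-in Windows command that removes NTFS compression from a file, and SparseTest is assumed here to sit in the current directory and take the file name as its only argument):
sizeof -h "incomplete1.part"
compact /u "incomplete1.part"
SparseTest "incomplete1.part"
sizeof -h "incomplete1.part"
The first sizeof call records the allocated size while the file is still compressed, and the last one shows it after the file has been uncompressed and converted to sparse.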
I've found only one utility that can convert a non-sparse file to sparse and actually de-allocate its empty sectors: a command-line tool called SparseTest, which was released more than 10 years ago "for demo purposes only" and seems to have vanished long ago, but is still available here:
https://web.archive.org/web/20151103231305if_/http://pages.infinit.net/moonligh/eMule/Releases/SparseTest.zip
SparseTest also displays the "size on disk" before applying the "sparse" attribute (but its output is more verbose, so it wouldn't be easy to parse in a batch script if that information is needed for a further purpose). It calculates a checksum before and after to ensure that the file's integrity is preserved in the process (I'd still recommend making a backup first, and verifying afterwards that all files are strictly identical with another tool, like WinMerge or MD5Checker). Unlike files with the "C" (compressed) attribute, which appear in blue, nothing distinguishes "sparse" files from regular files except the "P" in the Attributes column (it does not even appear in the file's Properties).
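If you prefer a built-in tool over WinMerge or MD5Checker for that verification step, certutil can compute the checksums (the file name is again just an example):
certutil -hashfile "incomplete1.part" MD5
Run it before and after the conversion and compare the two hashes; since converting to sparse should only de-allocate runs of zeroes, the hashes should be identical.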
The native Windows tool fsutil can set the sparse attribute, but it does not actually shrink the file: its "size on disk" remains unchanged even if it contains a lot of empty sectors. Only if the file is later extended with empty sectors will they be added in a "sparse" manner, i.e. recorded as metadata rather than actually allocated.
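For reference, the relevant fsutil sub-commands are (the file name is an example):
fsutil sparse setflag "incomplete1.part"
fsutil sparse queryflag "incomplete1.part"
The first one only sets the sparse attribute, which, as described above, does not by itself de-allocate anything that is already written; the second one reports whether the flag is set.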
Bytes Per Cluster (e.g. 4096), rather than Bytes Per Physical Sector (e.g. 512), is the relevant allocation unit (verified on my tiny SSD). Then set /a ondisk = (fs / clus + 1) * clus
– Throe
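A minimal batch sketch of that rounding, assuming clus holds the Bytes Per Cluster value reported by fsutil fsinfo ntfsinfo for the drive, and using the usual round-up form so that files whose size is an exact multiple of the cluster size are not over-counted by one cluster:
set clus=4096
for %%F in ("incomplete1.part") do set fs=%%~zF
set /a ondisk = ((fs + clus - 1) / clus) * clus
echo %ondisk%
Note that set /a works with 32-bit signed integers, so this overflows for files larger than about 2 GB, and it only estimates plain cluster allocation; it says nothing about compressed or sparse files, which is exactly the case where sizeof is useful.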