Is it possible to get info about how much space is taken up by the changes in every commit, so I can find commits which added big files or a lot of files? This is all to try to reduce the git repo size (rebasing and maybe filtering commits).
Forgot to reply; my answer is:
git rev-list --all --pretty=format:'%H%n%an%n%s' # get all commits
git diff-tree -r -c -M -C --no-commit-id #{sha} # get new blobs for each commit
git cat-file --batch-check << blob ids # get size of each blob
--diff-filter might be used instead of the unreliable grep, but anyway this answer is much better IMO: https://mcmap.net/q/13859/-git-show-total-file-size-difference-between-two-commits – Choiseul
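As a rough sketch of how those three commands fit together (the --diff-filter=AM choice and the 10 MB threshold below are illustrative assumptions, not from the comment above):
#!/bin/bash
# For every commit, print the blobs it added or modified together with their size in bytes.
# Note: plain diff-tree prints nothing for merge commits unless you add -m.
git rev-list --all | while read -r sha; do
  # Raw diff-tree lines look like ":<oldmode> <newmode> <oldsha> <newsha> <status>\t<path>"
  git diff-tree -r --no-commit-id --diff-filter=AM "$sha" \
    | awk '{print $4}' \
    | git cat-file --batch-check='%(objectname) %(objectsize)' \
    | awk -v c="$sha" '$2 > 10000000 {print c, $1, $2}'   # only blobs over ~10 MB
done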
You could do this:
git ls-tree -r -t -l --full-name HEAD | sort -n -k 4
This will show the largest files at the bottom (the fourth column is the file (blob) size).
If you need to look at different branches you'll want to change HEAD to those branch names. Or, put this in a loop over the branches, tags, or revs you are interested in.
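If it helps, here is one way to run that over every local branch (the loop itself is my own addition, not part of the answer above):
for ref in $(git for-each-ref --format='%(refname:short)' refs/heads); do
  echo "== $ref =="
  # the 10 largest entries reachable from this branch's tip
  git ls-tree -r -t -l --full-name "$ref" | sort -n -k 4 | tail -n 10
done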
All of the solutions provided here focus on file sizes, but the original question asked about commit sizes, which in my opinion (and in my case) was more important to find: what I wanted was to get rid of many small binaries introduced in a single commit, which together accounted for a lot of size even though each file was small individually.
A solution that focuses on commit sizes is the one provided here, which is this Perl script:
#!/usr/bin/perl
# For every commit, print the total size (in bytes) of the blobs it touched,
# the commit id, and the number of files it changed.
foreach my $rev (`git rev-list --all --pretty=oneline`) {
  my $tot = 0;
  (my $sha = $rev) =~ s/\s.*$//;       # first field of the oneline output is the commit id
  foreach my $blob (`git diff-tree -r -c -M -C --no-commit-id $sha`) {
    $blob = (split /\s/, $blob)[3];    # fourth field of the raw diff is the new blob id
    next if $blob eq "0000000000000000000000000000000000000000"; # deleted file
    my $size = `echo $blob | git cat-file --batch-check`;
    $size = (split /\s/, $size)[2];    # --batch-check prints "<sha> <type> <size>"
    $tot += int($size);
  }
  my $revn = substr($rev, 0, 40);
  # if ($tot > 1000000) {
  print "$tot $revn " . `git show --pretty="format:" --name-only $revn | wc -l`;
  # }
}
I call it like this:
./git-commit-sizes.pl | sort -n -k 1
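Each output line is "<total bytes of touched blobs> <commit sha> <number of files changed>", so after sort -n -k 1 the commits that brought in the most data end up at the bottom of the list.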
Personally, I found this answer to be most helpful when trying to find large files in the history of a git repo: Find files in git repo over x megabytes, that don't exist in HEAD
#!/bin/bash
# Print by how many bytes a commit's tree is bigger (or smaller) than its first parent's tree.
COMMITSHA=$1
# Sum the size column of every blob in the commit's tree (the sed keeps only the size field) ...
CURRENTSIZE=$(git ls-tree -lrt $COMMITSHA | grep blob | sed -E "s/.{53} *([0-9]*).*/\1/g" | paste -sd+ - | bc)
# ... do the same for the first parent's tree ...
PREVSIZE=$(git ls-tree -lrt $COMMITSHA^ | grep blob | sed -E "s/.{53} *([0-9]*).*/\1/g" | paste -sd+ - | bc)
# ... and print the difference.
echo "$CURRENTSIZE - $PREVSIZE" | bc
git fat find N
where N is a size in bytes, will return all the files in the whole history that are larger than N bytes.
You can find out more about git-fat here: https://github.com/cyaninc/git-fat
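For example, to list every file anywhere in the history larger than roughly 10 MiB (the threshold is just an example):
git fat find 10485760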
git cat-file -s <object>
where <object> can refer to a commit, blob, tree, or tag.
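Note that -s on a commit gives the size of the commit object itself, not of the files it touches. To rank the blobs (file contents) in the whole history by size, the same plumbing mentioned above can be combined like this (my own combination of those commands, not part of this answer):
# List every object reachable from any ref together with the path it was seen at,
# ask cat-file for its type and size, keep only blobs, and sort by size in bytes.
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | awk '$1 == "blob"' \
  | sort -n -k 3 \
  | tail -n 20    # the 20 largest files ever committed, with their paths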
Run git gc occasionally, possibly as git gc --aggressive. – Riess
git gc (and git gc --prune); --aggressive can even give worse results (but usually shouldn't), and is usually not worth it. – Lindley