Repack of Git repository fails

I have a git repository residing on a server with limited memory. When I try to clone an existing repository from the server I get the following error

hemi@ubuntu:$ git clone ssh://[email protected]/home/hemi/repos/articles
Initialized empty Git repository in /home/hemi/Skrivebord/articles/.git/
[email protected]'s password: 
remote: Counting objects: 666, done.
remote: warning: suboptimal pack - out of memory
remote: fatal: Out of memory, malloc failed
error: git upload-pack: git-pack-objects died with error.
fatal: git upload-pack: aborting due to possible repository corruption on the remote side.
remote: aborting due to possible repository corruption on the remote side.
fatal: early EOF
fatal: index-pack failed
hemi@ubuntu:$ 

To handle this error I have tried to repack the original repository (according to this forum post). But instead of repacking the repository, the command only prints the usage text for "git pack-objects".

hemi@servername:~/repos/articles$ git repack -a -d --window-memory 10m --max-pack-size 100m
usage: git pack-objects [{ -q | --progress | --all-progress }]
        [--all-progress-implied]
        [--max-pack-size=N] [--local] [--incremental]
        [--window=N] [--window-memory=N] [--depth=N]
        [--no-reuse-delta] [--no-reuse-object] [--delta-base-offset]
        [--threads=N] [--non-empty] [--revs [--unpacked | --all]*]
        [--reflog] [--stdout | base-name] [--include-tag]
        [--keep-unreachable | --unpack-unreachable]
        [<ref-list | <object-list]

Git 1.6.5.7 is installed on the server.

Purplish answered 28/1, 2011 at 9:30 Comment(1)
Your link to a forum post is broken. – Antipathetic

Your solution has got you a working repository locally and remotely, but the problem will come back the next time the remote repository decides to repack itself. Fortunately, you can set config options that reduce the amount of memory needed for repacking in both repositories -- these essentially make the command-line parameters you added the default options for repacking. So, you should log in to the remote, change into the repository and do:

git config pack.windowMemory 10m
git config pack.packSizeLimit 20m

You may want to do the same on your local repository. (Incidentally I guess that either your repository is very large or these are machines with little memory - these values seem very low to me.)
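
As an illustration, assuming the same host and path as in the question, something like this would set both values on the remote and then show what ended up configured (just a sketch, not part of the original answer):

ssh hemi@servername 'cd ~/repos/articles &&
    git config pack.windowMemory 10m &&
    git config pack.packSizeLimit 20m &&
    git config --get-regexp "^pack\."'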

For what it's worth, when getting malloc failures on repacking very large repositories in the past, I've also changed the values of core.packedgitwindowsize, core.packedgitlimit, core.deltacachesize, pack.deltacachesize, pack.window and pack.threads but it sounds as if you don't need any further options :)
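
For reference, they are all set with git config in the same way; the values below are only placeholders to show the syntax, not recommendations:

git config core.packedGitWindowSize 16m
git config core.packedGitLimit 128m
git config pack.deltaCacheSize 128m
git config pack.window 5
git config pack.threads 1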

Hankow answered 28/1, 2011 at 15:25 Comment(3)
Thanks for the config options, I was not aware of them before. The repository contains a large set of pdf files. The total size of the repository (including the .git directory and the tracked files) is approx. 1.1 GB. So I guess it is a large repository ;-) – Purplish
@MarkLongair: you saved my day Sir! I was about to run to the store and buy a RAM upgrade :D – Overshadow
Especially at Dreamhost I had to use pack.windowMemory 10m, pack.packSizeLimit 20m, pack.deltacachesize 20m, pack.threads 2, core.deltacachesize 20m and core.packedgitlimit 30m. Thanks @MarkLongair – Halutz

Without direct access to the repository, and hence being unable to repack it, performing a shallow clone and then gradually fetching with increasing depth worked for me:

git clone YOUR_REPO --depth=1
git fetch --depth=10
...
git fetch --depth=100
git fetch --unshallow    # downloads the full history, so you can push from this repo
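
If many intermediate depths are needed, a small loop does the same job; the depth values here are arbitrary:

for depth in 10 100 1000; do git fetch --depth=$depth; done
git fetch --unshallow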

Hope it can still help someone.

Leggat answered 11/8, 2013 at 9:16 Comment(3)
As a last resort for a lot of work, this actually worked. Thanks. – Soerabaja
git clone REPO --depth=1 still failed for me with the error remote: aborting due to possible repository corruption on the remote side. – Sealskin
Bit of a stone age method of doing this, but oh well, it helped; figured out a depth of 300 is the max I can do for some reason... – Asyut

I solved the problem using the following steps (a rough sketch of the full command sequence follows the list).

  1. Copied the repository from the server to my local machine (a raw copy over ssh)
  2. Repacked the local repository
    git repack -a -d --window-memory 10m --max-pack-size 20m
  3. Created an empty repository on the server
    git init --bare
  4. Pushed the local repository to the server
  5. Checked that it is possible to clone the server repository
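
For illustration, the full sequence might have looked roughly like this; the host, paths and repository names are placeholders rather than the exact ones used:

# 1. raw copy of the repository from the server to the local machine
scp -r hemi@servername:repos/articles ./articles
cd articles

# 2. repack the local copy with low memory limits
git repack -a -d --window-memory 10m --max-pack-size 20m

# 3. create an empty bare repository on the server
ssh hemi@servername 'git init --bare repos/articles-new.git'

# 4. push everything from the local repository to the new one
git push --mirror hemi@servername:repos/articles-new.git

# 5. check that the new repository can be cloned
git clone hemi@servername:repos/articles-new.git /tmp/articles-check
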
Purplish answered 28/1, 2011 at 12:8 Comment(1)
I'm glad to hear you got that sorted, but I should warn you that you'll have the same problem again when the server decides to repack its repository. It would be best to set the config options in the remote repository (e.g. as suggested in my answer) so that when it does automatically repack, you still won't run out of memory. – Hankow

This does not answer the question, but somebody might run into it: repacking might also fail on the server when pack-objects is terminated by some kind of memory killer (such as the one used on Dreamhost):

$ git clone project-url project-folder
Cloning into project-folder...
remote: Counting objects: 6606, done.
remote: Compressing objects: 100% (2903/2903), done.
error: pack-objects died of signal 9284.51 MiB | 2.15 MiB/s   
error: git upload-pack: git-pack-objects died with error.
fatal: git upload-pack: aborting due to possible repository corruption on the remote side.
remote: aborting due to possible repository corruption on the remote side.
fatal: early EOF
fatal: index-pack failed

On Dreamhost this appears to be caused by mmap. The repack code uses mmap to map some files’ contents into memory, and because the memory killer is not smart enough to tell file-backed mappings apart from ordinary allocations, it counts the mmapped files as used memory and kills the Git process when it tries to mmap a large file.

The solution is to compile a custom Git binary with mmap support turned off (configure NO_MMAP=1).
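
If you do need such a binary, NO_MMAP is a knob in Git's own Makefile; a rough sketch of an unprivileged build into your home directory (the source URL and install prefix are just examples) could be:

git clone https://github.com/git/git.git
cd git
make NO_MMAP=YesPlease prefix=$HOME/git-nommap all
make NO_MMAP=YesPlease prefix=$HOME/git-nommap install
export PATH="$HOME/git-nommap/bin:$PATH"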

Lou answered 5/10, 2011 at 10:35 Comment(2)
Do you know if it's possible to add the NO_MMAP=1 option to an existing git install? – Tamis
I don’t think so; it looks like a preprocessor macro that leads to different code being produced. But that’s just an opinion, I didn’t research it. – Lou

I am using git version 1.7.0.4 and it accepts the repack command from the question. It is possible that git version 1.6 doesn't.

Try creating a new repository with some random commits, then repack it with the same command.
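
A minimal way to run that test, with arbitrary names, might be:

mkdir repack-test && cd repack-test
git init
echo "hello" > file.txt
git add file.txt
git commit -m "test commit"
git repack -a -d --window-memory 10m --max-pack-size 100m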

Menander answered 28/1, 2011 at 9:47 Comment(1)
Are you talking about this command? git repack -a -d --window-memory 10m --max-pack-size 100mSealskin

git config --global pack.window 0

Setting pack.window to 0 disables the delta search window entirely, which reduces the memory needed during repacking, at the cost of larger packs.

Granth answered 30/3, 2020 at 4:41 Comment(0)

I had the same problem on Ubuntu 14.10 with git 2.1.0 on a private github.com repository. (An enterprise router is suspected: cloning works on other wifi networks, just not at my workplace.)

* GnuTLS recv error (-54): Error in the pull function.
* Closing connection 2jects:  31% (183/589)   
error: RPC failed; result=56, HTTP code = 200
fatal: The remote end hung up unexpectedly
fatal: protocol error: bad pack header

My solution was to git clone using ssh (I set up ssh keys* beforehand), like this:

git clone https://github.com/USERNAME/REPOSITORYNAME.git

becomes:

git clone git@github.com:USERNAME/REPOSITORYNAME.git

*: (Generating an ssh key)

ssh-keygen -t rsa -C "[email protected]"

Then log into GitHub, open the SSH keys section in Settings, and add the contents of ~/.ssh/id_rsa.pub.
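
Before retrying the clone you can check that the key is accepted; for example (loading the key into ssh-agent is optional):

ssh-add ~/.ssh/id_rsa     # load the key into the agent (optional)
ssh -T git@github.com     # should greet you with your GitHub username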

Tacheometer answered 28/4, 2015 at 15:3 Comment(2)
I've heard of enterprise routers doing content scanning and dropping connections for HTTP, but never HTTPS - does yours decode and re-encrypt HTTPS traffic too? – Cashier
Rup: There are two routers involved before getting out to the internet. Next week I will check exactly how the setup is at that particular company. I have since verified that it does not fail anywhere else (any other wifi network), just at that specific company. – Tacheometer

In my case I had the same error message, but the problem was on GitHub's side.

After about an hour of maintenance they fixed it, and the problem was resolved on all machines.

Nedra answered 5/12, 2022 at 20:51 Comment(0)

In my case changing those config values didn't help - Git was still crashing with slightly different errors.

What helped was a simple server reboot (in my case sudo shutdown -r now). It seems something was eating a lot of RAM on the server, so Git was unable to allocate memory.
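
If you want to confirm that memory pressure is the culprit before rebooting, something like this on the server shows overall usage and the biggest consumers:

free -m                             # overall memory and swap usage
ps aux --sort=-%mem | head -n 10    # processes using the most memory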

Hope this helps someone, too.

Misdo answered 17/8, 2023 at 16:1 Comment(0)
