How can I recover from "fatal: Out of memory? mmap failed: Cannot allocate memory" in Git?

Let me start with some context:

I had to upgrade a crucial Magento webshop to a new version. To be sure all existing code would still work after the upgrade, and to make some post-upgrade changes, I created a Git repository from the entire Magento installation (excluding obvious content like the 4.5GB of images, the ./var directory, etc.), pushed it to an origin, and cloned it on a dev server. There I made a new branch, performed the upgrades, made code changes, committed it all to the dev branch, and pushed it back to origin.
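
Roughly, this is how the repository was set up on 'live' (the install path and remote URL below are placeholders, not the real ones):

cd /var/www/magento                    # example install path
git init
echo "media/" >> .gitignore            # the 4.5GB of images
echo "var/"   >> .gitignore
git add .
git commit -m "Import current shop"
git remote add origin user@origin:/path/to/repo.git
git push origin master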

Now the time has come to upgrade the 'real' shop, meaning I have to merge the master branch on the production server with the dev branch. And that's where everything goes wrong:

git fetch - works

git branch says: * master

git merge origin/dev goes horribly wrong (this is the only output, after some waiting):

fatal: Out of memory? mmap failed: Cannot allocate memory

The same goes for git checkout dev, git rebase master origin/dev, etc.

I did some research in existing questions here on Stack Overflow and spent an evening trying suggestions, including (but not limited to):

git gc

Counting objects: 48154, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (37152/37152), done.
fatal: Out of memory, malloc failed (tried to allocate 527338875 bytes)
error: failed to run repack

and:

git repack -a -d --window-memory 10m --max-pack-size 20m

Counting objects: 48154, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (37152/37152), done.
fatal: Out of memory, malloc failed (tried to allocate 527338875 bytes)

In addition to the previous command, I also tried this (which is pretty similar). As the link mentions a possible issue with 32-bit systems, it's probably wise to list the specs of the three systems involved:

  • 'dev' server: x86_64 Gentoo 2.6.38-hardened-r6 // 4 cores & 8GB RAM
  • 'origin' server: x86_64 Gentoo 2.6.38-hardened-r6 // 2 cores & 4GB RAM
  • 'live' server: x86_64 Debian 4.3.2-1.1 2.6.35.5-pv1amd64 // (VPS) 2 cores & 3GB RAM

Does anyone know how I can recover from this? Would repacking on origin work? If it does, how can I convince the production server to fetch a new copy of the repository? Any help would be greatly appreciated!
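
Update: in case it helps anyone suggest a fix, these are the memory-related config knobs I understand Git exposes; the values below are guesses aimed at the 3GB VPS, not something I've verified:

git config pack.windowMemory 64m        # cap RAM used per delta window
git config pack.packSizeLimit 128m      # write several smaller packs instead of one huge one
git config pack.threads 1               # each thread gets its own memory budget
git config core.packedGitLimit 128m     # cap the total amount of pack data mapped at once
git config core.packedGitWindowSize 64m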

Sweyn answered 21/7, 2011 at 11:23 Comment(1)
Had a similar problem and repacking saved my repo, thanks! – Culosio

The error you're getting comes from large files in your repository: Git tries to hold the entire contents of a file in memory, which makes it croak.

Try Upgrading Git

Git 1.7.6 was released last month and has this lovely bit in its release notes:

Adding a file larger than core.bigfilethreshold (defaults to 1/2 Gig) using "git add" will send the contents straight to a packfile without having to hold it and its compressed representation both at the same time in memory.

Upgrading to 1.7.6 might enable you to run git gc and maybe even git merge, but I can't verify that because it's hard to get a repository into this state (the conditions must be just right).
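
If you do upgrade, it may be worth setting the threshold explicitly instead of relying on the default; something along these lines (the 100m value is just an example):

git --version                          # confirm you're on 1.7.6 or newer
git config core.bigFileThreshold 100m  # files above this go straight into the pack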

Try Removing the Offending Files

If upgrading Git doesn't help, you can try removing the large files from the repository using git filter-branch. Before you do that, try backing up the large files using git cat-file -p <commit_sha1>:path/to/large/file >/path/to/backup/of/large/file.
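
A rough sketch of the whole sequence, assuming the big file lives at media/dump.sql (a made-up path, substitute your own):

# back up the file as it exists in the latest commit
git cat-file -p HEAD:media/dump.sql > /backup/dump.sql

# rewrite every commit on every branch, dropping the file
git filter-branch --index-filter \
  'git rm --cached --ignore-unmatch media/dump.sql' \
  --prune-empty --tag-name-filter cat -- --all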

You'll want to do these operations on your most beefy machine (lots of memory).

If this works, try re-cloning to the other machines (or simply rsync the .git directory).
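
If the rewrite works, the propagation step might look like this (remote and branch names assumed):

rm -rf .git/refs/original/              # drop filter-branch's backup refs
git reflog expire --expire=now --all    # let go of the old history
git gc --prune=now                      # actually shrink the repository

git push --force origin master          # replace the history on origin
# then, on the other machines, take a fresh clone:
git clone user@origin:/path/to/repo.git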

Chiba answered 25/7, 2011 at 5:49 Comment(3)
Sounds great! I'm scheduled to work on that particular server again tonight, so I'll have a chance to try it out then. I really hope this will do the job! Thanks so far :) – Sweyn
It worked out in the end! Upgrading Git didn't do the job, but filter-branch, git gc, a local re-clone, and forcing that slimmed-down repo to origin and down to the production server did the trick (that's the short story, of course ;)). I already expected that to work, but was too scared to damage anything... that was before you told me about that cat-file trick, which provided a nice backup before cleaning everything up. Thanks! – Sweyn
I'm glad you got it working. Too bad it took so much work! Hopefully the Git devs will figure out how to support massive files soon so that others don't have the same problem. – Chiba

I've seen a few reports of this happening when you do "git init --bare" in a non-empty directory.

Are you by any chance working in/with a "bare"/"server" repository that isn't empty (that is, has anything else besides the .git directory in it)?
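
A quick way to check, run from inside the repository in question:

git rev-parse --is-bare-repository   # prints "true" for a bare repo
ls -A                                # a bare repo should contain only HEAD, config, objects/, refs/ etc.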

Meal answered 21/7, 2011 at 12:26 Comment(1)
No, that's not the case here. The repo was created on 'live' using a regular "git init", some .gitignore business, and "git add ." afterwards to initialize the content. Then on 'origin' a "git init --bare" in a newly created dir, and a "git push origin master" from 'live' to 'origin' after declaring the remote. Then a regular "git pull origin master" on 'dev'. After that, business as usual going back to 'live' from 'dev' through 'origin'. So, no monkey business as far as I know/remember. – Sweyn
