fatal: early EOF fatal: index-pack failed

I have googled and found many solutions, but none work for me.

I am trying to clone from one machine by connecting to a remote server on the LAN.
Running the clone command from another machine causes the error below.
But running the SAME clone command with git://192.168.8.5 ... on the server itself works fine.

Any ideas?

user@USER ~
$ git clone  -v git://192.168.8.5/butterfly025.git
Cloning into 'butterfly025'...
remote: Counting objects: 4846, done.
remote: Compressing objects: 100% (3256/3256), done.
fatal: read error: Invalid argument, 255.05 MiB | 1.35 MiB/s
fatal: early EOF
fatal: index-pack failed

I have added this config to my .gitconfig, but it did not help either.
I am using git version 1.8.5.2.msysgit.0.

[core]
    compression = -1
Powerless answered 22/1, 2014 at 8:32 Comment(11)
I faced this issue for 2-3 days when I was trying to clone over VPN. In my case the issue was network bandwidth; I fixed it by cloning on a high-speed network.Seeker
I got this error because my friends don't know git very well and pushed a lot of images into the repository! =))Headrick
I also got the same error. I am using a fiber optic connection (40Mbps download speed). And no large files (like images/videos) in my repository too. Nevertheless still getting the same error.Missy
Seems like a memory issue. My workaround was to clone via http instead, which doesn't seem to suffer from those issues. Example: git clone http://192.168.8.5/butterfly025.gitPontormo
@Powerless : You tagged the question with cygwin. I am using Cygwin with git 2.31.1 and never had problems so far, even with large repos on slower networks. You seem to use a pretty old git version - maybe time for an update?Duodenal
I'm using git bash for Windows, and I got rid of this error by upgrading (by reinstalling): git-scm.com/downloadsGoblin
I got this error on Windows WSL2 running Ubuntu. Tried many of the suggestions here. None worked. The only thing that worked for me was to reboot my Windows and restart the WSL2 Ubuntu machine.Raulrausch
git config --global core.compression 0 ...this was sufficient to correct the issue for me.Secunderabad
For what it's worth, I finally got it to work on Windows by using git clone from a Linux VM, in a shared folder.Supersensual
I got this same error while using Git Bash on Windows 10. Despite trying multiple times I wasn't able to resolve the problem. Then I tried in VS Code (I have my git account added) and it worked without a problem.Citizenship
William you can refer to this answer: https://mcmap.net/q/12281/-error-rpc-failed-curl-transfer-closed-with-outstanding-read-data-remaining (works for pull/push/clone)Natividad

First, turn off compression:

git config --global core.compression 0

Next, let's do a partial clone to truncate the amount of info coming down:

git clone --depth 1 <repo_URI>

When that works, go into the new directory and retrieve the rest of the clone:

git fetch --unshallow 

or, alternately,

git fetch --depth=2147483647

Now, do a regular pull:

git pull --all

I think there is a glitch with msysgit in the 1.8.x versions that exacerbates these symptoms, so another option is to try with an earlier version of git (<= 1.8.3, I think).
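
Put together, the steps above can be run as one sequence. The sketch below demonstrates the whole procedure against a local stand-in repository (the file:// URL stands in for your real <repo_URI>, and the throwaway HOME keeps your real ~/.gitconfig untouched):

```shell
set -e
tmp="$(mktemp -d)"
export HOME="$tmp"                    # throwaway HOME: global config edits stay sandboxed
cd "$tmp"
git config --global user.name demo
git config --global user.email demo@example.com
git config --global init.defaultBranch master

git config --global core.compression 0                  # step 1: turn off compression
git init -q origin-repo                                 # local stand-in for the remote
(cd origin-repo &&
 git commit -q --allow-empty -m one &&
 git commit -q --allow-empty -m two)
git clone -q --depth 1 "file://$tmp/origin-repo" work   # step 2: shallow clone (1 commit)
cd work
git fetch -q --unshallow                                # step 3: retrieve the rest
git pull -q --all                                       # step 4: regular pull
count="$(git rev-list --count HEAD)"
echo "$count"                                           # full history: 2 commits
```

Against a real remote, replace the file:// URL with your repository's URL and drop the stand-in setup.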

Molecule answered 11/3, 2014 at 5:50 Comment(29)
Thank you, this worked great. I had tried changing the http.postbuffer which didn't work, but after doing as stated in this answer, it worked great. I didn't use the "git fetch --depth=2147483647" line, but I used the rest.Isobaric
git clone --depth 1 gives me the error You must specify a repository to clone.Concertmaster
@EthenA.Wilson You need to pass in the remote url for the repository afterwards. E.g. git clone --depth 1 git@host:user/my_project.git.Utilitarianism
git clone --depth 1 works for me, but then git fetch --unshallow throws the initial error at the end. How can I get the remaining files of the repo?Saltigrade
@Jose A. -- I experienced this problem when I was on a newer version of msysgit. If you are on msysgit, try an older version (<=1.8.3). Otherwise, try git fetch --depth 1000 (then 2000, etc., increasing incrementally until all the files are pulled).Molecule
@Jose A. -- Also, have a look at this: #4827139Molecule
Hi, dear friend. Thank you for your great solution. But the last git pull --all does not work, because git clone --depth 1 sets the fetch range to only one branch. So we have to edit .git/config first.Oxidase
Be aware that this is not a real solution as it will set fetching to only one branch and you might end up in this situation: #20339000Novelia
@Powerless and pjincz: It's as real as it gets. On initial repo clone, Git doesn't create local branches to match remotes because it works off a lean paradigm. You will have to script local branch creation as I suggest in this post to ensure that all remote branches are created locally. Anyway, I wouldn't add this to a clone if you're having problems -- get the repo pulled first, and then worry about syncing branches.Molecule
Works great, but why is core.compression = 0 such an important factor in ensuring successful transmissions? Shouldn't core.compression = 0 become default then? (I think git internally compresses nicely, so maybe the problem is compression of compressed data which can actually enlarge the data.)Pustulant
Moreover with git clone --depth 1 <repo_URI> followed by git fetch --unshallow followed by git pull --all I did not get all remote branches. git branch -a does not show them, only shows master. How can I get other remote branches registered in the unshallowed repo?Pustulant
@peschü: See my answer here. git branch -r | awk -F'origin/' '!/HEAD|master/{print $2 " " $1"origin/"$2}' | xargs -L 1 git branch -f --trackMolecule
Just disabling compression fixed it for me.Affectionate
I am unable to see all the remote branches when I do git branch -r, whereas I am able to see all those branches in normally cloned folder. How do I get all the remote branches?Tva
Adding to @Oxidase comment, and for us the not so versed on git config files, the actual change required to get the git fetch --all to work is to change fetch = +refs/heads/master:refs/remotes/origin/master to fetch = +refs/heads/*:refs/remotes/origin/* ( master to * ), this line must be below the node [remote "origin"]. Close the file and repeat the git fetch --all command. Tested with git version 2.19.2.windows.1Miscarry
It would be nice if someone could explain the problem and what this solution does to solve it in the answer.Lyndell
I found the SSH protocol is faster than others, E.g. git clone --depth 1 git@host:user/my_project.git.Jolenejolenta
why turn off the compression though?Metsky
The core.compression value is used as a default for all other compression parameters. If there are network issues or server instabilities, while at the same time dealing with massive packs or giant files, compression can be costly in terms of time. When dealing with network instabilities, including retries and potentially timeouts, the extra time cost to compress giant files can presage a timeout. Likewise, delta compression on a misconfigured server with big files can tax memory usage, so disabling compression may rectify misconfiguration by preventing memory bottlenecks.Molecule
This procedure breaks the clone into small, manageable chunks. Git itself (abstracted as a micro filesystem) consists of blobs, trees and maps holding file commit data. A pull is shorthand for fetch then merge. If the fetch part is screwed up the merge (actually an unwind onto the new default branch) will fail. A shallow clone creates a skeleton of the repo without the giant history but including current files. Later steps pull down the larger file history.Molecule
Because this still occurs I thought I'd add that dmesg pointed the way in my case... out of memory (which occurred while decompressing delta packs). I added a swap file on the SSD attached to my Odroid and rocked away. Yes, there was considerable context switching, but it worked. The solution in this article works better.Streamway
A minor point, but after the git clone you have to cd into the repo directory before doing the git fetch.Weigela
first line git config --global core.compression 0 worked well for meFlocculent
This works great, but afterwards removing the depth was helpful. This can be done by editing .git/config and changing the fetch line back to fetch = +refs/heads/*:refs/remotes/origin/*, which will have the stars listed as master.Fabulist
After editing .git/config and changing the fetch line to fetch = +refs/heads/*:refs/remotes/origin/* it worked for me too. Thanks to @MoleculeNeedham
For me it was enough to just disable compression. I had over 50k files in the git repository.Flay
I got fatal: not a git repository (or any of the parent directories): .git after running git fetch --depth=2147483647Intense
I am getting the below error when I run "get fetch --unshallow" fetch-pack: unexpected disconnect while reading sideband packetsB/s fatal: early EOF fatal: fetch-pack: invalid index-pack outputCheryle
Just doing git config --global core.compression 0 solved my problemEquuleus

This error may occur due to git's memory needs. You can add these lines to your global git configuration file, which is .gitconfig in $USER_HOME, in order to fix the problem:

[core]
    packedGitLimit = 512m
    packedGitWindowSize = 512m
[pack]
    deltaCacheSize = 2047m
    packSizeLimit = 2047m
    windowMemory = 2047m
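
If you prefer not to edit the file by hand, the same settings can be applied from the command line with git config. A sketch (using a throwaway HOME so the real ~/.gitconfig stays untouched; drop the first line to apply it for real):

```shell
export HOME="$(mktemp -d)"   # sandbox: writes go to a temporary .gitconfig
git config --global core.packedGitLimit 512m
git config --global core.packedGitWindowSize 512m
git config --global pack.deltaCacheSize 2047m
git config --global pack.packSizeLimit 2047m
git config --global pack.windowMemory 2047m
limit="$(git config --global --get core.packedGitLimit)"
echo "$limit"                # 512m
```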
Steward answered 30/3, 2015 at 20:6 Comment(10)
This worked for me - although I still needed several attempts, but without this change abort came at 30%, afterwards at 75%... and once it went up to 100% and worked. :)Pustulant
still not working for me remote: Enumerating objects: 43, done. remote: Counting objects: 100% (43/43), done. remote: Compressing objects: 100% (24/24), done. error: inflate returned -55/26) fatal: unpack-objects failed Amerigo
Works for me. But set 8096m for all properties.Lawmaker
This problem happened frequently for me on Windows 10 with Git 2.25.0. I found that if I did git pull from the remote machine repeatedly it would occasionally succeed. But what a nuisance. Then I discovered that if you run git daemon from within the built-in Windows Bash prompt it works 100% with no workaround needed.Acoustics
Are these configured on the server machine or the client machine?Grendel
@Grendel It should be added to client's .gitconfig.Steward
This worked for me. Context of my problem was I was running three docker containers that were jenkins runners. They were all doing git init then git fetch at the same time.Identic
Worked for me, but not on the first try. I managed to clone fully on my fourth try, at midnight when no one was using the wifi.Gentianaceous
My 64-bit Git v2.37.1 on Windows 11 doesn't let me set core.packedGitLimit, core.packedGitWindowSize, and pack.packSizeLimit to anything over 4095m, but using this value and 8096m for the pack.deltaCacheSize and pack.windowMemory options fixed the problem. Disabling compression and doing a shallow clone did not work. In my case, Git failed to clone a Pantheon.io repo.Lannie
packSizeLimit = 2047m fixed my problem. git versioon 2.43.0.windows.1Abstriction

Finally solved by git config --global core.compression 9

From a BitBucket issue thread:

I tried almost five times, and it still happened.

Then I tried to use better compression and it worked!

git config --global core.compression 9

From the Git Documentation:

core.compression
An integer -1..9, indicating a default compression level. -1 is the zlib default.
0 means no compression, and 1..9 are various speed/size tradeoffs, 9 being slowest.
If set, this provides a default to other compression variables, such as core.looseCompression and pack.compression.
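
The trade-off is easy to see locally. The sketch below (an illustration, not from the answer) writes the same highly compressible blob once with level 0 and once with level 9, then compares the loose-object sizes on disk:

```shell
tmp="$(mktemp -d)" && cd "$tmp"
printf 'abc%.0s' $(seq 1 1000) > f.txt   # 3000 bytes of repetitive text
git init -q repo0
git init -q repo9
(cd repo0 && git -c core.compression=0 hash-object -w ../f.txt >/dev/null)
(cd repo9 && git -c core.compression=9 hash-object -w ../f.txt >/dev/null)
obj0="$(find repo0/.git/objects -type f)"
obj9="$(find repo9/.git/objects -type f)"
size0="$(wc -c < "$obj0")"
size9="$(wc -c < "$obj9")"
echo "$size0 $size9"   # level 0 stores ~3 KB nearly raw; level 9 shrinks it to a few dozen bytes
```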

Genipap answered 29/4, 2018 at 3:2 Comment(4)
Needed to run git repack in combination with this solution and then it worked.Bucharest
This works for me too, through VPN and corporate proxy. --compression 0 did not work nor did all the .gitconfig changes suggested above.Mcquade
Probably changing the config parms here (to reduce size of transferred data) would do the job, alternately.Molecule
git config --global core.compression 9 repack worked.Nowlin

As @ingyhere said:

Shallow Clone

First, turn off compression:

git config --global core.compression 0

Next, let's do a partial clone to truncate the amount of info coming down:

git clone --depth 1 <repo_URI>

When that works, go into the new directory and retrieve the rest of the clone:

git fetch --unshallow

or, alternately,

git fetch --depth=2147483647

Now, do a pull:

git pull --all

Then to solve the problem of your local branch only tracking master

open your git config file (.git/config) in the editor of your choice

where it says:

[remote "origin"]
    url=<git repo url>
    fetch = +refs/heads/master:refs/remotes/origin/master

change the line

fetch = +refs/heads/master:refs/remotes/origin/master

to

fetch = +refs/heads/*:refs/remotes/origin/*

Do a git fetch and git will pull all your remote branches now
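
The same edit can be made without opening the file, using git config to overwrite the refspec. A self-contained sketch (the remote URL is a placeholder; no network access is needed to change the setting):

```shell
tmp="$(mktemp -d)" && cd "$tmp"
git init -q demo && cd demo
git remote add origin https://example.com/repo.git            # placeholder URL
# A --depth 1 clone writes a narrow refspec like this one:
git config remote.origin.fetch "+refs/heads/master:refs/remotes/origin/master"
# Widen it so the next `git fetch` tracks every branch:
git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"
refspec="$(git config --get remote.origin.fetch)"
echo "$refspec"   # +refs/heads/*:refs/remotes/origin/*
```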

Congo answered 30/10, 2018 at 15:42 Comment(5)
It works, but I left compression at 9; 0 failed for me.Mistaken
You could also do this: git branch -r | awk -F'origin/' '!/HEAD|master/{print $2 " " $1"origin/"$2}' | xargs -L 1 git branch -f --track followed by git fetch --all --prune --tags and git pull --all. It will set all remote tracking branches locally.Molecule
Changing from fetch = +refs/heads/*:refs/remotes/origin/* to fetch = +refs/heads/devel:refs/remotes/origin/devel did it for me. Yes, I did the reverse and at our company we use "devel" for our main branch nameMeningitis
The clone passed, but the error appeared at fetch. The error was fixed by setting compression to 9.Pave
for i in `seq 100 100 30000`; do git fetch --depth=$i; done did the trick for me (the max seq value was chosen to be about the number of available commits), since git fetch --unshallow still failed. Let it run until the output only shows remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0 again and again. It was the only way that worked for me on a Banana Pi M2+ with Armbian, cloning neovim.Corey

I was getting the same error; on my side I resolved it by running this command. On Windows git has some memory issues.

git config --global pack.windowMemory 256m
Cordero answered 28/9, 2020 at 6:45 Comment(1)
This worked for meMckinnon

In my case this was quite helpful:

git clone --depth 1 --branch $BRANCH $URL

This will limit the clone to the specified branch only, and hence will speed up the process.

Hope this will help.

Tameshatamez answered 4/8, 2017 at 5:57 Comment(0)

I faced this problem with macOS Big Sur M1 Chip and none of the solutions worked for me.

Edit: This also works as a solution for the M2 chip.

I solved it by increasing ulimits below.

ulimit -f 2097152
ulimit -c 2097152
ulimit -n 2097152

The commands above are only valid for the current terminal session, so run them first and then clone the repository.
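
Since the raised limits only last for the session, one portable pattern (a sketch; the exact numbers above may exceed your shell's hard limit and be refused) is to raise the soft open-file limit to the hard maximum right before cloning:

```shell
hard="$(ulimit -H -n)"    # hard ceiling for open files
ulimit -S -n "$hard"      # raise the soft limit up to that ceiling
now="$(ulimit -S -n)"
echo "$now"               # now equals the hard limit
# git clone <repo_URL>    # then clone in this same shell (placeholder URL)
```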

Induline answered 4/8, 2021 at 14:51 Comment(1)
I confirm, this solution works on my M3 chip also, thanksSelfconsistent

I tried all of those commands and none worked for me; what worked was changing the git URL to http instead of ssh.

If it is a clone command, do:

git clone <your_http_or_https_repo_url>

Or, if you are pulling on an existing repo, do it with:

git remote set-url origin <your_http_or_https_repo_url>

Hope this helps someone!

Maternity answered 21/11, 2014 at 17:25 Comment(4)
This question is really about the error message in the output above when there's a problem syncing giant chunks of files from a connected repo. You're saying that cutting over to https from ssh allowed the clone to finish?Molecule
Yes! That worked for me. I have a 4 GB+ repo and that was the only solution that worked!Maternity
I'd really like to know why this worked. Is there something in the SSH protocol that chokes on large objects that HTTPS does not? Is this a transport layer issue?Amalee
A long time ago I did the above (switched to HTTPS); today I noticed that there was a man-in-the-middle attack, and if I use a VPN, SSH works just fine (no need for HTTPS).Virgate

I got this error when git ran out of memory.

Freeing up some memory (in this case: letting a compile job finish) and trying again worked for me.

Wayward answered 7/1, 2015 at 22:42 Comment(1)
For me, there wasn't much memory available, freeing some up and retrying solved it.Barnet

In my case it was a connection problem. I was connected to an internal wifi network on which I had limited access to resources. Git could start the fetch, but at a certain point it crashed. This means it can be a network-connection problem. Check that everything is running properly: antivirus, firewall, etc.

The answer of elin3t is therefore important, because ssh improves the performance of the download so that network problems can be avoided.

Masculine answered 19/1, 2015 at 15:6 Comment(1)
Switched to a different network, and then it finally worked.Oaten

Setting the config below didn't work for me.

[core]
    packedGitLimit = 512m
    packedGitWindowSize = 512m
[pack]
    deltaCacheSize = 2047m
    packSizeLimit = 2047m
    windowMemory = 2047m

As a previous comment says, it might be a memory issue in git. So I tried reducing the working threads (from 32 to 8) so that it wouldn't fetch as much data from the server at the same time. I also added -f to force syncing other projects.

-f: Proceed with syncing other projects even if a project fails to sync.

Then it works fine now.

repo sync -f -j8
Brueghel answered 11/9, 2019 at 2:44 Comment(0)

It's confusing because the Git log may suggest various connection or ssh authorization errors, e.g.: ssh_dispatch_run_fatal: Connection to x.x.x.x port yy: message authentication code incorrect, the remote end hung up unexpectedly, early EOF.

Server-side solution

Let's optimize the git repository on the server side:

  1. Enter my server's bare git repository.
  2. Run git gc.
  3. Run git repack -A.

Eg:

ssh admin@my_server_url.com
sudo su git
cd /home/git/my_repo_name # where my server's bare repository exists.
git gc
git repack -A

Now I am able to clone this repository without errors, e.g. on the client side:

git clone git@my_server_url.com:my_repo_name

The command git gc may also be run on the git client side to avoid similar problems with git push.


If you are an administrator of Gitlab service - trigger Housekeeping manually. It calls internally git gc or git repack.


Client-side solution

Another (hacky, client-side only) solution is downloading the latest master without history:

git clone --single-branch --depth=1 git@my_server_url.com:my_repo_name

There is a chance that buffer overflow will not occur.

Emmert answered 5/6, 2020 at 14:16 Comment(0)

I was facing the issue too; this is my solution:

git fetch --refetch

From the git-fetch help:

Instead of negotiating with the server to avoid transferring commits and associated objects that are already present locally, this option fetches all objects as a fresh clone would

Fagen answered 14/8, 2023 at 8:28 Comment(2)
No other solution worked, except this one. Saved my day. Thanks!Chandlery
Wow, have an upvote sir, nothing else worked!Incurious

Note that Git 2.13.x/2.14 (Q3 2017) does raise the default core.packedGitLimit which influences git fetch:
The default packed-git limit value has been raised on larger platforms (from 8 GiB to 32 GiB) to save "git fetch" from a (recoverable) failure while "gc" is running in parallel.

See commit be4ca29 (20 Apr 2017) by David Turner (csusbdt).
Helped-by: Jeff King (peff).
(Merged by Junio C Hamano -- gitster -- in commit d97141b, 16 May 2017)

Increase core.packedGitLimit

When core.packedGitLimit is exceeded, git will close packs.
If there is a repack operation going on in parallel with a fetch, the fetch might open a pack, and then be forced to close it due to packedGitLimit being hit.
The repack could then delete the pack out from under the fetch, causing the fetch to fail.

Increase core.packedGitLimit's default value to prevent this.

On current 64-bit x86_64 machines, 48 bits of address space are available.
It appears that 64-bit ARM machines have no standard amount of address space (that is, it varies by manufacturer), and IA64 and POWER machines have the full 64 bits.
So 48 bits is the only limit that we can reasonably care about. We reserve a few bits of the 48-bit address space for the kernel's use (this is not strictly necessary, but it's better to be safe), and use up to the remaining 45.
No git repository will be anywhere near this large any time soon, so this should prevent the failure.

Legalize answered 16/5, 2017 at 20:27 Comment(1)
I tried most things in this thread and this is the thing that finally allowed me to clone my repos onto a new machine.Temper

A previous answer recommends setting core.packedGitLimit to 512m. I'd say there are reasons to think that's counterproductive on a 64-bit architecture. The documentation for core.packedGitLimit says:

Default is 256 MiB on 32 bit platforms and 32 TiB (effectively unlimited) on 64 bit platforms. This should be reasonable for all users/operating systems, except on the largest projects. You probably do not need to adjust this value.

If you want to try it out, check whether you have it set and then remove the setting:

git config --show-origin core.packedGitLimit
git config --unset --global core.packedGitLimit

Edit: Having an Ouroboros moment here, it should be mentioned that this, in combination with the solution from @amirreza-moeini-yegane, solved the problem for me today:

git config --global core.compression 0
Submerge answered 17/6, 2019 at 7:37 Comment(0)

I had the same problem. I even tried to download the project directly from the website as a zip file, but the download got interrupted at the exact same percentage.

This single line fixed my problem like a charm

git config --global core.compression 0

I know other answers have mentioned this, but no one here mentioned that this line alone can fix the problem.

Hope it helps.

Costumier answered 11/7, 2020 at 2:4 Comment(0)

Network quality matters, try to switch to a different network. What helped me was changing my Internet connection from Virgin Media high speed land-based broadband to a hotspot on my phone.

Before that I tried the accepted answer to limit clone size, tried switching between 64 and 32 bit versions, tried disabling the git file cache, none of them helped.

Then I switched to the connection via my mobile, and the first step (git clone --depth 1 <repo_URI>) succeeded. Switched back to my broadband, but the next step (git fetch --unshallow) also failed. So I deleted the code cloned so far, switched to the mobile network tried again the default way (git clone <repo_URI>) and it succeeded without any issues.

Volumetric answered 10/11, 2020 at 11:11 Comment(0)

Tried almost all the answers here but no luck. Finally got it to work by using the GitHub Desktop app, https://desktop.github.com/

MacBook with M1 chip/Monterey; not sure if it mattered.

Tillfourd answered 22/3, 2022 at 17:29 Comment(1)
on windows as well, Github desktop workedCochleate

In my case nothing worked when the protocol was https, so I switched to ssh and made sure I pulled the repo from the last commit only (not the entire history), and also from a specific branch. This helped me:

git clone --depth 1 "ssh:.git" --branch "specific_branch"

Mandelbaum answered 31/8, 2015 at 10:23 Comment(0)

I have the same problem. Following the first step above I was able to clone, but I cannot do anything else. I can't fetch, pull, or checkout old branches.

Each command runs much slower than usual, then dies after compressing the objects.

I:\dev [master +0 ~6 -0]> git fetch --unshallow
remote: Counting objects: 645483, done.
remote: Compressing objects: 100% (136865/136865), done.

error: RPC failed; result=18, HTTP code = 20082 MiB | 6.26 MiB/s

fatal: early EOF

fatal: The remote end hung up unexpectedly

fatal: index-pack failed

This also happens when your refs are using too much memory. Limiting the fetch depth fixed it for me. Just add a limit to what you are fetching, like so:

git fetch --depth=100

This will fetch the files, but with only the last 100 commits of history. After this, you can run any command just fine and at normal speed.

Condiment answered 29/8, 2016 at 12:32 Comment(2)
what do u mean TED?Condiment
this "answer" should have been a comment on @Molecule 's answer.Gully

None of the solutions above worked for me.

The solution that finally worked was switching the SSH client. The GIT_SSH environment variable was set to the OpenSSH provided by Windows Server 2019, version 7.7.2.1:

C:\Windows\System32\OpenSSH\ssh.exe

I simply installed PuTTY 0.72:

choco install putty

And changed GIT_SSH to:

C:\ProgramData\chocolatey\lib\putty.portable\tools\PLINK.EXE

Butlery answered 4/9, 2019 at 8:26 Comment(0)

In my case the problem was none of the git configuration parameters, but the fact that my repository had one file exceeding the maximum file size allowed on my system. I was able to verify this by trying to download a large file and getting a "File size limit exceeded" error on Debian.

After that I edited my /etc/security/limits.conf file, adding at the end of it the following lines:

  * hard fsize 1000000
  * soft fsize 1000000

To actually "apply" the new limit values, you need to log in again.

Corwun answered 6/3, 2020 at 9:10 Comment(1)
This works? Can you let me know what exactly this change does?Brooch

I tried several times after I set the git buffer, as I mentioned in the question, and it seems to work now.

So if you hit this error, run this command:

git config --global http.postBuffer 2M

and then try again a few times.
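
For context, http.postBuffer defaults to 1 MiB, so 2M only doubles it; other reports in this thread use much larger values for big repositories. A quick sketch (throwaway HOME so the real configuration stays untouched) to confirm what value actually took effect:

```shell
export HOME="$(mktemp -d)"   # sandbox the global config change
git config --global http.postBuffer 2M
bytes="$(git config --global --int --get http.postBuffer)"
echo "$bytes"                # 2097152 (2 MiB)
```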

Reference:

git push error: RPC failed; result=56, HTTP code = 0

Sallust answered 20/10, 2020 at 6:32 Comment(2)
Why would this affect a git clone using the git: protocol?Isola
I don't know, but using this command makes it possible to clone large projects.Sallust

For me it worked when I changed the compression: git config --global core.compression 9

This works.

Spiffy answered 29/9, 2021 at 23:49 Comment(0)

Tried most of the answers here; I got the error with the PuTTY SSH client in all possible configurations.

Once I switched to OpenSSH the error was gone (remove the environment variable GIT_SSH and restart the git bash).

I was using a new machine and the newest git version. On many other/older machines (AWS as well) it did work as expected with PuTTY, without any git configuration.

Mond answered 29/1, 2019 at 14:46 Comment(0)

Using @cmpickle's answer, I built a script to simplify the clone process.

It is hosted here: https://gist.github.com/gianlucaparadise/10286e0b1c5409bd1049d67640fb7c03

You can run it using the following line:

curl -sL https://git.io/JvtZ5 | sh -s repo_uri repo_folder
Tetragonal answered 23/1, 2020 at 14:25 Comment(0)

Tangentially related and only useful in case you have no root access and manually extract Git from an RPM (with rpm2cpio) or other package (.deb, ..) into a subfolder. Typical use case: you try to use a newer version of Git over the outdated one on a corporate server.

If git clone fails with fatal: index-pack failed without any early EOF mention, but instead with a help message about usage: git index-pack, there is a version mismatch and you need to run git with the --exec-path parameter:

git --exec-path=path/to/subfoldered/git/usr/bin/git clone <repo>

In order to have this happen automatically, specify in your ~/.bashrc:

export GIT_EXEC_PATH=path/to/subfoldered/git/usr/libexec
Garbers answered 18/5, 2020 at 9:56 Comment(0)

I tried pretty much all the suggestions made here, but none worked. For us the issue was intermittent, and it became worse and worse the larger the repos became (on our Jenkins Windows build slave).

It ended up being the version of ssh used by git. Git was configured to use some version of OpenSSH, specified in the user's .gitconfig file via the core.sshCommand variable. Removing that line fixed it. I believe this is because Windows now ships with a more reliable/compatible version of SSH, which gets used by default.

Brandy answered 15/1, 2020 at 12:28 Comment(0)

My solution was to eventually use SSH instead of HTTP/HTTPS.

Fleecy answered 20/2 at 20:21 Comment(0)
git config --global core.compression 9
git repack
Sideshow answered 1/3 at 13:46 Comment(1)
Although this code might answer the question, I recommend that you also provide an explanation what your code does and how it solves the problem of the question. Answers with an explanation are usually more helpful and of better quality, and are more likely to attract upvotes.Cesspool

From a git clone, I was getting:

error: inflate: data stream error (unknown compression method)
fatal: serious inflate inconsistency
fatal: index-pack failed

After rebooting my machine, I was able to clone the repo fine.

Kenlay answered 16/8, 2016 at 11:56 Comment(0)

I turned off all the downloads I was doing in the meantime, which probably freed some disk space and cleared up/down bandwidth.

Longicorn answered 2/12, 2017 at 21:32 Comment(0)

The git-daemon issue seems to have been resolved in v2.17.0 (verified against a non-working v2.16.2.1). I.e., the workaround of selecting text in the console to "lock the output buffer" should no longer be required.

From https://github.com/git/git/blob/v2.17.0/Documentation/RelNotes/2.17.0.txt:

  • Assorted fixes to "git daemon". (merge ed15e58efe jk/daemon-fixes later to maint).
Nevis answered 2/5, 2018 at 13:3 Comment(0)

I've experienced the same problem. The repo was too big to be downloaded via SSH. Just like @elin3t recommended, I cloned over HTTP/HTTPS and then changed the remote URL in .git/config to use the SSH repo.

Towering answered 12/6, 2019 at 0:41 Comment(0)

I got the same issue as below when I ran git pull:

remote: Counting objects: 149, done.
Connection to git-codecommit.us-east-1.amazonaws.com closed by remote host.
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

Then I checked the git status; there were so many uncommitted changes. I fixed the issue by committing and pushing all the uncommitted changes.

Madai answered 20/6, 2019 at 7:50 Comment(0)

In my case, I simply upgraded my version of OpenSSL. The older versions of OpenSSL have vulnerabilities and also do not have the latest algorithms which may be needed. As of today, the command openssl version shows OpenSSL 1.1.1f 31 Mar 2020.

Saundrasaunter answered 4/8, 2020 at 20:47 Comment(0)

Although not exactly the same setup, I had this issue on an NFS share mounted on Ubuntu 20.04. I haven't found any solution elsewhere, so I'll share how I solved it, hoping to help someone.

The error message was (sometimes with/without the warning):

warning: die() called many times. Recursion error or racy threaded death!
fatal: premature end of pack file, 29 bytes missing
fatal: premature end of pack file, 24 bytes missing
fatal: index-pack failed

Git shallow clone, disabling compression, etc. didn't solve the issue.

When I mounted the share with nfsvers=4.2 instead of nfsvers=4.0, the problem disappeared.

Josiah answered 12/3, 2021 at 17:57 Comment(0)

Just to add a tip here: if your git clone command has a proxy parameter, your proxy server may disconnect the HTTP(S) request prematurely due to its own configuration disallowing overly large HTTP response bodies. Just FYI.

Haden answered 10/1, 2022 at 17:30 Comment(0)

Connection problems

That's the most probable reason, especially if your git is up to date. (You can update your git if it is not.)

  1. Check that the connection is stable.
  2. VPN: disable it if used (a big culprit).
  3. Antivirus and firewalls.

git cache, buffer, memory and compression

The other answers cover these well.

I would go with https://mcmap.net/q/73786/-fatal-early-eof-fatal-index-pack-failed

To open the global config through the CLI:

git config --global -e

If not, then:

https://mcmap.net/q/73786/-fatal-early-eof-fatal-index-pack-failed

Refuge answered 22/7, 2022 at 11:36 Comment(0)

Your local machine might be running out of disk space. Free up storage on the machine you are cloning into and repeat the checkout. Your problem might be as simple as that.

Cottrell answered 31/1, 2023 at 17:18 Comment(0)

I had the same issue with Git version 2.43.0. I tried almost all of the above answers, but they didn't work for me. I also tried the following: on Windows, execute these in the command line before running the Git command:

set GIT_TRACE_PACKET=1
set GIT_TRACE=1
set GIT_CURL_VERBOSE=1

This also did not work.

Finally I downgraded my Git version and then it worked. I downgraded to Git 2.39.1.

Mascagni answered 19/12, 2023 at 8:2 Comment(0)

If you are trying to clone a Next.js app and it fails despite every step mentioned above, then download it as a zip, extract it, and run:

  1. npm install.
  2. npm run dev
Pneumoencephalogram answered 21/12, 2023 at 16:22 Comment(0)

This worked for me; the problem was solved by increasing the postBuffer size:

git config --global http.postBuffer 1048576000

I got this answer from Light, deaajh in the Shorebird Discord server.

Primer answered 30/1 at 8:15 Comment(0)

I had an issue with cloning a huge repo. The cloning process fired these errors:

fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output

But the below command (found in the answers in this thread) resolved this issue.

git config --global core.compression 9
git repack

As per my understanding, with this setting git uses the maximum available compression during file retrieval; the git repack command then rewrites the local data so it can be used normally.

In my case it was a one-time exercise, so after the repo was cloned I restored the compression setting to the default level:

git config --global core.compression -1
Proprietary answered 3/4 at 16:1 Comment(0)

If you're on Windows, you may want to check git clone fails with "index-pack" failed?.

Basically, after running your git.exe daemon ... command, select some text from that console window. Retry pulling/cloning, it might just work now!

See this answer for more info.

Futhark answered 7/6, 2017 at 3:19 Comment(0)

This worked for me: setting up Google's nameserver (because no standard nameserver was specified), followed by restarting networking. Note that sudo does not apply to the >> redirection, so tee is used instead:

echo "dns-nameservers 8.8.8.8" | sudo tee -a /etc/network/interfaces && sudo ifdown venet0:0 && sudo ifup venet0:0
Culosio answered 23/2, 2015 at 15:4 Comment(0)

Make sure your drive has enough space left

Jihad answered 11/3, 2016 at 17:32 Comment(0)

None of these worked for me, but using Heroku's built-in tool did the trick.

heroku git:clone -a myapp

Documentation here: https://devcenter.heroku.com/articles/git-clone-heroku-app

Dela answered 16/9, 2015 at 15:28 Comment(0)
