Broken pipe when pushing to git repository

I'm trying to push code to my git repository for the first time, but I get the following error:

Counting objects: 222026, done. 
Compressing objects: 100% (208850/208850), done. 
Write failed: Broken pipe222026) 
error: pack-objects died of signal 13 
fatal: The remote end hung up unexpectedly
error: failed to push some refs to 'ssh://[email protected]/<...>'

I tried increasing the HTTP buffer size (git config http.postBuffer 524288000) and running git repack, but neither worked.

I was able to push a very similarly sized codebase to another repository (it failed the same way at first, but worked after git repack). I'm trying to push this one to Bitbucket.

Any ideas?

Deitz answered 1/10, 2013 at 15:24 Comment(4)
Check out #18559515Mcnabb
I'm trying to push it to Bitbucket, not GitHub. Bitbucket does not have repo size limits. Well, I just kept retrying git repack and git push over and over, and, like with the other repo, it eventually worked. But I still don't know why this error happens.Deitz
The problem you are facing is probably the same: large files within your repo. That's why I posted a link which seemed to have an explanation and reasoning for GitHub. Anyway, check out link #8673449, which that answer references.Mcnabb
In my case, I had to turn off my WiFi and turn it back on again. Rest of internet was working, including fast.com, but for some reason no GitHub operations were working.Maddalena

A simple solution is to increase the HTTP post buffer size to allow larger chunks to be pushed to the remote repo. To do that, simply run:

git config http.postBuffer 52428800

The number is in bytes, so in this case I have set it to 50MB. The default is 1MB.
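To sanity-check the change, you can read the value back with git config --get. Below is a toy demo in a scratch repository (the temp directory is throwaway; this only shows the config mechanics, not an actual push):

```shell
# Create a scratch repository so the local config has somewhere to live
repo=$(mktemp -d) && cd "$repo" && git init -q

# Set the HTTP post buffer for this repository only (value is in bytes)
git config http.postBuffer 52428800     # 50 * 1024 * 1024 = 50 MB

# Read it back to confirm
git config --get http.postBuffer        # prints 52428800

# To apply it to all repositories instead:
# git config --global http.postBuffer 52428800
```

Note that http.postBuffer only affects pushes over HTTP(S); for SSH remotes like the one in the question, it has no effect.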

Ra answered 15/9, 2014 at 11:8 Comment(1)
docs.gitlab.com/ee/topics/git/… codifies this.Alignment

I had that issue when working with an Arch distro on VMware.

Adding

IPQoS=throughput

to my ssh config (~/.ssh/config) did the trick for me.
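For reference, a complete ~/.ssh/config entry might look like the fragment below (the Host pattern is an example; OpenSSH accepts both the `IPQoS=throughput` and the space-separated `IPQoS throughput` form):

```
# ~/.ssh/config
Host bitbucket.org
    IPQoS throughput
```

The default QoS markings can be dropped by some routers and VPNs, which manifests as stalled or broken SSH connections; `throughput` asks for a marking that tends to survive such links.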

Coaming answered 19/2, 2019 at 8:50 Comment(1)
Works for me :O I only put this line in a new config file, saved it, and pushed from the terminal. Then it worked!Lorrin

Because I haven't seen this answer yet: change your Wi-Fi network. Mine was blocking me and gave me the broken pipe error. After using my iPhone as a hotspot, it worked perfectly!

Plebiscite answered 2/3, 2020 at 23:52 Comment(2)
After trying many other solutions, this one worked for me. But I had just connected to my VPN, and all went okay.Archilochus
I have something similar: switching to an iPhone hotspot works, but Wi-Fi does not. However, three other machines on the same Wi-Fi work without issue. Something on my Mac is blocking the connection over Wi-Fi, but I have no idea what it is.Campeche

I had the same problem, and this worked for me:

git gc --aggressive --prune

It took a while, but after it was done, all git operations started working faster.
The push operation that previously failed then succeeded, probably because it became fast enough to avoid some timeout-related issue.
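As a toy illustration of what gc does here (scratch repository, hypothetical file): the loose objects get packed into a single pack file, which is what makes later operations faster:

```shell
# Scratch repository with one small commit
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo hello > file.txt
git add file.txt
git commit -q -m "initial commit"

git count-objects                          # a few loose objects (blob, tree, commit)
git gc --aggressive --prune=now --quiet    # repack everything into one tight pack
git count-objects                          # prints "0 objects, 0 kilobytes": all packed
```

On a real repository this can take a long time, but afterwards the push only has to stream an already well-compressed pack.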

Fatigue answered 17/1, 2019 at 14:42 Comment(0)

Note that a push can still freeze (even with postBuffer increased) when its pack files are corrupted (i.e., pack-objects fails).

That will be fixed in Git 2.9 (June 2016), and is better managed with Git 2.25 (Q1 2020).

See commit c4b2751, commit df85757, commit 3e8b06d, commit c792d7b, commit 739cf49 (19 Apr 2016) by Jeff King (peff).
(Merged by Junio C Hamano -- gitster -- in commit d689301, 29 Apr 2016)

"git push" from a corrupt repository that attempts to push a large number of refs deadlocked; the thread to relay rejection notices for these ref updates blocked on writing them to the main thread, after the main thread at the receiving end notices that the push failed and decides not to read these notices and return a failure.

Commit 739cf49 has all the details.

send-pack: close demux pipe before finishing async process

This fixes a deadlock on the client side when pushing a large number of refs from a corrupted repo.


With Git 2.25 (Q1 2020), Error handling after "git push" finishes sending the packdata and waits for the response to the remote side has been improved.

See commit ad7a403 (13 Nov 2019) by Jeff King (peff).
(Merged by Junio C Hamano -- gitster -- in commit 3ae8def, 01 Dec 2019)

send-pack: check remote ref status on pack-objects failure

Helped-by: SZEDER Gábor
Signed-off-by: Jeff King

When we're pushing a pack and our local pack-objects fails, we enter an error code path that does a few things:

  1. Set the status of every ref to REF_STATUS_NONE
  2. Call receive_unpack_status() to try to get an error report from the other side
  3. Return an error to the caller

If pack-objects failed because the connection to the server dropped, there's not much more we can do than report the hangup. And indeed, step 2 will try to read a packet from the other side, which will die() in the packet-reading code with "the remote end hung up unexpectedly".

But if the connection didn't die, then the most common issue is that the remote index-pack or unpack-objects complained about our pack (we could also have a local pack-objects error, but this ends up being the same; we'd send an incomplete pack and the remote side would complain).

In that case we do report the error from the other side (because of step 2), but we fail to say anything further about the refs.

The issue is two-fold:

  • in step 1, the "NONE" status is not "we saw an error, so we have no status".
    It generally means "this ref did not match our refspecs, so we didn't try to push it". So when we print out the push status table, we won't mention any refs at all!
    But even if we had a status enum for "pack-objects error", we wouldn't want to blindly set it for every ref. For example, in a non-atomic push we might have rejected some refs already on the client side (e.g., REF_STATUS_REJECT_NODELETE) and we'd want to report that.
  • in step 2, we read just the unpack status.
    But receive-pack will also tell us about each ref (usually that it rejected them because of the unpacker error).

So a much better strategy is to leave the ref status fields as they are (usually EXPECTING_REPORT) and then actually receive (and print) the full per-ref status.

This case is actually covered in the test suite, as t5504.8, which writes a pack that will be rejected by the remote unpack-objects.
But it's racy. Because our pack is small, most of the time pack-objects manages to write the whole thing before the remote rejects it, and so it returns success and we print out the errors from the remote.
But very occasionally (or when run under --stress), it goes slow enough to see a failure in writing, and git push reports nothing for the refs.

With this patch, the test should behave consistently.

There shouldn't be any downside to this approach.

  • If we really did see the connection drop, we'd already die in receive_unpack_status(), and we'll continue to do so.
  • If the connection drops after we get the unpack status but before we see any ref status, we'll still print the remote unpacker error, but will now say "remote end hung up" instead of returning the error up the call-stack.
    But as discussed, we weren't showing anything more useful than that with the current code. And anyway, that case is quite unlikely (the connection dropping at that point would have to be unrelated to the pack-objects error, because of the ordering of events).

In the future we might want to handle packet-read errors ourself instead of dying, which would print a full ref status table even for hangups.
But in the meantime, this patch should be a strict improvement.

Glycerol answered 1/5, 2016 at 20:18 Comment(0)

I hit the same problem when uploading gigabytes of data to a GitHub repository. Increasing the HTTP buffer size did not work for this amount of data; I am not sure if the problem is in git itself or on the GitHub server. In any case, I made a shell script to handle it, which uploads the files in the current directory step by step, pushing less than 100 MB of data at each step. It works fine for me. It takes time, but I can just detach the screen session and wait overnight.

Here is the shell script: https://gist.github.com/sekika/570495bd0627acff6c836de18e78f6fd
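For the general idea, here is a minimal sketch of such a batching loop (this is not the linked gist; the function name, commit messages, and the assumption of whitespace-free filenames are mine). It assumes you run it inside a repository that already has an upstream configured:

```shell
#!/bin/sh
# Sketch: commit and push untracked files in size-limited batches, so no
# single push exceeds a byte limit. Filenames must not contain whitespace.
push_in_batches() {
    limit=${1:-104857600}                  # max bytes per batch (default 100 MB)
    batch=0
    for f in $(git ls-files --others --exclude-standard); do
        git add "$f"
        batch=$((batch + $(wc -c < "$f")))
        if [ "$batch" -ge "$limit" ]; then
            git commit -q -m "batch upload" && git push -q
            batch=0
        fi
    done
    # commit and push whatever is left over
    if ! git diff --cached --quiet; then
        git commit -q -m "batch upload (final)" && git push -q
    fi
}
```

Called as `push_in_batches 104857600`, this keeps each push under roughly 100 MB; the trade-off is a noisier history of "batch upload" commits.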

Clino answered 6/6, 2016 at 6:30 Comment(2)
Will this script work in Windows?Apeldoorn
I am not sure, because I am not using Windows. You can try it with WSL.Clino

This is what worked for me:

  1. Run git gc --aggressive --prune
  2. Exit Visual Studio
  3. Delete the .vs folder for the solution
  4. Open the solution
  5. Commit works!
Azotemia answered 6/2 at 16:17 Comment(0)

I’m not sure why, but I was having this problem, and it went away when I switched from the “5G” version of my Wi-Fi network to the other one.

Copulate answered 30/11, 2020 at 18:11 Comment(1)
A broken pipe due to signal 13 is definitely not caused by the network link layer.Oxbridge

I encountered this whilst using a private repository hosted by GitLab. The problem was caused by committing too much data in one go.

I remediated the problem with git reset --soft plus the relevant commit identifier, as described in an SO answer.
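For illustration, here is a toy demo of that git reset --soft recipe in a scratch repository (file names and commit messages are made up): the oversized last commit is undone, but its changes stay staged, ready to be re-committed and pushed in smaller pieces:

```shell
# Scratch repository with a small first commit and an "oversized" second one
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo one > a.txt && git add a.txt && git commit -q -m "first"
echo two > b.txt && git add b.txt && git commit -q -m "huge commit"

git reset --soft HEAD~1    # branch moves back one commit; index is left intact
git log --oneline          # only "first" remains in history
git status --short         # "A  b.txt": the big commit's changes are still staged
```

From here you can `git reset` individual paths out of the index and commit/push them in smaller groups.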

Alignment answered 18/4, 2023 at 23:12 Comment(0)

I encountered the same issue. I solved it by staging and pushing a small chunk of files or a single folder at a time, repeating the process until everything had been staged and pushed.

Radiothermy answered 30/3 at 13:9 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.