error: RPC failed; curl transfer closed with outstanding read data remaining

I'm facing this error when I try to clone a repository from GitLab (GitLab 6.6.2 4ef8369):

remote: Counting objects: 66352, done.
remote: Compressing objects: 100% (10417/10417), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

The clone is then aborted. How can I avoid this?

Pepsin answered 27/7, 2016 at 16:47 Comment(0)

After a few days, I finally resolved this problem. Generate an SSH key, following this article:

https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/

Then:

  1. Add the public key to your Git provider (GitLab in my case; the same applies to GitHub).
  2. Add the key to your local SSH identity (a sketch of the shell commands follows this list).
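
A minimal sketch of the key setup, assuming an ed25519 key in the default location (the email address is a placeholder):

ssh-keygen -t ed25519 -C "you@example.com"
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519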

Then clone with:

git clone [email protected]:my_group/my_repository.git

and no error occurs.

The problem above,

error: RPC failed; curl 18 transfer closed with outstanding read data remaining

occurs because the clone was done over the HTTP protocol (which uses curl under the hood).

Alternatively, you can increase the HTTP post buffer size:

git config --global http.postBuffer 524288000
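
If you go this route, you can confirm the new value took effect with:

git config --global --get http.postBuffer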
Pepsin answered 1/8, 2016 at 15:47 Comment(9)
Changing from HTTP to SSH worked for me. Setting http.postBuffer didn't.Schizopod
If the error is still there, edit your SSH config file (vi ~/.ssh/config), add ServerAliveInterval 120, and exit vi with :wq (save and quit). This prevents server timeouts and connection-break errors.Larner
That's nice, but does anyone know why this happens even when the clone reaches 100%?Bracci
Changing http.postBuffer worked for me - thanks!Oblast
After all the great steps listed here, git clone ssh://[email protected]/<repository base>/<repository name>.git is the only clone command that worked for me, and there's no need to reference the SSH key in the command at all (in case you're also wondering, like I was).Heap
Worked for me too, for pulling a large solution over a slow VPN connection.Marko
Beware: I experienced several issues with npm publish when raising the postBuffer. When I set it down to 50000000, issues were gone. The default value is 1000000, by the way.Preset
Changing http.postBuffer to 524288000 worked for me. Thank you!Ammadis
This works for me: stackoverflow.com/questions/78267333/…Roubaix

This happens to me more often than not: I am on a slow internet connection and have to clone a decently large Git repository. The most common issue is that the connection closes and the whole clone is cancelled.

Cloning into 'large-repository'...
remote: Counting objects: 20248, done.
remote: Compressing objects: 100% (10204/10204), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining 
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

After a lot of trial and error, and many “remote end hung up unexpectedly” failures, I have a way that works for me. The idea is to do a shallow clone first and then fetch the rest of the history.

$ git clone http://github.com/large-repository --depth 1
$ cd large-repository
$ git fetch --unshallow
Choking answered 24/5, 2017 at 7:29 Comment(14)
This is the only answer that describes a workaround for the problem without switching to SSH. This worked for me, thanks!Attainder
Thanks, this answer worked for me too. I also switched to a network with a faster internet connection.Crumb
The key here is --depth 1 and --unshallow. This also works for fetching an existing repo on slow connection: git fetch --depth 1 then git fetch --unshallow.Acetum
For clarity, @AndrewT., does git fetch --unshallow deal with loss of connection more forgivingly than git clone? And is that what makes the difference here?Insolvent
Now the git fetch --unshallow command gives the RPC failed error.Concealment
Why does it fail after reaching 100% cloned? Does anyone know the reason? Thanks.Bracci
Didn't work for me. Failed on the git fetch --unshallow. Guess my repo is too big even for this approach. Only SSH worked.Carnal
If git fetch --unshallow still reports errors, you can use git fetch --depth=100, then git fetch --depth=200, then git fetch --depth=300, and so on to fetch the repo incrementally. This works for the Linux kernel repo, which is extremely large.Kronfeld
Does git fetch --unshallow fetch all branches (as part of the history)? I am still not seeing a few branches. Am I missing something?Aeniah
As others mentioned, I still can't get past git clone, and my repo is just a Jekyll site with some images. This answer is outdated; going with SSH and secure connections is the only real solution today.Heap
For me, just using git config --global http.postBuffer 524288000 as described at https://mcmap.net/q/12281/-error-rpc-failed-curl-transfer-closed-with-outstanding-read-data-remaining did the trick.Calculating
I tried this, but then all the files from the repo (in my case, SolidWorks files) are only 1KB large and cannot be opened. Git still says that they are up-to-date. Anyone else have this issue?Apprehensive
@Kronfeld's solution of git fetch --depth=100 and git fetch --depth=200 worked for me. These are the commands I ran: git clone http://github.com/large-repository --depth 1, then cd large-repository, then git fetch --depth=100.Nanceenancey
@ManikandanS this works for me: stackoverflow.com/questions/78267333/…Roubaix

First, you need to turn off compression:

git config --global core.compression 0

Then do a shallow clone:

git clone --depth=1 <url>

Then, most importantly, cd into your cloned project:

cd <shallow cloned project dir>

Now deepen the clone, step by step:

git fetch --depth=N, with increasing N

e.g.

git fetch --depth=4

then,

git fetch --depth=100

then,

git fetch --depth=500

You can choose how many steps you want by varying N,

and finally download all of the remaining revisions with:

git fetch --unshallow 
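
If you want to automate the deepening, a loop along these lines should work (a sketch: it assumes Git 2.15+ for --is-shallow-repository, and the step size of 100 is arbitrary):

# Deepen the shallow clone in steps until the history is complete.
depth=100
while [ "$(git rev-parse --is-shallow-repository)" = "true" ]; do
  git fetch --depth=$depth
  depth=$((depth + 100))
done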

upvote if it helps you :)

Subside answered 29/5, 2020 at 5:22 Comment(3)
This is the only option that worked for me. In my case the error was happening on git clone --depth=1 <url>. However, as per your instructions, I first executed git config --global core.compression 0, then all the following steps, and everything worked great! PS: I have a good internet connection; it's just behaving weirdly today. Thank you!Hippy
Can you detail what disabling compression accomplishes?Procter
@Procter What we are doing here is disabling the default behavior of compressing the full object set before fetching. Instead we fetch without compression, which lets us fetch step by step by specifying the depth.Subside

When I tried cloning from the remote, I got the same issue repeatedly:

remote: Counting objects: 182, done.
remote: Compressing objects: 100% (149/149), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

Finally this worked for me:

git clone https://[email protected]/repositoryName.git --depth 1
Subcortex answered 22/8, 2017 at 9:29 Comment(3)
What does --depth 1 do?Klingensmith
If the source repository is complete, convert a shallow repository to a complete one, removing all the limitations imposed by shallow repositories. If the source repository is shallow, fetch as much as possible so that the current repository has the same history as the source repository.Authority
But I don't want to clone, I want to push. How can I do that with --depth?Eyeglasses

Simple solution: rather than cloning via HTTPS, clone via SSH.

For example:

git clone https://github.com/vaibhavjain2/xxx.git - Avoid
git clone [email protected]:vaibhavjain2/xxx.git - Correct
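
If you already have a clone, you can also switch its remote from HTTPS to SSH in place instead of re-cloning (a sketch, reusing the example repository above):

git remote set-url origin [email protected]:vaibhavjain2/xxx.git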
Staffordshire answered 3/4, 2018 at 4:40 Comment(1)
Yes. I am a Windows user.Staffordshire

Network connection problems, possibly due to a persistent-connection timeout.
The best option is to switch to another network.

Hawks answered 27/8, 2018 at 5:31 Comment(1)
Changed the Wi-Fi for a faster internet connection and then it worked; thanks for saving my time.Lashawna

Usually this happens for one of the reasons below:

  1. Slow internet.
  • Switching to a LAN cable with a stable network connection helps in many cases. Avoid running any other network-intensive task while you are fetching.
  2. A small TCP/IP connection timeout on the server side you are fetching from.
  • There is not much you can do about this. All you can do is ask your system admin or the responsible CI/CD team to increase the TCP/IP timeout, and wait.
  3. Heavy load on the server.
  • Under heavy server load during working hours, downloading a large file can fail repeatedly. Start the download and leave your machine on overnight.
  4. A small HTTPS buffer on the client machine.
  • Increasing the buffer size for posts and requests might help, but it is not guaranteed:

git config --global http.postBuffer 524288000

git config --global http.maxRequestBuffer 524288000

git config --global core.compression 0

Aleras answered 5/2, 2021 at 9:27 Comment(0)

As mentioned above, first run your Git command from Bash with the verbose logging variables prepended: GIT_TRACE=1 GIT_CURL_VERBOSE=1 git ...

e.g. GIT_CURL_VERBOSE=1 GIT_TRACE=1 git -c diff.mnemonicprefix=false -c core.quotepath=false fetch origin. This will show you detailed error information.

Iluminadailwain answered 29/11, 2016 at 11:0 Comment(0)

This worked for me: using git:// instead of https://.
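
For example (the repository path is a placeholder; note that some hosts, including GitHub, have since disabled the unauthenticated git:// protocol):

git clone git://github.com/<user>/<repo>.git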

Nucleotidase answered 5/12, 2018 at 9:31 Comment(1)
Actually, this answer is more specific than the next ones in this thread.Kktp

For me this problem occurred because of the proxy configuration: I had to add the Git server's IP to the proxy exceptions. The Git server was local, but the no_proxy environment variable was not set correctly.

I used these commands to identify the problem:

#Linux:
export GIT_TRACE_PACKET=1
export GIT_TRACE=1
export GIT_CURL_VERBOSE=1

#Windows
set GIT_TRACE_PACKET=1
set GIT_TRACE=1
set GIT_CURL_VERBOSE=1

The output included a "Proxy-Authorization" header, even though the Git server was local and should not have gone through the proxy. The real problem, though, was a file-size limit defined by the proxy rules.
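
A sketch of the corresponding fix, with a placeholder hostname for the local Git server:

#Linux:
export no_proxy=git.internal.example.com

#Windows
set no_proxy=git.internal.example.com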

Babette answered 31/8, 2016 at 17:44 Comment(0)

For me, the issue was that the connection closed before the whole clone completed. I used an Ethernet connection instead of Wi-Fi, and that solved it.

Denounce answered 18/11, 2019 at 6:57 Comment(0)

This error seems to happen more commonly with a slow or troubled internet connection. Once I connected with a good internet speed, it worked perfectly.

Pavior answered 30/4, 2020 at 8:48 Comment(0)

For me, the error seemed to come from Git's memory requirements. I added these lines to my global Git configuration file, .gitconfig, which lives in $USER_HOME, i.e. C:\Users\<USER_NAME>\.gitconfig:

[core]
    packedGitLimit = 512m
    packedGitWindowSize = 512m
[pack]
    deltaCacheSize = 2047m
    packSizeLimit = 2047m
    windowMemory = 2047m
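
Equivalently, if you prefer not to edit the file by hand, the same settings can be applied from the command line (these are standard Git configuration keys):

git config --global core.packedGitLimit 512m
git config --global core.packedGitWindowSize 512m
git config --global pack.deltaCacheSize 2047m
git config --global pack.packSizeLimit 2047m
git config --global pack.windowMemory 2047m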
Coates answered 19/1, 2021 at 15:42 Comment(0)

This problem arises when you have a proxy issue or a slow network. You can go with the --depth solution, or:

git fetch --all or git clone

If this gives a curl 56 Recv failure error, download the files as a zip, or specify the name of a branch instead of --all:

git fetch origin BranchName
Photoconductivity answered 29/7, 2020 at 10:45 Comment(1)
Using git fetch origin BranchName I was able to continue an interrupted git clone. Thank you.Henghold

I tried all of the answers on here while trying to add CocoaPods to my machine.

I didn't have an SSH key, so thanks @Do Nhu Vy:

https://mcmap.net/q/12281/-error-rpc-failed-curl-transfer-closed-with-outstanding-read-data-remaining

and finally used

git clone https://git.coding.net/CocoaPods/Specs.git ~/.cocoapods/repos/master

to fix the issue, as found at https://mcmap.net/q/12550/-cocoapods-error-rpc-failed-curl-18-transfer-closed-with-outstanding-read-data-remaining.

Postmortem answered 22/7, 2019 at 21:25 Comment(0)

I faced this problem too and resolved it. The problem was a slow internet connection; check your internet connection before anything else. Once I connected with a good internet speed, it worked perfectly. I hope this helps you.

Indulgent answered 29/9, 2021 at 17:18 Comment(0)

This problem usually occurs while cloning large repos. If git clone http://github.com/large-repository --depth 1 does not work in the Windows cmd shell, try running the command in Windows PowerShell.

Argue answered 25/9, 2020 at 13:18 Comment(0)

There can be two reasons:

  1. The internet is slow (this was the case for me).
  2. The buffer size is too small; in this case, run git config --global http.postBuffer 524288000
Deepset answered 18/12, 2020 at 10:24 Comment(0)

My problem is 100% solved. I was facing this error because my project manager had changed the repo name, but I was still using the old one.

Engineer@-Engi64 /g/xampp/htdocs/hospitality
$ git clone https://git-codecommit.us-east-2.amazonaws.com/v1/repo/cms
Cloning into 'cms'...
remote: Counting objects: 10647, done.
error: RPC failed; curl 56 OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 10054
fatal: the remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

How I solved it: the repo link was not valid, which is why I was getting this error. Check your repo link before cloning.

Experiential answered 17/2, 2021 at 16:44 Comment(0)

I got the same issue while pushing some code to GitHub.

I tried git config --global http.postBuffer 524288000, but it didn't work for me.

Reason

It happens when your commit history and/or some file(s) are too large.

My Case

In my case, package-lock.json was causing the problem. It was 1,500+ KB in size, with 33K lines of code.
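
If you suspect a similar culprit, one quick way to spot unusually large tracked files (a sketch using standard Unix tools; on Windows, run it from Git Bash):

git ls-files -z | xargs -0 du -k | sort -n | tail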

How I solved it?

  1. I committed and pushed everything except package-lock.json.
  2. Copied the content of package-lock.json.
  3. Created a new file named package-lock.json from the GitHub repo page.
  4. Pasted the content of package-lock.json and committed it.
  5. Ran git pull locally.

And done.

Tips

  • Keep each commit small
  • Push frequently
  • Use a good internet connection

I hope it helped you.

Autoicous answered 13/8, 2021 at 17:24 Comment(0)
git config --global core.compression 0

then

git clone --depth=1 <https://your_repo.git>

then

git fetch --depth=2

then

git fetch --depth=10

... and so on, until it prints

remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0

At the end you can run

git fetch --unshallow

and you will get

fatal: --unshallow on a complete repository does not make sense

If at some stage you get the error again, try setting --depth to a smaller value and increasing it gradually from there.

Gerlac answered 1/6, 2022 at 16:38 Comment(1)
The same method is described in NikhilP's answer. Please do not post duplicate answers.Recommend

I had this error when doing git push after changing to HTTP/1.1.

Solution: turn off my VPN and re-run git push.
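
For reference, a sketch of the HTTP-version switch mentioned above (http.version is a standard Git configuration key, available since Git 2.18):

git config --global http.version HTTP/1.1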

Braeunig answered 1/11, 2022 at 13:18 Comment(0)

Try changing the git clone protocol.

For example, this error happened with "git clone https://xxxxxxxxxxxxxxx".

You can try "git clone git://xxxxxxxxxxxxxx" instead; it may work then.

Frill answered 14/2, 2018 at 7:6 Comment(0)

I was able to clone the repo with GitHub Desktop

Collinsworth answered 9/8, 2022 at 12:17 Comment(1)
This does not provide an answer to the question. Once you have sufficient reputation you will be able to comment on any post; instead, provide answers that don't require clarification from the asker. - From ReviewPresumption

git config

[core]
    autocrlf = input
    compression = 0
[remote "origin"]
    proxy = 127.0.0.1:1086
[http]
    version = HTTP/1.1
    postBuffer = 524288000

retry.sh

#!/bin/sh
set -x
# Keep retrying the clone until it exits successfully.
while true
do
  git clone xxxxx
  if [ $? -eq 0 ]; then
    break
  fi
done
Sharlenesharline answered 25/2, 2023 at 8:47 Comment(1)
Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center.Plasticizer

These steps worked for me:

cd [dir]
git init
git clone [your Repository Url]

I hope that works for you too.

Polysaccharide answered 31/5, 2018 at 3:33 Comment(0)

Try this:

$ git config --global user.name "John Doe"
$ git config --global user.email [email protected]

https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup

This worked for me.

Porpoise answered 10/10, 2017 at 7:10 Comment(1)
The bug is sporadic due to an unreliable network. The solution presented here didn't actually fix the problem. The network just happened to be more reliable at the moment you tried cloning again.Jambalaya
