Big size of ServicePoint object after several hours sending HTTP request in parallel

We are using HttpClient to send requests to remote Web API in parallel:

public async Task<HttpResponseMessage> PostAsync(HttpRequestInfo httpRequestInfo)
{
    using (var httpClient = new HttpClient())
    {
        httpClient.BaseAddress = new Uri(httpRequestInfo.BaseUrl);
        if (httpRequestInfo.RequestHeaders.Any())
        {
            foreach (var requestHeader in httpRequestInfo.RequestHeaders)
            {
                httpClient.DefaultRequestHeaders.Add(requestHeader.Key, requestHeader.Value);
            }
        }

        return await httpClient.PostAsync(httpRequestInfo.RequestUrl, httpRequestInfo.RequestBody);
    }
}

This API can be called by several threads concurrently. After running for about four hours, we found what looked like a memory leak: the profiling tool showed two ServicePoint objects, one of which was quite big, about 160 MB.

From my knowledge, I can see some problems in the code above:

  • We should share HttpClient instances as much as we can. In our case, the request address and headers vary a lot, so is this something we can improve, or does it not hurt performance much? One idea is to keep a dictionary to store and look up HttpClient instances.
  • We didn't modify the DefaultConnectionLimit of ServicePoint, so by default it can only send two concurrent requests to the same server. If we raise this limit, would that solve the memory leak?
  • We also suppressed HTTPS certificate validation: ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };. Does this have anything to do with the problem?
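For what it's worth, the usual way to share one HttpClient across requests with varying addresses and headers is to set the headers per HttpRequestMessage instead of mutating DefaultRequestHeaders. A minimal sketch (the class and method names here are hypothetical, not from the question):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class SharedHttpClientExample
{
    // A single HttpClient for the lifetime of the application. HttpClient is
    // thread-safe for concurrent requests and reuses pooled connections.
    private static readonly HttpClient Client = new HttpClient();

    // Build the request with per-request headers on the message itself,
    // rather than on the shared client's DefaultRequestHeaders.
    public static HttpRequestMessage BuildRequest(
        Uri requestUri, HttpContent body, IDictionary<string, string> headers)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, requestUri) { Content = body };
        foreach (var header in headers)
        {
            request.Headers.TryAddWithoutValidation(header.Key, header.Value);
        }

        return request;
    }

    public static Task<HttpResponseMessage> PostAsync(
        Uri requestUri, HttpContent body, IDictionary<string, string> headers)
    {
        return Client.SendAsync(BuildRequest(requestUri, body, headers));
    }
}
```

With this shape, one client instance can serve every destination, so the per-request dictionary of clients becomes unnecessary.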

Because this issue is not easily reproduced (it takes hours to manifest), I just need some thoughts so that I can optimize our code for long-running scenarios.

Emphasize answered 17/3, 2015 at 10:18 Comment(4)
What kind of memory profiler are you using, and have you forced a heap cleanup on all generations before checking which objects remain alive (msdn.microsoft.com/en-us/library/ee787088%28v=vs.110%29.aspx)? — Ratline
@Ratline I use dotMemory, and of course I force a GC collection and wait long enough. Anyway, I have figured out the problem: it's related to network performance, not a memory leak. — Emphasize
Based on your answer, the next time this happens you should cut off any new connections and then invoke garbage collection on all generations after some period of time. That should let the old connections die out, and you should see whether a true memory leak is present. It sounds like you had more requests queued than you were able to process at once; effectively, your application nearly DoS'd itself, as it may have eventually run out of memory. — Ratline
What you can try next time (this is very helpful, by the way) is ask the customer to produce a dump of the application from Task Manager, open it in Visual Studio alongside the version of the code you sent to the customer, and debug into your application. View the active threads of your application. You will likely see a lot of threads waiting to post. — Ratline

Let me explain the situation myself, in case others meet this issue later.

First, this is not a memory leak; it's a performance problem.

We tested our application in a virtual machine that routes traffic through a proxy, which makes the internet connection quite slow. In our case, each HTTP request could take 3-4 seconds. Over time, a lot of connections piled up in the ServicePoint queue. So it's not a memory leak; the previous connections simply weren't finishing quickly enough.

Once we made sure each HTTP request was not that slow, everything returned to normal.
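To confirm this kind of connection build-up at runtime, you can inspect the ServicePoint for the target endpoint directly. A minimal diagnostic sketch (this assumes .NET Framework, where HttpClient goes through ServicePointManager; the helper name is hypothetical):

```csharp
using System;
using System.Net;

public static class ServicePointDiagnostics
{
    public static void Dump(Uri uri)
    {
        // FindServicePoint returns the ServicePoint that manages connections
        // to this endpoint - the same object our HttpClient traffic uses.
        ServicePoint servicePoint = ServicePointManager.FindServicePoint(uri);

        // If CurrentConnections keeps growing toward ConnectionLimit and
        // stays there, requests are queueing faster than they complete.
        Console.WriteLine("ConnectionLimit:    " + servicePoint.ConnectionLimit);
        Console.WriteLine("CurrentConnections: " + servicePoint.CurrentConnections);
    }
}
```

Logging these two numbers periodically would have shown the queue growing long before the profiler flagged a 160 MB ServicePoint.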

We also reduced the number of HttpClient instances to improve HTTP request performance:

private readonly ConcurrentDictionary<HttpRequestInfo, HttpClient> _httpClients =
    new ConcurrentDictionary<HttpRequestInfo, HttpClient>();

private HttpClient GetHttpClient(HttpRequestInfo httpRequestInfo)
{
    // GetOrAdd is atomic, so concurrent callers with the same key always get
    // the same HttpClient. A separate ContainsKey/TryGetValue/TryAdd sequence
    // would race: two threads could both miss the lookup and one TryAdd would
    // then fail. (Under contention the factory may run more than once, but
    // only one client is kept.)
    return _httpClients.GetOrAdd(httpRequestInfo, CreateHttpClient);
}

private static HttpClient CreateHttpClient(HttpRequestInfo httpRequestInfo)
{
    var httpClient = new HttpClient { BaseAddress = new Uri(httpRequestInfo.BaseUrl) };
    foreach (var requestHeader in httpRequestInfo.RequestHeaders)
    {
        httpClient.DefaultRequestHeaders.Add(requestHeader.Key, requestHeader.Value);
    }

    httpClient.DefaultRequestHeaders.ExpectContinue = false;
    httpClient.DefaultRequestHeaders.ConnectionClose = true;
    httpClient.Timeout = TimeSpan.FromMinutes(2);

    return httpClient;
}

In our case, only the request body differs for the same server address. We also override the Equals and GetHashCode methods of HttpRequestInfo so it can serve as the dictionary key.
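The question never shows HttpRequestInfo's fields, so this is only a sketch of what such an override might look like, assuming the type is keyed by base URL and headers (and deliberately not by the request body, which varies per call):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public sealed class HttpRequestInfo
{
    public string BaseUrl { get; set; }
    public Dictionary<string, string> RequestHeaders { get; set; }
        = new Dictionary<string, string>();

    // Two requests map to the same HttpClient when they target the same base
    // URL with the same headers; the request body is intentionally excluded.
    public override bool Equals(object obj)
    {
        var other = obj as HttpRequestInfo;
        if (other == null) return false;
        if (BaseUrl != other.BaseUrl) return false;
        if (RequestHeaders.Count != other.RequestHeaders.Count) return false;
        return RequestHeaders.All(kv =>
        {
            string value;
            return other.RequestHeaders.TryGetValue(kv.Key, out value)
                && value == kv.Value;
        });
    }

    public override int GetHashCode()
    {
        // Hash only the base URL: equal objects then always hash equally,
        // and header differences are resolved inside Equals.
        return BaseUrl == null ? 0 : BaseUrl.GetHashCode();
    }
}
```

Keeping GetHashCode coarse but consistent with Equals is the safe choice here; a hash that disagreed with Equals would silently break the ConcurrentDictionary lookups.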

Meanwhile, we set ServicePointManager.DefaultConnectionLimit = int.MaxValue;

Hope this helps.

Emphasize answered 19/3, 2015 at 2:4 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.