We are using HttpClient to send requests to a remote Web API in parallel:
public async Task<HttpResponseMessage> PostAsync(HttpRequestInfo httpRequestInfo)
{
    using (var httpClient = new HttpClient())
    {
        httpClient.BaseAddress = new Uri(httpRequestInfo.BaseUrl);
        if (httpRequestInfo.RequestHeaders.Any())
        {
            foreach (var requestHeader in httpRequestInfo.RequestHeaders)
            {
                httpClient.DefaultRequestHeaders.Add(requestHeader.Key, requestHeader.Value);
            }
        }
        return await httpClient.PostAsync(httpRequestInfo.RequestUrl, httpRequestInfo.RequestBody);
    }
}
This method can be called by several threads concurrently. After running for about four hours we found a memory leak: the profiling tool shows two ServicePoint objects, one of which is quite big, about 160 MB.
From what I know, I can see some problems with the code above:
- We should share the HttpClient instance as much as we can. In our case the request addresses and headers may vary a lot, so is this a point worth addressing, or does it not hurt performance too much? My idea is to keep a dictionary to store and look up HttpClient instances, as in the sketch after this list.
- We didn't change the DefaultConnectionLimit of ServicePoint, so by default it can only send two concurrent requests to the same server. If we raise this value, could that solve the memory leak? (See the snippet after this list.)
- We also suppressed HTTPS certificate validation:
  ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };
  Does this have something to do with the problem?
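Here is a rough sketch of the dictionary idea from the first bullet. HttpClientCache is just a name I made up, and I moved the per-request headers onto an HttpRequestMessage because I assume mutating DefaultRequestHeaders on a shared client is not thread safe:

using System;
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;

public static class HttpClientCache
{
    // One HttpClient per base address, created once and reused so the
    // underlying connections can be pooled instead of re-created per call.
    private static readonly ConcurrentDictionary<string, HttpClient> Clients =
        new ConcurrentDictionary<string, HttpClient>();

    public static HttpClient Get(string baseUrl)
    {
        return Clients.GetOrAdd(baseUrl, url => new HttpClient { BaseAddress = new Uri(url) });
    }
}

public async Task<HttpResponseMessage> PostAsync(HttpRequestInfo httpRequestInfo)
{
    var httpClient = HttpClientCache.Get(httpRequestInfo.BaseUrl);

    // Headers go on the individual request rather than on DefaultRequestHeaders,
    // because the client instance is now shared between threads.
    var request = new HttpRequestMessage(HttpMethod.Post, httpRequestInfo.RequestUrl)
    {
        Content = httpRequestInfo.RequestBody
    };
    foreach (var requestHeader in httpRequestInfo.RequestHeaders)
    {
        request.Headers.TryAddWithoutValidation(requestHeader.Key, requestHeader.Value);
    }

    return await httpClient.SendAsync(request);
}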
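For the second bullet, this is roughly what I would try at application startup, before any requests are sent (ServicePointManager is in System.Net; the value 20 is just a placeholder, I have not measured what limit we actually need):

// Raise the global default used by every ServicePoint created afterwards.
ServicePointManager.DefaultConnectionLimit = 20;

// Or raise it for one endpoint only ("https://example.com/" stands in for our real base URL).
var servicePoint = ServicePointManager.FindServicePoint(new Uri("https://example.com/"));
servicePoint.ConnectionLimit = 20;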
Since this issue is not easy to reproduce (it takes a lot of time), I just need some thoughts so that I can optimize our code for long-running operation.