How to prevent packet fragmentation for an HttpWebRequest
I am having a problem using HttpWebRequest against an HTTP daemon on an embedded device. The problem appears to be that there is enough of a delay between the HTTP headers being written to the socket stream and the HTTP payload (a POST) that the socket releases what's in its buffer to the server. This results in the HTTP request being split over two packets (fragmentation).

This is perfectly valid, of course; however, the server at the other end doesn't cope with it if the packets are split by more than about 1.8ms. So I am wondering if there are any realistic ways to control this on the client.

There do not appear to be any properties on HttpWebRequest that give this level of control over the socket used for the send, and one doesn't appear to be able to access the socket itself (e.g. via reflection) because it is only created during the send and released afterwards (as part of the outbound HTTP connection pooling). The AllowWriteStreamBuffering property just buffers the body content within the web request (so it's still available for redirects etc.), and doesn't appear to affect the way the overall request is written to the socket.

So what to do?

(I'm really trying to avoid having to rewrite the HTTP client from the socket up.)

One option might be to write some kind of proxy that the HttpWebRequest sends through (maybe via the ServicePoint), and in that implementation buffer the entire TCP request. But that seems like a lot of hard work.

It also works fine when I'm running Fiddler (for the same reason), but that's not really an option in our production environment...

[P.S. I know it's definitely the interval between the fragmented packets that's the problem, because I knocked up a socket-level test where I explicitly controlled the fragmentation using a socket with NoDelay set.]
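For reference, a socket-level test along those lines might look like the following sketch. The device address, request path, and the 5ms gap are placeholders; with NoDelay set, each Send() goes out immediately rather than being coalesced, so the sleep between the two sends controls the inter-packet gap:

```csharp
using System;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class FragmentationRepro
{
    static void Main()
    {
        using (var socket = new Socket(AddressFamily.InterNetwork,
                                       SocketType.Stream, ProtocolType.Tcp))
        {
            socket.NoDelay = true; // disable Nagle so the two sends aren't coalesced
            socket.Connect("192.168.0.10", 80); // placeholder device address

            var headers = Encoding.ASCII.GetBytes(
                "POST /endpoint HTTP/1.0\r\nContent-Length: 5\r\n\r\n");
            var body = Encoding.ASCII.GetBytes("hello");

            socket.Send(headers);
            Thread.Sleep(5);      // vary this gap; anything over ~1.8ms triggers the failure
            socket.Send(body);
        }
    }
}
```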

Ledbetter answered 5/2, 2010 at 7:11 Comment(2)
You did good work understanding the problem. The only thing you've forgotten about is the server. Its behavior is abnormal: per the RFC, it must accept all packets that arrive within the timeout interval (which is about 20-100 seconds). Is there any possibility of fixing the server?Lacework
I've asked the device vendors about this, but being an embedded device I suspect this may get complicated, which is why I was trying to find a client-side fix.Ledbetter

In the end the vendor pushed out a firmware upgrade that included a new version of HTTPD, and the problem went away. They were using BusyBox Linux, and apparently there was some other problem with the HTTPD implementation that they had suffered from.

In terms of my original question, I don't think there is any reliable way of doing it, apart from writing a socket proxy. Some of the workarounds I played with above worked by luck, not design (because they happened to make .NET send the whole request in one go).

Ledbetter answered 24/3, 2011 at 9:5 Comment(0)

Is your embedded server an HTTP/1.1 server? If so, try setting Expect100Continue=false on the web request before you call GetRequestStream(). This will ensure that the HTTP stack does not wait for the "HTTP/1.1 100 Continue" response from the server before sending the entity body. So, even though the packets will still be split between the header and body, the inter-packet gap will be shorter.
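A minimal sketch of that suggestion (the URI is a placeholder; as noted in the comments below, the property actually lives on the ServicePoint, or globally on ServicePointManager):

```csharp
using System;
using System.IO;
using System.Net;

class NoExpectContinue
{
    static void Main()
    {
        var uri = new Uri("http://192.168.0.10/endpoint"); // placeholder

        // Disable the 100-continue handshake for this endpoint so the body
        // follows the headers without waiting for "HTTP/1.1 100 Continue".
        ServicePointManager.FindServicePoint(uri).Expect100Continue = false;
        // (or globally, before any requests: ServicePointManager.Expect100Continue = false;)

        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        using (var stream = request.GetRequestStream())
        {
            var payload = System.Text.Encoding.ASCII.GetBytes("hello");
            stream.Write(payload, 0, payload.Length);
        }
    }
}
```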

Miramontes answered 5/2, 2010 at 14:21 Comment(2)
By posting my request as HTTP 1.0 I was already disabling that (I assume you mean just null out the 'Expect' header, because there's no explicit Expect100Continue property on the request).Ledbetter
Sorry, I meant set ServicePoint.Expect100Continue=false. In any case I see that you have a solution. However, I agree with the previous commenters that your server is not behaving correctly. It should not require the Request and Entity to arrive in the same packet. If you can post a network capture (Wireshark) of the request/response (before your changes) we can see what exactly is happening, and that might help figure out what the solution should be.Miramontes

What has seemed to have fixed it is disabling Nagling on the ServicePoint associated with that URI, and sending the request as HTTP 1.0 (neither on their own seem to fix it):

var servicePoint = ServicePointManager.FindServicePoint(uri.Uri);
servicePoint.UseNagleAlgorithm = false;
// plus, on the HttpWebRequest itself:
request.ProtocolVersion = HttpVersion.Version10;

However this still seems to have fixed it only by making the request go out faster, rather than forcing the headers and payload to be written as one packet. So it could presumably fail on a loaded machine / high latency link etc.

Wonder how hard it would be to write a defragmenting proxy...
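A minimal sketch of that defragmenting-proxy idea: accept a local connection, buffer the entire HTTP request (headers plus Content-Length body), then forward it to the device in a single Send(). The listen port and device address are placeholders, and this ignores keep-alive, chunked encoding, and error handling entirely:

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;

class DefragProxy
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 8888); // placeholder port
        listener.Start();
        while (true)
        {
            using (var client = listener.AcceptTcpClient())
            using (var clientStream = client.GetStream())
            {
                byte[] request = ReadFullRequest(clientStream);

                using (var upstream = new TcpClient("192.168.0.10", 80)) // placeholder device
                {
                    upstream.NoDelay = true; // one Write goes out immediately, un-coalesced
                    var up = upstream.GetStream();
                    up.Write(request, 0, request.Length);
                    up.CopyTo(clientStream); // relay the response back to the caller
                }
            }
        }
    }

    // Buffer a complete request: read until the blank line ending the
    // headers, then read the number of body bytes Content-Length declares.
    static byte[] ReadFullRequest(Stream s)
    {
        var buffer = new MemoryStream();
        var chunk = new byte[4096];
        int headerEnd = -1;
        while (headerEnd < 0)
        {
            int n = s.Read(chunk, 0, chunk.Length);
            if (n <= 0) break;
            buffer.Write(chunk, 0, n);
            headerEnd = Encoding.ASCII.GetString(buffer.ToArray())
                                .IndexOf("\r\n\r\n", StringComparison.Ordinal);
        }

        string headers = Encoding.ASCII.GetString(buffer.ToArray(), 0, headerEnd + 4);
        int contentLength = 0;
        foreach (var line in headers.Split(new[] { "\r\n" }, StringSplitOptions.None))
            if (line.StartsWith("Content-Length:", StringComparison.OrdinalIgnoreCase))
                contentLength = int.Parse(line.Substring(15).Trim());

        while (buffer.Length < headerEnd + 4 + contentLength)
        {
            int n = s.Read(chunk, 0, chunk.Length);
            if (n <= 0) break;
            buffer.Write(chunk, 0, n);
        }
        return buffer.ToArray();
    }
}
```

Pointing the HttpWebRequest at 127.0.0.1:8888 (e.g. via a WebProxy) would then let the proxy absorb the fragmentation before the device ever sees the request.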

Ledbetter answered 8/2, 2010 at 0:25 Comment(1)
Have you also disabled Expect100Continue? Although possibly not relevant to the embedded server, I've found that these two properties are subject to some undocumented rules; e.g. for me, Expect100Continue = true disables the Nagle algorithm in some circumstances, despite UseNagleAlgorithm being set to true. Also look at SupportsPipelining. I suspect that switching this off will close the connection right away instead of leaving it open, which might not be a good idea on an embedded server.Audiphone

Just looking at the client-side packet-splitting problem: I posted an answer to my own question, which is linked to this one. I saw the answer here:

http://us.generation-nt.com/answer/too-packets-httpwebrequest-help-23298102.html

Stere answered 13/11, 2012 at 16:50 Comment(0)
