Who's setting TCP window size down to 0, Indy or Windows?

We have an application server that has been observed sending TCP headers with a window size of 0 at times when the network was congested (at a client's site).

We would like to know whether it is Indy or the underlying Windows layer that is responsible for adjusting the TCP window size down from the nominal 64K in adaptation to the available throughput.
We would also like to be able to act upon it becoming 0 (nothing gets sent, users wait => no good).

So, any info, links, or pointers to Indy code are welcome...

Disclaimer: I'm not a network specialist. Please keep the answer understandable for the average me ;-)
Note: it's Indy9/D2007 on Windows Server 2003 SP2.

More gory details:
The TCP zero window cases happen on the middle tier talking to the DB server.
It happens at the same moments when end users complain of slowdowns in the client application (that's what triggered the network investigation).
Two major network issues causing bottlenecks have been identified.
The TCP zero window happened when there was network congestion, but may or may not be caused by it.
We want to know when that happens and have a way to do something (logging, at least) in our code.

So the core question is who sets the window size to 0 and where?
Where to hook (in Indy?) to know when that condition occurs?

Shantel answered 8/6, 2010 at 20:27 Comment(0)

The window size in the TCP header is normally set by the TCP stack software to reflect the size of the buffer space available. If your server is sending packets with a window set to zero, it is probably because the client is sending data faster than the application running on the server is reading it, and the buffers associated with the TCP connection are now full.

This is perfectly normal operation for the TCP protocol if the client sends data faster than the server can read it. The client should refrain from sending data until the server sends a non-zero window size (there's no point, as it would be discarded anyway).

This may or may not reflect a serious problem between client and server, but if the condition persists it probably means the application running on the server has stopped reading the received data (once it starts reading, this frees up buffer space for TCP, and the TCP stack will send a new non-zero window size).
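
To make the mechanism concrete, here is a minimal Indy 9 / Delphi sketch (the class name, ProcessRequest, LogSlowRead and the 500 ms threshold are placeholders of mine, not Indy API): while the handler is busy away from the socket, the stack keeps buffering on its behalf, and once that buffer is full it advertises a zero window. Timing that gap is about the closest thing to a hook for the "log something when it happens" requirement, since the advertised window itself is not exposed through the normal socket API.

    // Minimal sketch only: TMyModule, ProcessRequest, LogSlowRead and the
    // 500 ms threshold are hypothetical. TIdPeerThread, Connection and
    // ReadLn are Indy 9 (uses IdTCPServer; Windows for GetTickCount,
    // SysUtils for Format). Assumes a line-oriented protocol.
    procedure TMyModule.IdTCPServer1Execute(AThread: TIdPeerThread);
    var
      Request: string;
      Started, Elapsed: Cardinal;
    begin
      // ReadLn drains this connection's receive buffer.
      Request := AThread.Connection.ReadLn;

      // While this runs (DB query, forwarding to a slow client, ...)
      // nothing reads the socket; if the peer keeps sending, the Windows
      // receive buffer fills up and the stack advertises a zero window.
      Started := GetTickCount;
      ProcessRequest(Request);
      Elapsed := GetTickCount - Started;

      // Crude hook for the "at least log something" requirement:
      // record whenever the handler stays away from the socket too long.
      if Elapsed > 500 then
        LogSlowRead(Format('handler away from socket for %d ms', [Elapsed]));
    end;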

Lurlene answered 8/6, 2010 at 20:55 Comment(6)
The core question is who sets the window size to 0 and where? Updating the question...Shantel
The TCP stack (in this case, part of Windows Server) sets the window size to zero, but it does this because the application running on the server (Indy?) isn't reading the data.Lurlene
So it could be the case that if the middle tier cannot deliver to the client app due to network clogging, it stops reading the data coming from the DB server, which then causes the OS to set the TCP window size to 0... and everybody waits till things get better. Right?Shantel
@Stephen C. Steel: Indy is the communication library (TCP, IP, HTTP...) coming with Delphi.Shantel
@Francois Since it is the server sending TCP packets with a zero window size, it is the server application which is failing to read as fast as the client is sending (not the other way round). As to why the server isn't reading, I can't help you there - that depends on details about your server application, and I'm not familiar with it.Lurlene
@Stephen C. Steel Thanks for all the explanations.Shantel

A TCP header with a window size of zero indicates that the receiver's buffers are full. This is a normal condition when the sender writes faster than the receiver reads.
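
To put illustrative numbers on it (assuming the nominal 64K receive buffer mentioned in the question): if the sender has pushed 64 KB that the receiving application has not yet read, the advertised window is 64 KB - 64 KB = 0 and the sender must pause; as soon as the application reads, say, 16 KB out of the buffer, the stack can advertise a 16 KB window again and transfer resumes.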

In reading your description, it's not clear if this is unexpected. What caused you to open a protocol analyzer?

Trusteeship answered 8/6, 2010 at 20:33 Comment(1)
A few here or there would not seem a problem, but when it happened, there were like 40 in the same second. The investigation was triggered when end users complained of noticeable slowdowns in the client app. Two major issues were identified at the network level that would cause bottlenecks. But the TCP zero window may or may not be caused by these conditions, and anyway we would like to be aware when it happens and be able to at least log some info.Shantel

Since you might be interested in a solution to your problem, too:

If you have some control over what's running on the server side (the one that sends the 0 window size messages): did you consider using setsockopt() with SO_RCVBUF to significantly increase the size of your socket's receive buffer?

In Indy, setsockopt() is a method of TIdSocketHandle. You should apply it to all the TIdSocketHandle objects associated with your socket. In Indy 9, those are reached through the Bindings property of your TIdTCPServer.

I suggest first using getsockopt() with SO_RCVBUF to see what the OS gives you as a default buffer size. Then significantly increase this, maybe by successive trials, doubling the size every time. You might also want to re-run a getsockopt() call after your setsockopt() to ensure that your setsockopt() was actually performed: there is usually an upper limit that the socket implementation sets on buffer sizes. In that case, there is usually an OS-dependent way to move that ceiling value up, but those are rather extreme cases, and you are not too likely to need this.
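
For illustration, a minimal sketch (my procedure name and the 256 KB target are arbitrary; it goes through the raw Winsock getsockopt()/setsockopt() on each binding's Handle rather than the TIdSocketHandle wrapper, only because those signatures are unambiguous). Call it once the server is Active, so every binding has an allocated socket:

    uses WinSock, IdTCPServer, IdSocketHandle;

    // Read the OS default SO_RCVBUF, request a larger one, then re-read
    // to see what the stack actually granted (there is usually a ceiling).
    procedure EnlargeReceiveBuffers(Server: TIdTCPServer);
    var
      i, BufSize, OptLen: Integer;
      Sock: TSocket;
    begin
      for i := 0 to Server.Bindings.Count - 1 do
      begin
        Sock := Server.Bindings[i].Handle;

        OptLen := SizeOf(BufSize);
        getsockopt(Sock, SOL_SOCKET, SO_RCVBUF, PChar(@BufSize), OptLen);
        // BufSize now holds the default; log it as a baseline if you wish.

        BufSize := 256 * 1024;  // arbitrary target; tune by successive trials
        setsockopt(Sock, SOL_SOCKET, SO_RCVBUF, PChar(@BufSize), SizeOf(BufSize));

        OptLen := SizeOf(BufSize);
        getsockopt(Sock, SOL_SOCKET, SO_RCVBUF, PChar(@BufSize), OptLen);
        // BufSize now holds what was actually applied.
      end;
    end;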

If you don't have control over the source code on the side that gets overflowed, check whether the software running there exposes some parameter to change that buffer size.

Good luck!

Delossantos answered 10/6, 2010 at 7:10 Comment(1)
Thanks for this very valuable piece of infoShantel
