What happens if one doesn't call [POSIX's] `recv` "fast enough"?

I want to account for a possible scenario where clients of my TCP/IP stream socket service send data to my service faster than it manages to move that data into its buffers (I am talking about application buffers, naturally) with `recv` and process it.

So basically, what happens in such scenarios?

The way I see it, some facility beneath my service has to receive pieces of the incoming stream and store them somewhere until I issue `recv`, right? Most certainly the operating system. What happens if that facility runs out of memory to store the pieces while my service is not receiving them fast enough?
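For context, here is a minimal sketch of how I could inspect the size of that per-socket kernel buffer (plain POSIX C, using getsockopt with SO_RCVBUF; the value it prints is entirely system- and configuration-dependent):

```c
/* Sketch: inspect the kernel's per-socket receive buffer, i.e. the place
 * where incoming TCP data waits until the application calls recv(). */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int rcvbuf = 0;
    socklen_t len = sizeof(rcvbuf);
    /* SO_RCVBUF reports the kernel receive-buffer size for this socket.
     * Note: on Linux, getsockopt returns roughly double any value set with
     * setsockopt, because the kernel reserves headroom for bookkeeping. */
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) < 0) {
        perror("getsockopt");
        close(fd);
        return 1;
    }
    printf("default receive buffer: %d bytes\n", rcvbuf);

    close(fd);
    return 0;
}
```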

I don't want to re-open old questions, but I can't seem to find an answer to this seemingly obvious one.

Helfand answered 15/1, 2011 at 16:53 Comment(0)

TCP provides flow control. The TCP stack (on both the sender and the receiver side) will buffer some data for you, and this is usually done in the OS kernel.

When the receiver's buffers fill up, the sender will know about it and stop sending more data, eventually leading to the sending application blocking (or otherwise not being able to send more data) until space becomes available again.

Briefly described, every TCP packet (segment) sent includes the amount of data that can still be buffered - the window size. This means the other end knows at all times how much data it can send without the receiver throwing it away because the buffers are full. If the window size becomes 0, the buffers are full and no more data will be sent (and if the sender uses a blocking socket, a send() call will block). There are procedures for probing whether the TCP window is still 0, so sending can resume once the data has been consumed.
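If you want to see this push-back happen, here is a rough self-contained demo (assumptions: POSIX C over a loopback connection with default buffer sizes; most error handling is omitted for brevity). The sending socket is made non-blocking only so the demo terminates with EWOULDBLOCK instead of blocking forever inside send():

```c
/* Demo: the receiver never calls recv(), so the sender can only push as much
 * data as fits in its own send buffer plus the peer's receive buffer. After
 * that, a non-blocking send() fails with EWOULDBLOCK (a blocking send()
 * would simply block at the same point). */
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Listener on an ephemeral loopback port. */
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 1);

    socklen_t alen = sizeof(addr);
    getsockname(lfd, (struct sockaddr *)&addr, &alen);

    /* "Fast" sender connects; the accepted peer never calls recv(). */
    int sender = socket(AF_INET, SOCK_STREAM, 0);
    connect(sender, (struct sockaddr *)&addr, sizeof(addr));
    int receiver = accept(lfd, NULL, NULL);

    /* Non-blocking, so the loop below ends instead of hanging. */
    fcntl(sender, F_SETFL, fcntl(sender, F_GETFL, 0) | O_NONBLOCK);

    char chunk[4096];
    memset(chunk, 'x', sizeof(chunk));
    long total = 0;
    for (;;) {
        ssize_t n = send(sender, chunk, sizeof(chunk), 0);
        if (n > 0) { total += n; continue; }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) break;
        perror("send");
        break;
    }
    printf("buffered %ld bytes before the kernel pushed back\n", total);

    close(receiver);
    close(sender);
    close(lfd);
    return 0;
}
```

The number printed is roughly the sum of the two kernel buffers involved, and it varies between systems and configurations.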

There are some more details here.

Grizzle answered 15/1, 2011 at 17:53 Comment(2)
Not quite. Every TCP acknowledgment contains the current window size. Wikipedia is incorrect on this point. The correct reference is not Wikipedia but RFC 793.Frigorific
Are there any approximate values for buffer size one might consider typical/safe? Or is it too varied to make any such statement?Mouton

It's the network stack that maintains the data buffers (including the ones for incoming data). If the buffer is full, subsequent TCP packets are dropped, and the client is stuck trying to send the data. There's a bit more on this here and here.

Burst answered 15/1, 2011 at 17:47 Comment(3)
No. A zero window is advertised to the sender and the sender stops sending. Ultimately the sender's socket send buffer fills up and the application blocks unless it is in non-blocking mode, in which case it gets EAGAIN or EWOULDBLOCK. TCP packets would only get dropped if the sender wasn't implementing TCP correctly and sending into a zero-sized window.Frigorific
@EJP your comment is only partially correct. At the IP level, the packets are discarded. My reply is incorrect in referring to TCP packets (which I guess was a typo), yet the client is still blocked (even with your comment).Bourges
The question is about TCP. TCP segments are carried in IP packets. At the IP level, the packets are only discarded if the segments they carry are sent, and they are only sent if the window permits, and the window doesn't permit, so they shouldn't be sent.Frigorific
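To illustrate the non-blocking case mentioned in the first comment above, here is a rough sketch (assumptions: POSIX C, `fd` is an already-connected TCP socket set to non-blocking mode; `send_all_nonblocking` is just an illustrative helper, not a standard API). When send() fails with EAGAIN/EWOULDBLOCK because flow control has filled the buffers, the sketch waits with poll() until the socket becomes writable again:

```c
#include <errno.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send `len` bytes of `buf` on a non-blocking TCP socket, waiting whenever
 * the kernel buffers are full. Returns 0 on success, -1 on error. */
static int send_all_nonblocking(int fd, const char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        ssize_t n = send(fd, buf + off, len - off, 0);
        if (n > 0) {
            off += (size_t)n;
            continue;
        }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* Buffers are full: flow control is pushing back on us.
             * Wait (only here) until the socket is writable again. */
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            if (poll(&pfd, 1, -1) < 0 && errno != EINTR)
                return -1;
            continue;
        }
        if (n < 0 && errno == EINTR)
            continue;   /* interrupted by a signal, just retry */
        return -1;      /* genuine error, e.g. ECONNRESET */
    }
    return 0;
}
```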
