A TCP connection can be seen as two data pipelines between two endpoints: one pipeline for sending data from A to B and one for sending data from B to A. These two pipelines belong to a single connection but they don't otherwise influence each other; sending data on one pipeline has no effect on data being sent on the other. If data on one pipeline is a reply to data sent previously on the other pipeline, only your application knows that; TCP knows nothing about it. The task of TCP is to make sure that data reliably makes it from one end of a pipeline to the other, and as fast as possible; that is all TCP cares about.
As soon as one side is done sending data, it tells the other side so by transmitting a packet with the FIN flag set. Sending a FIN means "I have sent all the data I wanted to send to you, so my send pipeline is now closed". You can trigger that intentionally in your code by calling shutdown(socketfd, SHUT_WR). If the other side then calls recv() on the socket, it won't get an error; instead recv() will report that it read zero bytes, which means "end of stream". End of stream is not an error, it only means that no more data will ever arrive there, no matter how often you call recv() on that socket.
Of course, this doesn't affect the other pipeline, so when A -> B is closed, B -> A can still be used. You can still receive from that socket, even though you closed your sending pipeline. At some point, though, B will be done sending as well and also transmit a FIN. Once both pipelines are closed, the connection as a whole is closed, and this is a graceful shutdown: both sides were able to send all the data they wanted to send, and no data should have been lost, since as long as there was unconfirmed data in flight, the other side would not have said it is done but would have waited for that data to be reliably transferred first.
Alternatively, there is the RST flag, which closes the entire connection at once, regardless of whether the other side was done sending and regardless of whether there was unconfirmed data in flight, so a RST has a high potential of causing data to be lost. As that is an exceptional situation that may require special handling, it is useful for programmers to know when it happened; that's why two different errors exist:

EPIPE - You cannot send over that pipe as that pipe is not valid anymore. However, all data that you sent before it broke was still reliably delivered, you just cannot send any new data.

ECONNRESET - Your pipe is broken and data you were trying to send before may have been lost in the middle of transfer. If that is a problem, you'd better handle it somehow.
But these two errors do not map one-to-one to the FIN and RST flags. If you receive a RST in a situation where the system sees no risk of data loss, there is no reason to drive you round the bend for nothing. So if all the data you sent before was ACKed as correctly received, and the connection was then closed by a RST when you tried to send new data, no data was lost. This includes the data you just tried to send: it wasn't lost, it was simply never sent on its way. That's a difference, as you still have it around, whereas data you were sending before may not be around anymore. If your car breaks down in the middle of a road trip, that is quite a different situation than if you are still at home because the engine refused to even start. So in the end it's your system that decides whether a RST triggers an ECONNRESET or an EPIPE.
Okay, but why would the other side send you a RST in the first place? Why not always close with a FIN? Well, there are a couple of reasons, but the two most prominent ones are:
A side can only signal the other one that it is done sending; the only way to signal that it is done with the entire connection is to send a RST. So if one side wants to close a connection, and wants to close it gracefully, it will first send a FIN to signal that it won't send any new data, and then give the other side some time to stop sending, allowing in-flight data to pass through and a final FIN to be sent as well. However, what if the other side doesn't want to stop and keeps sending and sending? This behavior is legal, as a FIN doesn't mean that the connection needs to close, it only means one side is done. The result is that the FIN is followed by a RST to finally close that connection. This may have caused in-flight data to be lost, or it may not; only the recipient of the RST can know for sure, because if data was lost, it must have been on its side, since the sender of the RST was surely not sending any more data after the FIN. For a recv() call, this RST has no effect, as there was a FIN before signaling "end of stream", so recv() will report having read zero bytes.
One side shall close the connection, yet it still has unsent data. Ideally it would wait until all unsent data has been sent and then transmit a FIN; however, the time it is allowed to wait is limited, and after that time has passed, there is still unsent data left. In that case it cannot send a FIN, as that FIN would be a lie. It would tell the other side "Hey, I sent all the data I wanted to send", but that's not true. There was data that should have been sent, but as the close was required to be instant, this data had to be discarded, and as a result, this side will directly send a RST. Whether this RST triggers an ECONNRESET for a send() call depends again on whether the recipient of the RST had unconfirmed data in flight or not. However, it will for sure trigger an ECONNRESET error on the next recv() call, to tell the program "The other side actually wanted to send more data to you but it couldn't, and thus some of that data was lost". This may again be a situation that requires special handling, as the data you've received was for sure incomplete, and this is something you should be made aware of.
If you want to force a socket to always be closed directly with RST, and never with FIN/FIN or FIN/RST, you can just set the linger time to zero:
struct linger l = { .l_onoff = 1, .l_linger = 0 };
setsockopt(socketfd, SOL_SOCKET, SO_LINGER, &l, sizeof(l));
Now the socket must close instantly and without any delay, no matter how little, and the only way to close a TCP socket instantly is to send a RST. Some people wonder "Why enable it and set the time to zero? Why not just disable it instead?", but disabling has a different meaning.
The linger time is the time a close() call may block to perform pending send actions and close a socket gracefully. If lingering is enabled (.l_onoff != 0), a call to close() may block for up to .l_linger seconds. If you set the time to zero, it may not block at all and thus terminates the connection instantly (RST). However, if you disable lingering, close() will never block either, but then the system may still linger on close, only now this lingering happens in the background. Your process won't notice it and thus also cannot know when the socket has really closed, as the socketfd becomes invalid at once, even if the underlying socket in the kernel still exists.