TL;DR: Use `--timeout=X` (X in seconds) to change the default rsync server timeout, not `--inplace`.
The issue is that the rsync server processes (of which there are two; see `rsync --server ...` in `ps` output on the receiver) continue running, waiting for the rsync client to send data.
If the rsync server processes do not receive data for a sufficient time, they will indeed time out, self-terminate, and clean up by moving the temporary file to its "proper" name (i.e., no temporary suffix). You'll then be able to resume.
If you don't want to wait for the long default timeout to cause the rsync server to self-terminate, then when your internet connection returns, log into the server and clean up the rsync server processes manually. However, you must politely terminate rsync -- otherwise, it will not move the partial file into place, but rather delete it (and thus there is no file to resume). To politely ask rsync to terminate, do not SIGKILL (e.g., `-9`), but SIGTERM (e.g., `pkill -TERM -x rsync` -- only an example; take care to match only the rsync processes concerned with your client).
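For instance, a minimal sketch of that manual cleanup on the receiver (the `grep` pattern is illustrative; verify that the PIDs belong to your transfer before signalling anything):

```
# List the rsync server processes; the bracket trick '[r]sync'
# keeps the grep process itself out of the output.
ps aux | grep '[r]sync --server'

# Politely ask them to exit (SIGTERM is kill's default signal),
# so they move the temporary file into place rather than delete it.
kill -TERM <pid1> <pid2>   # substitute the PIDs verified above
```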
Fortunately, there is an easier way: use the `--timeout=X` option (X in seconds); it is passed to the rsync server processes as well.
For example, if you specify `rsync ... --timeout=15 ...`, both the client and server rsync processes will exit cleanly if they do not send/receive data within 15 seconds. On the server, this means moving the temporary file into position, ready for resuming.
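As a concrete invocation (host and paths are hypothetical, made up for illustration):

```
# Have both the client and server ends give up cleanly after
# 15 seconds without sending/receiving data, enabling a later resume.
rsync -a --timeout=15 disk.img user@backup.example.com:/srv/backups/
```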
I'm not sure how long the various rsync processes will, by default, wait to send/receive data before they die (it might vary by operating system). In my testing, the server rsync processes remain running longer than the local client. On a "dead" network connection, the client terminates with a broken pipe (i.e., no network socket) after about 30 seconds; you could experiment or review the source code. This means you could try to "ride out" the bad internet connection for 15-20 seconds.
If you do not clean up the server rsync processes (or wait for them to die), but instead immediately launch another rsync client process, two additional server processes will launch (for the other end of your new client process). Specifically, the new rsync client will not re-use/reconnect to the existing rsync server processes. Thus, you'll have two temporary files (and four rsync server processes) -- though, only the newer, second temporary file has new data being written (received from your new rsync client process).
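To check whether you've accumulated stale server pairs, something like the following works on the receiver (assuming a pgrep that supports `-a`, as in procps-ng; otherwise fall back to `ps aux | grep`):

```
# Prints one line per process with its full command line; a dropped
# connection leaves its 'rsync --server' pair listed here.
pgrep -af 'rsync --server'
```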
Interestingly, if you then clean up all of the rsync server processes (for example, stop your client, which will stop the new rsync servers, then SIGTERM the older rsync servers), rsync appears to merge (assemble) all the partial files into the new, properly named file. So, imagine a long-running partial copy that dies (and you think you've "lost" all the copied data), followed by a short-running re-launched rsync (oops!): you can stop the second client, SIGTERM the first servers, rsync will merge the data, and you can resume.
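Putting that together, a sketch of the recovery sequence as I observed it (behavior may differ across rsync versions; verify before relying on it):

```
# 1. Stop the newer client (Ctrl-C locally); its server pair exits too.
# 2. On the server, politely terminate the *older* leftover pair:
kill -TERM <old-pid1> <old-pid2>
# 3. The partials should be assembled into the properly named file;
#    re-run the original command to resume.
rsync -a --timeout=15 disk.img user@backup.example.com:/srv/backups/
```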
Finally, a few short remarks:
- Don't use `--inplace` to work around this. You will undoubtedly have other problems as a result; see `man rsync` for the details.
- It's trivial, but `-t` in your rsync options is redundant; it is implied by `-a`.
- An already-compressed disk image sent over rsync without compression might result in a shorter transfer time (by avoiding double compression). However, I'm unsure of the compression techniques in both cases; I'd test it.
- As far as I understand `--checksum` / `-c`, it won't help you in this case. It affects how rsync decides whether it should transfer a file. Though, after a first rsync completes, you could run a second rsync with `-c` to insist on checksums, preventing the strange case where file size and modtime are the same on both sides but bad data was written (see the sketch after this list).
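That verification pass might look like this (same hypothetical host/paths as above); note that `-c` makes rsync read and checksum both copies in full, so it is slow on large files:

```
# Second pass: recopy anything whose whole-file checksum differs,
# even if size and modification time already match on both sides.
rsync -a -c disk.img user@backup.example.com:/srv/backups/
```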