TL;DR
socket.set_option(boost::asio::detail::socket_option::integer<SOL_SOCKET, SO_RCVTIMEO>{ 200 });
FULL ANSWER
This question keeps being asked over and over, year after year. The answers I have seen so far are quite poor, so I'll put the relevant info right here, in one of the first occurrences of the question.
Everybody trying to use ASIO to simplify their networking code would be perfectly happy if the author just added an optional timeout parameter to all sync and async I/O functions. Unfortunately, that is unlikely to happen (in my humble opinion purely for ideological reasons - after all, the AS in ASIO is there for a reason).
So here are the ways to skin this poor cat that are available so far, none of them especially appetizing. Let's say we need a 200 ms timeout.
1) Good (bad) old socket API:
const int timeout = 200;
::setsockopt(socket.native_handle(), SOL_SOCKET, SO_RCVTIMEO, (const char *)&timeout, sizeof timeout);//SO_SNDTIMEO for send ops
Please note those peculiarities:
- const int for the timeout: on Windows the documented type is actually DWORD (a timeout in milliseconds), but with the current set of compilers it has the same size and representation as int, so const int works there. Note that POSIX documents this option as taking a struct timeval instead, so check the setsockopt return value on non-Windows platforms; a portable variant is sketched below.
- (const char*) for the value: Windows declares the parameter as const char*, POSIX as const void*; since const char* converts to const void* implicitly (but not the other way around), the const char* cast compiles on both.
Advantages: works and probably will always work as the socket API is old and stable. Simple enough. Fast.
Disadvantages: technically might require the appropriate header files (different on Windows and even across UNIX flavors) for setsockopt and the macros, although the current implementation of ASIO pollutes the global namespace with them anyway. Requires a variable for the timeout. Not type-safe. On Windows, requires the socket to be in overlapped mode to work (which the current ASIO implementation luckily uses, but that is still an implementation detail). UGLY!
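If you want the raw-API route to behave consistently across platforms, a tiny helper can hide the type difference. This is only a sketch (the helper name set_rcv_timeout is mine, not part of any API), using the DWORD-milliseconds form on Windows and the struct timeval form on POSIX; check the setsockopt return value in real code:
//hypothetical helper, not part of ASIO: sets SO_RCVTIMEO in the form each platform expects
void set_rcv_timeout(boost::asio::ip::tcp::socket& socket, int ms)
{
#if defined(_WIN32)
    const DWORD timeout = ms; //Windows: timeout in milliseconds as a DWORD
    ::setsockopt(socket.native_handle(), SOL_SOCKET, SO_RCVTIMEO,
        reinterpret_cast<const char*>(&timeout), sizeof timeout);
#else
    timeval timeout{}; //POSIX: timeout as a struct timeval
    timeout.tv_sec = ms / 1000;
    timeout.tv_usec = (ms % 1000) * 1000;
    ::setsockopt(socket.native_handle(), SOL_SOCKET, SO_RCVTIMEO,
        reinterpret_cast<const char*>(&timeout), sizeof timeout);
#endif
}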
2) Custom ASIO socket option:
typedef boost::asio::detail::socket_option::integer<SOL_SOCKET, SO_RCVTIMEO> rcv_timeout_option; //somewhere in your headers to be used everywhere you need it
//...
socket.set_option(rcv_timeout_option{ 200 });
Advantages: Simple enough. Fast. Beautiful (with typedef).
Disadvantages: depends on an ASIO implementation detail, which might change (but then again, everything changes eventually, and such a detail is less likely to change than public APIs subject to standardization). If that ever happens, you'll have to either write a class conforming to https://www.boost.org/doc/libs/1_68_0/doc/html/boost_asio/reference/SettableSocketOption.html (which is of course a major PITA thanks to the obvious overengineering of this part of ASIO; a sketch is below) or, better yet, fall back to 1).
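For reference, a hand-rolled replacement is not that much code. The following is only a rough sketch of what the SettableSocketOption requirements linked above ask for (the class name is mine):
//sketch of a self-contained settable option: SOL_SOCKET / SO_RCVTIMEO with an int payload
class rcv_timeout_opt
{
public:
    explicit rcv_timeout_opt(int value) : value_(value) {}
    //the four members a SettableSocketOption must provide:
    template <typename Protocol> int level(const Protocol&) const { return SOL_SOCKET; }
    template <typename Protocol> int name(const Protocol&) const { return SO_RCVTIMEO; }
    template <typename Protocol> const void* data(const Protocol&) const { return &value_; }
    template <typename Protocol> std::size_t size(const Protocol&) const { return sizeof(value_); }
private:
    int value_;
};
//...
socket.set_option(rcv_timeout_opt{ 200 });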
3) Use C++ async/future facilities.
#include <future>
#include <chrono>
//...
auto future = std::async(std::launch::async, [&] (){ /*your stream ops*/ });
auto status = future.wait_for(std::chrono::milliseconds{ 200 });
switch (status)
{
case std::future_status::deferred:
    //... should never happen with std::launch::async
    break;
case std::future_status::ready:
    //... the stream ops finished in time; future.get() rethrows anything they threw
    break;
case std::future_status::timeout:
    //... the stream ops are still running; typically you close/cancel the socket here
    //    so the blocked call returns - the future came from std::async, so it will
    //    block in get() and in its destructor until the task actually finishes
    break;
}
Advantages: standard.
Disadvantages: in practice always starts a new thread, which is relatively slow (might be good enough for clients, but will lead to a DoS vulnerability for servers, as threads and sockets are "expensive" resources). Also note that a future obtained from std::async blocks in get() and in its destructor until the task finishes, so on timeout you still have to unblock the stream operation somehow (e.g. by closing the socket). Don't try to use std::launch::deferred instead of std::launch::async to avoid launching a thread: wait_for will just return future_status::deferred without ever trying to run the code.
4) The method prescribed by ASIO - use async operations only, typically raced against a timer that cancels the socket (which is not really the answer to the question).
Advantages: good enough for servers too if huge scalability for short transactions is not required.
Disadvantages: quite wordy (a bare-bones sketch follows; see the ASIO timeout examples for the full treatment). Requires very careful lifetime management of every object used by both the async operations and their completion handlers. In practice this means deriving every class that owns and uses such data from enable_shared_from_this, which in turn means allocating all such classes on the heap, which means (at least for short operations) that scalability will start to taper off after about 16 threads, as every heap alloc/dealloc uses a memory barrier.
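For completeness, here is a bare-bones sketch of that pattern: an async read raced against a steady_timer that cancels the socket. It assumes an already-connected boost::asio::ip::tcp::socket and a timer on the same io_context, and it glosses over the lifetime management discussed above (in real code the captured objects must outlive both handlers):
void read_with_timeout(boost::asio::ip::tcp::socket& socket,
    boost::asio::steady_timer& timer,
    boost::asio::mutable_buffer buffer)
{
    timer.expires_after(std::chrono::milliseconds{ 200 });
    timer.async_wait([&socket](const boost::system::error_code& ec)
    {
        if (!ec) //the timer actually fired (it was not cancelled by a completed read)
            socket.cancel(); //the pending read completes with operation_aborted
    });
    boost::asio::async_read(socket, buffer,
        [&timer](const boost::system::error_code& ec, std::size_t /*bytes*/)
        {
            timer.cancel(); //the read finished (or failed) first - stop the timer
            if (ec == boost::asio::error::operation_aborted)
            {
                //timed out
            }
            //...
        });
}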