I believe it is a race condition, although a very rare one. The implementation of condition_variable::timed_wait() with a duration simply converts the value to a system_time using get_system_time() + wait_duration. If the system time changes between the moment get_system_time() is called and the moment the calculated wait end time is converted back to a tick-based counter for the underlying OS call, your wait time will be wrong.
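The window is easier to see if you write out the two conversions the duration overload has to perform. This is only a sketch of the pattern, not the actual Boost source, and relative_wait_ms is a name I made up for illustration:

#include <boost/cstdint.hpp>
#include <boost/thread/thread_time.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

// What a duration-based timed wait roughly boils down to internally.
boost::int64_t relative_wait_ms( const boost::posix_time::time_duration& wait_duration )
{
    // 1. Anchor the relative duration to the current wall clock.
    boost::system_time deadline = boost::get_system_time() + wait_duration;

    // <-- If the system clock is set back one minute right here, the deadline
    //     computed above is now ~60 seconds further away than intended.

    // 2. Convert the absolute deadline back into a relative tick count for the
    //    underlying OS wait.
    return ( deadline - boost::get_system_time() ).total_milliseconds();
}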
To test this idea, on Windows, I wrote a simple program with one thread generating some output every 100ms, like this:
for (;;)
{
    boost::this_thread::sleep( boost::get_system_time() +
                               boost::posix_time::milliseconds( 100 ) );
    std::cout << "Ping!" << std::endl;
}
Another thread set the system time back one minute every 100ms (this thread uses the OS-level Sleep() call, which avoids any conversion to system time):
for ( ;; )
{
    Sleep( 100 );

    SYSTEMTIME sysTime;
    GetSystemTime( &sysTime );

    FILETIME fileTime;
    SystemTimeToFileTime( &sysTime, /*out*/&fileTime );

    ULARGE_INTEGER fileTime64;
    fileTime64.HighPart = fileTime.dwHighDateTime;
    fileTime64.LowPart  = fileTime.dwLowDateTime;
    fileTime64.QuadPart -= 10000000ULL * 60;   // one minute in the past (FILETIME counts 100-ns intervals)

    fileTime.dwHighDateTime = fileTime64.HighPart;
    fileTime.dwLowDateTime  = fileTime64.LowPart;

    FileTimeToSystemTime( &fileTime, /*out*/&sysTime );
    SetSystemTime( &sysTime );
}
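For completeness, this is roughly how the two loops were wired together (a minimal sketch; ping_loop and rewind_clock_loop are simply names for the two loop bodies above, and note that SetSystemTime() requires the process to hold the SE_SYSTEMTIME_NAME privilege):

#include <boost/thread.hpp>

void ping_loop();          // the 100 ms "Ping!" loop above
void rewind_clock_loop();  // the clock-rewinding loop above

int main()
{
    boost::thread rewinder( rewind_clock_loop );
    ping_loop();   // run the pinger on the main thread; it never returns
    return 0;      // not reached
}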
The first thread, though it was supposed to print "Ping!" every 100 milliseconds, locked up rather quickly.
Unless I'm missing something, Boost doesn't seem to provide any API that avoids this internal conversion to system time, which leaves applications vulnerable to outside changes to the clock.
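For what it's worth, a relative OS-level wait sidesteps the conversion entirely, which is exactly why the clock-rewinding thread above uses Sleep(). A Windows-only version of the ping loop written that way should be immune to clock changes, at the cost of losing the absolute-deadline semantics:

for (;;)
{
    ::Sleep( 100 );   // relative timeout in milliseconds; never touches the wall clock
    std::cout << "Ping!" << std::endl;
}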