Is there an algorithm for estimating clock skew that will work over HTTP?

I'm writing a multi-player game for Windows Phone 7. I need to make sure that events happen at the same time for each of the players. My approach at the moment is to broadcast, in advance, the time at which I want the event to take place, and rely on the phone's clock being reasonably accurate.

The trouble is, I've seen some situations where the clock is not accurate - it can be out by a couple of seconds. So what I'd like to do is estimate how different the phone clock's time is to the server's. Of course there's network latency to be taken into account, particularly since the only network protocol open to me is HTTP.

So my question is, does anybody know of an algorithm that I can use to estimate the difference in clock time between client and server, to an accuracy of about 100ms?

From my days as a Maths undergraduate, I seem to remember that there was a statistical model that could be used in this situation where we are sampling a value that is assumed to consist of a constant plus an error amount (the latency) that can be assumed to follow some distribution. Does anybody know about this, and does it actually apply?

Prizefight answered 22/2, 2011 at 8:57 Comment(2)
I'm pretty sure latency isn't normally distributed. I frequently get outlier latencies at mean + several standard deviations. I never get outlier latencies at mean - several standard deviations, i.e. response arrives before I send the ping ;-) – Fendig
You make a good point there, Steve - corrected! – Prizefight

Cristian's algorithm (which I found in a presentation linked from aaa's answer) turned out to be just what I needed.

On my blog I have a complete (and rather elegant, if I say so myself!) implementation using WCF Rest on the server side and RestSharp and the Reactive Framework on the client side.

Here's an excerpt from the blog post explaining the algorithm:

  1. Client sends a message to the server: “What’s the time?” [adding ‘Mr. Wolf’ is optional]. Crucially, it notes the time that it sent the message (call it Tsent)
  2. Server responds as quickly as it can, giving the time according to its own clock, Tserver.
  3. When Client gets the message, it notes the time of receipt (call it Treceived). Then it does some maths: the round-trip time, RTT, is Treceived – Tsent. So assuming that the server responded instantly, and that the network latency was the same in both directions, the server actually sent the message RTT/2 ago. Thus, at the instant Client receives the message, the time at the server is Tserver + RTT/2. The Client can then compare with its own clock and determine the difference – the clock skew.
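
Putting those three steps into code: the blog implementation uses WCF Rest, RestSharp and the Reactive Framework, but the algorithm itself fits in one method. Here is a minimal sketch against the plain .NET HttpClient, assuming a hypothetical endpoint that returns the server's UTC time as Unix milliseconds in the response body (the endpoint and response format are illustrative, not from the post):

    using System;
    using System.Diagnostics;
    using System.Net.Http;
    using System.Threading.Tasks;

    static class ClockSkew
    {
        static readonly HttpClient Http = new HttpClient();

        // Estimate how far the server's clock is ahead of (positive) or
        // behind (negative) this device's clock.
        public static async Task<TimeSpan> EstimateOffsetAsync(Uri timeEndpoint)
        {
            var stopwatch = Stopwatch.StartNew();                   // Tsent marks the start of the RTT window
            string body = await Http.GetStringAsync(timeEndpoint);  // server replies with Tserver
            stopwatch.Stop();

            DateTimeOffset received = DateTimeOffset.UtcNow;        // Treceived on the client clock
            DateTimeOffset serverTime =
                DateTimeOffset.FromUnixTimeMilliseconds(long.Parse(body)); // Tserver

            // Assume an instant server response and symmetric latency:
            // the server stamped Tserver roughly RTT/2 ago.
            TimeSpan halfRtt = TimeSpan.FromTicks(stopwatch.Elapsed.Ticks / 2);

            return (serverTime + halfRtt) - received;               // the clock skew
        }
    }

With the offset in hand, a broadcast server timestamp T can be converted to local terms as T - offset before scheduling the event.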
Prizefight answered 25/3, 2011 at 13:26 Comment(0)

Not a statistical model/algorithm, but...
I'd do this by recording the time taken to make a call to the server and get a response back.

I'd then use half this amount of time (assuming it takes the same amount of time to send the request as the response) to estimate any difference. I'd pass the half timespan value to the server along with the device's actual time and let the server work out any difference (accounting for this half-timespan offset).
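
The calculation on the server side is small. A sketch, with illustrative names since the answer doesn't prescribe any particular API: the client sends its own clock reading for the moment it issued the request, together with the half round-trip time it measured on the earlier, timed call.

    using System;

    static class SkewEndpoint
    {
        // clientClockAtSend: the device's clock reading when it sent this request.
        // halfRoundTrip: half the RTT the device measured on a previous call.
        public static TimeSpan EstimateSkew(DateTimeOffset clientClockAtSend, TimeSpan halfRoundTrip)
        {
            // The request has been in flight for roughly halfRoundTrip,
            // so the device's clock should now read about this:
            DateTimeOffset estimatedClientNow = clientClockAtSend + halfRoundTrip;

            // Positive result: the server's clock is ahead of the device's.
            return DateTimeOffset.UtcNow - estimatedClientNow;
        }
    }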

This would assume that the second call to the server (the one with the half timespan) takes as long as the first (timed) request. A web farm, load balancer or uneven server load could compromise this.
Make sure the methods handling these calls do as little else as possible, to avoid adding extra delay.

You could try making multiple calls and using the mean of the measured times to account for varying request times. Experiment to see if this is worthwhile.
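
A rough sketch of that, assuming a helper like the EstimateOffsetAsync method sketched under the answer above, which returns one skew measurement per HTTP call (the sample count here is arbitrary):

    using System;
    using System.Threading.Tasks;

    static class ClockSkewSampling
    {
        // Take several skew measurements and return their mean.
        public static async Task<TimeSpan> EstimateOffsetMeanAsync(Uri timeEndpoint, int samples = 5)
        {
            long totalTicks = 0;
            for (int i = 0; i < samples; i++)
            {
                TimeSpan offset = await ClockSkew.EstimateOffsetAsync(timeEndpoint);
                totalTicks += offset.Ticks;
            }

            // Plain mean of the measurements. Since latency outliers are one-sided
            // (see the comment under the question), keeping only the measurement
            // taken over the smallest round-trip time is a common alternative.
            return TimeSpan.FromTicks(totalTicks / samples);
        }
    }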

It all depends on how accurate you need things to be. If you really need (near) perfect accuracy you may be out of luck.

Queenhood answered 22/2, 2011 at 9:21 Comment(0)
