Converting between NTP and C# DateTime
I use the following code to convert between an NTP timestamp and a C# DateTime. I think the forward conversion is correct, but the backward conversion is wrong.

See the following code to convert 8 bytes into a DateTime:

Convert NTP to DateTime

public static ulong GetMilliSeconds(byte[] ntpTime)
{
    ulong intpart = 0, fractpart = 0;

    for (var i = 0; i <= 3; i++)
        intpart = 256 * intpart + ntpTime[i];
    for (var i = 4; i <= 7; i++)
        fractpart = 256 * fractpart + ntpTime[i];

    var milliseconds = intpart * 1000 + ((fractpart * 1000) / 0x100000000UL);

    Debug.WriteLine("intpart:      " + intpart);
    Debug.WriteLine("fractpart:    " + fractpart);
    Debug.WriteLine("milliseconds: " + milliseconds);
    return milliseconds;
}

public static DateTime ConvertToDateTime(byte[] ntpTime)
{
    var span = TimeSpan.FromMilliseconds(GetMilliSeconds(ntpTime));
    var time = new DateTime(1900, 1, 1, 0, 0, 0, DateTimeKind.Utc);
    time += span;
    return time;
}

Convert from DateTime to NTP

public static byte[] ConvertToNtp(ulong milliseconds)
{
    ulong intpart = 0, fractpart = 0;
    var ntpData = new byte[8];

    intpart = milliseconds / 1000;
    fractpart = ((milliseconds % 1000) * 0x100000000UL) / 1000;

    Debug.WriteLine("intpart:      " + intpart);
    Debug.WriteLine("fractpart:    " + fractpart);
    Debug.WriteLine("milliseconds: " + milliseconds);

    var temp = intpart;
    for (var i = 3; i >= 0; i--)
    {
        ntpData[i] = (byte)(temp % 256);
        temp = temp / 256;
    }

    temp = fractpart;
    for (var i = 7; i >= 4; i--)
    {
        ntpData[i] = (byte)(temp % 256);
        temp = temp / 256;
    }
    return ntpData;
}

The following input produces the output:

var bytes = new byte[] { 131, 170, 126, 128,
                          46, 197, 205, 234 };

var ms = GetMilliSeconds(bytes);
var ntp = ConvertToNtp(ms);

//GetMilliSeconds output
milliseconds: 2208988800182
intpart:      2208988800
fractpart:    784715242

//ConvertToNtp output
milliseconds: 2208988800182
intpart:      2208988800
fractpart:    781684047

Notice that the conversion from milliseconds back to the fractional part is wrong. Why?

Update:

As Jonathan S. points out, it's loss of precision in the fraction. So instead of converting back and forth, I want to manipulate the NTP timestamp directly; more specifically, add milliseconds to it. I would assume the following function does just that, but I'm having a hard time validating it. I am very unsure about the fraction part.

public static void AddMilliSeconds(ref byte[] ntpTime, ulong millis)
{
    ulong intpart = 0, fractpart = 0;

    for (var i = 0; i < 4; i++)
        intpart = 256 * intpart + ntpTime[i];
    for (var i = 4; i <= 7; i++)
        fractpart = 256 * fractpart + ntpTime[i];

    intpart += millis / 1000;
    fractpart += millis % 1000;

    var newIntpart = BitConverter.GetBytes(SwapEndianness(intpart));
    var newFractpart = BitConverter.GetBytes(SwapEndianness(fractpart));

    for (var i = 0; i < 8; i++)
    {
        if (i < 4)
            ntpTime[i] = newIntpart[i];
        if (i >= 4)
            ntpTime[i] = newFractpart[i - 4];
    }
}
Fremitus answered 26/5, 2013 at 20:14 Comment(1)
I've updated my answer below. Good luck! – Kilmer

What you're running into here is loss of precision in the conversion from NTP timestamp to milliseconds. When you convert from NTP to milliseconds, you're dropping part of the fraction. When you then take that value and try to convert back, you get a value that's slightly different. You can see this more clearly if you change your ulong values to decimal values, as in this test:

public static decimal GetMilliSeconds(byte[] ntpTime)
{
    decimal intpart = 0, fractpart = 0;

    for (var i = 0; i <= 3; i++)
        intpart = 256 * intpart + ntpTime[i];
    for (var i = 4; i <= 7; i++)
        fractpart = 256 * fractpart + ntpTime[i];

    var milliseconds = intpart * 1000 + ((fractpart * 1000) / 0x100000000L);

    Console.WriteLine("milliseconds: " + milliseconds);
    Console.WriteLine("intpart:      " + intpart);
    Console.WriteLine("fractpart:    " + fractpart);
    return milliseconds;
}

public static byte[] ConvertToNtp(decimal milliseconds)
{
    decimal intpart = 0, fractpart = 0;
    var ntpData = new byte[8];

    intpart = milliseconds / 1000;
    fractpart = ((milliseconds % 1000) * 0x100000000L) / 1000m;

    Console.WriteLine("milliseconds: " + milliseconds);
    Console.WriteLine("intpart:      " + intpart);
    Console.WriteLine("fractpart:    " + fractpart);

    var temp = intpart;
    for (var i = 3; i >= 0; i--)
    {
        ntpData[i] = (byte)(temp % 256);
        temp = temp / 256;
    }

    temp = fractpart;
    for (var i = 7; i >= 4; i--)
    {
        ntpData[i] = (byte)(temp % 256);
        temp = temp / 256;
    }
    return ntpData;
}

public static void Main(string[] args)
{
    byte[] bytes = { 131, 170, 126, 128,
           46, 197, 205, 234 };

    var ms = GetMilliSeconds(bytes);
    Console.WriteLine();
    var ntp = ConvertToNtp(ms);
}

This yields the following result:

milliseconds: 2208988800182.7057548798620701
intpart:      2208988800
fractpart:    784715242

milliseconds: 2208988800182.7057548798620701
intpart:      2208988800.1827057548798620701
fractpart:    784715242.0000000000703594496

It's the ~0.7 milliseconds that are screwing things up here.

Since the NTP timestamp includes a 32-bit fractional second ("a theoretical resolution of 2^-32 seconds or 233 picoseconds"), a conversion to integer milliseconds will result in a loss of precision.
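The size of the loss is easy to see in a stand-alone sketch using the fraction value from the question's own example: one trip through integer milliseconds drops everything below 1 ms, so the fraction does not survive the round trip.

```csharp
using System;

class PrecisionLossDemo
{
    static void Main()
    {
        // NTP fraction from the question's example timestamp.
        ulong fractpart = 784715242;

        // Forward: fraction -> integer milliseconds (truncates sub-millisecond bits).
        ulong fracMs = (fractpart * 1000) / 0x100000000UL;   // 182

        // Backward: milliseconds -> fraction; the truncated bits are gone.
        ulong fractBack = (fracMs * 0x100000000UL) / 1000;   // 781684047, not 784715242

        Console.WriteLine($"{fractpart} -> {fracMs} ms -> {fractBack}");
    }
}
```

The backward result, 781684047, is exactly the "wrong" fractpart from the question's ConvertToNtp output.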

Response to Update:

Adding milliseconds to the NTP timestamp wouldn't be quite as simple as adding the integer parts and the fraction parts. Think of adding the decimals 1.75 and 2.75. 0.75 + 0.75 = 1.5, and you'd need to carry the one over to the integer part. Also, the fraction part in the NTP timestamp is not base-10, so you can't just add the milliseconds. Some conversion is necessary, using a proportion like ms / 1000 = ntpfrac / 0x100000000.

This is entirely untested, but I'd think you'd want to replace your intpart += and fractpart += lines in AddMilliSeconds with something more like this:

intpart += millis / 1000;

ulong fractsum = fractpart + ((millis % 1000) * 0x100000000UL) / 1000;

intpart += fractsum / 0x100000000UL;
fractpart = fractsum % 0x100000000UL;
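Putting the carry handling together, the whole updated AddMilliSeconds from the question might look like the following (still untested, and kept in the same byte-oriented style as the question's code):

```csharp
public static void AddMilliSeconds(ref byte[] ntpTime, ulong millis)
{
    ulong intpart = 0, fractpart = 0;

    // Unpack the big-endian 32.32 fixed-point timestamp.
    for (var i = 0; i < 4; i++)
        intpart = 256 * intpart + ntpTime[i];
    for (var i = 4; i < 8; i++)
        fractpart = 256 * fractpart + ntpTime[i];

    // Whole seconds go straight into the integer part.
    intpart += millis / 1000;

    // Leftover milliseconds scale to 1/2^32-second units; any overflow
    // past 2^32 carries into the integer part.
    ulong fractsum = fractpart + ((millis % 1000) * 0x100000000UL) / 1000;
    intpart += fractsum / 0x100000000UL;
    fractpart = fractsum % 0x100000000UL;

    // Repack big-endian.
    for (var i = 3; i >= 0; i--) { ntpTime[i] = (byte)(intpart % 256); intpart /= 256; }
    for (var i = 7; i >= 4; i--) { ntpTime[i] = (byte)(fractpart % 256); fractpart /= 256; }
}
```

For example, adding 1500 ms to the question's example bytes adds one whole second and exactly 0x80000000 fraction units.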
Kilmer answered 26/5, 2013 at 21:53 Comment(1)
That makes perfect sense, thank you. Now, instead of converting back and forth, I think it would be better to manipulate the NTP timestamp directly. I have updated my question to reflect this. – Fremitus

Suggestion to Cameron's solution: use

ntpEpoch = (new DateTime(1900, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc)).Ticks;

to make sure that you don't calculate from your local time.

Talc answered 10/12, 2014 at 18:6 Comment(0)

Same as the others, but without division:

 return (ulong)(elapsedTime.Ticks * 1e-7 * 4294967296ul);

Or:

 return (ulong)(((long)(elapsedTime.Ticks * 0.0000001) << 32) + (elapsedTime.TotalMilliseconds % 1000 * 4294967296 * 0.001));

        //0.0000001 = seconds per tick (1 tick = 100 ns)

        //4294967296 = uint.MaxValue + 1 = 2^32

        //0.001 = seconds per millisecond

The full method would then be:

public static System.DateTime UtcEpoch2036 = new System.DateTime(2036, 2, 7, 6, 28, 16, System.DateTimeKind.Utc);

public static System.DateTime UtcEpoch1900 = new System.DateTime(1900, 1, 1, 0, 0, 0, System.DateTimeKind.Utc);

public static ulong DateTimeToNptTimestamp(ref System.DateTime value/*, bool randomize = false*/)
{
    System.DateTime baseDate = value >= UtcEpoch2036 ? UtcEpoch2036 : UtcEpoch1900;

    System.TimeSpan elapsedTime = value > baseDate ? value.ToUniversalTime() - baseDate.ToUniversalTime() : baseDate.ToUniversalTime() - value.ToUniversalTime();

    //1 tick = 100 ns and 1 second = 2^32 NTP fraction units, so the per-tick
    //scale factor is 2^32 / 10^7 = 429.4967296; the extra trailing digits on
    //the constant compensate for truncation in the decimal multiply.
    //RFC 5905 also suggests filling the non-significant low-order fraction
    //bits with random data so timestamps are unpredictable to an intruder.
    unchecked
    {
        return (ulong)(elapsedTime.Ticks * 429.496729600000000000429m);
    }
}
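A quick way to convince yourself of the constant: one tick is 100 ns (10^7 ticks per second) and one second is 2^32 NTP fraction units, so one second's worth of ticks scaled by 429.4967296 must land exactly on 2^32. A tiny stand-alone check:

```csharp
using System;

class NtpConstantCheck
{
    static void Main()
    {
        // 1 s elapsed = 10^7 ticks; scaled by 2^32 / 10^7 it must give exactly 2^32.
        long oneSecondTicks = TimeSpan.TicksPerSecond;       // 10,000,000
        ulong ntp = (ulong)(oneSecondTicks * 429.4967296m);  // decimal math is exact here

        Console.WriteLine(ntp == (1UL << 32));               // True
    }
}
```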

Whereas the reverse would be:

public static System.DateTime NptTimestampToDateTime(ref uint seconds, ref uint fractions, System.DateTime? epoch = null)
{
    unchecked
    {
        //Convert to ticks: the seconds scale by TimeSpan.TicksPerSecond (10^7),
        //and the 32-bit fraction scales by 10^7 / 2^32
        //(TenMicrosecondsPerPicosecond = 10000000, BitsPerInteger = 32).
        long ticks = seconds * System.TimeSpan.TicksPerSecond + ((long)(fractions * Media.Common.Extensions.TimeSpan.TimeSpanExtensions.TenMicrosecondsPerPicosecond) >> Common.Binary.BitsPerInteger);

        //Add the ticks to the epoch. If an epoch was given, use it; otherwise
        //pick one from the high bit of the seconds field (a clear bit means
        //the timestamp is past the 2036 rollover).
        return epoch.HasValue ? epoch.Value.AddTicks(ticks) :
                (seconds & 0x80000000L) == 0 ?
                    UtcEpoch2036.AddTicks(ticks) :
                        UtcEpoch1900.AddTicks(ticks);
    }
}
Nobleminded answered 7/1, 2019 at 2:8 Comment(7)
Julius, try that first expression you provided with DateTime.UtcNow.Ticks, and tell me what happens. Fix it, and I'll remove the down vote. – Concelebrate
Cameron, please explain... you would use Utc.Now and subtract the desired epoch to get elapsedTime... from there the answer is the same. Do you want to see the full methods? – Nobleminded
Julius, my mistake, I did miss subtracting the epoch. Please edit your answer, which will enable me to give the vote back to you. – Concelebrate
Julius, I do believe there is still a mistake in your code: your choice of a float for 1e-7 means 51 bits of mantissa, not 64 bits of precision, meaning you are losing information at the trailing end of your numbers. See example: dotnetfiddle.net/Oovo8e – Concelebrate
Julius, you can use this fiddle to see how many bits are needed to store the (now - epoch) delta: dotnetfiddle.net/16pazc. 55 bits, which the double can't store precisely. The other fiddle shows how the calculations are not the same between double/decimal. – Concelebrate
Thanks for your input, I am aware! I have edited my answer to provide the full methods and their notes. – Nobleminded
Thank you, if you edit your answer to address the epoch changes or improve the methodology any further I will also upvote your answer! – Nobleminded

DateTime ticks to NTP and back.

static long ntpEpoch = (new DateTime(1900, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc)).Ticks;

static public long Ntp2Ticks(UInt64 a)
{
    var b = (decimal)a * 1e7m / (1UL << 32);
    return (long)b + ntpEpoch;
}

static public UInt64 Ticks2Ntp(long a)
{
    decimal b = a - ntpEpoch;
    b = b / 1e7m * (1UL << 32);
    return (UInt64)b;
}
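As a sanity check, here is a self-contained sketch wrapping the two methods above in a class and round-tripping a tick count. On a whole-second boundary the round trip is exact; for arbitrary tick values the truncating casts can cost a single tick (100 ns).

```csharp
using System;

class NtpRoundTrip
{
    static readonly long ntpEpoch = new DateTime(1900, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc).Ticks;

    public static long Ntp2Ticks(UInt64 a)
    {
        var b = (decimal)a * 1e7m / (1UL << 32);
        return (long)b + ntpEpoch;
    }

    public static UInt64 Ticks2Ntp(long a)
    {
        decimal b = a - ntpEpoch;
        b = b / 1e7m * (1UL << 32);
        return (UInt64)b;
    }

    static void Main()
    {
        // A whole-second UTC instant: the NTP fraction is exactly zero,
        // so converting to NTP and back recovers the same tick count.
        long t = new DateTime(2013, 5, 26, 20, 14, 0, DateTimeKind.Utc).Ticks;
        Console.WriteLine(Ntp2Ticks(Ticks2Ntp(t)) == t); // True
    }
}
```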
Concelebrate answered 18/4, 2014 at 18:45 Comment(2)
Remember that after 2036 the epoch included in your answer will overflow, so another epoch has been given in the RFC and should be included in this answer, as well as the logic to detect which epoch to use. github.com/juliusfriedman/net7mma/blob/master/Ntp/… – Nobleminded
Jay, your code to calculate NTP time in that repo is wrong; see this fiddle to see why: dotnetfiddle.net/Vu6Yos – Concelebrate
