How to measure network throughput during runtime

I'm wondering how best to measure network throughput at runtime. I'm writing a client/server application (both in Java). The server regularly sends messages (of compressed media data) over a socket to the client, and I would like to adjust the compression level used by the server to match the network quality.

So I would like to measure the time a big chunk of data (say 500 KB) takes to completely reach the client, including all delays in between. Tools like Iperf don't seem to be an option because they do their measurements by creating their own traffic.

The best idea I could come up with is: somehow determine the clock difference between client and server, include a server send timestamp with each message, and then have the client report back to the server the difference between this timestamp and the time it received the message. The server can then determine the time it took a message to reach the client.
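
For illustration, a rough sketch of what I mean, in Java. All names here are hypothetical, and the client/server clock offset would still have to be estimated separately:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    // Message layout: [long sendTimestampMs][int length][payload bytes].
    public class TimestampedMessages {

        // Server side: prepend the send timestamp to each message.
        static void sendMessage(DataOutputStream out, byte[] payload) throws IOException {
            out.writeLong(System.currentTimeMillis());
            out.writeInt(payload.length);
            out.write(payload);
            out.flush();
        }

        // Client side: read the message and report the raw delta back to the server.
        // The delta still contains the unknown client/server clock offset.
        static void receiveAndReport(DataInputStream in, DataOutputStream reportOut) throws IOException {
            long sendTs = in.readLong();
            int len = in.readInt();
            byte[] payload = new byte[len];
            in.readFully(payload);
            long deltaMs = System.currentTimeMillis() - sendTs; // transit time + clock offset
            reportOut.writeLong(deltaMs);
            reportOut.writeInt(len);
            reportOut.flush();
        }
    }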

Is there an easier way to do this? Are there any libraries for this?

Caliper answered 22/10, 2011 at 18:9 Comment(2)
I do not understand why it would be difficult to save the time on the server side when it starts serving a client and check the duration at the end. – Concentrated
Well, how would you check the duration? – Caliper

A simple solution:

Save a timestamp on the server before you send a defined number of packages.

Then send the packages to the client and let the client report back to the server when it has received the last package.

Save a new timestamp on the server when the client has answered.

All you need to do now is determine the RTT and subtract RTT/2 from the difference between the two timestamps.

This should get you a fairly accurate measurement.
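
A minimal server-side sketch of this in Java. It assumes, purely hypothetically, that the client echoes a one-byte ping immediately and sends a one-byte ack once it has received the last package:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    public class ThroughputProbe {

        // Estimate the round-trip time with a tiny message.
        static long measureRttMs(DataOutputStream out, DataInputStream in) throws IOException {
            long start = System.nanoTime();
            out.writeByte(0);          // ping
            out.flush();
            in.readByte();             // pong from the client
            return (System.nanoTime() - start) / 1_000_000;
        }

        // Send the packages, wait for the client's ack, and derive throughput.
        static double measureThroughputKBps(DataOutputStream out, DataInputStream in,
                                            byte[][] packages) throws IOException {
            long rttMs = measureRttMs(out, in);

            long totalBytes = 0;
            long start = System.currentTimeMillis();
            for (byte[] pkg : packages) {
                out.writeInt(pkg.length);
                out.write(pkg);
                totalBytes += pkg.length;
            }
            out.flush();
            in.readByte();             // ack: client has received the last package

            long elapsedMs = System.currentTimeMillis() - start;
            long oneWayMs = Math.max(1, elapsedMs - rttMs / 2);  // remove the ack's return trip
            return (totalBytes / 1024.0) / (oneWayMs / 1000.0);
        }
    }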

Outhe answered 23/10, 2011 at 20:0 Comment(1)
Nice idea! That would be easier, provided the RTT remains more or less constant and there is no congestion on the client-to-server channel. – Caliper
