Simulate varying latency for network requests in Chrome

Is it possible to simulate changing network latencies (within a range) for different requests via Chrome?

For example, to test what happens when the order of AJAX responses differs.

Middleoftheroad answered 28/6, 2018 at 12:17

DevTools technical writer here. We have network throttling in the Network panel:

[Screenshot: the Network Throttling controls in the Network panel]

But that creates a steady throttled state. As of Chrome 68 we don't have any feature for randomizing the amount of throttling within a given range.

You might be able to achieve this using Puppeteer.
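
A minimal sketch of that approach (not a built-in DevTools feature; the URL and delay range are placeholders), using Puppeteer's request interception to hold each request for a random delay:

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    // Hold every outgoing request for a random delay before letting it
    // through, so responses can arrive in a different order on each run.
    await page.setRequestInterception(true);
    page.on('request', (request) => {
        const delay = Math.random() * 2000; // 0-2000 ms; tune to taste
        setTimeout(() => request.continue(), delay);
    });

    await page.goto('https://example.com'); // placeholder URL
    // ...exercise the page / run your tests here...
    await browser.close();
})();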

Taitaichung answered 29/6, 2018 at 22:39

I think it would be possible by creating an extension and using the Chrome debugger API with the Network domain.
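
A rough sketch of that idea, assuming an extension background script with the "debugger" permission (the interval and delay range are arbitrary choices of mine):

// background.js of a minimal extension with the "debugger" permission.
// Call randomizeLatency(tabId) for the tab under test, e.g. from a
// browser-action click handler.
function randomizeLatency(tabId) {
    chrome.debugger.attach({ tabId }, '1.3', () => {
        chrome.debugger.sendCommand({ tabId }, 'Network.enable');
        // Re-apply a fresh random latency every second.
        setInterval(() => {
            chrome.debugger.sendCommand({ tabId }, 'Network.emulateNetworkConditions', {
                offline: false,
                latency: Math.random() * 2000, // ms
                downloadThroughput: -1, // -1 leaves throughput unthrottled
                uploadThroughput: -1,
            });
        }, 1000);
    });
}

Note that this varies latency over time for the whole tab rather than per request, so it only approximates per-request randomization.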

Diondione answered 28/6, 2018 at 12:36

You can use this Chrome/Firefox DevTools extension, which can simulate HTTP request delays for configurable URLs: Chrome devtools plugin

Leonleona answered 13/1, 2022 at 13:42

The simplest solution to randomize latency is a proxy or middleware. This will work with any browser.

Here's a working middleware that can be slotted into live-server or any connect or express server:

module.exports = function (req, res, next) {
    // Randomly delay the response by 0-2000 ms, to help uncover race conditions.
    const delay = Math.random() * 2000;
    setTimeout(next, delay);
};

I didn't build this into a full proxy server since I was already using live-server, which supports loading middleware through the CLI (although with a caveat about the directory it resolves middleware paths relative to).
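
For example, assuming the middleware above is saved as random-delay.js (an arbitrary name), loading it looks something like:

    live-server --middleware=./random-delay.js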

For a proxy server, you should be able to use node-http-proxy and adjust their latency example to use a random number instead of a fixed delay.
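
Here's a sketch of that adaptation (the target and listen ports are assumptions; point the target at your real server):

const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({});

// Forward each request to the real server after a random 0-2000 ms delay.
http.createServer((req, res) => {
    const delay = Math.random() * 2000;
    setTimeout(() => {
        proxy.web(req, res, { target: 'http://localhost:9000' }); // assumed target
    }, delay);
}).listen(8000);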

Notes and Possible Improvements

A flaw in this naive implementation is that if a request naturally takes longer than the maximum randomized delay, it may never be reordered to come before other requests. You'd need to increase the randomized latency, which makes it disproportionately slower when you have a lot of small but serial requests cascading.

A more advanced version of this could specifically orchestrate to re-order responses, by keeping track of outstanding requests, and delaying them until all responses chosen to come before them have been sent, and it could use a random seed for repeatability.

It's not trivial, considering some requests may be dependent on others, e.g. a script loads another script, or a css file loads an image.
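
Ignoring the dependency problem for a moment, a naive sketch of the seeded, reordering version might batch requests arriving within a short window and release them in a seeded-shuffled order (this assumes the seedrandom npm package; the window and stagger values are arbitrary):

const seedrandom = require('seedrandom');
const rng = seedrandom(process.env.DELAY_SEED || 'fixed-seed'); // seeded for repeatability

let pending = [];
let timer = null;

module.exports = function (req, res, next) {
    pending.push(next);
    if (!timer) {
        timer = setTimeout(() => {
            // Fisher-Yates shuffle using the seeded RNG.
            for (let i = pending.length - 1; i > 0; i--) {
                const j = Math.floor(rng() * (i + 1));
                [pending[i], pending[j]] = [pending[j], pending[i]];
            }
            // Release the batched requests in the shuffled order.
            pending.forEach((fn, i) => setTimeout(fn, i * 50));
            pending = [];
            timer = null;
        }, 100); // collection window in ms
    }
};

Keep in mind this only shuffles when requests are forwarded, not when responses actually return, so it's a starting point rather than true response orchestration.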

As-is, reproducibility can already be achieved using the Replay browser or similar time-travel debugging tools — very useful to have in your toolkit!

However, if it were a built-in feature, you could see the random seed and reproduce a run even if you didn't know in advance that it was going to uncover a bug, without having to be prepared with an active time-travel debugging session.

Another fine point to note is that if a bug is caused by the proximity of responses in time, adding a lot of randomized latency may spread them out and cause the bug to be reproduced infrequently. The orchestrating version may or may not have this problem.

Also, not all bugs exist due to the order that responses come in; there may be other asynchronous code, and race conditions can exist involving both response times and other delays, in combination. While the orchestrating version's purpose is to reduce undue latency, it may reduce latency that is useful in uncovering bugs.

Perhaps a hybrid would be best, and a binomial distribution would probably work better than a uniform one. It's an interesting problem. Anyway, I think I've said enough.

Mechanist answered 10/4 at 18:37
