Googletest: How to run tests asynchronously?

Consider a large project with thousands of tests, some of which take multiple minutes to complete. When executed sequentially, the whole set of tests takes more than an hour to finish. The testing time could be reduced by executing the tests in parallel.

As far as I know there are no means to do that directly from googletest/mock, like a --async option. Or am I wrong?

One solution is to determine which tests can run in parallel and write a script that starts each of them in a separate job, e.g.

./test --gtest_filter=TestSet.test1 &
./test --gtest_filter=TestSet.test2 &
...
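A slightly fuller sketch of that wrapper (just an illustration; ./test and the filter values are placeholders, and the list of filters would still have to be maintained by hand) could collect the job exit codes so the overall run still fails when one group fails:

#!/usr/bin/env bash
# Start independent test groups as background jobs and fail if any of them fails.
pids=()
./test --gtest_filter=TestSet.test1 & pids+=($!)
./test --gtest_filter=TestSet.test2 & pids+=($!)

status=0
for pid in "${pids[@]}"; do
  wait "$pid" || status=1   # remember that at least one job failed
done
exit "$status"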

But this would require additional maintenance effort and introduce another "layer" between the test code and its execution. I'd like a more convenient solution. For example, one could suffix the TEST and TEST_F macros and introduce TEST_ASYNC, TEST_F_ASYNC. Tests defined with TEST_ASYNC would then be executed by independent threads, starting at the same time.

How can this be achieved? Or is there another solution?

Continuative answered 2/7, 2014 at 10:30 Comment(0)

I would suggest you are solving the wrong problem. You want unit tests to run quickly; in fact, if a test takes several minutes to run, it's not a unit test.

I suggest you split your tests into proper unit tests and integration/regression or slow-running tests. You can then run the unit tests as you develop and run the longer-running ones only before a push/commit.

You could even run the two (or more) sets of tests yourself simultaneously.

The docs themselves suggest using filters to solve this.
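For example (the Slow prefix is just a hypothetical naming convention for the long-running suites), you can exclude those suites from the everyday run and execute them separately:

./test --gtest_filter='-Slow*'   # quick run while developing, skips the slow suites
./test --gtest_filter='Slow*'    # only the slow suites, e.g. before a push/commit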


Edit in light of a downvote, and a new toy mentioned in the docs.

Since I gave this answer, the docs have been updated and now mention a parallel runner, which "works by listing the tests of each binary, and then executing them on workers in separate processes", which would solve the problem. When I first wrote the answer, this didn't exist.

Edraedrea answered 2/7, 2014 at 10:37 Comment(3)
I agree about unit tests running as quickly as possible. The tests I am talking about are integration tests (I just replaced the misleading label). To keep things simple the same framework is used as for the unit tests.Continuative
I feel this is a dogmatic answer. Even if he reclassifies his tests from being unit tests to being integration tests, the question is still the same: how to run them asynchronously. And to say that a unit test isn't a unit test if it is testing something that is time consuming on current hardware seems untrue in many cases.Claytonclaytonia
@Claytonclaytonia when I wrote this, this was the standard/dogmatic way of doing things. I notice there's a new (non-Google) script mentioned in the docs, which will run tests on different threads, so I have updated my answer.Edraedrea

Late response, but I'll put it here for anyone searching for a similar answer. Working on WebRTC, I found a similar need to speed up our test execution. Executing all of our tests sequentially took more than 20 minutes, and a bunch of them spend at least some of that time waiting (so they don't even fully utilize a core).

Even for "proper unit tests" I'd argue this is still relevant, because there's a difference between your single-threaded tests taking 20 seconds and ~1 second to execute (if your workstation is massively parallel, this speedup is not uncommon).

To solve this for us, I developed a script that executes tests in parallel. It is stable enough to run on our continuous integration, and is released here: https://github.com/google/gtest-parallel/

This Python script essentially lists the tests of one or more gtest binaries (optionally narrowed with --gtest_filter), splits them across several workers, and runs the individual tests in parallel. This works fine as long as the tests are independent (don't write to shared files, etc.). For the tests that didn't work, we put them in a webrtc_nonparallel_tests binary and ran those separately, but the vast majority were already fine, and we fixed several of the rest because we wanted the speedup.
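Basic usage looks roughly like this (the binary path is a placeholder, and the flags are taken from the project's README, so check the repository for the current options):

./gtest-parallel out/my_tests                        # run all tests across the available cores
./gtest-parallel out/my_tests --workers=16           # explicit number of worker processes
./gtest-parallel out/my_tests --gtest_filter='Foo*'  # forward a filter to the binary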

Dramaturge answered 8/7, 2017 at 15:51 Comment(0)
