Can Python's unittest test in parallel, like nose can?
Python's nose testing framework has the concept of running multiple tests in parallel.

The purpose of this is not to test concurrency in the code, but to make tests for code that has "no side-effects, no ordering issues, and no external dependencies" run faster. The performance gain comes from concurrent I/O waits when tests access different devices, better use of multiple CPUs/cores, and from running time.sleep() calls in parallel.
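
For reference, the nose feature I mean here is its multiprocess plugin, enabled with something like:

nosetests --processes=4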

I believe the same thing could be done with Python's unittest testing framework by plugging in a suitable test runner.

Has anyone had any experience with such a beast, and can they make any recommendations?

Misstate answered 17/1, 2011 at 4:57 Comment(3)
Similar to #2074574 – Blaspheme
You should mock your sleeps, so you don't have to wait for them in your tests. – Antipathetic
@Ytsen: Yes, it is often useful, but that 'should' hides a large number of assumptions about the nature of the testing. The sleeps are often appropriate. – Misstate

Python unittest's built-in test runner does not run tests in parallel, but it probably wouldn't be too hard to write one that does. I've written my own just to reformat the output and time each test; that took maybe half a day. I think you can swap out the TestSuite class that is used for a derived one that uses the multiprocessing module without much trouble.
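
As a rough sketch of that idea (not the code I actually wrote): each test is re-loaded by its dotted id inside a worker process, run there, and only a pass/fail summary is sent back to the parent. It assumes a flat suite of fully independent, importable test cases; the MultiprocessTestSuite and _run_single names are made up for illustration.

import unittest
from concurrent.futures import ProcessPoolExecutor

def _run_single(test_id):
    # Re-import the test by its dotted id inside the worker and run it there.
    test = unittest.TestLoader().loadTestsFromName(test_id)
    result = unittest.TestResult()
    test.run(result)
    messages = [msg for _, msg in result.failures + result.errors]
    return test_id, result.wasSuccessful(), messages

class MultiprocessTestSuite(unittest.TestSuite):
    def run(self, result, debug=False):
        # Assumes a flat suite of TestCase instances (e.g. from loadTestsFromTestCase).
        tests = {t.id(): t for t in self}
        with ProcessPoolExecutor() as pool:  # one worker per CPU by default
            for test_id, ok, messages in pool.map(_run_single, tests):
                result.testsRun += 1
                if not ok:
                    result.failures.append((tests[test_id], "\n".join(messages)))
        return result

On Windows and macOS the workers are spawned rather than forked, so this has to live in an importable module and be driven from under an if __name__ == '__main__': guard.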

Creepy answered 18/1, 2011 at 6:26 Comment(3)
I accepted this answer, because the implication is that there is no off-the-shelf one available now. – Misstate
@Misstate did you ever do this? – Heckman
@wkschwartz: Not this way. I had a play with some multi-process test-running code completely independent of the unittest framework. It required examining each test case to confirm it was completely independent of any other test; a lot of tests had unexpected interactions. – Misstate

The testtools package is an extension of unittest that supports running tests concurrently. It can be used with your existing test classes that inherit from unittest.TestCase.

For example:

import unittest
import testtools

class MyTester(unittest.TestCase):
    # Your existing tests, e.g.:
    def test_example(self):
        self.assertTrue(True)

# Build an ordinary unittest suite, then wrap it so the cases run concurrently.
suite = unittest.TestLoader().loadTestsFromTestCase(MyTester)
concurrent_suite = testtools.ConcurrentStreamTestSuite(lambda: ((case, None) for case in suite))
# StreamResult() is a do-nothing sink; swap in a richer result object if you
# want to see the outcomes.
concurrent_suite.run(testtools.StreamResult())
Hairworm answered 12/6, 2013 at 7:23 Comment(2)
Just tested this today and my tests still run in sequence :-( – Import
When I tried to run this, the system hung because it ran out of memory. – Estimation

Please use pytest-xdist if you want parallel runs; a minimal invocation is shown after the quote below.

The pytest-xdist plugin extends py.test with some unique test execution modes:

  • test run parallelization: if you have multiple CPUs or hosts you can use those for a combined test run. This allows to speed up development or to use special resources of remote machines.

[...]

More info: Rohan Dunham's blog
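
In practice that mode is just a command-line switch, and since pytest also collects plain unittest.TestCase classes it works for existing unittest suites too; a minimal invocation looks something like:

pip install pytest-xdist
pytest -n 4      # split the tests across 4 worker processes
pytest -n auto   # or use one worker per available CPU core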

Foretopgallant answered 17/5, 2017 at 14:43 Comment(1)
If you also want to generate a junitxml report, then this is your only choice. – Lightship

If you only need Python 3 support, consider using my fastunit.

I just changed a small amount of unittest's code so that test cases run as coroutines.

It has really saved me time.

I only finished it last week and it may not be tested enough, so if any errors happen, please let me know so that I can make it better. Thanks!

Freckle answered 5/2, 2018 at 14:21 Comment(2)
Thanks for helping me correct my wording :D. I just changed the code to async/await style, so now it only supports Python 3.5+; I still hope everyone can use it to increase efficiency. – Freckle
Running your tests asynchronously does not run them in parallel, and as such it should not improve the execution time of your tests. If it does, however, then you should really consider mocking some of your asynchronous code. – Antipathetic

Another option that might be easier, if you don't have that many test cases and they are not interdependent, is to kick off each test case manually in a separate process.

For instance, open up a couple of tmux sessions and then kick off a test case in each session using something like:

python -m unittest -v MyTestModule.MyTestClass.test_n
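
If you'd rather not juggle separate sessions, the same idea works from a single shell by backgrounding each run (the module and test names are the hypothetical ones from above):

python -m unittest -v MyTestModule.MyTestClass.test_1 &
python -m unittest -v MyTestModule.MyTestClass.test_2 &
wait    # block until both background runs finish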
Outermost answered 26/6, 2018 at 18:33 Comment(0)

You can override unittest.TestSuite and implement some concurrency paradigm yourself; you then use your customized TestSuite class just like a normal one. In the following example, I implement my customized TestSuite class using asyncio, dispatching each test case to a thread pool via run_in_executor:

import unittest
import asyncio

class CustomTestSuite(unittest.TestSuite):
    def run(self, result, debug=False):
        """
        We override the 'run' routine to support the execution of unittest in parallel
        :param result:
        :param debug:
        :return:
        """
        topLevel = False
        if getattr(result, '_testRunEntered', False) is False:
            result._testRunEntered = topLevel = True
        asyncMethod = []
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        for index, test in enumerate(self):
            # Wrap each coroutine in a Task up front; asyncio.wait() no longer
            # accepts bare coroutines on Python 3.11+.
            asyncMethod.append(loop.create_task(self.startRunCase(index, test, result)))
        if asyncMethod:
            loop.run_until_complete(asyncio.wait(asyncMethod))
        loop.close()
        if topLevel:
            self._tearDownPreviousClass(None, result)
            self._handleModuleTearDown(result)
            result._testRunEntered = False
        return result

    async def startRunCase(self, index, test, result):
        def _isnotsuite(test):
            "A crude way to tell apart testcases and suites with duck-typing"
            try:
                iter(test)
            except TypeError:
                return True
            return False

        loop = asyncio.get_event_loop()
        if result.shouldStop:
            return False

        if _isnotsuite(test):
            self._tearDownPreviousClass(test, result)
            self._handleModuleFixture(test, result)
            self._handleClassSetUp(test, result)
            result._previousTestClass = test.__class__

            if (getattr(test.__class__, '_classSetupFailed', False) or
                    getattr(result, '_moduleSetUpFailed', False)):
                return True

        # Run the (synchronous) test case in the default thread pool executor,
        # so several cases can be in flight at the same time.
        await loop.run_in_executor(None, test, result)

        if self._cleanup:
            self._removeTestAtIndex(index)

class TestStringMethods(unittest.TestCase):

    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')


    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())


    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)


if __name__ == '__main__':
    suite = CustomTestSuite()
    suite.addTest(TestStringMethods('test_upper'))
    suite.addTest(TestStringMethods('test_isupper'))
    suite.addTest(TestStringMethods('test_split'))
    unittest.TextTestRunner(verbosity=2).run(suite)

In the main block, I just construct my customized CustomTestSuite class, add all the test cases, and finally run it with a standard TextTestRunner.

Chlorinate answered 19/7, 2018 at 19:5 Comment(0)

If this is what you did initially:

runner = unittest.TextTestRunner()
runner.run(suite)

-----------------------------------------

replace it with (ConcurrentTestSuite and fork_for_tests come from the concurrencytest package on PyPI):

from concurrencytest import ConcurrentTestSuite, fork_for_tests

concurrent_suite = ConcurrentTestSuite(suite, fork_for_tests(4))
runner.run(concurrent_suite)
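
For context, a fuller sketch of the same flow (the module and class names are hypothetical; concurrencytest is the package from github.com/cgoldberg/concurrencytest linked in the comments below):

import unittest
from concurrencytest import ConcurrentTestSuite, fork_for_tests

# Load the tests as usual...
suite = unittest.TestLoader().loadTestsFromName('MyTestModule.MyTestClass')
runner = unittest.TextTestRunner()

# ...then wrap the suite so it forks 4 worker processes
# (fork-based, so it needs a Unix-like OS).
concurrent_suite = ConcurrentTestSuite(suite, fork_for_tests(4))
runner.run(concurrent_suite)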
Germinate answered 1/2, 2017 at 13:3 Comment(2)
Where did this module come from? Is it this one: github.com/cgoldberg/concurrencytest? – Misstate
Good! I've used it recently and it works really well for me. – Episternum

Since Python 3.3 you can run the built-in test package with the -j/--multiprocess option to specify the number of worker processes to run the tests in parallel.

By default it runs Python's own regression tests, but you can specify your own test directory with the --testdir option, and use the -m/--match option to filter tests by a glob pattern.

For example, to run all tests in the current directory with 8 worker processes:

python -m test -j8 --testdir .
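
And, combining it with the --match filter mentioned above (the pattern here is only illustrative, assuming --match takes a glob over test names as described):

python -m test -j8 --testdir . --match 'test_parser*'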

Note that the test package is meant for internal use by Python only, and the -j option is currently an undocumented feature, so use it at your own risk. That said, the CPython project on GitHub relies on this feature to speed up its build tests, which hopefully means you can also rely on it for the foreseeable future.

Note also the limitation that the test package only looks for test modules prefixed with test_, with no option to specify a different prefix or pattern; then again, the test_ prefix is a good naming convention to stick to.

Blackcock answered 1/7 at 3:46 Comment(0)
