Unit-testing with dependencies between tests

How do you do unit testing when you have

  • some general unit tests
  • more sophisticated tests checking edge cases, depending on the general ones

To give an example, imagine testing a CSV-reader (I just made up a notation for demonstration),

def test_readCsv(): ...

@dependsOn(test_readCsv)
def test_readCsv_duplicateColumnName(): ...

@dependsOn(test_readCsv)
def test_readCsv_unicodeColumnName(): ...

I expect sub-tests to be run only if their parent test succeeds. The reason is that running these tests takes time, and many failure reports that all trace back to a single cause wouldn't be informative either. Of course, I could shoehorn all the edge cases into the main test, but I wonder if there is a more structured way to do this.
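For illustration, here is a rough sketch of how this could be hand-rolled with plain unittest, recording the parent's outcome in a module-level flag and skipping the dependents when it failed (the flag and class names are just placeholders). It works, but it feels ad hoc, which is why I'm asking for a more structured approach:

import unittest

_parent_passed = False  # module-level flag shared between the tests below

class CsvReaderTests(unittest.TestCase):
    # Relies on unittest's default alphabetical ordering within a TestCase,
    # so test_readCsv runs before the more specific test names.
    def test_readCsv(self):
        global _parent_passed
        # ... general CSV-reading assertions would go here ...
        _parent_passed = True  # only reached if nothing above failed

    def test_readCsv_duplicateColumnName(self):
        if not _parent_passed:
            self.skipTest("test_readCsv did not pass")
        # ... duplicate-column edge-case assertions ...

    def test_readCsv_unicodeColumnName(self):
        if not _parent_passed:
            self.skipTest("test_readCsv did not pass")
        # ... unicode-column edge-case assertions ...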

I've found these related but different questions,

UPDATE:

I've found TestNG which has great built-in support for test dependencies. You can write tests like this,

@Test(dependsOnMethods = {"test_readCsv"})
public void test_readCsv_duplicateColumnName() {
   ...
}
Ellington answered 1/10, 2010 at 21:29 Comment(0)

Personally, I wouldn't worry about creating dependencies between unit tests. This sounds like a bit of a code smell to me. A few points:

  • If a test fails, let the others fail too, and get a good idea of the scale of the problem that the adverse code change has caused.
  • Test failures should be the exception rather than the norm, so why waste effort and create dependencies when the vast majority of the time (hopefully!) no benefit is derived? If failures happen often, your problem is not with unit test dependencies but with frequent test failures.
  • Unit tests should run really fast. If they are running slowly, focus your efforts on increasing the speed of those tests rather than on preventing subsequent failures. Do this by decoupling your code more and using dependency injection or mocking, as in the sketch below.
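As a minimal sketch of that last point, here is a hypothetical CsvReader that takes its file-opening function as an injected dependency, so the test can substitute a fake and never touch the file system:

import unittest
from unittest import mock

class CsvReader:
    """Hypothetical class under test; the file opener is injected."""
    def __init__(self, opener=open):
        self._opener = opener

    def read(self, path):
        with self._opener(path) as f:
            lines = f.read().splitlines()
        header, *rows = (line.split(",") for line in lines)
        return header, rows

class CsvReaderTest(unittest.TestCase):
    def test_read_parses_header_and_rows(self):
        # Fake the slow file-system dependency instead of reading real files.
        fake_open = mock.mock_open(read_data="a,b\n1,2\n")
        header, rows = CsvReader(opener=fake_open).read("ignored.csv")
        self.assertEqual(header, ["a", "b"])
        self.assertEqual(rows, [["1", "2"]])

if __name__ == "__main__":
    unittest.main()

Because nothing touches the disk, the test runs in a fraction of a second, and a failure points directly at the behavior that broke.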
Lizalizabeth answered 1/10, 2010 at 21:45 Comment(4)
Yes, BUT: if you have a thing that fails and causes a cascade of failures, you are not guaranteed to get the fail messages in the right order to fix them (esp. if you do any auto-discovery of tests). For example, we have a set of unit tests that check to see whether a web environment is set up correctly, which (among other things) is handy for new employees. If you're missing file X and that file is symlinked in 3 places, you want to fix the missing file first. That might not be obvious to a newbie and thus you diminish the usefulness of the test suite.Abram
There are some scenarios where dependencies are needed between tests. For example: Testing an API wrapper that logs into an external server. That login session would need to be shared between tests.Trumantrumann
"If a test fails, let the others fail to and get a good idea of the scale of the problem that the adverse code change made." ... pass or fail are meaningless for tests with unmet dependencies. If creating an object fails or the object is created but results with bad properties. You can't run any meaningful tests on that object's behavior. Some may disagree on whether or not the creation is a test in itself, either way, if it fails no further testing can be done on it. Therefore some argue for test reports with pass, fail, could not test. Though not many frameworks support this.Windsail
to add to this much later... such dependencies are very helpful in situations where you can't avoid large monoliths, such as where each function tested extends the state of an object in a way that depends on its previous state. Purists would say, 'just mock the different states,' but that can be very time-consuming. If you can't avoid such a monolith, my experience has been that it's much better to simply construct the monolith once and test as each state change happens. Remember, if testing doesn't save you time, it's not worth doing in the first place.Begley

Proboscis is a Python version of TestNG (which is a Java library).

See packages.python.org/proboscis/

It supports dependencies, e.g.

@test(depends_on=[test_readCsv])
def test_readCsv_duplicateColumnName():
    ...
Handfast answered 23/5, 2012 at 14:8 Comment(1)
Unsure if I missed something but I couldn't find how to support both setup_class() and setup() with proboscis. It seemed to have a @before_class but not a @before.Jasun

I'm not sure what language you're referring to (you don't specifically mention it in your question), but for something like PHPUnit there is an @depends annotation that will only run a test if the test it depends on has already passed.

Depending on what language or unit-testing framework you use, there may be something similar available.

Illusage answered 1/10, 2010 at 21:36 Comment(0)

I have implemented a plugin for Nose (Python) which adds support for test dependencies and test prioritization.

As mentioned in the other answers/comments, this is often a bad idea; however, there can be exceptions where you would want to do this (in my case it was performance for integration tests, with a huge overhead for getting into a testable state: minutes vs. hours).

You can find it here: nosedep.

A minimal example is:

def test_a():
    pass

@depends(before=test_a)
def test_b():
    pass

This ensures that test_b is always run before test_a.

Jasun answered 13/10, 2015 at 6:36 Comment(0)

You may want to use pytest-dependency. According to their documentation, the code looks elegant:

import pytest

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_a():
    assert False

@pytest.mark.dependency()
def test_b():
    pass

@pytest.mark.dependency(depends=["test_a"])
def test_c():
    pass

@pytest.mark.dependency(depends=["test_b"])
def test_d():
    pass

@pytest.mark.dependency(depends=["test_b", "test_c"])
def test_e():
    pass

Please note that it is a plugin for pytest, not for unittest (which is part of Python itself). So you need two more dependencies (e.g. add them to requirements.txt):

pytest==5.1.1
pytest-dependency==0.4.0
Atencio answered 27/8, 2019 at 23:18 Comment(0)

According to best practices and unit-testing principles, a unit test should not depend on other tests.

Each test case should check one concrete, isolated behavior.

Then, if a test case fails, you know exactly what went wrong in your code.

Sparling answered 4/10, 2010 at 10:0 Comment(2)
I have the same problem as the OP. I have one test that tests the constructor and another test that tests a method on the class. Obviously, if the constructor fails, both tests will fail, which HIDES what is wrong with my code. The constructor test should fail, the method test should be skipped.Plantagenet
This misses the point of the question. The second test can run on its own, independent of the first, and checks concrete, isolated behaviour. However, if the first test fails, the second test is guaranteed to fail, so there is no point running it.Plantagenet
