Writing Quality Tests

We know that code coverage is a poor metric to use when gauging the quality of test code. We also know that testing the language/framework is a waste of time.

On the other hand, what metrics can we use to identify quality tests? Are there any best practices or rules of thumb that you've learned to help you identify and write higher-quality tests?

Caruso answered 12/10, 2008 at 19:2 Comment(0)
  1. Make sure your tests are independent of each other. A test shouldn't depend on the execution or results of some other test.
  2. Make sure each test has clearly defined entry criteria, test steps and exit criteria.
  3. Set up a Requirements Verification Traceability Matrix (RVTM). Each test should verify one or more requirement. Also, each requirement should be verified by at least one test.
  4. Make sure your tests are identifiable. Establish a simple naming or labeling convention and stick to it. Reference the test identifier when logging defects.
  5. Treat your tests like you treat your code. Have a testware development process that mirrors your software development process. Tests should have peer reviews, be under version control, have change control procedures, etc.
  6. Categorize and organize your tests. Make it easy to find and run a test, or suite of tests, as needed.
  7. Make your tests as succinct as possible. This makes them easier to run, and automate. It's better to run lots of little tests than one large test.
  8. When a test fails, make it easy to see why the test failed (see the sketch below).
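
A minimal sketch of points 1, 7, and 8 using Python's built-in unittest; the `parse_price` function is hypothetical, invented for illustration. Each test builds its own input (independence), checks one thing (succinctness), and carries a message that explains a failure:

```python
import unittest

def parse_price(text):
    """Hypothetical function under test: '$1,234.50' -> 1234.5."""
    return float(text.replace("$", "").replace(",", ""))

class ParsePriceTest(unittest.TestCase):
    # No test depends on another's execution or results.

    def test_plain_number(self):
        self.assertEqual(parse_price("$10"), 10.0)

    def test_thousands_separator(self):
        # A single-purpose test: a failure here points straight at
        # comma handling rather than at "something in pricing".
        self.assertEqual(parse_price("$1,234.50"), 1234.5,
                         "thousands separators should be ignored")

if __name__ == "__main__":
    unittest.main()
```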
Enamour answered 13/10, 2008 at 11:53 Comment(0)

Make sure it's easy and quick to write tests. Then write lots of them.

I've found that it's very hard to predict in advance which tests will be the ones which end up failing either now, or a long way down the line. I tend to take a scatter-gun approach, trying to hit corner cases if I can think of them.

Also, don't be afraid of writing bigger tests that exercise a bunch of things together. Of course, if such a test fails it might take longer to figure out what went wrong, but often problems only arise once you start gluing things together.
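
One hedged way to take that scatter-gun pass in Python's unittest is subTest, shown here with a hypothetical `slugify` function: many corner cases run inside one bigger test, yet each failing case is still reported individually, which softens the "takes longer to figure out what went wrong" cost:

```python
import unittest

def slugify(text):
    """Hypothetical function under test: 'Hello, World!' -> 'hello-world'."""
    cleaned = "".join(c if c.isalnum() else " " for c in text.lower())
    return "-".join(cleaned.split())

class SlugifyCornerCases(unittest.TestCase):
    def test_corner_cases(self):
        cases = [
            ("Hello, World!", "hello-world"),
            ("", ""),                # empty input
            ("   ", ""),             # whitespace only
            ("a--b", "a-b"),         # repeated separators
            ("Ünïcode", "ünïcode"),  # non-ASCII letters
        ]
        for text, expected in cases:
            with self.subTest(text=text):  # each case reported separately
                self.assertEqual(slugify(text), expected)

if __name__ == "__main__":
    unittest.main()
```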

Tungting answered 12/10, 2008 at 19:39 Comment(0)

Write tests that verify the base functionality and the individual use cases of the software's intent. Then write tests to check edge cases and verify expected exceptions.

In other words, write good unit tests from a customer perspective, and forget about metrics for test code. No metric will tell you if your test code is good; only functioning software tells you when your test code is good.
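
A minimal sketch of that split in Python's unittest, with a hypothetical `withdraw` function: one test pins the base functionality, another verifies the expected exception from a customer-visible failure mode:

```python
import unittest

def withdraw(balance, amount):
    """Hypothetical function under test."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawTest(unittest.TestCase):
    def test_base_functionality(self):
        self.assertEqual(withdraw(100, 30), 70)

    def test_expected_exception(self):
        # Verify the failure mode the customer would actually hit,
        # not just the happy path.
        with self.assertRaises(ValueError):
            withdraw(10, 30)

if __name__ == "__main__":
    unittest.main()
```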

Hornbook answered 12/10, 2008 at 19:8 Comment(0)

I think use cases prove very useful for getting the best test coverage. If you have your functionality captured as use cases, they can easily be converted into test scenarios that cover the positive, negative, and exception paths. A use case also states the prerequisites and any data preparation, which proves very handy when writing test cases.
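
A sketch of that mapping in Python's unittest, built around a hypothetical "register a user" use case: `setUp` holds the prerequisites and data prep the use case states, and the scenarios split into positive, negative, and exception paths:

```python
import unittest

class UserStore:
    """Hypothetical system under test for a 'register a user' use case."""
    def __init__(self):
        self.users = set()

    def register(self, name):
        if not name:
            raise ValueError("name required")
        if name in self.users:
            return False  # duplicate registration is rejected
        self.users.add(name)
        return True

class RegisterUserUseCase(unittest.TestCase):
    def setUp(self):
        # Prerequisite / data prep taken from the use case:
        # an existing user is already registered.
        self.store = UserStore()
        self.store.register("alice")

    def test_positive_new_user(self):
        self.assertTrue(self.store.register("bob"))

    def test_negative_duplicate_user(self):
        self.assertFalse(self.store.register("alice"))

    def test_exception_empty_name(self):
        with self.assertRaises(ValueError):
            self.store.register("")

if __name__ == "__main__":
    unittest.main()
```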

Byblow answered 14/10, 2008 at 16:8 Comment(0)

My rules of thumb:

  1. Cover even the simplest cases in your test plan (don't risk leaving the most-used functionality untested)
  2. Trace the corresponding requirement near each test case (as in the sketch after this list)
  3. As Joel says, have a separate team that does testing
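
A minimal sketch of point 2, assuming a hypothetical requirement ID REQ-AUTH-042: putting the ID in both the test name and the docstring keeps the trace next to the test and makes it greppable for a traceability matrix:

```python
import unittest

MAX_FAILED_ATTEMPTS = 3  # hypothetical parameter from REQ-AUTH-042

def is_locked_out(failed_attempts):
    """Hypothetical function under test."""
    return failed_attempts >= MAX_FAILED_ATTEMPTS

class AuthRequirementTests(unittest.TestCase):
    def test_req_auth_042_lockout_after_three_failures(self):
        """REQ-AUTH-042: the account locks after three failed attempts."""
        self.assertFalse(is_locked_out(2))
        self.assertTrue(is_locked_out(3))

if __name__ == "__main__":
    unittest.main()
```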
Kagera answered 12/10, 2008 at 19:14 Comment(0)

I'd disagree that code coverage isn't a useful metric. If you don't have 100% code coverage, that at least indicates areas that need more tests.

In general, though, once you have adequate statement coverage, the next logical step is to write tests that either directly verify the requirements the code was written to meet, or deliberately stress the edge cases. Neither kind will fall naturally out of anything you can easily measure directly.
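
As a minimal sketch of turning coverage into that indicator, assuming the third-party coverage.py package is installed (`pip install coverage`) and the suite lives under a hypothetical `tests` directory; the `coverage run` command line is the more common route to the same report:

```python
import unittest

import coverage

cov = coverage.Coverage()
cov.start()

# Run the whole suite while coverage is recording.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report(show_missing=True)  # prints the line numbers no test executed
```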

Wasserman answered 12/10, 2008 at 19:21 Comment(3)
Is a test for a property getter that only does `return foo;` needed? What about hundreds of them? Do you really think that code should be covered by tests? – Salesperson
Note that I didn't say one test per method. What I said was that missing coverage indicates functionality that's not being tested. If that getter method is going to be used somewhere in the system, then it ought to be used in (one or more of) the tests as well. – Wasserman
@Sergio Acosta: If property tests are generated automatically, I don't see a problem in testing property getters and setters. The problem arises when you write tests by hand; you'll probably have better things to test than getters and setters. – Glossitis

There are two good ways to verify test quality:

1. Code review

With code review it is possible to verify the important steps defined by @Patrick Cuff in his answer: https://mcmap.net/q/242618/-writing-quality-tests

Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills.

2. Mutation testing

The second is cheaper: it is an automated job that measures test quality.

Mutation testing (or Mutation analysis or Program mutation) is used to design new software tests and evaluate the quality of existing software tests.
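
As a hand-rolled illustration of the idea (real tools such as PIT for Java or mutmut for Python generate and run mutants automatically): a mutant flips an operator in the code under test, and a quality suite should fail against it, i.e. "kill" it. The `is_adult` function here is invented for the sketch:

```python
def is_adult(age):
    """Original code under test."""
    return age >= 18

def is_adult_mutant(age):
    """Mutant: '>=' flipped to '>', the kind of change a tool injects."""
    return age > 18

def run_suite(fn):
    # Without the boundary check below, both versions would pass and the
    # mutant would survive, exposing a gap in the suite's quality.
    assert fn(30) is True
    assert fn(10) is False
    assert fn(18) is True  # the boundary case is what kills the mutant

run_suite(is_adult)  # passes against the real code
try:
    run_suite(is_adult_mutant)
except AssertionError:
    print("mutant killed: the suite caught the injected bug")
```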

Turbulence answered 11/1, 2014 at 23:33 Comment(0)
