What is code coverage and how do YOU measure it? [closed]

8

351

I was asked this question regarding our automating testing code coverage. It seems to be that, outside of automated tools, it is more art than science. Are there any real-world examples of how to use code coverage?

Moldavia answered 12/10, 2008 at 2:35 Comment(0)
307

Code coverage is a measurement of how many lines/blocks/arcs of your code are executed while the automated tests are running.

Code coverage is collected by using a specialized tool to instrument the binaries with tracing calls and then running a full set of automated tests against the instrumented product. A good tool will give you not only the percentage of the code that is executed, but will also allow you to drill into the data and see exactly which lines of code were executed during a particular test.
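As a rough sketch of what that instrumentation amounts to (written in JavaScript for brevity; the function and counter names are made up for illustration, not the output of any particular tool):

const hits = { ifLine: 0, thenBranch: 0, elseBranch: 0 }

function isEligible(age) {
    hits.ifLine++                 // tracing call injected by the coverage tool
    if (age >= 18) {
        hits.thenBranch++         // records that this branch was taken
        return true
    } else {
        hits.elseBranch++
        return false
    }
}

// After the test run, the tool reports any counters still at zero,
// i.e. the lines/branches the tests never executed.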

Our team uses Magellan - an in-house set of code coverage tools. If you are a .NET shop, Visual Studio has integrated tools to collect code coverage. You can also roll some custom tools, like this article describes.

If you are a C++ shop, Intel has some tools that run on Windows and Linux, though I haven't used them. I've also heard there's the gcov tool for GCC, but I don't know anything about it and can't give you a link.

As to how we use it - code coverage is one of our exit criteria for each milestone. We actually have three code coverage metrics - coverage from unit tests (from the development team), scenario tests (from the test team) and combined coverage.

BTW, while code coverage is a good metric of how much testing you are doing, it is not necessarily a good metric of how well you are testing your product. There are other metrics you should use along with code coverage to ensure the quality.

Panda answered 12/10, 2008 at 2:54 Comment(4)
"There are other metrics you should use along with code coverage to ensure the quality." Could you say what are these other metrics?Statutory
You can also use Testwell CTC++, it is a pretty complete code coverage tool for C, C++, C# and JavaWittol
@Statutory Mutation tests are another measure of how extensive your tests are.Jampack
Franci, I quoted you in my illustrative answer. Could you comment on my statement that "The ideal of 100% blackbox coverage is a fantasy"?Marylnmarylou
279

Code coverage basically tells you how much of your code is covered by tests. For example, if you have 90% code coverage, it means 10% of the code is not covered by tests.

I know you might be thinking that if 90% of the code is covered, it's good enough, but you have to look from a different angle. What is stopping you from getting 100% code coverage?

A good example will be this:

if (customer.IsOldCustomer()) 
{
    // logic for existing customers
}
else 
{
    // logic for new customers
}

Now, in the code above, there are two paths/branches. If you are always hitting the "YES" branch, you are not covering the "else" part, and that will show up in the code coverage results. This is good because now you know what is not covered, and you can write a test to cover the "else" part. If there were no code coverage, you would just be sitting on a time bomb, waiting for it to explode.

NCover is a good tool to measure code coverage.

Alfilaria answered 12/10, 2008 at 3:11 Comment(0)
77

Just remember, having "100% code-coverage" doesn't mean everything is tested completely - while it means every line of code is tested, it doesn't mean they are tested under every (common) situation.

I would use code-coverage to highlight bits of code that I should probably write tests for. For example, if whatever code-coverage tool I'm using shows that myImportantFunction() isn't executed while running my current unit tests, those tests should probably be improved.

Basically, 100% code-coverage doesn't mean your code is perfect. Use it as a guide to write more comprehensive (unit-)tests.

Sandstorm answered 12/10, 2008 at 3:24 Comment(3)
-"100% code-coverage" doesn't mean everything is tested completely - while it means every line of code is tested, it doesn't mean they are tested under every (common) situation..- "under every (common) situation" is this in regards to data input and parameters? I'm having difficulty understanding why if everything is tested, it doesn't equate to being tested completely.Beadroll
Just because every line of your code is run at some point in your tests, it doesn't mean you have tested every possible scenario that the code could be run under. If you just had a function that took x and returned x/x and you ran the test using my_func(2) you would have 100% coverage (as the function's code will have been run) but you've missed a huge issue when 0 is the parameter. I.e. you haven't tested all necessary scenarios even with 100% coverage.Diaphone
@steve, I quoted you in my illustrative answer. I'd be interested in hearing your or dbr's opinion of my statement that "The ideal of 100% blackbox coverage is a fantasy"?Marylnmarylou
67

To complement the previous answers with a few additional points:

Code coverage means how well your test set covers your source code, i.e. to what extent the source code is exercised by the set of test cases.

As mentioned in the answers above, there are various coverage criteria, like paths, conditions, functions, statements, etc. But there are additional criteria worth covering:

  1. Condition coverage: every boolean expression is evaluated to both true and false (see the sketch right after this list).
  2. Decision coverage: not just that each boolean expression is evaluated to true and false once, but that every body of the subsequent if-elseif-else is executed.
  3. Loop coverage: every loop has been executed zero times, once, and more than once. Also, if we assume a maximum iteration limit, then, if feasible, execute the loop the maximum number of times and one more than the maximum.
  4. Entry and exit coverage: test every possible call and every possible return value.
  5. Parameter value coverage (PVC): check that all possible values of a parameter are tested. For example, a string could commonly be any of these: a) null, b) empty, c) whitespace (space, tabs, new line), d) valid string, e) invalid string, f) single-byte string, g) double-byte string. Failure to test each possible parameter value may leave a bug. Testing only one of these could still result in 100% code coverage, since each line is covered, but as only one of the seven options is tested, it amounts to only about 14% coverage of the parameter values.
  6. Inheritance coverage: in object-oriented code, when a method returns a derived object referred to by a base class, tests should also cover the case where a sibling derived object is returned.
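To make the first two criteria a bit more concrete, here is a minimal sketch (the function, its inputs, and the call sites are invented purely for illustration):

function canCheckout(cart, user) {
    if (cart.length > 0 && user.isLoggedIn) {   // one decision made up of two conditions
        return true
    }
    return false
}

// Decision coverage: the whole "if" must evaluate to both true and false, e.g.
//   canCheckout(['book'], { isLoggedIn: true })   // decision is true
//   canCheckout([],       { isLoggedIn: true })   // decision is false
// Condition coverage: each individual condition must also be seen both true and false, so
//   canCheckout(['book'], { isLoggedIn: false })  // second condition evaluates to false
// is needed as well, even though the decision outcome (false) has already been observed.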

Note: Static code analysis will find whether there is any unreachable or dangling code, i.e. code not reached by any call path, and it covers other static checks as well. But even if static code analysis reports that 100% of the code is reachable, it says nothing about whether your test set actually exercises all of the coverage criteria above.

Spies answered 28/10, 2013 at 7:53 Comment(0)
17

Code coverage has been explained well in the previous answers. So this is more of an answer to the second part of the question.

We've used three tools to determine code coverage.

  1. JTest - a proprietary tool built over JUnit. (It generates unit tests as well.)
  2. Cobertura - an open source code coverage tool that can easily be coupled with JUnit tests to generate reports.
  3. Emma - another open-source coverage tool; we've used this one for a slightly different purpose than unit testing: generating coverage reports when the web application is accessed by end users. Coupled with web testing tools (for example, Canoo), this can give you very useful coverage reports that tell you how much code is covered during typical end-user usage.

We use these tools to

  • Review that developers have written good unit tests
  • Ensure that all code is traversed during black-box testing
Indira answered 12/10, 2008 at 3:59 Comment(0)
7

Code coverage is simply a measure of how much of your code is exercised by tests. There are a variety of coverage criteria that can be measured, but typically it is the various paths, conditions, functions, and statements within a program that make up the total coverage. The code coverage metric is then just the percentage of those items that your tests execute - for example, if your tests execute 86 of 100 statements, you have 86% statement coverage.

As for how I track unit test coverage on my projects, I use static code analysis tools.

Stepdaughter answered 12/10, 2008 at 2:47 Comment(0)
5

For Perl there's the excellent Devel::Cover module which I regularly use on my modules.

If the build and installation is managed by Module::Build you can simply run ./Build testcover to get a nice HTML site that tells you the coverage per sub, line and condition, with nice colors making it easy to see which code path has not been covered.

Rand answered 12/10, 2008 at 11:56 Comment(0)
3

What code coverage IS NOT

To truly understand what code coverage is, it is very important to understand what it is not.

A couple of answers/comments here and on related questions have alluded to this:

  • Franci Penov

    BTW, while code coverage is a good metric of how much testing you are doing, it is not necessarily a good metric of how well you are testing your product.

  • steve

    Just because every line of your code is run at some point in your tests, it doesn't mean you have tested every possible scenario that the code could be run under. If you just had a function that took x and returned x/x and you ran the test using my_func(2) you would have 100% coverage (as the function's code will have been run) but you've missed a huge issue when 0 is the parameter. I.e. you haven't tested all necessary scenarios even with 100% coverage.

  • KeithS:

    However, the flip side of coverage is actually twofold: first, a test that adds coverage for coverage's sake is useless; every test must prove that code works as expected in some novel situation. Also, "coverage" is not "exercise"; your test suites may execute every line of code in the SUT, but they may not prove that a line of logic works in every situation.

No one says it more succinctly and to the point than Mark Simpson:

Code coverage tells you what you definitely haven't tested, not what you have.

An Illustrative Example

I spent some time writing a reply to a feature request for Istanbul (a JavaScript test coverage tool) asking it to "Change definition of coverage to require more than 1 hit" per line. No one will ever see it there 🤣, so I thought it might be useful to reuse the gist of it here:

A coverage tool CANNOT prove that your code is tested adequately. All it can do is tell you that you provided some kind of coverage for every line of code in your codebase, but even then it doesn't prove the coverage means anything, because a test might execute a line of code without making any assertions on its results. Only you as a developer can decide the actual semantically unique input variations and boundary conditions that need to be covered by tests and ensure that the test logic does in fact make the right assertions.

For example, say you have the following JavaScript function. A single test that asserts an input of (1, 1) returns 1 would give you 100% line coverage. What does that prove?

function max(a, b) {
    return a > b ? a : b
}
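For instance, a test along these lines (sketched here with Node's built-in assert, though any test runner would do) is enough to reach that 100% line coverage:

const assert = require('assert')

// One input, one assertion - and every line of max() has now been executed.
assert.strictEqual(max(1, 1), 1)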

Putting aside for a moment the semantically poor coverage of this test, the 100% line coverage is rather misleading too, as it doesn't provide 100% branch coverage. That's easily seen by splitting the branches onto different lines and rerunning the line coverage report:

function max(a, b) {
    if (a > b) {
        return a
    } else {
        return b
    }
}

or even

function max(a, b) {
    return a > b ?
    a :
    b
}

What this tells us is that the "coverage" metric depends too much on the implementation, whereas ideally testing should be black box. And even then it's a judgement call.

For example, would the following three input cases constitute complete testing of the max function?

  • (2, 1)
  • (1, 2)
  • (1, 1)

You'd get 100% line and 100% branch coverage for the above implementations. But what about non-number inputs? Ok, so you add two more input cases:

  • (null, 1)
  • (1, null)

which forces you to update the implementation:

function max(a, b) {
    if (typeof a !== 'number' ||  typeof b !== 'number') {
        return undefined
    }
    return a > b ? a : b
}
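A sketch of what that five-case test set might look like (again using Node's built-in assert; the expected results follow from the implementation above):

const assert = require('assert')

assert.strictEqual(max(2, 1), 2)              // a > b
assert.strictEqual(max(1, 2), 2)              // a < b
assert.strictEqual(max(1, 1), 1)              // equal inputs
assert.strictEqual(max(null, 1), undefined)   // non-number input
assert.strictEqual(max(1, null), undefined)   // non-number input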

Looking good. You have 100% line and branch coverage, and you've covered invalid inputs.

But is that enough? What about negative numbers?

The ideal of 100% blackbox coverage is a fantasy

In my opinion, in this situation, given the simple nature of this function, testing negative number cases is anal overkill. If the situation were different, say the function only existed because we needed to implement some tricky algorithm or optimization that may or may not work as expected for negative numbers, then I'd add more input cases, including negative numbers.

Oftentimes, you only discover corner cases because you have hundreds or thousands of users; only through their using your software in unexpected ways, or in conditions and software environments you could not foresee (or could not reproduce even if you did foresee them), are such rare cases exposed. And often those rare cases are artifacts of the nature of your implementation, not something you'd arrive at from analysis of an idealized abstraction of the buggy code's interfaces.

I think what that shows is that the ideal of 100% blackbox coverage is a bit of a fantasy. You would waste a lot of time writing unnecessary tests if you treated everything as an idealized black box. In the example above, I know the implementation uses a simple and reliable non-number check and then uses the native JavaScript comparison (a > b), and that it would be silly to do anything more complex. Knowing that, I'm not going to test passing in negative numbers, floats, strings, objects, etc.

At the end of the day, you have to be practical and use good judgement, and that judgement usually cannot ignore knowing something about the nature of what's in the black box, or at least the assumptions made inside the black box.

All this said, I don't have a CS degree 😂. What's the equivalent of IANAL for programmer advice?

Marylnmarylou answered 5/8, 2022 at 14:47 Comment(2)
(What does it mean "anal overkill"? If it's just what I think, probably better to just say "overkill") – Fresh
I originally thought that all mentioned quotes were related to persons related to unit testing world in general, instead they are "just" SO contributors. I don't think it's completely fair to collect statements from other answers to improve this specific answer. But I'm quite new here. – Fresh
