I'm using gcov for unit-test coverage analysis in my C++ project, which includes regions parallelized with OpenMP. Reviewing the gcov results, I noticed that the lines inside the OpenMP-parallelized regions are excluded from the analysis. Is there a way to include these parallelized regions in the coverage analysis?
I attach a minimal example and the commands I use to compile and run gcov.
File example.cpp:
#include <cmath>
#include <omp.h>

int main(int argc, char* argv[]){
    #pragma omp parallel for
    for(int i=0;i<200;i++){
        double a=std::sqrt(5+i);
    }
    return 0;
}
I compile the code using the following command:
g++ -fprofile-arcs -ftest-coverage -fopenmp example.cpp -o example
Then I run the program and gcov as follows:
./example && gcov -w example.cpp
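For completeness, the whole flow can be sketched as a single script (assuming only standard GCC behaviour; --coverage is the documented shorthand for -fprofile-arcs -ftest-coverage when compiling and -lgcov when linking, so it should be interchangeable here):

# sketch: instrument, run, report
g++ --coverage -fopenmp example.cpp -o example
./example            # writes example.gcda next to example.gcno
gcov -w example.cpp  # produces example.cpp.gcov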
The generated output file example.cpp.gcov is:
        -:    0:Source:example.cpp
        -:    0:Graph:example.gcno
        -:    0:Data:example.gcda
        -:    0:Runs:1
        -:    1:#include <cmath>
        -:    2:#include <omp.h>
        -:    3:
        1:    4:int main(int argc, char* argv[]){
        -:    5:    #pragma omp parallel for
        -:    6:    for(int i=0;i<200;i++){
        -:    7:        double a=std::sqrt(5+i);
        -:    8:    }
        1:    9:    return 0;
        -:   10:}
EDIT:
I just tried a slightly more complicated example, to rule out the possibility that the compiler optimized the useless code away, and the result is the same as before:
#include <cmath>
#include <omp.h>
#include <iostream>

int main(int argc, char* argv[]){
    double a=0.;
    #pragma omp parallel for
    for(int i=0;i<200;i++){
        #pragma omp critical
        {
            a=a+std::sqrt(5+i);
        }
    }
    std::cout << "a: " << a << std::endl;
    return 0;
}
Comments:
Note that the lines in question are reported as - (non-code), not as 0 (uncovered). I cannot reproduce this exact behaviour on any GCC version; all that I tested also attach debug info to the #pragma and sqrt lines. I'm also surprised that you use gcov -w but the output doesn't contain any info about basic blocks. What exact versions are you using? – Sincerity
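For reference on the basic-block remark above, gcov can be asked for per-block and branch detail explicitly with its standard options; a sketch, not the invocation used in the question:

gcov -a -b example.cpp   # -a/--all-blocks: per-basic-block counts, -b/--branch-probabilities: branch summaries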
-O0 should be the default config. Maybe it has been overridden somehow on your machine (or by some environment variable), or you use a non-standard GCC? – Froggy
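A sketch of what this suggests checking, i.e. stating -O0 explicitly in the coverage build so no optimization can sneak in from elsewhere (otherwise the same flags as in the question):

g++ -O0 -fprofile-arcs -ftest-coverage -fopenmp example.cpp -o example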
The purpose of #pragma omp parallel for is to parallelize the loop iterations across multiple threads. Focusing on individual iteration coverage within a parallel loop might not be that relevant. Maybe unrolling the loops helps you a bit. – Hemingway