In my class design, I use abstract classes and virtual functions extensively. I have a feeling that virtual functions affect performance. Is this true? At the same time, I suspect the difference isn't noticeable and that worrying about it amounts to premature optimization. Right?
A good rule of thumb is:
It's not a performance problem until you can prove it.
The use of virtual functions will have a very slight effect on performance, but it's unlikely to affect the overall performance of your application. Better places to look for performance improvements are in algorithms and I/O.
An excellent article that talks about virtual functions (and more) is Member Function Pointers and the Fastest Possible C++ Delegates.
Your question made me curious, so I went ahead and ran some timings on the 3 GHz in-order PowerPC CPU we work with. The test I ran was to make a simple 4D vector class with get/set functions:
class TestVec
{
    float x, y, z, w;
public:
    float GetX() { return x; }
    float SetX(float to) { return x = to; }
    // ... and so on for the other three components
};
Then I set up three arrays, each containing 1024 of these vectors (small enough to fit in L1), and ran a loop that added them to one another (A.x = B.x + C.x) 1000 times. I ran this with the functions defined as inline, virtual, and regular function calls (a rough sketch of the add loop follows the results). Here are the results:
- inline: 8ms (0.65ns per call)
- direct: 68ms (5.53ns per call)
- virtual: 160ms (13ns per call)
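For reference, the add loop looked roughly like the sketch below. This is a reconstruction for illustration only, not the original test harness; the class name TestVec4 and the constants are made up to match the description above.

// Self-contained sketch of the benchmark described above.
struct TestVec4 {
    float x, y, z, w;
    // In the real test these accessors were compiled as inline, direct, or
    // virtual calls; plain member functions are shown here.
    float GetX() { return x; }
    float SetX(float to) { return x = to; }
    float GetY() { return y; }
    float SetY(float to) { return y = to; }
    float GetZ() { return z; }
    float SetZ(float to) { return z = to; }
    float GetW() { return w; }
    float SetW(float to) { return w = to; }
};

int main() {
    const int kVectors = 1024;   // small enough to stay in L1
    const int kPasses  = 1000;
    static TestVec4 A[kVectors], B[kVectors], C[kVectors];  // zero-initialized

    for (int pass = 0; pass < kPasses; ++pass) {
        for (int i = 0; i < kVectors; ++i) {
            // Three accessor calls per component: two gets and one set,
            // i.e. 3 * 4 * 1024 = 12,288 calls per pass.
            A[i].SetX(B[i].GetX() + C[i].GetX());
            A[i].SetY(B[i].GetY() + C[i].GetY());
            A[i].SetZ(B[i].GetZ() + C[i].GetZ());
            A[i].SetW(B[i].GetW() + C[i].GetW());
        }
    }
    return 0;
}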
So, in this case (where everything fits in cache) the virtual function calls were about 20x slower than the inline calls. But what does this really mean? Each trip through the loop caused exactly 3 * 4 * 1024 = 12,288 function calls (1024 vectors times four components times three calls per add), so these times represent 1000 * 12,288 = 12,288,000 function calls. The virtual loop took 92 ms longer than the direct loop, so the additional overhead was about 7 nanoseconds per call.
From this I conclude: yes, virtual functions are much slower than direct functions, and no, unless you're planning on calling them ten million times per second, it doesn't matter.
See also: comparison of the generated assembly.
When Objective-C (where all methods are virtual) is the primary language for the iPhone and freakin' Java is the main language for Android, I think it's pretty safe to use C++ virtual functions on our 3 GHz dual-core towers.
In very performance critical applications (like video games) a virtual function call can be too slow. With modern hardware, the biggest performance concern is the cache miss. If data isn't in the cache, it may be hundreds of cycles before it's available.
A normal function call can generate an instruction cache miss when the CPU fetches the first instruction of the new function and it's not in the cache.
A virtual function call first needs to load the vtable pointer from the object. This can result in a data cache miss. Then it loads the function pointer from the vtable which can result in another data cache miss. Then it calls the function which can result in an instruction cache miss like a non-virtual function.
In many cases, two extra cache misses are not a concern, but in a tight loop on performance critical code it can dramatically reduce performance.
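To make those two extra loads concrete, here is a hand-written model of what a vtable-based virtual call roughly amounts to. This is a conceptual sketch, not actual compiler output; the struct layout and names are invented.

// Conceptual model of virtual dispatch with an explicit vtable.
struct Object;                            // object whose first word is the vptr
typedef void (*Method)(Object*);          // one slot in the vtable

struct VTable { Method slots[1]; };       // table of function pointers
struct Object { const VTable* vptr; /* data members follow */ };

void example_impl(Object*) { }            // the function the slot happens to hold

void call_virtually(Object* obj) {
    const VTable* vt = obj->vptr;         // load 1: may miss in the data cache
    Method fn = vt->slots[0];             // load 2: may miss in the data cache
    fn(obj);                              // indirect call: may miss in the
                                          // instruction cache, like any call
}

int main() {
    static const VTable vt = { { &example_impl } };
    Object obj = { &vt };
    call_virtually(&obj);                 // target is resolved at runtime
    return 0;
}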
From page 44 of Agner Fog's "Optimizing Software in C++" manual:
The time it takes to call a virtual member function is a few clock cycles more than it takes to call a non-virtual member function, provided that the function call statement always calls the same version of the virtual function. If the version changes then you will get a misprediction penalty of 10 - 30 clock cycles. The rules for prediction and misprediction of virtual function calls is the same as for switch statements...
Regarding switch: with totally arbitrary case values, sure. But if all cases are consecutive, a compiler might be able to optimise this into a jump-table (ah, that reminds me of the good old Z80 days), which should be (for want of a better term) constant-time. Not that I recommend trying to replace vfuncs with switch, which is ludicrous. ;) – Swee
"The rules for prediction and misprediction of virtual function calls is the same as for switch statements" is also true in the sense that, if a vtable were implemented as a switch-case, there would be two possibilities: 1) it gets optimized into a jump table (as you said) if the cases are consecutive, or 2) it can't be optimized into a jump table because the cases aren't consecutive, and so it will get a misprediction penalty of 10 - 30 clock cycles, as Agner states. – Planer
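As a small illustration of the jump-table point (the enum, function, and case bodies here are invented), a switch over consecutive values is the kind of code a compiler can typically lower to a jump table: the case value indexes a table of code addresses followed by a single indirect branch, which is structurally the same thing a vtable dispatch does with a table of function pointers.

enum Op { kAdd = 0, kSub = 1, kMul = 2, kDiv = 3 };  // consecutive values

// With consecutive cases, compilers commonly emit a jump table instead of a
// chain of compares, so dispatch cost does not grow with the number of cases.
int apply(Op op, int a, int b) {
    switch (op) {
        case kAdd: return a + b;
        case kSub: return a - b;
        case kMul: return a * b;
        case kDiv: return b != 0 ? a / b : 0;
    }
    return 0;  // unreachable for valid Op values
}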
Absolutely. It was a problem way back when computers ran at 100 MHz, as every method call required a lookup in the vtable before it was called. But today, on a 3 GHz CPU whose first-level cache has more memory than my first computer had? Not at all. Allocating memory from main RAM will cost you more time than if all your functions were virtual.
It's like the old, old days when people said structured programming was slow because all the code was split into functions, and each function call required a stack allocation!
The only time I would even think of bothering to consider the performance impact of a virtual function is if it were heavily used and instantiated in templated code that ended up throughout everything. Even then, I wouldn't spend too much effort on it!
PS: think of other 'easy to use' languages - all their methods are virtual under the covers, and they don't crawl nowadays.
There's another performance criterion besides execution time: a vtable takes up memory as well, and in some cases it can be avoided. ATL uses compile-time "simulated dynamic binding" with templates to get the effect of "static polymorphism", which is somewhat hard to explain: you basically pass the derived class as a parameter to a base-class template, so at compile time the base class "knows" what its derived class is in each instance. This won't let you store multiple different derived classes in a collection of base types (that's run-time polymorphism), but in a static sense, if you want to make a class Y that is the same as a pre-existing template class X which has the hooks for this kind of overriding, you just need to override the methods you care about, and you get the base methods of class X without having to have a vtable.
In classes with large memory footprints, the cost of a single vtable pointer is not much, but some of the ATL classes in COM are very small, and the vtable savings are worth it if the run-time polymorphism case is never going to occur.
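A minimal sketch of that idea, often called CRTP (the class names here are invented and this is not the actual ATL machinery):

#include <cstdio>

// The base class takes its own derived class as a template parameter, so the
// "overridable" call is resolved at compile time and no vtable is needed.
template <class DerivedT>
class ShapeBase {
public:
    void Draw() {
        // Statically dispatch to the derived class's implementation.
        static_cast<DerivedT*>(this)->DrawImpl();
    }
    void DrawImpl() { std::printf("default drawing\n"); }  // default behaviour
};

class Circle : public ShapeBase<Circle> {
public:
    void DrawImpl() { std::printf("drawing a circle\n"); }  // the one method we override
};

class Square : public ShapeBase<Square> {
    // No override: inherits ShapeBase<Square>::DrawImpl(), still without a vtable.
};

int main() {
    Circle c;
    Square s;
    c.Draw();  // prints "drawing a circle"
    s.Draw();  // prints "default drawing"
    return 0;
}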
See also this other SO question.
By the way here's a posting I found that talks about the CPU-time performance aspects.
Yes, you're right, and if you're curious about the cost of a virtual function call you might find this post interesting.
The only way I can see a virtual function becoming a performance problem is if many virtual functions are called within a tight loop, and if and only if they cause a page fault or some other "heavy" memory operation to occur.
Though, as other people have said, it's pretty much never going to be a problem for you in real life. And if you think it is, run a profiler, do some tests, and verify that this really is a problem before trying to "un-design" your code for a performance benefit.
When a class method is not virtual, the compiler can usually inline the call. In contrast, when you call through a pointer to a class with a virtual function, the real address is known only at runtime.
This is well illustrated by the following test, where the time difference is ~700% (!):
#include <cstdio>   // printf
#include <ctime>    // clock, CLOCKS_PER_SEC

class Direct
{
public:
    int Perform(int &ia) { return ++ia; }
};

class AbstrBase
{
public:
    virtual int Perform(int &ia) = 0;
};

class Derived : public AbstrBase
{
public:
    virtual int Perform(int &ia) { return ++ia; }
};

int main(int argc, char* argv[])
{
    Direct dir;
    Direct *pdir = &dir;

    // Non-virtual call through a pointer: the compiler can inline it.
    int ia = 0;
    double start = clock();
    while (pdir->Perform(ia));   // runs until ++ia wraps around to 0
    double end = clock();
    printf("Direct: %.3f, ia=%d\n", (end - start) / CLOCKS_PER_SEC, ia);

    // Virtual call through a base pointer: resolved at runtime via the vtable.
    Derived drv;
    AbstrBase *ab = &drv;
    ia = 0;
    start = clock();
    while (ab->Perform(ia));
    end = clock();
    printf("Virtual: %.3f, ia=%d\n", (end - start) / CLOCKS_PER_SEC, ia);

    return 0;
}
The impact of a virtual function call depends strongly on the situation. If there are few calls and a significant amount of work inside the function, it can be negligible. Or, when the virtual call is used repeatedly many times to do some simple operation, it can be really big.
++ia. So what? – Tachylyte
I've gone back and forth on this at least 20 times on my particular project. Although there can be some great gains in terms of code reuse, clarity, maintainability, and readability, on the other hand, performance hits do still exist with virtual functions.
Is the performance hit going to be noticeable on a modern laptop/desktop/tablet? Probably not! However, in certain cases with embedded systems the performance hit may be the driving factor in your code's inefficiency, especially if the virtual function is called over and over again in a loop.
Here's a somewhat dated paper that analyzes best practices for C/C++ in the embedded-systems context: http://www.open-std.org/jtc1/sc22/wg21/docs/ESC_Boston_01_304_paper.pdf
To conclude: it's up to the programmer to understand the pros and cons of using a certain construct over another. Unless you're super performance-driven, you probably don't care about the performance hit and should use all the neat OO stuff in C++ to help make your code as usable as possible.
In my experience, the main relevant thing is the ability to inline a function. If you have performance/optimization needs that dictate a function needs to be inlined, then you can't make the function virtual because it would prevent that. Otherwise, you probably won't notice the difference.
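A rough sketch of that trade-off (the types and hot loops here are invented for illustration): called on a concrete type, a small accessor can be inlined away; called through a base-class pointer, a virtual accessor generally remains an out-of-line indirect call.

#include <cstddef>

struct Sample {
    float value;
    float Get() const { return value; }   // non-virtual: trivially inlinable
};

struct ISample {
    virtual ~ISample() {}
    // Virtual: through an ISample* this generally stays an indirect call.
    virtual float Get() const = 0;
};

struct SampleImpl : ISample {
    float value;
    float Get() const { return value; }
};

// Hot loop over concrete objects: Get() typically inlines down to a plain load.
float SumDirect(const Sample* data, std::size_t n) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) sum += data[i].Get();
    return sum;
}

// Hot loop through base pointers: each Get() is usually a vtable call.
float SumVirtual(const ISample* const* data, std::size_t n) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) sum += data[i]->Get();
    return sum;
}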
One thing to note is that this:
boolean contains(A element) {
for (A current : this)
if (element.equals(current))
return true;
return false;
}
may be faster than this:
boolean contains(A element) {
for (A current : this)
if (current.equals(element))
return true;
return false;
}
This is because the first version always calls the same function: element has one fixed runtime type, so element.equals always dispatches to the same implementation. The second version may end up calling many different functions, because each current can have a different runtime type with its own equals override. This applies to any virtual function in any language.
I say "may" because this depends on the compiler, the cache, etc.
The performance penalty of using virtual functions can never outweigh the advantages you get at the design level. Supposedly, a call to a virtual function is 25% less efficient than a direct call to a static function, because of the level of indirection through the VMT. However, the time taken to make the call is normally very small compared to the time taken in the actual execution of your function, so the total performance cost will be negligible, especially with current hardware performance. Furthermore, the compiler can sometimes optimise and see that no virtual call is needed and compile it into a static call. So don't worry: use virtual functions and abstract classes as much as you need.
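As a rough illustration of that last point (the class names here are invented), when the compiler can prove the dynamic type of an object, for example for a local object or a class marked final, it is allowed to replace the virtual call with a direct, and potentially inlined, call:

#include <cstdio>

struct Animal {
    virtual ~Animal() {}
    virtual void Speak() const { std::printf("...\n"); }
};

// 'final' tells the compiler no further overrides can exist, so calls through
// a Dog* or Dog& can be devirtualized into ordinary direct calls.
struct Dog final : Animal {
    void Speak() const { std::printf("woof\n"); }
};

int main() {
    Dog d;
    d.Speak();      // dynamic type known statically: direct call, can be inlined

    Animal* a = &d;
    a->Speak();     // may also be devirtualized if the compiler can prove *a is
                    // a Dog; otherwise it remains a vtable call
    return 0;
}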
The performance penalty of using virtual functions can sometimes be so insignificant that it is completely outweighed by the advantages you get at the design level. The key difference is saying sometimes, not never. – Swee
I have always asked myself this, especially since, quite a few years ago, I also did such a test comparing the timings of a standard member call with a virtual one, and was really angry about the results at that time, with empty virtual calls being 8 times slower than non-virtual ones.
Today I had to decide whether or not to use a virtual function for allocating more memory in my buffer class, in a very performance critical app, so I googled (and found you), and in the end, did the test again.
// g++ -std=c++0x -o perf perf.cpp -lrt
#include <typeinfo>  // typeid
#include <cstdio>    // printf
#include <cstdlib>   // atoll
#include <ctime>     // clock_gettime

struct Virtual { virtual int call() { return 42; } };
struct Inline  { inline  int call() { return 42; } };
struct Normal  { int call(); };
int Normal::call() { return 42; }

template <typename T>
void test(unsigned long long count) {
    std::printf("Timing function calls of '%s' %llu times ...\n", typeid(T).name(), count);
    timespec t0, t1;
    clock_gettime(CLOCK_REALTIME, &t0);
    T test;
    while (count--) test.call();
    clock_gettime(CLOCK_REALTIME, &t1);
    // Compute the elapsed time, borrowing a second when the nanoseconds underflow.
    t1.tv_sec -= t0.tv_sec;
    if (t1.tv_nsec >= t0.tv_nsec) {
        t1.tv_nsec -= t0.tv_nsec;
    } else {
        --t1.tv_sec;
        t1.tv_nsec += 1000000000l - t0.tv_nsec;
    }
    std::printf(" -- result: %ld sec %ld nsec\n", (long)t1.tv_sec, t1.tv_nsec);
}

template <typename T, typename Ua, typename... Un>
void test(unsigned long long count) {
    test<T>(count);
    test<Ua, Un...>(count);
}

int main(int argc, const char* argv[]) {
    test<Inline, Normal, Virtual>(argc == 2 ? std::atoll(argv[1]) : 10000000000llu);
    return 0;
}
And I was really surprised that it, in fact, really does not matter at all anymore. While it makes sense that inlines are faster than non-virtual calls, and those in turn faster than virtual calls, the result often comes down to the overall load on the machine and whether your cache holds the necessary data or not; and while you might be able to optimize at the cache level, I think this should be done by compiler developers more than by application devs.