C++ standard practice: virtual interface classes vs. templates
I have to make a decision between generic programming (templates) and runtime polymorphism.

Well, the scenario is standard: I want to make my monolithic, interdependent code more modular, clean, and extensible. It is still at a stage where a change of design principle is doable and, as I see it, highly desirable.

Should I introduce pure virtual base classes (interfaces) or templates?

I am aware of the basics regarding the template option: less indirection, better performance, longer compile times but no late binding, and so on.

The STL uses little (or no?) inheritance, and Boost mostly doesn't either. But I think those are meant to be really small, basic tools that the programmer uses every two lines of code.

I consider the inheritance and late-binding approach more sensible for plug-in-style big pieces of code and functionality that should be interchangeable, updatable, etc. after deployment or even at runtime.

Well, my scenario lies somewhere in between.

I don't need to exchange pieces of code on the fly at runtime; compile time is fine. It is usually also a very central and frequently used piece of functionality, not logically separable into big blocks.

This makes me lean somewhat toward the template solution. To me it also looks somewhat cleaner.

Are there any big, bad implications? Are interfaces still THE way to go? When are they not? Which complies more with standard C++ style?

I know this is bordering on subjective, but I am really interested in some experiences. I don't own a copy of Scott Meyers' Effective C++, so I set my hopes on you guys :)

Ptyalism answered 22/7, 2009 at 15:4 Comment(1)
related question that talks about speed/performance of templates vs. virtual classes: https://mcmap.net/q/332099/-c-templates-for-performance-closed/52074Brelje
You're basically right, dynamic polymorphism (inheritance, virtuals) is generally the right choice when the type should be allowed to change at runtime (for example in plugin architectures). Static polymorphism (templates) is a better choice if the type should only change at compile-time.

The main potential downside to templates is that they generally have to be defined in headers (which means more code gets #included), and this often leads to slower compile times.

But design-wise, I can't see any problems in using templates when possible.

Which complies more with standard c++ style?

Depends on what "standard C++ style" is. The C++ standard library uses a bit of everything: the STL uses templates for everything, the slightly older IOStreams library uses inheritance and virtual functions, and the library functions inherited from C use neither, of course.

These days, templates are by far the most popular choice though, and I'd have to say that is the most "standard" approach.

Saltwort answered 22/7, 2009 at 15:11 Comment(3)
Well, I do see one problem with using templates rather than interfaces: the requirements are totally implicit. When you have to implement a pure virtual function, you are given its exact signature. But when you see a template type like _AllocT or Iter, you have no idea what your class is required to have, nor whether it even has to be a class. Your only way to know is to look for decent documentation about it, which I had trouble doing today when trying to create my own STL-compatible allocator class.Britzka
"Your only way to know is by looking for decent documentation about it" - or by trying to compile and seeing which functions the compiler complains about being unable to find, yes. Also, Concepts are intended to solve this problem. (And even if it had been an interface, you'd still need to find decent documentation. Knowing which functions to override is not enough; you also need to know what their semantics should be, and the interface doesn't tell you that.) Still, you are right. There is a reason the language supports both. :)Saltwort
Wouldn't it be neater to erase many templates and replace them with interfaces, if C++ could support that?Bluebird
Properties of classic object-oriented polymorphism:

  • objects are bound at run-time; this is more flexible, but also consumes more resources (CPU) at run-time
  • strong typing brings somewhat more type safety, but the need to dynamic_cast (and its potential to blow up in a customer's face) might easily compensate for that
  • probably more widely known and understood, but "classical" deep inheritance hierarchies seem horrendous to me

Properties of compile-time polymorphism by template:

  • compile-time binding allows more aggressive optimizations, but prevents run-time flexibility
  • duck-typing might seem more awkward, but failures are usually compile-time failures
  • can sometimes be harder to read and understand; without concepts, compiler diagnostics might sometimes become enraging

Note that there is no need to decide for either one. You can freely mix and mingle both of them (and many other idioms and paradigms). Often, this leads to very impressive (and expressive) code. (See, for example, things like type erasure.) To get an idea what's possible through clever mixing of paradigms, you might want to browse through Alexandrescu's "Modern C++ Design".

Cigar answered 22/7, 2009 at 15:30 Comment(2)
Strong typing in OOP? That doesn't even make sense. You get all the type safety benefits from compile-time polymorphism. In OOP, type erasure (for example by hiding the actual type behind an interface) and up/downcasts pretty much eliminate any hope you might have had for type safety.Saltwort
You are certainly right. What I meant, however (and phrased poorly), is that with run-time polymorphism you can be sure that what you get is a deliberate implementation of some interface, while duck typing in compile-time polymorphism might accept any accidental match. How to say this better?Cigar
It's something of a false opposition. Yes, the major use of inheritance and virtual functions is in iostreams, which are very old, and are written in quite a different style to the rest of the std libraries.

But many of the "coolest" modern C++ libraries such as boost do make use of runtime polymorphism, they just use templates to make it more convenient to use.

boost::any and std::tr1::function (formerly also from boost) are fine examples.

They are both single item containers to something whose concrete type is unknown at compile-time (this is especially obvious with any, because it has its own kind of dynamic cast operator to get the value out).

Grocer answered 22/7, 2009 at 15:23 Comment(1)
+1 Your answer was very enlightening, but in compliance to the question I accepted jalf's advice. Thanks.Ptyalism
After shoveling a little more experience onto my plate, there are some things about templates that I don't like: certain disadvantages that, to me, disqualify template metaprogramming from being a usable language:

  • readability: too many brackets, too many non-language-enforced (and therefore misused) conventions
  • for someone undergoing the usual evolution in programming languages, templates are unreadable and incomprehensible (just look at the Boost BGL)
  • sometimes it feels like someone tried to write a C++ code generator in awk
  • compiler error messages are cl(utter)ed crap
  • too many hacks are necessary (most of them remedied in C++0x) to get some basic "language"-like functionality
  • no templates in implementation files, resulting in header-only libraries (a very double-edged sword)
  • the code-completion features of typical IDEs are not much help with templates
  • Doing big stuff in MPL seems "haggly", can't find another word for it. Every line of templated code produces constraints on that template type, which are enforced in a text-replacing kind of way. There are semantics inherent in inheritance hierarchies, there are none whatsoever in template structures. It's like everything is a void* and the compiler tries to tell you if there'll be a segfault.

All that being said, I use it quite successfully for basic utilities and libraries. Writing high-level functionality or hardware-related stuff with it doesn't seem to cut it for me. Meaning: I template my building blocks, but build the house the classic way.

Ptyalism answered 15/10, 2010 at 11:11 Comment(0)
I use both in my large code base. When the type is known at compile time, I design with templates; when it's known only at run time, I use virtual functions. I find virtual functions easier to program and easier to read later; however, there are times when performance is critical, and the fact that templated polymorphism (if you can really call it polymorphism) can be inlined really helps.

Choriocarcinoma answered 15/10, 2010 at 15:26 Comment(0)
From my point of view, it is whatever you are best at. If you have more experience with OO, use OO. If you have more experience with generics, use generics.

Both techniques have some equivalent patterns, which means that for a lot of things you can use either, e.g. Strategy in OO vs. Policy in generics, or Template Method in OO vs. the Curiously Recurring Template Pattern (CRTP) in generics.

If you are planning to refactor production code that already works but whose structure is a bit smelly, don't use it as an excuse to play with some new design technique, because in a year or two, once you understand the technique better, you may regret how you refactored your code. It is very easy to introduce new inflexibilities when applying techniques as you are learning them. The aim is to improve the design of existing code; if you aren't skilled in a technique, how do you know you are improving the design as opposed to building a large phallic symbol in your code?

Personally, I am better at OO and tend to favour it, as I know I can produce clean designs that are easy to understand and that most people can change. Most generic code I write is aimed at interfacing with other generic code, e.g. writing an iterator or a generic function for algorithms.

Dachshund answered 22/7, 2009 at 16:17 Comment(0)