The claim that it is a mistake ever to use a standard C++ container as a base class surprises me.
If it is no abuse of the language to declare ...
// Example A
typedef std::vector<double> Rates;
typedef std::vector<double> Charges;
... then what, exactly, is the hazard in declaring ...
// Example B
class Rates : public std::vector<double> {
    // ...
};
class Charges : public std::vector<double> {
    // ...
};
The advantages of B include:
- Enables overloading of functions because f(Rates &) and f(Charges &) are distinct signatures
- Enables other templates to be specialized, because X<Rates> and X<Charges> are distinct types
- Forward declaration is trivial
- Debugger probably tells you whether the object is a Rates or a Charges
- If, as time goes by, Rates and Charges develop personalities — a Singleton for Rates, an output format for Charges — there is an obvious scope for that functionality to be implemented.
The advantages of A include:
- Don't have to provide trivial implementations of constructors, etc.
- The fifteen-year-old pre-standard compiler that's the only thing that will compile your legacy code doesn't choke
- Since specializations are impossible, template X<Rates> and template X<Charges> will use the same code, so no pointless bloat.
Both approaches are superior to using a raw container: if the implementation changes from vector<double> to vector<float>, there is exactly one place to change with B, and probably only one place with A (it could be more, because someone may have duplicated the typedef in several places).
My aim is that this be a specific, answerable question, not a discussion of better or worse practice. Show the worst thing that can happen as a consequence of deriving from a standard container, that would have been prevented by using a typedef instead.
Edit:
Without question, adding a destructor to class Rates or class Charges would be a risk, because std::vector does not declare its destructor as virtual. There is no destructor in the example, and no need for one. Destroying a Rates or Charges object will invoke the base class destructor. There is no need for polymorphism here, either. The challenge is to show something bad happening as a consequence of using derivation instead of a typedef.
Edit:
Consider this use case:
#include <vector>
#include <iostream>
void kill_it(std::vector<double> *victim) {
    // user code, knows nothing of Rates or Charges
    // invokes non-virtual ~std::vector<double>(), then frees the
    // memory allocated at address victim
    delete victim;
}
typedef std::vector<double> Rates;
class Charges: public std::vector<double> { };
int main(int, char **) {
    std::vector<double> *p1, *p2;
    p1 = new Rates;
    p2 = new Charges;
    // ???
    kill_it(p2);
    kill_it(p1);
    return 0;
}
Is there any possible error that even an arbitrarily hapless user could introduce in the ??? section which will result in a problem with Charges (the derived class), but not with Rates (the typedef)?
In the Microsoft implementation, vector<T> is itself implemented via inheritance: vector<T,A> is publicly derived from _Vector_Val<T,A>. Should containment be preferred?
kill_it is wrong. If the dynamic type of victim is not std::vector<double>, then the delete simply invokes undefined behaviour. The call to kill_it(p2) causes this to happen, and so nothing needs to be added to the // ??? section for this to have undefined behaviour. – Ibanez