I have seen the copy-and-swap idiom recommended in various places as the best (or even the only) way to implement strong exception safety for the assignment operator. It seems to me that this approach also has a downside.
Consider the following simplified vector-like class which utilizes copy-and-swap:
#include <algorithm>  // std::copy
#include <cstddef>    // size_t

class IntVec {
    size_t size;
    int* vec;
public:
    IntVec()
        : size(0),
          vec(0)
    {}

    IntVec(IntVec const& other)
        : size(other.size),
          vec(size ? new int[size] : 0)
    {
        std::copy(other.vec, other.vec + size, vec);
    }

    void swap(IntVec& other) {
        using std::swap;
        swap(size, other.size);
        swap(vec, other.vec);
    }

    // copy-and-swap: the by-value parameter is the copy,
    // so all allocation happens before the swap
    IntVec& operator=(IntVec that) {
        swap(that);
        return *this;
    }

    //~IntVec() and other functions ...
};
Implementing the assignment via the copy constructor may be efficient and guarantees exception safety, but it can also cause an unneeded allocation, potentially even triggering an unnecessary out-of-memory error.
Consider assigning a 700 MB IntVec to a 1 GB IntVec on a machine with a heap limit below 2 GB. An optimal assignment would realize it already has enough memory allocated and simply copy the data into its existing buffer. The copy-and-swap implementation instead allocates another 700 MB buffer before the 1 GB one is released, so all three must coexist in memory at once, and the assignment fails with an out-of-memory error needlessly.
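To make the scenario concrete, here is a rough sketch (the sizes are illustrative and the helper function is hypothetical; it assumes big and small were already built with ~1 GB and ~700 MB buffers respectively):

// Assume 'big' already owns a ~1 GB buffer and 'small' a ~700 MB one.
void reassign(IntVec& big, IntVec const& small) {
    // With copy-and-swap, the by-value parameter of operator= copies 'small'
    // first, so a second ~700 MB buffer is allocated while 'big' still holds
    // its 1 GB buffer: roughly 2.4 GB live at the peak, even though reusing
    // the existing 1 GB buffer would have been enough.
    big = small;
}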
This implementation would solve the problem:
IntVec& operator=(IntVec const& that) {
    if (that.size <= size) {
        // the existing buffer is large enough: reuse it, no allocation
        size = that.size;
        std::copy(that.vec, that.vec + that.size, vec);
    } else {
        // otherwise fall back to copy-and-swap
        // (swap takes a non-const reference, so a named copy is needed)
        IntVec copy(that);
        swap(copy);
    }
    return *this;
}
So the bottom line is:
Am I right that this is a problem with the copy-and-swap idiom? Do normal compiler optimizations somehow eliminate the extra allocation? Am I overlooking some problem with my "better" version that the copy-and-swap one solves? Or am I doing my math/algorithms wrong, and the problem doesn't really exist?