People are going to say "It depends on what you're doing".
And they are right.
There's an example here where a conventionally-designed program using std::vector
was performance tuned, through a series of six stages, and its execution time was reduced from 2700 microseconds per unit of work to 3.7, for a speedup factor of 730x.
The first thing done was to notice that a large percentage of time was going into growing arrays and removing elements from them.
So a different array class was used, which reduced time by a large amount.
The second thing done was to notice that a large percentage of time was still going into array-related activities.
So the arrays were eliminated altogether, and linked lists used instead, producing another large speedup.
Then other things were using a large percentage of the remaining time, such as new-ing and delete-ing objects.
Then those objects were recycled in free lists, producing another large speedup.
After a couple more stages, a decision was made to stop trying, because it was getting harder to find things to improve, and the speedup was deemed sufficient.
The point is, don't just choose something that's highly recommended and then hope for the best.
Rather, get it built one way or another, then do performance tuning like this, and be willing to make major changes to your data structure design based on what you see a high percentage of time being spent on.
And iterate it.
You might change your storage scheme from A to B, and later from B to C.
That's perfectly OK.
Comments:

std::array<std::array<double,10>, 10>. – Quezada

std::vector is a drop-in replacement for C arrays. What precisely do you want to be performant? – Byword

(operator[]) … .at() will bounds-check. If your compiler is doing bounds checking on [], then it's time to find another compiler. (Also assuming a non-debug mode of compiling.) – Irenairene