Which is most cache friendly?

I am trying to get a good grip on data-oriented design and how to program best with the cache in mind. There are basically two scenarios that I cannot quite decide between, and I'd like to know which is better and why: is it better to have a vector of objects, or several vectors with the objects' atomic data?

A) Vector of objects example

struct A
{
    GLsizei mIndices;
    GLuint mVBO;
    GLuint mIndexBuffer;
    GLuint mVAO;

    size_t vertexDataSize;
    size_t normalDataSize;
};

std::vector<A> gMeshes;

for (const A& mesh : gMeshes)
{
    glBindVertexArray(mesh.mVAO);
    glDrawElements(GL_TRIANGLES, mesh.mIndices, GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);

    // ...
}

B) Vectors with the atomic data

std::vector<GLsizei> gIndices;
std::vector<GLuint> gVBOs;
std::vector<GLuint> gIndexBuffers;
std::vector<GLuint> gVAOs;
std::vector<size_t> gVertexDataSizes;
std::vector<size_t> gNormalDataSizes;

size_t numMeshes = ...;

for (size_t index = 0; index < numMeshes; ++index)
{
    glBindVertexArray(gVAOs[index]);
    glDrawElements(GL_TRIANGLES, gIndices[index], GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);

    // ...
}

Which one is more memory-efficient and cache-friendly, resulting in fewer cache misses and better performance, and why?

Disjoined asked 1/10, 2013 at 21:25. Comments (7):
Your struct doesn't look big enough for it to really make a difference, but if it were huge, I'd expect your first option to have the fewest misses. – Connected
I've heard in console game programming that you should try to keep the same kind of data close by (i.e. the second approach), and that mixed-content data like the first is taboo. But I'm not sure how relevant that advice is. – Greedy
Wouldn't that depend on the access patterns? I.e., if you are accessing just a few of those elements, but reading all of their data, quite often, the first option looks more promising, while if you usually use only one of the member variables, the second one looks better? (This is just a guess, though.) – Frowsty
Suggest you tag this with opengl. – Gaullist
See #8378167. – Sauropod
Here is another: Structure of arrays and array of structures - performance difference. – Dee
You know what would be more cache-efficient? Using GL_UNSIGNED_SHORT for your index data type. On GPUs, if you have fewer than 65537 vertices you can improve the efficiency of the post-T&L cache and tagging by using 16-bit indices. You might think that 8-bit indices would logically improve performance even more for buffers with fewer than 257 vertices, but most hardware does not support 8-bit indices natively. – Hugo

With some variation according to which level of cache you're talking about, cache works as follows:

  • if the data is already in cache then it is fast to access
  • if the data is not in cache then you incur a cost, but an entire cache line (or page, if we're talking RAM vs swap file rather than cache vs RAM) is brought into cache, so access close to the missed address will not miss.
  • if you're lucky then the memory subsystem will detect sequential access and pre-fetch data that it thinks you're about to need.

So naively the questions to ask are:

  1. how many cache misses occur? -- B wins, because in A you fetch some unused data per record, whereas in B you fetch none, other than a small rounding error at the end of the iteration. So in order to visit all of the necessary data, B fetches fewer cache lines, assuming a significant number of records (see the sketch after this list). If the number of records is insignificant, then cache performance may have little or nothing to do with the performance of your code, because a program that uses a small enough amount of data will find that it's all in cache all the time.
  2. is the access sequential? -- yes in both cases, although this might be harder to detect in case B because there are two interleaved sequences rather than just one.
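To make point 1 concrete, here is a back-of-the-envelope sketch of how many cache lines each layout drags through the cache for the draw loop in the question. The 64-byte line size, the element count, and the plain int/unsigned stand-ins for GLsizei/GLuint are assumptions so the snippet compiles without GL headers; the numbers are illustrative only.

#include <cstddef>
#include <cstdio>

// Stand-ins for GLsizei/GLuint so this compiles without GL headers (assumption).
struct A {
    int         mIndices;
    unsigned    mVBO;
    unsigned    mIndexBuffer;
    unsigned    mVAO;
    std::size_t vertexDataSize;
    std::size_t normalDataSize;
};

int main() {
    const std::size_t kCacheLine = 64;     // assumed line size
    const std::size_t numMeshes  = 10000;  // illustrative record count

    // A: each record drags the whole struct through the cache,
    // including the fields the draw loop never reads.
    std::size_t bytesA = numMeshes * sizeof(A);

    // B: the draw loop only touches gVAOs and gIndices.
    std::size_t bytesB = numMeshes * (sizeof(unsigned) + sizeof(int));

    std::printf("A touches ~%zu cache lines\n", (bytesA + kCacheLine - 1) / kCacheLine);
    std::printf("B touches ~%zu cache lines\n", (bytesB + kCacheLine - 1) / kCacheLine);
}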

So, I would sort of expect B to be faster for this code. However:

  • if this is the only access to the data, then you could speed up A by removing most of the data members from the struct. So do that. Presumably in fact it is not the only access to the data in your program, and the other accesses might affect performance in two ways: the time they actually take, and whether they populate the cache with the data you need.
  • what I expect and what actually happens are frequently different things, and there is little point relying on speculation if you have any ability to test it. In the best case, the sequential access means that there are no cache misses in either code. Testing performance requires no special tool (although they can make it easier), just a clock with a second hand; a minimal timing sketch follows this list. At a pinch, fashion a pendulum from your phone charger.
  • there are some complications I have ignored. Depending on hardware, if you're unlucky with B then at the lowest cache level you could find that the accesses to one vector are evicting the accesses to the other vector, because the corresponding memory just happens to use the same location in cache. This would cause two cache misses per record. This will only happen on what's called "direct-mapped cache". "Two-way cache" or better would save the day, by allowing chunks of both vectors to co-exist even if their first preference location in cache is the same. I don't think that PC hardware generally uses direct-mapped cache, but I don't know for sure and I don't know much about GPUs.
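In that spirit, a minimal timing sketch (my illustration, not the answerer's code): it times plain summing loops over the two layouts, since the GL draw calls themselves can't run in a snippet like this. The record layout and element count are assumptions.

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

struct A { unsigned mVAO; int mIndices; std::size_t pad[2]; }; // stand-in record

int main() {
    const std::size_t n = 1000000; // illustrative element count
    std::vector<A> aos(n);
    std::vector<unsigned> vaos(n);
    std::vector<int> indices(n);

    auto time = [](auto&& f) {
        auto t0 = std::chrono::steady_clock::now();
        f();
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    };

    volatile long sink = 0; // keeps the loops from being optimized away

    double tA = time([&] {
        long s = 0;
        for (const A& m : aos) s += m.mVAO + m.mIndices; // AoS walk
        sink = sink + s;
    });
    double tB = time([&] {
        long s = 0;
        for (std::size_t i = 0; i < n; ++i) s += vaos[i] + indices[i]; // SoA walk
        sink = sink + s;
    });

    std::printf("AoS: %.2f ms, SoA: %.2f ms\n", tA, tB);
}
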
Suction answered 1/10, 2013 at 22:9. Comment (1):
Modern Intel CPUs (Sandy/Ivy Bridge) have 8-way L1 and L2 caches and a 12-way L3. Not sure about AMD. I'm also pretty sure most ARM processors with more than 4 KB of L1 cache are 4-way. – Ipsus

I understand that this is partly opinion-based, and also that it could be a case of premature optimization, but your first option definitely has the best aesthetics. It's one vector versus six - no contest in my eyes.

For cache performance, it ought to be better. That is because the alternative requires access to two different vectors, which splits memory access every single time you render a mesh.

With the structure approach, the mesh is essentially a self-contained object and correctly implies no relation to other meshes. When drawing, you only access that mesh, and when rendering all meshes, you do one at a time in a cache-friendly manner. Yes, you will eat cache more quickly because your vector elements are larger, but you won't be contesting it.

You may also find other benefits later on from using this representation, e.g. if you want to store additional data about a mesh. Adding extra data in more vectors quickly clutters your code and increases the risk of silly errors, whereas it's trivial to make changes to the structure.

Highflown answered 1/10, 2013 at 21:51. Comments (2):
On GPU architectures it's common to have optimized operations for massively parallel memory access; they are optimized for SoA because a single operation can read many consecutive memory positions, while AoS would require strides between the elements. – Balustrade
"access to two different vectors, which splits memory access" - I don't think that's inherently cache-unfriendly. To over-simplify, half of your cache can be used to cache one vector while simultaneously the other half can be caching the other vector (and a third half is caching the stack). Since each vector in B is much smaller than half the size of the vector in A, this at least has the potential to be a win. – Suction

I recommend profiling with either perf or oprofile and posting your results back here (assuming you are running Linux), including the number of elements you iterated across, the total number of iterations, and the hardware you tested on.

If I had to guess (and this is only a guess), I'd suspect that the first approach might be faster due to the locality of data within each structure, and hopefully the OS/hardware can prefetch additional elements for you. But again, this will depend on cache size, cache line size, and other aspects.

Defining "better" is interesting too. Are you looking for overall time to process N elements, low variance in each sample, minimal cache misses (which will be influenced by other processes running on your system), etc.

Don't forget that with STL vectors, you are also at the mercy of the allocator: any push_back that exceeds the current capacity can reallocate the array, which moves every element and leaves the cache full of stale addresses. Another factor to try to isolate if you can, as in the sketch below!
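A small hedged sketch of that isolation step (calling reserve() is my suggestion, not something from the answer above): pre-allocating capacity up front prevents any mid-run reallocation.

#include <cstddef>
#include <vector>

// Hypothetical setup: size everything once, before the benchmark loop runs,
// so no push_back can trigger a reallocation that pollutes the measurement.
void preallocate(std::size_t numMeshes,
                 std::vector<unsigned>& gVAOs,
                 std::vector<int>& gIndices)
{
    gVAOs.reserve(numMeshes);
    gIndices.reserve(numMeshes);
}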

Diatonic answered 1/10, 2013 at 21:52. Comments (2):
Not running Linux, unfortunately. – Disjoined
I suspect there will be good profilers for this sort of thing on Windows too... even the Windows performance counters would be a good start. – Diatonic

Depends on your access patterns. Your first version is AoS (array of structures), second is SoA (structure of arrays).

SoA tends to use less memory (unless you store so few elements that the per-array overhead is actually non-trivial) whenever there's structure padding that you'd normally get in the AoS representation; see the sketch below. It also tends to be a much bigger PITA to code against, since you have to maintain/sync the parallel arrays.
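A quick sketch of the padding point, with a hypothetical field layout and typical 64-bit alignment rules assumed (exact sizes are platform-dependent):

#include <cstdio>

struct P {
    double d; // 8 bytes
    char   c; // 1 byte, then 7 bytes of padding so arrays of P stay aligned
};

int main() {
    // AoS: every element carries the padding.
    std::printf("sizeof(P) = %zu\n", sizeof(P)); // typically 16
    // SoA: a double array plus a char array carry none.
    std::printf("per-element SoA cost = %zu\n", sizeof(double) + sizeof(char)); // 9
}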

AoS tends to excel for random access. As an example, for simplicity let's say each element fits into a cache line and is properly aligned (64-byte size and alignment, e.g.). In that case, if you are randomly accessing the nth element, you get all the relevant data for the element in a single cache line. If you used an SoA and dispersed those fields across separate arrays, you'd have to load memory into multiple cache lines just to load the data for that one element. And because we're accessing the data in a random pattern, we don't benefit from spatial locality much at all, since the next element we're going to access could be somewhere completely different in memory. A sketch follows.
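Here is that contrast with a hypothetical 64-byte element (alignas(64) is my addition, to make each element occupy exactly one cache line on common hardware):

#include <cstddef>
#include <vector>

struct alignas(64) Element {
    float position[3];
    float velocity[3];
    float mass;
    int   id;
    // padded by the compiler up to 64 bytes
};

// AoS: one random lookup touches a single cache line.
float readAoS(const std::vector<Element>& v, std::size_t n) {
    return v[n].position[0] + v[n].velocity[0] + v[n].mass;
}

// SoA: the same lookup touches up to three lines, one per array.
float readSoA(const std::vector<float>& px,
              const std::vector<float>& vx,
              const std::vector<float>& mass, std::size_t n) {
    return px[n] + vx[n] + mass[n];
}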

However, SoA tends to excel for sequential access, mainly because there's often less data to load into the CPU cache in the first place for the entire sequential loop, since it excludes structure padding and cold fields. By cold fields, I mean fields you don't need to access in a particular sequential loop. For example, a physics system might not care about particle fields involved with how the particle looks to the user, like color and a sprite handle; that's irrelevant data. It only cares about particle positions. The SoA allows you to avoid loading that irrelevant data into cache lines (sketched below). It allows you to load as much relevant data into a cache line at once, so you end up with fewer compulsory cache misses (as well as page faults for large enough data) with the SoA.
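A sketch of that hot/cold split with hypothetical particle fields: the physics loop below streams only positions and never pulls color or sprite data into cache.

#include <cstddef>
#include <vector>

struct ParticlesSoA {
    std::vector<float>    x, y;         // hot: read every physics tick
    std::vector<unsigned> color;        // cold: only the renderer cares
    std::vector<int>      spriteHandle; // cold
};

void integrate(ParticlesSoA& p, float dx, float dy) {
    // Sequential walk over two tightly packed arrays: every byte brought
    // into a cache line is a byte this loop actually uses.
    for (std::size_t i = 0; i < p.x.size(); ++i) {
        p.x[i] += dx;
        p.y[i] += dy;
    }
}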

That's also only covering memory access patterns. With SoA representations, you also tend to be able to write simpler and more efficient SIMD code, though again that mainly suits sequential access; a sketch follows.
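For instance, a unit-stride loop like this over one SoA array is a textbook candidate for compiler auto-vectorization at -O2/-O3 (the __restrict qualifier is a common compiler extension, assumed available here):

#include <cstddef>

void scale(float* __restrict xs, std::size_t n, float k) {
    // Contiguous, unit-stride float accesses map directly onto SIMD lanes;
    // an AoS layout would force strided loads instead.
    for (std::size_t i = 0; i < n; ++i)
        xs[i] *= k;
}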

You can also mix the two concepts: use an AoS for hot fields frequently accessed together in random-access patterns, then hoist out the cold fields and store them in parallel, as in the sketch below.
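A sketch of that mixed layout, using a hypothetical split of the question's mesh data:

#include <cstddef>
#include <vector>

struct MeshHot {  // fields touched together on every draw
    unsigned vao;
    int      indexCount;
};

struct MeshCold { // fields touched rarely, e.g. only on load or resize
    std::size_t vertexDataSize;
    std::size_t normalDataSize;
};

// gHot[i] and gCold[i] describe the same mesh.
std::vector<MeshHot>  gHot;
std::vector<MeshCold> gCold;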

Tightwad answered 21/12, 2017 at 3:11
