Understanding std::hardware_destructive_interference_size and std::hardware_constructive_interference_size

C++17 added std::hardware_destructive_interference_size and std::hardware_constructive_interference_size. At first, I thought they were just a portable way to get the size of an L1 cache line, but that is an oversimplification.

Questions:

  • How are these constants related to the L1 cache line size?
  • Is there a good example that demonstrates their use cases?
  • Both are defined static constexpr. Is that not a problem if you build a binary and execute it on other machines with different cache line sizes? How can it protect against false sharing in that scenario when you are not certain on which machine your code will be running?
Papism answered 24/9, 2016 at 19:34 Comment(0)

The intent of these constants is indeed to get the cache-line size. The best place to read about the rationale for them is in the proposal itself:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0154r1.html

I'll quote a snippet of the rationale here for ease of reading:

[...] the granularity of memory that does not interfere (to the first-order) [is] commonly referred to as the cache-line size.

Uses of cache-line size fall into two broad categories:

  • Avoiding destructive interference (false-sharing) between objects with temporally disjoint runtime access patterns from different threads.
  • Promoting constructive interference (true-sharing) between objects which have temporally local runtime access patterns.

The most significant issue with this useful implementation quantity is the questionable portability of the methods used in current practice to determine its value, despite their pervasiveness and popularity as a group. [...]

We aim to contribute a modest invention for this cause, abstractions for this quantity that can be conservatively defined for given purposes by implementations:

  • Destructive interference size: a number that’s suitable as an offset between two objects to likely avoid false-sharing due to different runtime access patterns from different threads.
  • Constructive interference size: a number that’s suitable as a limit on two objects’ combined memory footprint size and base alignment to likely promote true-sharing between them.

In both cases these values are provided on a quality of implementation basis, purely as hints that are likely to improve performance. These are ideal portable values to use with the alignas() keyword, for which there currently exists nearly no standard-supported portable uses.
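For a concrete picture of the two use cases, here's a minimal sketch of how these constants pair with alignas; the struct and field names are my own, not from the proposal:

#include <atomic>
#include <new>  // the two constants live in <new> (C++17)

// Destructive: give each thread's hot counter its own cache line, so one
// thread's writes don't keep invalidating the line another thread is using.
struct alignas(std::hardware_destructive_interference_size) per_thread_counter {
    std::atomic<unsigned> hits{ 0 };
};

// Constructive: keep fields that are always accessed together small enough
// that they fit on a single cache line and are fetched as a unit.
struct bounds {
    int min;
    int max;
};
static_assert(sizeof(bounds) <= std::hardware_constructive_interference_size, "");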


"How are these constants related to the L1 cache line size?"

In theory, pretty directly.

Assume the compiler knows exactly what architecture you'll be running on - then these would almost certainly give you the L1 cache-line size precisely. (As noted later, this is a big assumption.)

For what it's worth, I would almost always expect these values to be the same. I believe the only reason they are declared separately is for completeness. (That said, maybe a compiler wants to estimate L2 cache-line size instead of L1 cache-line size for constructive interference; I don't know if this would actually be useful, though.)


"Is there a good example that demonstrates their use cases?"

At the bottom of this answer I've attached a long benchmark program that demonstrates false-sharing and true-sharing.

It demonstrates false-sharing by allocating an array of int wrappers: in one case multiple elements fit in each L1 cache line, and in the other a single element takes up a whole L1 cache line. In a tight loop, each thread picks a single, fixed element from the array and updates it repeatedly.

It demonstrates true-sharing by allocating a single pair of ints in a wrapper: in one case, the two ints in the pair do not fit together within a single L1 cache line, and in the other they do. In a tight loop, each element of the pair is updated repeatedly.

Note that the code for accessing the object under test does not change; the only difference is the layout and alignment of the objects themselves.

I don't have a C++17 compiler (and assume most people currently don't either), so I've replaced the constants in question with my own. You need to update these values to be accurate on your machine. That said, 64 bytes is probably the correct value on typical modern desktop hardware (at the time of writing).

Warning: the test will use all cores on your machine and allocate ~256MB of memory. Don't forget to compile with optimizations!

On my machine, the output is:

Hardware concurrency: 16
sizeof(naive_int): 4
alignof(naive_int): 4
sizeof(cache_int): 64
alignof(cache_int): 64
sizeof(bad_pair): 72
alignof(bad_pair): 4
sizeof(good_pair): 8
alignof(good_pair): 4
Running naive_int test.
Average time: 0.0873625 seconds, useless result: 3291773
Running cache_int test.
Average time: 0.024724 seconds, useless result: 3286020
Running bad_pair test.
Average time: 0.308667 seconds, useless result: 6396272
Running good_pair test.
Average time: 0.174936 seconds, useless result: 6668457

I get ~3.5x speedup by avoiding false-sharing, and ~1.7x speedup by ensuring true-sharing.


"Both are defined static constexpr. Is that not a problem if you build a binary and execute it on other machines with different cache line sizes? How can it protect against false sharing in that scenario when you are not certain on which machine your code will be running?"

This will indeed be a problem. These constants are not guaranteed to map to the cache-line size of any particular target machine; they are intended to be the best approximation the compiler can muster.

This is noted in the proposal, and in the appendix they give an example of how some libraries try to detect cache-line size at compile time based on various environmental hints and macros. You are guaranteed that this value is at least alignof(max_align_t), which is an obvious lower bound.

In other words, this value should be used as your fallback case; you are free to define a precise value if you know it, e.g.:

constexpr std::size_t cache_line_size() {
#ifdef KNOWN_L1_CACHE_LINE_SIZE
  return KNOWN_L1_CACHE_LINE_SIZE;
#else
  return std::hardware_destructive_interference_size;
#endif
}

During compilation, if you want to assume a particular cache-line size, just define KNOWN_L1_CACHE_LINE_SIZE.
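If you need the machine's actual cache-line size at runtime rather than the compiler's compile-time hint, you can query the OS. Here is a hedged sketch; sysconf with _SC_LEVEL1_DCACHE_LINESIZE is a glibc extension, not portable POSIX:

#include <cstddef>
#include <new>       // std::hardware_destructive_interference_size (C++17)
#include <unistd.h>  // sysconf; _SC_LEVEL1_DCACHE_LINESIZE is glibc-specific

std::size_t runtime_cache_line_size() {
#ifdef _SC_LEVEL1_DCACHE_LINESIZE
    const long size = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);
    if (size > 0) return static_cast<std::size_t>(size);
#endif
    return std::hardware_destructive_interference_size;  // compile-time fallback
}

Note that a runtime value can't drive alignas or struct layout; it's useful for things like padding elements in dynamically sized allocations.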

Hope this helps!

Benchmark program:

#include <chrono>
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <future>
#include <iostream>
#include <mutex>   // std::mutex, used by threadlatch
#include <random>
#include <thread>
#include <tuple>   // std::tuple, returned by run_threads
#include <vector>

// !!! YOU MUST UPDATE THIS TO BE ACCURATE !!!
constexpr std::size_t hardware_destructive_interference_size = 64;

// !!! YOU MUST UPDATE THIS TO BE ACCURATE !!!
constexpr std::size_t hardware_constructive_interference_size = 64;

constexpr unsigned kTimingTrialsToComputeAverage = 100;
constexpr unsigned kInnerLoopTrials = 1000000;

typedef unsigned useless_result_t;
typedef double elapsed_secs_t;

//////// CODE TO BE SAMPLED:

// wraps an int, default alignment allows false-sharing
struct naive_int {
    int value;
};
static_assert(alignof(naive_int) < hardware_destructive_interference_size, "");

// wraps an int, cache alignment prevents false-sharing
struct cache_int {
    alignas(hardware_destructive_interference_size) int value;
};
static_assert(alignof(cache_int) == hardware_destructive_interference_size, "");

// wraps a pair of int, purposefully pushes them too far apart for true-sharing
struct bad_pair {
    int first;
    char padding[hardware_constructive_interference_size];
    int second;
};
static_assert(sizeof(bad_pair) > hardware_constructive_interference_size, "");

// wraps a pair of int, ensures they fit nicely together for true-sharing
struct good_pair {
    int first;
    int second;
};
static_assert(sizeof(good_pair) <= hardware_constructive_interference_size, "");

// accesses a specific array element many times
template <typename T, typename Latch>
useless_result_t sample_array_threadfunc(
    Latch& latch,
    unsigned thread_index,
    T& vec) {
    // prepare for computation
    std::random_device rd;
    std::mt19937 mt{ rd() };
    std::uniform_int_distribution<int> dist{ 0, 4096 };

    auto& element = vec[vec.size() / 2 + thread_index];

    latch.count_down_and_wait();

    // compute
    for (unsigned trial = 0; trial != kInnerLoopTrials; ++trial) {
        element.value = dist(mt);
    }

    return static_cast<useless_result_t>(element.value);
}

// accesses a pair's elements many times
template <typename T, typename Latch>
useless_result_t sample_pair_threadfunc(
    Latch& latch,
    unsigned thread_index,
    T& pair) {
    // prepare for computation
    std::random_device rd;
    std::mt19937 mt{ rd() };
    std::uniform_int_distribution<int> dist{ 0, 4096 };

    latch.count_down_and_wait();

    // compute
    for (unsigned trial = 0; trial != kInnerLoopTrials; ++trial) {
        pair.first = dist(mt);
        pair.second = dist(mt);
    }

    return static_cast<useless_result_t>(pair.first) +
        static_cast<useless_result_t>(pair.second);
}

//////// UTILITIES:

// utility: allow threads to wait until everyone is ready
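// (similar in spirit to C++20's std::latch, which did not exist yet)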
class threadlatch {
public:
    explicit threadlatch(const std::size_t count) :
        count_{ count }
    {}

    void count_down_and_wait() {
        std::unique_lock<std::mutex> lock{ mutex_ };
        if (--count_ == 0) {
            cv_.notify_all();
        }
        else {
            cv_.wait(lock, [&] { return count_ == 0; });
        }
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::size_t count_;
};

// utility: runs a given function in N threads
std::tuple<useless_result_t, elapsed_secs_t> run_threads(
    const std::function<useless_result_t(threadlatch&, unsigned)>& func,
    const unsigned num_threads) {
    threadlatch latch{ num_threads + 1 };

    std::vector<std::future<useless_result_t>> futures;
    std::vector<std::thread> threads;
    for (unsigned thread_index = 0; thread_index != num_threads; ++thread_index) {
        std::packaged_task<useless_result_t()> task{
            std::bind(func, std::ref(latch), thread_index)
        };

        futures.push_back(task.get_future());
        threads.push_back(std::thread(std::move(task)));
    }

    const auto starttime = std::chrono::high_resolution_clock::now();

    latch.count_down_and_wait();
    for (auto& thread : threads) {
        thread.join();
    }

    const auto endtime = std::chrono::high_resolution_clock::now();
    const auto elapsed = std::chrono::duration_cast<
        std::chrono::duration<double>>(
            endtime - starttime
            ).count();

    useless_result_t result = 0;
    for (auto& future : futures) {
        result += future.get();
    }

    return std::make_tuple(result, elapsed);
}

// utility: sample the time it takes to run func on N threads
void run_tests(
    const std::function<useless_result_t(threadlatch&, unsigned)>& func,
    const unsigned num_threads) {
    useless_result_t final_result = 0;
    double avgtime = 0.0;
    for (unsigned trial = 0; trial != kTimingTrialsToComputeAverage; ++trial) {
        const auto result_and_elapsed = run_threads(func, num_threads);
        const auto result = std::get<useless_result_t>(result_and_elapsed);
        const auto elapsed = std::get<elapsed_secs_t>(result_and_elapsed);

        final_result += result;
        avgtime = (avgtime * trial + elapsed) / (trial + 1);
    }

    std::cout
        << "Average time: " << avgtime
        << " seconds, useless result: " << final_result
        << std::endl;
}

int main() {
    const auto cores = std::thread::hardware_concurrency();
    std::cout << "Hardware concurrency: " << cores << std::endl;

    std::cout << "sizeof(naive_int): " << sizeof(naive_int) << std::endl;
    std::cout << "alignof(naive_int): " << alignof(naive_int) << std::endl;
    std::cout << "sizeof(cache_int): " << sizeof(cache_int) << std::endl;
    std::cout << "alignof(cache_int): " << alignof(cache_int) << std::endl;
    std::cout << "sizeof(bad_pair): " << sizeof(bad_pair) << std::endl;
    std::cout << "alignof(bad_pair): " << alignof(bad_pair) << std::endl;
    std::cout << "sizeof(good_pair): " << sizeof(good_pair) << std::endl;
    std::cout << "alignof(good_pair): " << alignof(good_pair) << std::endl;

    {
        std::cout << "Running naive_int test." << std::endl;

        std::vector<naive_int> vec;
        vec.resize((1u << 28) / sizeof(naive_int));  // allocate 256 mebibytes

        run_tests([&](threadlatch& latch, unsigned thread_index) {
            return sample_array_threadfunc(latch, thread_index, vec);
        }, cores);
    }
    {
        std::cout << "Running cache_int test." << std::endl;

        std::vector<cache_int> vec;
        vec.resize((1u << 28) / sizeof(cache_int));  // allocate 256 mebibytes

        run_tests([&](threadlatch& latch, unsigned thread_index) {
            return sample_array_threadfunc(latch, thread_index, vec);
        }, cores);
    }
    {
        std::cout << "Running bad_pair test." << std::endl;

        bad_pair p;

        run_tests([&](threadlatch& latch, unsigned thread_index) {
            return sample_pair_threadfunc(latch, thread_index, p);
        }, cores);
    }
    {
        std::cout << "Running good_pair test." << std::endl;

        good_pair p;

        run_tests([&](threadlatch& latch, unsigned thread_index) {
            return sample_pair_threadfunc(latch, thread_index, p);
        }, cores);
    }
}
Signore answered 6/10, 2016 at 3:58 Comment(4)
I authored the proposal, great answer! To clarify one point you made: "I would almost always expect these values to be the same. I believe the only reason they are declared separately is for completeness.". Yes, they should always be the same unless: 1) the ISA has shipped with different cache-line sizes, and no target arch is given; 2) you're targeting a virtual ISA such as WebAssembly for which the actual ISA is unknown (then you get best-effort). On constexpr: the value is required to be constexpr so that it is usable in determining struct layout. Runtime values are useful in other circumstances.Voight
I'm surprised that there is such a massive difference between running this in Debug vs Release.Eggnog
@JFBastien: Another factor that can cause destructive interference (but not, as reliably, constructive interference) is the adjacent-line prefetcher in Intel's L2 cache, which tries to complete a 128-byte-aligned pair of 64-byte cache lines if there's spare bandwidth for off-core requests.Norseman
Coincidentally, a new question got asked the day after my last comment: Should the cache padding size of x86-64 be 128 bytes? is a failed attempt at a benchmark of L2 spatial prefetch causing false sharing; failed because it quickly settles down into each core owning the line it's modifying, so only perf counters reveal the initial chaos of extra lines bouncing around. Destructive interference requires ongoing bouncing that causes L2 requests, not L1d hits.Norseman

I would almost always expect these values to be the same.

Regarding the above, I would like to make a minor contribution to the accepted answer. A while ago, I came across a very good use case in the folly library where these two constants should be (and are) defined separately. Please see the caveat about the Intel Sandy Bridge processor.

https://github.com/facebook/folly/blob/3af92dbe6849c4892a1fe1f9366306a2f5cbe6a0/folly/lang/Align.h

//  Memory locations within the same cache line are subject to destructive
//  interference, also known as false sharing, which is when concurrent
//  accesses to these different memory locations from different cores, where at
//  least one of the concurrent accesses is or involves a store operation,
//  induce contention and harm performance.
//
//  Microbenchmarks indicate that pairs of cache lines also see destructive
//  interference under heavy use of atomic operations, as observed for atomic
//  increment on Sandy Bridge.
//
//  We assume a cache line size of 64, so we use a cache line pair size of 128
//  to avoid destructive interference.
//
//  mimic: std::hardware_destructive_interference_size, C++17
constexpr std::size_t hardware_destructive_interference_size =
    kIsArchArm ? 64 : 128;
static_assert(hardware_destructive_interference_size >= max_align_v, "math?");

//  Memory locations within the same cache line are subject to constructive
//  interference, also known as true sharing, which is when accesses to some
//  memory locations induce all memory locations within the same cache line to
//  be cached, benefiting subsequent accesses to different memory locations
//  within the same cache line and helping performance.
//
//  mimic: std::hardware_constructive_interference_size, C++17
constexpr std::size_t hardware_constructive_interference_size = 64;
static_assert(hardware_constructive_interference_size >= max_align_v, "math?");
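To see what acting on the larger destructive size looks like in practice, here is a minimal sketch of a single-producer/single-consumer index pair under this scheme (the identifiers are illustrative, not from folly):

#include <atomic>
#include <cstddef>

constexpr std::size_t destructive = 128;  // folly's non-ARM value above

// head and tail are written by different threads; separating them by the
// destructive interference size keeps an atomic increment on one from
// contending with the other, even with adjacent-line prefetching in play.
struct spsc_indices {
    alignas(destructive) std::atomic<std::size_t> head{ 0 };
    alignas(destructive) std::atomic<std::size_t> tail{ 0 };
};
static_assert(sizeof(spsc_indices) == 2 * destructive, "");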
Diaconate answered 4/9, 2018 at 4:29 Comment(2)
Yup, Intel's L2 spatial prefetcher (Nehalem and later, including all of Sandybridge-family) tries to complete aligned pairs of cache lines (if there's spare bandwidth). prefetching data at L1 and L2 has an excerpt of Intel's optimization manual. In which condition DCU prefetcher start prefetching? has some details about when exactly they trigger.Norseman
Note that having the value change depending on -march=sandybridge vs. -march=znver1 (ryzen) within x86 could cause struct layout incompatibility if linking differently-compiled objects or libraries. (clang-developers.42468.n3.nabble.com/…). This is why clang still doesn't implement either constant. Using destructive=128 for x86 in general is a good idea; a conservative setting is safe everywhere.Norseman

I've tested the above code, but I think there is a minor error that prevents us from understanding the underlying behavior: a single cache line should not be shared between two distinct atomics, in order to prevent false sharing. I've changed the definition of those structs (note that they now use std::atomic, so the benchmark also needs #include <atomic>).

struct naive_int {
    // default alignment: several of these share one cache line
    alignas(sizeof(int)) std::atomic<int> value;
};

struct cache_int {
    // one atomic per cache line
    alignas(hardware_constructive_interference_size) std::atomic<int> value;
};

struct bad_pair {
    // two atomics sharing a single 64-byte cache line
    alignas(hardware_constructive_interference_size) std::atomic<int> first;
    std::atomic<int> second;
};

struct good_pair {
    // first cache line begins here
    alignas(hardware_constructive_interference_size) std::atomic<int> first;
    // this one is still in the first cache line
    std::atomic<int> first_s;
    // second cache line starts here
    alignas(hardware_constructive_interference_size) std::atomic<int> second;
    // this one is still in the second cache line
    std::atomic<int> second_s;
};

And the resulting run:

Hardware concurrency := 40
sizeof(naive_int)    := 4
alignof(naive_int)   := 4
sizeof(cache_int)    := 64
alignof(cache_int)   := 64
sizeof(bad_pair)     := 64
alignof(bad_pair)    := 64
sizeof(good_pair)    := 128
alignof(good_pair)   := 64
Running naive_int test.
Average time: 0.060303 seconds, useless result: 8212147
Running cache_int test.
Average time: 0.0109432 seconds, useless result: 8113799
Running bad_pair test.
Average time: 0.162636 seconds, useless result: 16289887
Running good_pair test.
Average time: 0.129472 seconds, useless result: 16420417

I experienced a lot of variance in the last result, but I never dedicated any particular core to that specific problem. In any case, this ran on two Xeon 2690 v2 CPUs, and from various runs using either 64 or 128 for hardware_constructive_interference_size, I found 64 to be more than enough and 128 a very poor use of the available cache.

I suddenly realized that your question helps me understand what Jeff Preshing was talking about: it's all about the payload!

Tattered answered 1/2, 2020 at 0:0 Comment(0)
