Suppose I have two vectors std::vector<uint32_t> a, b; that I know to be of the same size. Is there a C++11 paradigm for doing a bitwise-AND between all members of a and b, and putting the result in std::vector<uint32_t> c;?
A lambda should do the trick:
#include <algorithm>
#include <iterator>
std::transform(a.begin(), a.end(),    // first
               b.begin(),             // second
               std::back_inserter(c), // output
               [](uint32_t n, uint32_t m) { return n & m; });
Even better, thanks to @Pavel and entirely C++98:
#include <functional>
std::transform(a.begin(), a.end(), b.begin(),
std::back_inserter(c), std::bit_and<uint32_t>());
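For completeness, a minimal self-contained sketch of the std::bit_and version (the vector contents are made up, and the reserve call, as suggested in the comments below, only avoids reallocations while back_inserter appends):
#include <algorithm>
#include <cstdint>
#include <functional>
#include <iterator>
#include <vector>

int main() {
    std::vector<std::uint32_t> a{0xF0F0F0F0u, 0xFFFFFFFFu};
    std::vector<std::uint32_t> b{0x0FF00FF0u, 0x12345678u};

    std::vector<std::uint32_t> c;
    c.reserve(a.size());  // avoid reallocations while back_inserter appends

    std::transform(a.begin(), a.end(), b.begin(),
                   std::back_inserter(c), std::bit_and<std::uint32_t>());
    // c now holds {0x00F000F0, 0x12345678}
}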
Note that transform advances b.begin() by a.end() - a.begin() steps. – Rorie
Don't forget to reserve the result before running this! – Borszcz
And #include <iterator> too. ;-] – Hulett
Use std::bit_and<uint32_t> (from <functional>) rather than a lambda, and it'll then work on C++03 just as well :) – Finitude
Replace uint32_t with decltype(*a.begin()) or decltype(a)::value_type or what have you for genericity. – Roadwork
Replace uint32_t with nothing at all, and you'll have full genericity for free! In C++14, std::bit_and<>{} computes the & of any two values, of any two types (sketched just below). – Drud
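In C++14, that last suggestion amounts to the same call with the template argument dropped, so the operand types are deduced; a minimal sketch, reusing the vectors from the answer above:
std::transform(a.begin(), a.end(), b.begin(),
               std::back_inserter(c), std::bit_and<>{});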
If you're going to be doing this a lot, on large arrays, check out the linear algebra libraries mentioned in https://stackoverflow.com/search?q=valarray. Many of them will take advantage of special instructions to get the answer faster.
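Staying within the standard library, here is a minimal sketch of that route with std::valarray, which overloads the element-wise operators (including &); the names and values are assumptions:
#include <cstdint>
#include <valarray>

int main() {
    std::valarray<std::uint32_t> a{0xF0F0F0F0u, 0xFFFFFFFFu};
    std::valarray<std::uint32_t> b{0x0FF00FF0u, 0x12345678u};
    std::valarray<std::uint32_t> c = a & b;  // element-wise AND in one expression
}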
Just an idea, not C++11 specific: maybe you could step through the arrays 8 bytes at a time using uint64_t, even though the actual arrays are composed of 32-bit integers? Then you would not rely on e.g. SSE, but would still get fast execution on many CPUs that have 64-bit wide registers.
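A rough sketch of that idea (the function name is made up; the memcpy calls sidestep aliasing and alignment issues, and since bitwise AND never carries between bits, packing two 32-bit lanes into one 64-bit word is safe):
#include <cstdint>
#include <cstring>
#include <vector>

// AND two equal-sized vectors of 32-bit values, two elements per iteration.
std::vector<std::uint32_t> and_packed(const std::vector<std::uint32_t>& a,
                                      const std::vector<std::uint32_t>& b) {
    std::vector<std::uint32_t> c(a.size());
    std::size_t i = 0;
    for (; i + 2 <= a.size(); i += 2) {
        std::uint64_t x, y;
        std::memcpy(&x, &a[i], sizeof x);  // pack two 32-bit elements into one 64-bit word
        std::memcpy(&y, &b[i], sizeof y);
        const std::uint64_t r = x & y;     // AND both lanes at once
        std::memcpy(&c[i], &r, sizeof r);
    }
    if (i < a.size())                      // handle an odd trailing element
        c[i] = a[i] & b[i];
    return c;
}
In practice, modern compilers often auto-vectorize the plain element-wise loop anyway, so it is worth measuring before committing to this.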