Boost asio thread_pool join does not wait for tasks to be finished
Consider the functions

#include <iostream>
#include <boost/bind.hpp>
#include <boost/asio.hpp>

void foo(const uint64_t begin, uint64_t *result)
{
    uint64_t prev[] = {begin, 0};
    for (uint64_t i = 0; i < 1000000000; ++i)
    {
        const auto tmp = (prev[0] + prev[1]) % 1000;
        prev[1] = prev[0];
        prev[0] = tmp;
    }
    *result = prev[0];
}

void batch(boost::asio::thread_pool &pool, const uint64_t a[])
{
    uint64_t r[] = {0, 0};
    boost::asio::post(pool, boost::bind(foo, a[0], &r[0]));
    boost::asio::post(pool, boost::bind(foo, a[1], &r[1]));

    pool.join();
    std::cerr << "foo(" << a[0] << "): " << r[0] << " foo(" << a[1] << "): " << r[1] << std::endl;
}

where foo is a simple "pure" function that performs a calculation on begin and writes the result through the pointer result. batch calls this function with different inputs, so dispatching each call to another CPU core might be beneficial.

Now assume that batch gets called tens of thousands of times, so a thread pool shared between all the sequential batch calls would be nice.

Trying this with (for the sake of simplicity, only three calls)

int main()
{
    boost::asio::thread_pool pool(2);

    const uint64_t a[] = {2, 4};
    batch(pool, a);

    const uint64_t b[] = {3, 5};
    batch(pool, b);

    const uint64_t c[] = {7, 9};
    batch(pool, c);
}

leads to the result

foo(2): 2 foo(4): 4
foo(3): 0 foo(5): 0
foo(7): 0 foo(9): 0

where all three lines appear at the same time, although each computation of foo takes ~3 s. Apparently only the first join really waits for the pool to complete all jobs; after it returns, the pool's worker threads have exited, so the tasks of the later batches never run and the zero-initialized results are printed unchanged. What is the best practice for reusing the thread pool here?
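
A minimal sketch of that behavior (my own repro, assuming Boost ≥ 1.66, where thread_pool was introduced): join() waits for all outstanding work and then lets the worker threads exit, so anything posted afterwards never runs and a second join() returns immediately.

#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::thread_pool pool(1);
    boost::asio::post(pool, [] { std::cout << "first" << std::endl; });
    pool.join(); // waits for "first"; the worker thread then exits
    boost::asio::post(pool, [] { std::cout << "never printed" << std::endl; });
    pool.join(); // returns immediately; no thread is left to run the task
}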

Lute answered 20/4, 2020 at 17:29 Comment(0)

The best practice is not to reuse the pool (what would be the use of pooling, if you keep creating new pools?).

If you want to be sure you "time" the batches together, I'd suggest using when_all on futures:

Live On Coliru

#define BOOST_THREAD_PROVIDES_FUTURE_WHEN_ALL_WHEN_ANY
#include <iostream>
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/thread.hpp>

uint64_t foo(uint64_t begin) {
    uint64_t prev[] = {begin, 0};
    for (uint64_t i = 0; i < 1000000000; ++i) {
        const auto tmp = (prev[0] + prev[1]) % 1000;
        prev[1] = prev[0];
        prev[0] = tmp;
    }
    return prev[0];
}

void batch(boost::asio::thread_pool &pool, const uint64_t a[2])
{
    using T = boost::packaged_task<uint64_t>;

    T tasks[] {
        T(boost::bind(foo, a[0])),
        T(boost::bind(foo, a[1])),
    };

    auto all = boost::when_all(
        tasks[0].get_future(),
        tasks[1].get_future());

    for (auto& t : tasks)
        post(pool, std::move(t));

    auto [r0, r1] = all.get();
    std::cerr << "foo(" << a[0] << "): " << r0.get() << " foo(" << a[1] << "): " << r1.get() << std::endl;
}

int main() {
    boost::asio::thread_pool pool(2);

    const uint64_t a[] = {2, 4};
    batch(pool, a);

    const uint64_t b[] = {3, 5};
    batch(pool, b);

    const uint64_t c[] = {7, 9};
    batch(pool, c);
}

Prints

foo(2): 2 foo(4): 4
foo(3): 503 foo(5): 505
foo(7): 507 foo(9): 509

I would consider

  • generalizing
  • message queuing

Generalized

Make it somewhat more flexible by not hardcoding batch sizes. After all, the pool size is already fixed, we don't need to "make sure batches fit" or something:

Live On Coliru

#define BOOST_THREAD_PROVIDES_FUTURE_WHEN_ALL_WHEN_ANY
#include <iostream>
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/thread/future.hpp>

struct Result { uint64_t begin, result; };

Result foo(uint64_t begin) {
    uint64_t prev[] = {begin, 0};
    for (uint64_t i = 0; i < 1000000000; ++i) {
        const auto tmp = (prev[0] + prev[1]) % 1000;
        prev[1] = prev[0];
        prev[0] = tmp;
    }
    return { begin, prev[0] };
}

void batch(boost::asio::thread_pool &pool, std::vector<uint64_t> const a)
{
    using T = boost::packaged_task<Result>;
    std::vector<T> tasks;
    tasks.reserve(a.size());

    for(auto begin : a)
        tasks.emplace_back(boost::bind(foo, begin));

    std::vector<boost::unique_future<T::result_type> > futures;
    for (auto& t : tasks) {
        futures.push_back(t.get_future());
        post(pool, std::move(t));
    }

    for (auto& fut : boost::when_all(futures.begin(), futures.end()).get()) {
        auto r = fut.get();
        std::cerr << "foo(" << r.begin << "): " << r.result << " ";
    }
    std::cout << std::endl;
}

int main() {
    boost::asio::thread_pool pool(2);

    batch(pool, {2});
    batch(pool, {4, 3, 5});
    batch(pool, {7, 9});
}

Prints

foo(2): 2 
foo(4): 4 foo(3): 503 foo(5): 505 
foo(7): 507 foo(9): 509 

Generalized2: Variadics Simplify

Contrary to popular belief (and, honestly, what usually happens), this time we can leverage variadics to get rid of all the intermediate vectors (every single one of them):

Live On Coliru

template <typename... T>
void batch(boost::asio::thread_pool &pool, T... a)
{
    auto launch = [&pool](uint64_t begin) {
        boost::packaged_task<Result> pt(boost::bind(foo, begin));
        auto fut = pt.get_future();
        post(pool, std::move(pt));
        return fut;
    };

    for (auto& r : {launch(a).get()...}) {
        std::cerr << "foo(" << r.begin << "): " << r.result << " ";
    }

    std::cout << std::endl;
}
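
For illustration, the variadic version would be invoked like this (mirroring the earlier mains; the full program is behind the Coliru link):

int main() {
    boost::asio::thread_pool pool(2);

    batch(pool, 2);
    batch(pool, 4, 3, 5);
    batch(pool, 7, 9);
}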

If you insist on printing the results all at once when the whole batch is done, you can add when_all into the mix (requiring a bit more heroics to unpack the tuple). Note also that a braced-init-list evaluates launch(a).get()... strictly left to right, so the version above waits for each task before launching the next, whereas when_all(launch(a)...) posts all tasks before blocking:

Live On Coliru

template <typename...T>
void batch(boost::asio::thread_pool &pool, T... a)
{
    auto launch = [&pool](uint64_t begin) {
        boost::packaged_task<Result> pt(boost::bind(foo, begin));
        auto fut = pt.get_future();
        post(pool, std::move(pt));
        return fut;
    };

    std::apply([](auto&&... rfut) {
        Result results[] {rfut.get()...};
        for (auto& r : results) {
            std::cerr << "foo(" << r.begin << "): " << r.result << " ";
        }
    }, boost::when_all(launch(a)...).get());

    std::cout << std::endl;
}

Both still print the same result

Message Queuing

This is very natural with Asio, and sort of skips most of the complexity. If you also want to report per batched group, you'd have to coordinate:

Live On Coliru

#include <iostream>
#include <boost/asio.hpp>
#include <memory>

struct Result { uint64_t begin, result; };

Result foo(uint64_t begin) {
    uint64_t prev[] = {begin, 0};
    for (uint64_t i = 0; i < 1000000000; ++i) {
        const auto tmp = (prev[0] + prev[1]) % 1000;
        prev[1] = prev[0];
        prev[0] = tmp;
    }
    return { begin, prev[0] };
}

void batch(boost::asio::thread_pool &pool, std::vector<uint64_t> begins) {
    auto group = std::make_shared<std::vector<Result> >(begins.size());

    for (size_t i=0; i < begins.size(); ++i) {
        post(pool, [i,begin=begins.at(i),group] {
              (*group)[i] = foo(begin);
              // the last task to finish reports the whole group
              // (shared_ptr::unique() is deprecated in C++17 and removed in
              //  C++20; use use_count() == 1 there)
              if (group.unique()) {
                  for (auto& r : *group) {
                      std::cout << "foo(" << r.begin << "): " << r.result << " ";
                      std::cout << std::endl;
                  }
              }
          });
    }
}

int main() {
    boost::asio::thread_pool pool(2);

    batch(pool, {2});
    batch(pool, {4, 3, 5});
    batch(pool, {7, 9});
    pool.join();
}

Note that the tasks access group concurrently; this is safe because each task writes only to its own element.

Prints:

foo(2): 2 
foo(4): 4 foo(3): 503 foo(5): 505 
foo(7): 507 foo(9): 509 
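
Since shared_ptr::unique() is deprecated in C++17 and removed in C++20, the same "last task reports the group" idea can be expressed with an explicit atomic counter instead; a sketch of my own variation, with Result and foo as defined above:

#include <boost/asio.hpp>
#include <atomic>
#include <iostream>
#include <memory>
#include <vector>

void batch(boost::asio::thread_pool &pool, std::vector<uint64_t> begins) {
    struct Group {
        std::vector<Result> results;
        std::atomic<std::size_t> remaining;
    };
    auto group = std::make_shared<Group>();
    group->results.resize(begins.size());
    group->remaining = begins.size();

    for (std::size_t i = 0; i < begins.size(); ++i) {
        post(pool, [i, begin = begins.at(i), group] {
            group->results[i] = foo(begin);
            if (group->remaining.fetch_sub(1) == 1) { // we finished last
                for (auto& r : group->results)
                    std::cout << "foo(" << r.begin << "): " << r.result << " ";
                std::cout << std::endl;
            }
        });
    }
}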
Nabalas answered 23/4, 2020 at 16:23 Comment(3)
Added a "group-wise message-queue" approach with just Asio, no futures, let alone Boost extensions.Nabalas
In the interest of overkill, non-grouping message-queued (this time serializing access to the console): coliru.stacked-crooked.com/a/0dac9dd96da9c1f2Nabalas
I realized just now that Asio comes with a fork_executor example which does exactly this: you can "group" tasks and join the executor (which represents that group) instead of the pool. I've missed this for the longest time since none of the executor examples are listed in the HTML documentationNabalas

I just ran into this advanced executor example which is hidden from the documentation:

I realized just now that Asio comes with a fork_executor example which does exactly this: you can "group" tasks and join the executor (which represents that group) instead of the pool. I've missed this for the longest time since none of the executor examples are listed in the HTML documentation – sehe 21 mins ago

So without further ado, here's that sample applied to your question:

Live On Coliru

#define BOOST_BIND_NO_PLACEHOLDERS
#include <boost/asio/thread_pool.hpp>
#include <boost/asio/ts/executor.hpp>
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>

// A fixed-size thread pool used to implement fork/join semantics. Functions
// are scheduled using a simple FIFO queue. Implementing work stealing, or
// using a queue based on atomic operations, are left as tasks for the reader.
class fork_join_pool : public boost::asio::execution_context {
  public:
    // The constructor starts a thread pool with the specified number of
    // threads. Note that the thread_count is not a fixed limit on the pool's
    // concurrency. Additional threads may temporarily be added to the pool if
    // they join a fork_executor.
    explicit fork_join_pool(std::size_t thread_count = std::thread::hardware_concurrency()*2)
            : use_count_(1), threads_(thread_count)
    {
        try {
            // Ask each thread in the pool to dequeue and execute functions
            // until it is time to shut down, i.e. the use count is zero.
            for (thread_count_ = 0; thread_count_ < thread_count; ++thread_count_) {
                boost::asio::dispatch(threads_, [&] {
                    std::unique_lock<std::mutex> lock(mutex_);
                    while (use_count_ > 0)
                        if (!execute_next(lock))
                            condition_.wait(lock);
                });
            }
        } catch (...) {
            stop_threads();
            threads_.join();
            throw;
        }
    }

    // The destructor waits for the pool to finish executing functions.
    ~fork_join_pool() {
        stop_threads();
        threads_.join();
    }

  private:
    friend class fork_executor;

    // The base for all functions that are queued in the pool.
    struct function_base {
        std::shared_ptr<std::size_t> work_count_;
        void (*execute_)(std::shared_ptr<function_base>& p);
    };

    // Execute the next function from the queue, if any. Returns true if a
    // function was executed, and false if the queue was empty.
    bool execute_next(std::unique_lock<std::mutex>& lock) {
        if (queue_.empty())
            return false;
        auto p(queue_.front());
        queue_.pop();
        lock.unlock();
        execute(lock, p);
        return true;
    }

    // Execute a function and decrement the outstanding work.
    void execute(std::unique_lock<std::mutex>& lock,
                 std::shared_ptr<function_base>& p) {
        std::shared_ptr<std::size_t> work_count(std::move(p->work_count_));
        try {
            p->execute_(p);
            lock.lock();
            do_work_finished(work_count);
        } catch (...) {
            lock.lock();
            do_work_finished(work_count);
            throw;
        }
    }

    // Increment outstanding work.
    void
    do_work_started(const std::shared_ptr<std::size_t>& work_count) noexcept {
        if (++(*work_count) == 1)
            ++use_count_;
    }

    // Decrement outstanding work. Notify waiting threads if we run out.
    void
    do_work_finished(const std::shared_ptr<std::size_t>& work_count) noexcept {
        if (--(*work_count) == 0) {
            --use_count_;
            condition_.notify_all();
        }
    }

    // Dispatch a function, executing it immediately if the queue is already
    // loaded. Otherwise adds the function to the queue and wakes a thread.
    void do_dispatch(std::shared_ptr<function_base> p,
                     const std::shared_ptr<std::size_t>& work_count) {
        std::unique_lock<std::mutex> lock(mutex_);
        if (queue_.size() > thread_count_ * 16) {
            do_work_started(work_count);
            lock.unlock();
            execute(lock, p);
        } else {
            queue_.push(p);
            do_work_started(work_count);
            condition_.notify_one();
        }
    }

    // Add a function to the queue and wake a thread.
    void do_post(std::shared_ptr<function_base> p,
                 const std::shared_ptr<std::size_t>& work_count) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(p);
        do_work_started(work_count);
        condition_.notify_one();
    }

    // Ask all threads to shut down.
    void stop_threads() {
        std::lock_guard<std::mutex> lock(mutex_);
        --use_count_;
        condition_.notify_all();
    }

    std::mutex mutex_;
    std::condition_variable condition_;
    std::queue<std::shared_ptr<function_base>> queue_;
    std::size_t use_count_;
    std::size_t thread_count_;
    boost::asio::thread_pool threads_;
};

// A class that satisfies the Executor requirements. Every function or piece of
// work associated with a fork_executor is part of a single, joinable group.
class fork_executor {
  public:
    fork_executor(fork_join_pool& ctx)
            : context_(ctx), work_count_(std::make_shared<std::size_t>(0)) {}

    fork_join_pool& context() const noexcept { return context_; }

    void on_work_started() const noexcept {
        std::lock_guard<std::mutex> lock(context_.mutex_);
        context_.do_work_started(work_count_);
    }

    void on_work_finished() const noexcept {
        std::lock_guard<std::mutex> lock(context_.mutex_);
        context_.do_work_finished(work_count_);
    }

    template <class Func, class Alloc>
    void dispatch(Func&& f, const Alloc& a) const {
        auto p(std::allocate_shared<exFun<Func>>(
            typename std::allocator_traits<Alloc>::template rebind_alloc<char>(a),
            std::move(f), work_count_));
        context_.do_dispatch(p, work_count_);
    }

    template <class Func, class Alloc> void post(Func f, const Alloc& a) const {
        auto p(std::allocate_shared<exFun<Func>>(
            typename std::allocator_traits<Alloc>::template rebind_alloc<char>(a),
            std::move(f), work_count_));
        context_.do_post(p, work_count_);
    }

    template <class Func, class Alloc>
    void defer(Func&& f, const Alloc& a) const {
        post(std::forward<Func>(f), a);
    }

    friend bool operator==(const fork_executor& a, const fork_executor& b) noexcept {
        return a.work_count_ == b.work_count_;
    }

    friend bool operator!=(const fork_executor& a, const fork_executor& b) noexcept {
        return a.work_count_ != b.work_count_;
    }

    // Block until all work associated with the executor is complete. While it
    // is waiting, the thread may be borrowed to execute functions from the
    // queue.
    void join() const {
        std::unique_lock<std::mutex> lock(context_.mutex_);
        while (*work_count_ > 0)
            if (!context_.execute_next(lock))
                context_.condition_.wait(lock);
    }

  private:
    template <class Func> struct exFun : fork_join_pool::function_base {
        explicit exFun(Func f, const std::shared_ptr<std::size_t>& w)
                : function_(std::move(f)) {
            work_count_ = w;
            execute_ = [](std::shared_ptr<fork_join_pool::function_base>& p) {
                Func tmp(std::move(static_cast<exFun*>(p.get())->function_));
                p.reset();
                tmp();
            };
        }

        Func function_;
    };

    fork_join_pool& context_;
    std::shared_ptr<std::size_t> work_count_;
};

// Helper class to automatically join a fork_executor when exiting a scope.
class join_guard {
  public:
    explicit join_guard(const fork_executor& ex) : ex_(ex) {}
    join_guard(const join_guard&) = delete;
    join_guard(join_guard&&) = delete;
    ~join_guard() { ex_.join(); }

  private:
    fork_executor ex_;
};

//------------------------------------------------------------------------------

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>
#include <boost/bind.hpp>

static void foo(const uint64_t begin, uint64_t *result)
{
    uint64_t prev[] = {begin, 0};
    for (uint64_t i = 0; i < 1000000000; ++i) {
        const auto tmp = (prev[0] + prev[1]) % 1000;
        prev[1] = prev[0];
        prev[0] = tmp;
    }
    *result = prev[0];
}

void batch(fork_join_pool &pool, const uint64_t (&a)[2])
{
    uint64_t r[] = {0, 0};
    {
        fork_executor fork(pool);
        join_guard join(fork);
        boost::asio::post(fork, boost::bind(foo, a[0], &r[0]));
        boost::asio::post(fork, boost::bind(foo, a[1], &r[1]));
        // fork.join(); // or let join_guard destructor run
    }
    std::cerr << "foo(" << a[0] << "): " << r[0] << " foo(" << a[1] << "): " << r[1] << std::endl;
}

int main() {
    fork_join_pool pool;

    batch(pool, {2, 4});
    batch(pool, {3, 5});
    batch(pool, {7, 9});
}

Prints:

foo(2): 2 foo(4): 4
foo(3): 503 foo(5): 505
foo(7): 507 foo(9): 509

Things to note:

  • executors can overlap/nest: you can use several joinable fork_executors on a single fork_join_pool and they will join the distinct groups of tasks for each executor (see the sketch below)

You can get that sense easily when looking at the library example (which does a recursive divide-and-conquer merge sort).
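
For instance, a sketch (reusing the fork_join_pool/fork_executor classes above; not part of the original example) of two overlapping groups on one pool, joined independently:

int main() {
    fork_join_pool pool;

    fork_executor a(pool), b(pool);
    uint64_t r[3] = {0, 0, 0};

    boost::asio::post(a, boost::bind(foo, 2, &r[0]));
    boost::asio::post(b, boost::bind(foo, 3, &r[1]));
    boost::asio::post(a, boost::bind(foo, 4, &r[2]));

    a.join(); // waits only for the two tasks in group a
    std::cerr << "group a: " << r[0] << " " << r[2] << std::endl;

    b.join(); // waits only for group b
    std::cerr << "group b: " << r[1] << std::endl;
}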

Nabalas answered 16/6, 2020 at 0:44 Comment(5)
You said: "The best practice is not to reuse the pool". Does that hold for use cases where you use the thread pool extensively? Won't creating a new thread pool every time vs. creating new threads be the same in terms of execution time?Diapositive
@Diapositive Of course, that's why you would create neither pools nor threads all the time. Instead, you design your pools to be durable. Since the time I answered I did learn a new function: boost.org/doc/libs/1_78_0/doc/html/boost_asio/reference/… This at least makes it possible to grow the pool.Nabalas
I never asked why it is bad to reuse the pool.Diapositive
@Diapositive Good, that's why I answered the question you asked in the comment ¯\_(ツ)_/¯ Perhaps I misunderstood what you meantNabalas
Unless I didn't quite get it: I thought not reusing a pool means you have to recreate the pool after each use, which would defeat the purpose. How do you design your pools to be durable, as you said earlier? Also, will boost::wait_for_all(..) fix the issue in this post ("waiting for tasks to finish"), or does it have some drawbacks?Diapositive

I had a similar problem and ended up using latches. In this case the code would be (I also switched from bind to lambdas):

#include <boost/asio.hpp>
#include <boost/thread/latch.hpp>
#include <iostream>

void batch(boost::asio::thread_pool &pool, const uint64_t a[])
{
    uint64_t r[] = {0, 0};
    boost::latch latch(2);
    boost::asio::post(pool, [&](){ foo(a[0], &r[0]); latch.count_down();});
    boost::asio::post(pool, [&](){ foo(a[1], &r[1]); latch.count_down();});

    latch.wait();
    std::cerr << "foo(" << a[0] << "): " << r[0] << " foo(" << a[1] << "): " << r[1] << std::endl;
}

https://godbolt.org/z/oceP6jjs7
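
With C++20 the same can be done with std::latch, dropping the Boost.Thread dependency (a sketch under that assumption, with foo as in the question):

#include <boost/asio.hpp>
#include <iostream>
#include <latch>

void batch(boost::asio::thread_pool &pool, const uint64_t a[])
{
    uint64_t r[] = {0, 0};
    std::latch latch(2);
    boost::asio::post(pool, [&]{ foo(a[0], &r[0]); latch.count_down(); });
    boost::asio::post(pool, [&]{ foo(a[1], &r[1]); latch.count_down(); });

    latch.wait();
    std::cerr << "foo(" << a[0] << "): " << r[0] << " foo(" << a[1] << "): " << r[1] << std::endl;
}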

Tanga answered 27/6, 2022 at 23:41 Comment(0)
