Can boost::asio::thread_pool be used instead of combining boost::asio::io_context with a boost::thread::thread_group?

I'm trying to clear up some confusion. I stumbled over boost::asio::thread_pool and thought it could be used to replace the frequently suggested combination of boost::asio::io_context and boost::thread::thread_group (here or here). This asio-specific pool can have tasks posted to it but, on the other hand, some networking types like resolver need to be passed an io_context object as a constructor parameter, and thread_pool neither is one nor derives from one.
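For concreteness, here is roughly what I mean; the pool size and the printed messages are just placeholders:

    #include <boost/asio.hpp>
    #include <iostream>

    int main()
    {
        // thread_pool owns its threads; tasks can be posted to it directly.
        boost::asio::thread_pool pool(4);
        boost::asio::post(pool, [] { std::cout << "ran on the pool\n"; });
        pool.join();

        // A resolver, however, is constructed from an io_context (whether other
        // execution contexts are accepted depends on the Boost version).
        boost::asio::io_context ioc;
        boost::asio::ip::tcp::resolver resolver(ioc);
        (void)resolver; // no actual resolution performed in this sketch
    }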

Triceps answered 18/9, 2018 at 0:21 Comment(0)

Say you have a single io_context object, named ioc.

You can create several threads and call ioc.run() in each of them. This is a blocking call that waits on epoll/select/kqueue. Note that ioc is shareable, and by calling ioc.run() in several threads, those threads implicitly form a thread pool used by ioc. Let's call this pool io_threadpool.

Now create a separate pool of threads, called compute, whose threads do other things. Here is what is possible:

  • You can use ioc in threads from both pools (except for a few operations, like restart(), which require that ioc is not actively running).

  • You can make sync I/O calls from any thread.

  • You can invoke an async call like async_read(..., handler) from any thread. However, the handler is only ever invoked on one of the io_threadpool threads.

  • You can dispatch tasks in either thread pool, but if the task is not going to do any I/O, I expect it to be more efficient to dispatch it in the compute pool, because the system doesn't have to wake up the epoll()/kqueue()/select() call on which it is blocked. (See the sketch after this list.)
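A sketch of the arrangement described above, with illustrative thread counts and a timer standing in for real I/O; the completion handler runs on an io_threadpool thread and hands the CPU-bound part to the compute pool:

    #include <boost/asio.hpp>
    #include <chrono>
    #include <iostream>
    #include <thread>
    #include <vector>

    int main()
    {
        boost::asio::io_context ioc;                    // the single, shared io_context
        auto guard = boost::asio::make_work_guard(ioc); // keep ioc.run() from returning early

        // io_threadpool: each thread blocks inside ioc.run() (epoll/select/kqueue underneath),
        // so completion handlers are only ever invoked on these threads.
        std::vector<std::thread> io_threadpool;
        for (int i = 0; i < 2; ++i)
            io_threadpool.emplace_back([&ioc] { ioc.run(); });

        // compute: a separate pool for work that does no I/O.
        boost::asio::thread_pool compute(2);

        // An async operation can be initiated from any thread; its handler runs on io_threadpool.
        boost::asio::steady_timer timer(ioc, std::chrono::milliseconds(10));
        timer.async_wait([&compute](const boost::system::error_code&) {
            std::cout << "handler ran on an io_threadpool thread\n";
            // Hand off CPU-bound work so the I/O threads can return to waiting.
            boost::asio::post(compute, [] {
                std::cout << "compute task ran on the compute pool\n";
            });
        });

        guard.reset();                                  // let run() return once the timer fires
        for (auto& t : io_threadpool) t.join();
        compute.join();
    }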

Hardwood answered 25/11, 2018 at 13:48 Comment(0)

You should post your io_context.run() call into the thread_pool; the pool's threads then run the io_context.
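A minimal sketch of that suggestion, with an arbitrary pool size of four; every run() call posted to the pool turns one pool thread into an io_context worker:

    #include <boost/asio.hpp>
    #include <iostream>

    int main()
    {
        boost::asio::io_context ioc;
        auto guard = boost::asio::make_work_guard(ioc); // keep run() busy until reset

        // Post io_context::run() into the thread_pool so its threads serve ioc.
        boost::asio::thread_pool pool(4);
        for (int i = 0; i < 4; ++i)
            boost::asio::post(pool, [&ioc] { ioc.run(); });

        // Work submitted to ioc is now executed on the pool's threads.
        boost::asio::post(ioc, [] { std::cout << "handled by a pool thread\n"; });

        guard.reset(); // allow run() to return when no work remains
        pool.join();   // wait for the run() calls, and hence all handlers, to finish
    }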

Saunderson answered 18/9, 2018 at 16:34 Comment(2)
So... the answer is no? – Triceps
Yes, the answer is no. You can learn more about concept-model. – Saunderson
