Threading and scaling model for TCP server with epoll
I've read the C10K doc as well as many related papers on scaling up a socket server. All roads point to the following:

  1. Avoid the classic mistake of "thread per connection".

  2. Prefer epoll over select.

  3. Likewise, the legacy async I/O mechanisms in Unix can be hard to use.

My simple TCP server just listens for client connections on a listen socket bound to a dedicated port. Upon receiving a new connection, it parses the request, sends a response back, and then gracefully closes the socket.

I think I have a good handle on how to scale this up on a single thread using epoll: just one loop that calls epoll_wait for the listen socket as well as for the existing client connections. Upon return, the code will handle creating new client connections as well as managing the state of existing connections, depending on which socket just got signaled. And perhaps some logic to manage connection timeouts, graceful closing of sockets, and efficient resource allocation for each connection. Seems straightforward enough.

But what if I want to scale this to take advantage of multiple threads and multiple cpu cores? The core idea that springs to mind is this:

One dedicated thread listening for incoming connections on the TCP listen socket. Then a set of N threads (or a thread pool) to handle all the active concurrent client connections. Then invent some thread-safe way in which the listen thread will "dispatch" the new connection (socket) to one of the available worker threads (à la IOCP on Windows). Each worker thread will run an epoll loop over all the connections it is handling, doing what the single-threaded approach would do.

Am I on the right track? Or is there a standard design pattern for doing a TCP server with epoll on multiple threads?

Suggestions on how the listen thread would dispatch a new connection to the thread pool?

Rhyton answered 29/11, 2011 at 8:11 Comment(1)
If your choice of language is flexible, you might like to try vibed.org, which abstracts away the asynchronous plumbing so you still get to program in a synchronous style, e.g. ubyte[] buf = new ubyte[](1024); auto data = conn.read(buf); conn.write(data); – Fibrinolysis
  1. Firstly, note that it's C10K. Don't concern yourself if you're handling fewer than about 100 connections (on a typical system). Even then, it depends on what your sockets are doing.
  2. Yes, but keep in mind that epoll manipulation requires system calls, and their cost may or may not be more expensive than the cost of managing a few fd_sets yourself. The same goes for poll. At low counts it's cheaper to do the processing in user space each iteration.
  3. Asynchronous IO is very painful when you're not constrained to just a few sockets that you can juggle as required. Most people cope by using event loops, but this fragments and inverts your program flow. It also usually requires making use of large, unwieldy frameworks for this purpose since a reliable and fast event loop is not easy to get right.

The first question is, do you need this? If you're handily coping with the existing traffic by spawning off threads to handle each incoming request, then keep doing it this way. The code will be simpler for it, and all your libraries will play nicely.

As I mentioned above, juggling simultaneous requests can be complex. If you want to do this in a single loop, you'll also need to make guarantees about CPU starvation when generating your responses.

The dispatch model you proposed is the typical first step solution if your responses are expensive to generate. You can either fork or use threads. The cost of forking or generating a thread should not be a consideration in selecting a pooling mechanism: rather you should use such a mechanism to limit or order the load placed on the system.

Batching sockets onto multiple epoll loops is excessive. Use multiple processes if you're this desperate. Note that it's possible to accept on a socket from multiple threads and processes.

Churning answered 29/11, 2011 at 8:32 Comment(1)
Matt, I actually haven't written the TCP networking core yet, so I obviously don't see any reason to start with the "thread per connection" model if there is a better design pattern to consider first. Are you saying that select is cheaper than epoll for low socket counts? Can you elaborate on the "CPU starvation" issue? I agree with the load-balancing design point, and I have considered having multiple threads all blocking on accept. – Rhyton

I would guess you are on the right track. But I also think the details depend on the particular situation (bandwidth, request patterns, individual request processing, etc.). I think you should try it and benchmark carefully.

Boiardo answered 29/11, 2011 at 8:31 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.