How can a single-threaded NGINX handle so many connections?
Asked Answered

NGINX uses epoll notifications to know whether there is any data on a socket to read.
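For illustration, a minimal epoll-driven loop looks roughly like this (an illustrative sketch, not actual nginx source; it assumes the sockets are already registered on the epoll descriptor epfd):

```c
/* Illustrative epoll loop (not nginx source): the process sleeps in
 * epoll_wait() and only touches a socket once the kernel reports it
 * as readable, so read() never has to wait for data. */
#include <sys/epoll.h>
#include <unistd.h>

#define MAX_EVENTS 64

void event_loop(int epfd)
{
    struct epoll_event events[MAX_EVENTS];

    for (;;) {
        /* Block until at least one registered socket is ready. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);

        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            char buf[4096];

            /* The socket was reported readable, so this returns
             * immediately with whatever bytes are already buffered. */
            ssize_t len = read(fd, buf, sizeof(buf));
            if (len > 0) {
                /* parse and handle the bytes for this connection ... */
            }
        }
    }
}
```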

Let's assume there are two requests to the server. nginx is notified about both requests and starts to:

  • receive the first request

  • parse its headers

  • check the boundary (body size)

  • send the first request to upstream server

  • etc.

nginx is single-threaded and can do only one operation at a time.

But what happens with the second request?

  1. Does nginx receive the second request while parsing the first one?

  2. Or does it begin to handle the second request only after the first one is done?

  3. Or something else that I don't understand.

If 1. is correct, then I don't understand how that is possible within a single thread.

If 2. is correct, then how can nginx be so fast? It would handle all incoming requests sequentially, with only ONE request being processed at any given time.

Please help me understand. Thanks.

Breastplate answered 29/4, 2015 at 17:4 Comment(2)
Might be related: unix sockets are proven slower than using TCP. And another thing: nginx is single-threaded but creates a few workers, 4 by default; is your question about what happens inside one particular worker? – Pledgee
See my answer here for an explanation: https://mcmap.net/q/686020/-what-does-it-mean-to-say-apache-spawns-a-thread-per-request-but-node-js-does-not – Predestinarian

Nginx is not a single-threaded application. It does not start a thread for each connection; instead it starts several worker processes at startup. The nginx architecture is well described at http://www.aosabook.org/en/nginx.html.

Actually, a single-threaded non-blocking application is the most efficient design for single-processor hardware. When there is only one CPU and the application is completely non-blocking, it can fully utilize that CPU. A non-blocking application never calls a function that might wait for an event; all I/O operations are asynchronous. That means the application does not call a plain read() on a socket, because the call might wait until data is available. Instead, a non-blocking application uses some mechanism to be notified that data is available, so it can call read() without the risk that the call will wait for something. So an ideal non-blocking application needs only one thread per CPU in the system. Since nginx uses non-blocking calls, processing in multiple threads would gain nothing, because there would be no CPU left to execute the additional threads.
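As an illustration of that pattern (a minimal sketch, not nginx source code), a socket is switched to non-blocking mode, and a read() that would otherwise sleep simply returns EAGAIN, so the single worker thread goes back to its event loop:

```c
/* Sketch of the non-blocking pattern described above (illustrative,
 * not nginx source code). */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Switch a connection socket to non-blocking mode so read()/write()
 * can never put the single worker thread to sleep. */
int make_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Called when the event loop reports the socket as readable. */
void handle_readable(int fd)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf));

    if (n > 0) {
        /* Got n bytes: continue parsing this connection's request. */
    } else if (n == 0) {
        /* Peer closed the connection. */
    } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
        /* No data after all: do not wait here; return to the event
         * loop and serve other connections instead. */
    }
}
```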

The actual receiving of data from the network card into a buffer is done in the kernel when the network card issues an interrupt. nginx then gets the request from a buffer and processes it. There is no point in starting to process another request until the current one is done, or until the current one requires an action that might block (for example a disk read).
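One way to picture how two requests interleave within one worker (a hypothetical sketch, not nginx's actual data structures): each connection carries its own state, so when request A has to wait for the upstream or the disk, the worker resumes request B the next time its socket is reported ready.

```c
/* Hypothetical per-connection state machine (illustrative only): each
 * connection remembers where its request currently is, so the single
 * worker can put it aside whenever progress would require blocking. */
#include <stddef.h>

enum conn_state {
    READING_HEADERS,
    READING_BODY,
    SENDING_TO_UPSTREAM,
    WAITING_FOR_UPSTREAM,
    SENDING_RESPONSE
};

struct connection {
    int             fd;        /* client socket                   */
    int             upstream;  /* upstream socket, -1 if none     */
    enum conn_state state;     /* current step for this request   */
    size_t          parsed;    /* how much of buf has been parsed */
    char            buf[8192];
};

/* Invoked from the event loop whenever epoll reports activity on one
 * of this connection's sockets: advance as far as possible without
 * blocking, then return control to the loop. */
void advance(struct connection *conn)
{
    switch (conn->state) {
    case READING_HEADERS:
        /* read() more header bytes, parse them, maybe move on ... */
        break;
    case WAITING_FOR_UPSTREAM:
        /* upstream replied: start relaying the response to the client */
        break;
    default:
        /* other states handled similarly ... */
        break;
    }
}
```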

Scoggins answered 29/4, 2015 at 19:32 Comment(0)
