How many clients can an http-server handle?

2

10

I built a web application with Angular2 as client and NodeJS as server. I want to serve it with npm's http-server application without any configuration but I wonder how many clients it can handle simultaneously?

Brigitte answered 24/11, 2016 at 14:13 Comment(2)
Did my answer below help you? Any comments?Churchlike
@Churchlike Actually my question was incomplete. I couldn't ask and find an answer to what I had in mind, so I moved on. But your answer includes real effort and may be useful for other users, so I will up-vote it. Sorry for not responding earlier.Berns
30

Instead of speculating, I decided to run some benchmarks that you can repeat on your own server, to see what the answer to that question is in your case. I will also include the results I got on my computer, which are quite interesting.

Preparing for tests

First, what I did and how anyone can repeat it:

Make a new directory and install the http-server module. You can skip this part if you already have a running server, but I included it here so anyone can repeat the tests:

mkdir httptest
cd httptest
npm install http-server

Starting the server

Now you will have to start the server. We'll do it as root because that is the easiest way to raise the open-files limit.

Become root to be able to increase the open files limit later:

sudo -s

Now as root:

ulimit -n 100000

And now run the server, still as root:

./node_modules/.bin/http-server

or run it however you normally would if you already have http-server installed.

You should see something like:

Starting up http-server, serving ./
Available on:
  http://127.0.0.1:8080

Running benchmarks

Now, in another terminal, become root as well:

sudo -s

You will need to install the ab tool from Apache. On Ubuntu you can install it with:

apt-get install apache2-utils

Now, still as root, increase the open files limit:

ulimit -n 100000

And start the benchmark with:

ab -n 10000 -c 10000 -k http://localhost:8080/

This makes 10,000 requests (`-n`), all 10,000 of them concurrently (`-c`), with HTTP keep-alive enabled (`-k`).

Test results

The result I got was:

# ab -n 10000 -c 10000 -k http://localhost:8080/
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        
Server Hostname:        localhost
Server Port:            8080

Document Path:          /
Document Length:        538 bytes

Concurrency Level:      10000
Time taken for tests:   17.247 seconds
Complete requests:      10000
Failed requests:        0
Keep-Alive requests:    0
Total transferred:      7860000 bytes
HTML transferred:       5380000 bytes
Requests per second:    579.82 [#/sec] (mean)
Time per request:       17246.722 [ms] (mean)
Time per request:       1.725 [ms] (mean, across all concurrent requests)
Transfer rate:          445.06 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  255 321.2    141    1000
Processing:   143 2588 1632.6   3073   16197
Waiting:      143 2588 1632.7   3073   16197
Total:        143 2843 1551.8   3236   17195

Percentage of the requests served within a certain time (ms)
  50%   3236
  66%   3386
  75%   3455
  80%   3497
  90%   3589
  95%   3636
  98%   3661
  99%   3866
 100%  17195 (longest request)
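As a quick sanity check, the summary figures above are consistent with each other: the aggregate rate is just the request count divided by the total test time, and the mean time per request across all concurrent clients is its inverse. A small shell calculation using the numbers from this run (the tiny difference from ab's 579.82 is just rounding, since ab uses a more precise internal timer):

```shell
# Values taken from the ab run above
n=10000          # total requests
total_s=17.247   # "Time taken for tests", in seconds

# Requests per second: n / total time (ab reported 579.82)
awk -v n="$n" -v t="$total_s" 'BEGIN { printf "requests/sec: %.2f\n", n/t }'

# Mean time per request across all concurrent clients: total time / n
# (ab reported 1.725 ms)
awk -v n="$n" -v t="$total_s" 'BEGIN { printf "ms/request: %.3f\n", t*1000/n }'
```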

Answer to your question

This is what I got on a very busy system with very little free RAM, so your mileage may vary. But it served 10,000 connections at the same time, so the answer to your question is: it can handle a lot of requests, at least 10,000. I wonder what you will be able to achieve on your own server; please comment if you get some interesting results.

Conclusion

If you use http-server then you don't have to worry about the complexity of the requests, because all of them do the same thing: serve a single static file from disk. The only difference is the size of the files, and serving bigger files should show up not in the number of possible concurrent connections but in the time it takes to transfer the data.
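To get a rough sense of that, the run above moved data at about 445 KB/s in aggregate. A back-of-the-envelope estimate for a hypothetical 5 MB file at that same aggregate rate (the file size is made up for illustration):

```shell
# Hypothetical: time to push a 5 MB file at the measured aggregate
# transfer rate of 445 KB/s (rate taken from the benchmark above)
awk 'BEGIN { printf "%.1f seconds\n", (5 * 1024) / 445 }'
```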

You should run those tests against the real files you're actually serving, so you can see the numbers for your own specific case.

The results are interesting because they show how many connections you can handle with such a simple server written in Node. Try that with Apache.

Churchlike answered 24/11, 2016 at 14:37 Comment(1)
In the example above, the mean time per request of 17 seconds is ludicrously high. It handled all the requests, but it queued a large number of them, resulting in tremendous delays in processing them all. That it did process all of them is laudable. Instead of crushing the system, test with just a single concurrent client but many requests (100 to 1000) and review the mean time per request. Especially for node.js (which is single-threaded), that gives you the number of ms it takes to process a single request; 1/(# of ms) is a rough measure of how many requests that system can handle.Tetracycline
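The single-client measurement suggested in this comment could look like the following sketch (the 2 ms figure is a made-up example, not a measured result):

```shell
# Run many requests through a single client to measure per-request latency:
#   ab -n 1000 -c 1 -k http://localhost:8080/
# Suppose ab then reports "Time per request: 2.000 [ms] (mean)" -- a
# hypothetical figure for illustration.
ms_per_request=2.000

# Rough single-threaded throughput ceiling: 1000 ms / per-request time
awk -v ms="$ms_per_request" 'BEGIN { printf "~%d requests/sec\n", 1000/ms }'
```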
4

The maximum throughput depends on the hardware you are using and the complexity of the requests (CPU, I/O, event-loop blocking, ...).

You can measure it yourself with some http benchmark tools or find some examples here: https://raygun.com/blog/2016/06/node-performance/

Some HTTP benchmark tools you can use: ab (ApacheBench) and wrk.

Yesima answered 24/11, 2016 at 14:35 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.