How to optimize performance for a docker container?

I tested a Redis container based on https://index.docker.io/u/dockerfile/redis/

With the same redis-benchmark, redis-server runs much slower inside a container than on the host OS; the actual statistics are shown below. (The first benchmark is for the Docker container.)

So, is there a way to optimize the performance of a Docker container?
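(For context: the image at the link above is dockerfile/redis, and port 49153 in the first benchmark is a host port published by Docker's userland proxy. The container would have been started with something along these lines; the exact options are an assumption, not taken from the question.)

docker run -d -P dockerfile/redis
docker port <container-id> 6379    # prints the published host port, e.g. 0.0.0.0:49153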

vagrant@precise64:/tmp$ redis-benchmark -p 49153 -q -n 100000
PING (inline): 5607.27 requests per second
PING: 6721.79 requests per second
MSET (10 keys): 6085.69 requests per second
SET: 6288.91 requests per second
GET: 6627.78 requests per second
INCR: 6454.11 requests per second
LPUSH: 6449.12 requests per second
LPOP: 5355.90 requests per second
SADD: 6237.91 requests per second
SPOP: 6794.40 requests per second
LPUSH (again, in order to bench LRANGE): 6089.76 requests per second
LRANGE (first 100 elements): 6000.24 requests per second
LRANGE (first 300 elements): 4660.70 requests per second
LRANGE (first 450 elements): 4276.79 requests per second
LRANGE (first 600 elements): 3710.85 requests per second

vagrant@precise64:/tmp$
vagrant@precise64:/tmp$ sudo /etc/init.d/redis-server start
Starting redis-server: redis-server.
vagrant@precise64:/tmp$ redis-benchmark -q -n 100000
PING (inline): 19357.34 requests per second
PING: 19175.46 requests per second
MSET (10 keys): 16697.28 requests per second
SET: 19146.08 requests per second
GET: 19175.46 requests per second
INCR: 19135.09 requests per second
LPUSH: 19168.10 requests per second
LPOP: 14976.79 requests per second
SADD: 16638.93 requests per second
SPOP: 18079.91 requests per second
LPUSH (again, in order to bench LRANGE): 18268.18 requests per second
LRANGE (first 100 elements): 16136.84 requests per second
LRANGE (first 300 elements): 11528.71 requests per second
LRANGE (first 450 elements): 9237.88 requests per second
LRANGE (first 600 elements): 8864.46 requests per second
Belgravia answered 11/2, 2014 at 1:52 Comment(0)

The container appears to be slower because you are going through an extra network layer.

In that case, instead of connecting directly to Redis, you connect to the Docker userland proxy, which itself connects back to the container (and instead of going over a local interface, this connection goes over a veth interface).

This adds a little bit of latency (hardly noticeable compared to, e.g., a 10 ms webpage generation; but 50 µs is still faster than 150 µs, if you see what I mean).
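If the goal is simply to remove that extra hop, one option worth noting (a sketch; host networking is not part of this answer, but it is a standard Docker feature) is to run the container with --net=host, so Redis binds directly on the host's interfaces with no userland proxy or veth pair in the path:

docker run -d --net=host dockerfile/redis
redis-benchmark -q -n 100000    # talks to 127.0.0.1:6379 directly, same path as the native run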

If you want to do a more "apples to apples" comparison, you could (example commands for the first two options follow the list):

  • run redis-benchmark inside the container (to connect directly to Redis from within the container);
  • run redis-benchmark on another machine (but keep in mind that you will still have an extra network layer for the port translation mechanism);
  • run redis-benchmark on another machine and use a mechanism like pipework to give the container a macvlan interface with (almost) zero overhead.
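For the first two options, the commands would look roughly like this (a sketch: the container name and host IP are placeholders, port 49153 is taken from the question, and docker exec assumes a reasonably recent Docker and an image that ships redis-benchmark):

# 1. from inside the container (no proxy, no veth in the path):
docker exec -it <redis-container> redis-benchmark -q -n 100000

# 2. from another machine, against the published port (still goes through the userland proxy):
redis-benchmark -h <docker-host-ip> -p 49153 -q -n 100000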
Silent answered 11/2, 2014 at 16:44 Comment(2)
Should really also add running the benchmark from another container that is --link'ed to the Redis server, given that this is what we recommend (see the sketch after these comments). – Finella
@Silent In the second bullet item, I assume you are saying you will get comparable numbers when connecting from another host, irrespective of whether redis-server is running inside or outside of a Docker container on the host machine. In both cases there is only one network layer (the host network layer OR the Docker network layer, not both). – Fissiparous
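A sketch of the linked-container variant suggested in the first comment (the image and alias names are assumptions, and it presumes the image lets you override the command with redis-benchmark); container-to-container traffic over the bridge skips the userland proxy and the port translation:

docker run -d --name redis dockerfile/redis
docker run --rm --link redis:redis dockerfile/redis redis-benchmark -h redis -p 6379 -q -n 100000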

The extra network layer of the container is the performance bottleneck in your scenario, and communicating docker-to-docker will not help much (some optimizations apply, but some extra overhead is introduced as well).

Also, running redis-benchmark in the same container as the Redis server will give you host-level performance, but that is probably not the use case you are interested in; most likely you want to know the performance a dockerized Redis server can provide to other applications.

We, at Torusware, have run some tests to assess the overhead of dockerized applications, and we found that the container's network layer is what limits performance.

In fact, running a dockerized redis-benchmark against a dockerized Redis server on the same host achieves only 38k GET and 46k SET requests per second.

We have a solution for accelerating this scenario in a non-intrusive way (no changes to either Docker or the applications): our product Speedus Plug&Run, a high-performance sockets library.

By using the Redis+Speedus Lite Docker image (available for free in the Docker registry), you can significantly reduce peak latencies (worst-case scenario) from almost 1 second to below 1 millisecond.

Furthermore, Redis+Speedus Lite multiplies SET performance by about 2.5x (from 46k to 113k TPS) and GET performance by about 3x (from 39k to 121k TPS).

Check this post in our blog for further details.

But if you are really looking for extreme performance, our Speedus Extreme Performance version will make your Redis servers run much faster. Thanks to our technology, a dockerized Redis server can serve other dockerized applications at 717k SET and 415k GET requests per second!

Check details in this post.

Combination answered 2/4, 2015 at 8:31 Comment(0)