Assigning vhosts to Docker ports

I have a wildcard DNS set up so that all web requests to a custom domain (*.foo) map to the IP address of the Docker host. If I have multiple containers running Apache (or Nginx) instances, each container maps the Apache port (80) to some external inbound port.

What I would like to do is make a request to container-1.foo, which is already mapped to the correct IP address (of the Docker host) via my custom DNS server, but proxy the default port 80 request to the correct Docker external port such that the correct Apache instance from the specified container is able to respond based on the custom domain. Likewise, container-2.foo would proxy to a second container's apache, and so on.

Is there a pre-built solution for this? Is my best bet to run an Nginx proxy on the Docker host, or should I write a Node.js proxy that could also manage the Docker containers (start/stop/rebuild via the web), or...? What options do I have that would make using the Docker containers feel natural, rather than something involving extraneous ports and container juggling?

Dasteel answered 28/8, 2013 at 20:27 Comment(2)
I have this question too - as far as I can tell, running each app in a Docker container and then doing the routing at the host using an nginx server (perhaps in its own container) is the way to do it. I'm wondering whether I should run the app server standalone (i.e. expose a php-fpm, puma, etc. server) or include a (pointless?) nginx instance as well.Antimasque
Take a look at github.com/dotcloud/hipache, which is a reverse-proxy configurable through redis.Perambulate

This answer might be a bit late, but what you need is an automatic reverse proxy. I have used two solutions for that:

  • jwilder/nginx-proxy
  • Traefik

Over time, my preference has shifted to Traefik, mostly because it is well documented and actively maintained, and comes with more features (load balancing with different strategies and priorities, health checks, circuit breakers, automatic SSL certificates via ACME/Let's Encrypt, ...).


Using jwilder/nginx-proxy

When running a container from Jason Wilder's nginx-proxy Docker image, you get an nginx server set up as a reverse proxy for your other containers, with no config to maintain.

Just run your other containers with the VIRTUAL_HOST environment variable set, and nginx-proxy will discover their ip:port and update the nginx config for you.

Let's say your DNS is set up so that *.test.local maps to the IP address of your Docker host; then just start the following containers to get a quick demo running:

# start the reverse proxy
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy

# start a first container for http://tutum.test.local
docker run -d -e "VIRTUAL_HOST=tutum.test.local" tutum/hello-world

# start a second container for http://deis.test.local
docker run -d -e "VIRTUAL_HOST=deis.test.local" deis/helloworld

Using Traefik

When running a Traefik container, you get a reverse proxy server that reconfigures its forwarding rules based on the Docker labels it finds on your containers.

Let's say your DNS is set up so that *.test.local maps to the IP address of your Docker host; then just start the following containers to get a quick demo running:

# start the reverse proxy
docker run --rm -it -p 80:80 -v /var/run/docker.sock:/var/run/docker.sock traefik:1.7 --docker

# start a first container for http://tutum.test.local
docker run -d -l "traefik.frontend.rule=Host:tutum.test.local" tutum/hello-world

# start a second container for http://deis.test.local
docker run -d -l "traefik.frontend.rule=Host:deis.test.local" deis/helloworld
Sinew answered 1/6, 2014 at 1:15 Comment(7)
-v /var/run/docker.sock:/tmp/docker.sock Is this a dangerous solution? This nginx-proxy container has access to the Docker host daemon; could that be a security hole?Stanislas
Possibly. Also note that not sharing /var/run/docker.sock isn't a guarantee either that the Docker host cannot be exploited from a container. Docker security is a subject in its own right.Sinew
Are there any known security issues when you can reach the Docker host from a container?Stanislas
An exploit existed in the past and the issue is now fixed, but new exploits could be found in the future. Docker isn't about adding security, it is about ease of deployment.Sinew
You can also run nginx-proxy and docker-gen separately so that the docker socket is not mounted on the nginx container.Differentia
Plus one. I've been using this excellent method/software almost exclusively in multi-container environments for months. Mui bueno!Alviani
if this is what I've been spending days looking for - thank you, thank you, thank you.Selfexpression

Here are two possible answers: (1) set up ports directly with Docker and use Nginx/Apache to proxy the vhosts, or (2) use Dokku to manage ports and vhosts for you (which is how I learned to do Method 1).

Method 1a (directly assign ports with docker)

Step 1: Set up nginx.conf or Apache on the host, with the desired port number assignments. This web server, running on the host, will do the vhost proxying. There's nothing special about this with regard to Docker - it is normal vhost hosting. The special part comes next, in Step 2, to make Docker use the correct host port number.
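
For reference, a minimal nginx vhost for this step could look like the sketch below (assuming the container-1.foo domain from the question and the port 12345 used in Step 2; adapt the names and ports to your setup):

server {
  listen      80;
  server_name container-1.foo;
  location / {
    proxy_pass http://127.0.0.1:12345;   # the host port assigned to the container in Step 2
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
  }
}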

Step 2: Force port number assignments in Docker with "-p" to set Docker's port mappings, and "-e" to set custom environment variables within Docker, as follows:

port=12345 # <-- the vhost port setting used in nginx/apache
IMAGE=myapps/container-1
id=$(docker run -d -p $port:$port -e PORT=$port $IMAGE)
# -p $port:$port will establish a mapping of 12345->12345 from outside Docker to
# inside of Docker.
# Then, the application must observe the PORT environment variable
# to launch itself on that port; This is set by -e PORT=$port.

# Additional goodies:
echo $id # <-- the running id of your container
echo $id > /app/files/CONTAINER # <-- remember Docker id for this instance
docker ps # <-- check that the app is running
docker logs $id # <-- look at the output of the running instance
docker kill $id # <-- to kill the app

Method 1b Hard-coded application port

...if your application uses a hardcoded port, for example port 5000 (i.e. it cannot be configured via the PORT environment variable as in Method 1a), then it can be hardcoded through Docker like this:

publicPort=12345
id=$(docker run -d -p $publicPort:5000 $IMAGE)
# -p $publicPort:5000 will map port 12345 outside of Docker to port 5000 inside
# of Docker. Therefore, nginx/apache must be configured to vhost proxy to 12345,
# and the application within Docker must be listening on 5000.

Method 2 (let Dokku figure out the ports)

At the moment, a pretty good option for managing Docker vhosts is Dokku. An upcoming option may be to use Flynn, but as of right now Flynn is just getting started and not quite ready. Therefore we go with Dokku for now: After following the Dokku install instructions, for a single domain, enable vhosts by creating the "VHOST" file:

echo yourdomain.com > /home/git/VHOST
# in your case: echo foo > /home/git/VHOST

Now, when an app is pushed via SSH to Dokku (see Dokku docs for how to do this), Dokku will look at the VHOST file and for the particular app pushed (let's say you pushed "container-1"), it will generate the following file:

/home/git/container-1/nginx.conf

And it will have the following contents:

upstream container-1 { server 127.0.0.1:49162; }
server {
  listen      80;
  server_name container-1.yourdomain.com;
  location    / {
    proxy_pass  http://container-1;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
  }
}

When the server is rebooted, Dokku will ensure that Docker starts the application with the port mapped to its initially deployed port (49162 here), rather than being randomly assigned another port. To achieve this deterministic assignment, Dokku saves the initially assigned port into /home/git/container-1/PORT and, on the next launch, sets the PORT environment variable to this value and maps Docker's port assignment to this port on both the host side and the app side. This is opposed to the first launch, when Dokku sets PORT=5000 and then figures out whatever random port Dokku maps on the VPS side to 5000 on the app side. It's roundabout (and might even change in the future), but it works!

The way VHOST works, under the hood, is: upon doing a git push of the app via SSH, Dokku will execute hooks that live in /var/lib/dokku/plugins/nginx-vhosts. These hooks are also located in the Dokku source code here and are responsible for writing the nginx.conf files with the correct vhost settings. If you don't have this directory under /var/lib/dokku, then try running dokku plugins-install.

Kingsly answered 4/9, 2013 at 19:41 Comment(0)

With Docker, you want the internal ports to remain normal (e.g. 80) and figure out how to wire up the randomly assigned external ports.

One way to handle them is with a reverse proxy like Hipache. Point your DNS at it, and then you can reconfigure the proxy as your containers come up and down. Take a look at http://txt.fliglio.com/2013/09/protyping-web-stuff-with-docker/ to see how this could work.
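
Hipache keeps its routing table in Redis, so pointing a vhost at a container comes down to a couple of Redis commands (a rough sketch; the backend address and port are placeholders for whatever host port Docker actually assigned to your container):

# register the virtual host, then add the container's published port as a backend
redis-cli rpush frontend:container-1.foo container-1
redis-cli rpush frontend:container-1.foo http://127.0.0.1:49162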

If you're looking for something more robust, you may want to take a look at "service discovery." (A look at service discovery with Docker: http://txt.fliglio.com/2013/12/service-discovery-with-docker-docker-links-and-beyond/)

Schlesinger answered 14/12, 2013 at 23:20 Comment(0)
