Docker + fig / compose + nginx + node.js + mysql + redis
I'm trying to set up a Node.js app across multiple Docker containers. My app currently runs on a single Ubuntu DigitalOcean droplet & uses:

  1. Node.js (express 4)
  2. mysql for the app database
  3. redis for a key-value store
  4. nginx for load balancing and serving static files.

I need to dockerize the different parts, one per container obviously, then use Docker Compose (previously known as Fig) to simply describe the different containers & set up the links between them. I'm still unclear about the multi-container approach:
One for nginx
one for Node.js & my express app
one for MySQL
and one for Redis

What would the docker-compose.yml look like? I'm guessing nginx, mysql & redis will be unmodified official images, while the node.js one will have a build directive pointing to a Dockerfile, which notes it is based on the official node image along with configuration instructions? I will need to configure/provision mysql & redis, for example, so does that mean each needs to be separate with its own Dockerfile?

What would be the way to link the containers? Use volumes to copy files into them, set ports, adjust the hosts file to map some.domain.com to the nginx IP?

I will then need to install some npm packages globally, like nodemon & PM2, and set some cron jobs... (on the Node.js container?)

Here is a first draft; I would appreciate any help to better understand this new setup:

docker-compose.yml

nginx:
  image: nginx
  links:
    - "node"

node:
  build: .
  volumes:
    - "app:/src/app"
  ports:
    - "3030:3000"
  links:
    - "db:mysql"

db:
  image: mysql:5.6
  environment:
    - MYSQL_ROOT_PASSWORD=mypassword

Dockerfile

FROM node:0.12

RUN mkdir /src

RUN npm install nodemon pm2 -g

WORKDIR /src

ADD app/package.json /src/package.json

RUN npm install

ADD app/nodemon.json /src/nodemon.json

EXPOSE 3000

CMD npm start

I'm using this simple project as a base, though my app needs

Hepburn answered 27/2, 2015 at 12:51

Before configuring the docker-compose part, you must decide on the architecture of the system.

The parts you have -

  • MySQL executable version X listening on port 3306
  • MySQL data stored on disk
  • Redis executable version Y listening on port 6379
  • Redis backup data on disk
  • Node.js executable version Z listening on port 3000
  • Files of your Express.js application
  • Nginx executable version Q listening on port 80

Additional infrastructure considerations -

  • Single instance, one cpu/core
  • Single instance, multiple cpus/cores
  • Multiple instances
  • Which load-balancer is used (if at all)

For a single instance that will run all the components, you probably don't even need a load balancer. So unless you need to serve static files alongside your application, there is little point in running nginx here, because it wouldn't be doing anything useful.

When you have multiple containers running your Express.js application, either on one instance (to use multiple cores/CPUs) or across multiple instances, then you need some kind of load balancing going on, maybe using nginx.
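A minimal sketch of what that nginx load-balancing config could look like (the upstream names node1/node2, the port 3000, and the static-files path are assumptions; adjust them to your own container links):

```nginx
# Illustrative nginx.conf fragment; "node1" and "node2" are assumed
# link aliases for two app containers listening on port 3000.
upstream app {
    server node1:3000;
    server node2:3000;
}

server {
    listen 80;

    # serve static assets directly from nginx
    location /static/ {
        root /usr/share/nginx/html;
    }

    # proxy everything else to the Node.js upstream
    location / {
        proxy_pass http://app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```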

Handling data inside the container is not recommended since the container filesystem is not very good at handling highly mutating data. So for MySQL and Redis you probably want to have external mount points where the data resides.

Your Express.js application needs to be configured with the Redis and MySQL servers it should connect to; this can be done using Docker links.
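For example, inside the application container the link aliases (redis, mysql) resolve as hostnames, so the app's connection config can be as simple as this sketch (the *_HOST / *_PORT env-var overrides are my own assumption for running outside Docker, not something the links set up for you):

```javascript
// config.js - connection settings for the linked containers.
// With docker links, "mysql" and "redis" are written into /etc/hosts
// of the application container, so they work directly as hostnames.
// The env vars below are assumed overrides; the defaults match the
// link aliases used in the compose file.
const config = {
  mysql: {
    host: process.env.MYSQL_HOST || 'mysql',
    port: parseInt(process.env.MYSQL_PORT || '3306', 10),
    password: process.env.MYSQL_ROOT_PASSWORD || 'verysecret',
  },
  redis: {
    host: process.env.REDIS_HOST || 'redis',
    port: parseInt(process.env.REDIS_PORT || '6379', 10),
  },
};

module.exports = config;
```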

Thus, your Docker Compose will look something like this -

redis:
  image: redis
  volumes:
    - /data/redis:/data

mysql:
  image: mysql:5.6
  environment:
    - MYSQL_ROOT_PASSWORD=verysecret
  volumes:
    - /data/mysql:/var/lib/mysql

application:
  image: node:0.12
  working_dir: /usr/src/myapp
  volumes:
    - /src/app:/usr/src/myapp
  ports:
    - 80:3000
  links:
    - redis
    - mysql

This assumes you will store the data of MySQL and Redis on the host filesystem in /data, and your application on the host filesystem is at /src/app.

I recommend you look at the Docker Compose YAML file reference for all the various options that can be used: https://docs.docker.com/compose/yml/.

Since the images used are the blessed images from Docker Hub, their readme files are important to take note of for further configuration options.

Adding more instances of the application is easy, but then you will need to add nginx to load balance the incoming traffic to the multiple application containers.
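In that case the compose file might grow along these lines (a sketch; the duplicated application1/application2 services and the nginx.conf path are illustrative, since this old fig/compose format has no scale-friendly way to express this in one service definition):

```yaml
nginx:
  image: nginx
  ports:
    - 80:80
  volumes:
    - /src/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
  links:
    - application1
    - application2

application1:
  image: node:0.12
  working_dir: /usr/src/myapp
  volumes:
    - /src/app:/usr/src/myapp
  links:
    - redis
    - mysql

application2:
  image: node:0.12
  working_dir: /usr/src/myapp
  volumes:
    - /src/app:/usr/src/myapp
  links:
    - redis
    - mysql
```

Note that the application containers no longer publish ports to the host; only nginx does, and it forwards traffic to the app containers over the links.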

When you then want to run this kind of setup across multiple hosts, it becomes much more complex, since Docker links will not work and you need another way to discover container IP addresses and ports. A load balancer will also be required to provide a single endpoint that accepts traffic for the multiple instances of the application. Here I would recommend taking a good look at https://consul.io for help.

Cogon answered 1/3, 2015 at 13:6

I don't think you need to split nginx and Node.js into different containers. I'm currently also setting up Node.js on Docker, so I will be back here soon and try to post a more thorough response to your question.

Here is a useful article about setting up a similar architecture on Docker: http://blog.stxnext.com/posts/2015/01/development-with-docker-and-fig/ ; the Django/Node.js difference should not be a problem.

Counterstamp answered 27/2, 2015 at 14:17
Thank you Damian. Regarding nginx & Node.js in the same container - how would you go about adding more "boxes" to the upstream config in nginx then? If I understood correctly, you need to separate them to allow a scalable structure... Going through your article - thanks! And I'm looking forward to your elaborated response... – Hepburn
We also run nginx and Node.js in the same container, and we scale up via the AWS ELB. – Rawboned
Hey @Shimon :) This is very interesting - if you're scaling using Amazon's Elastic Load Balancing (using auto-scaling?), why do you need nginx along with Node.js in every single container you spin up? What is the essence of having multiple nginx instances? Is it for static file serving? You have S3 for that... If you're not using nginx as a single load balancer to handle traffic routing between Node containers, why is it there in the first place? Thank you for your comment. – Hepburn
We are using the AWS c3.large instance type, which has 2 CPU cores, so we run two Node.js processes and nginx to load balance them. – Rawboned
Ok, so does that mean that if you run a machine with 1 CPU you will be able to remove the overhead of yet another layer, remove nginx and scale with AWS ELB simply by spinning up more Node containers? How many Docker containers do you have per c3.large instance? – Hepburn
Correct. If we had one CPU core we could just run one Node.js process. We run one container per c3.large instance. – Rawboned
The party line is that you SHOULD split different applications into separate containers. You don't actually gain much by putting them in the same one unless they need to share files, which Node and nginx don't really need to do. But you'll want to run them separately anyway, because sooner or later you'll want two or three instances of Node.js running, and that only requires tweaking your upstreams and invoking docker-compose a bit differently. – Rundgren
