If you want to start all of your apps with a single command using Docker, the tool for the job is docker-compose.
That said, docker-compose is only suitable for testing or a very limited production infrastructure; the best approach is to deploy each artifact on its own host.
Please read the following to understand a few key points:
When you use docker-compose, all of the services are deployed on the same machine, but each one runs in its own container, and only one process runs inside each container.
Why localhost doesn't work with Docker
If you enter a container (for example, a Node.js web app) and list the processes, you will see something like this:
nodejs .... 3001
And inside another container, such as a Postgres database:
postgres .... 5432
So, if the Node.js web app needs to connect to the database from inside its container, it must use the IP of the Postgres database instead of localhost, because inside the Node.js container only one process is running on localhost:
localhost 3001
So using localhost:5432
won't work inside the Node.js container. The solution is to use the IP of the Postgres container instead of localhost, for example 10.10.100.101:5432.
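For example, assuming a container named web_nodejs (a hypothetical name, and assuming the image includes ps and nc), you can see the isolation like this:

# list the processes inside the nodejs container: only the node process appears
docker exec -it web_nodejs ps aux

# from inside the nodejs container, nothing is listening on localhost:5432
docker exec -it web_nodejs nc -zv localhost 5432        # fails: connection refused

# the postgres container's IP (example value) does work
docker exec -it web_nodejs nc -zv 10.10.100.101 5432    # succeeds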
Solutions
When we have several containers (docker-compose) with dependencies between them, Docker offers us the options below.
In summary, with these features Docker creates a kind of "special network" in which all of your containers live in peace, without the complications of IPs!
#1 --net=host
With this parameter, I was able to use localhost inside my container to connect to MySQL:
docker run -it -p 5000:5000 --network=host -e ...
More details here:
What does --net=host option in Docker command really do?
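If you use docker-compose, the equivalent is network_mode: host. A minimal sketch (the service and image names are just examples); note that published ports are ignored with host networking and this behavior applies to Linux hosts:

services:
  web:
    image: my-node-app      # example image name
    network_mode: host      # share the host's network stack, so localhost reaches services on the host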
#2 host.docker.internal
For testing, a quick deployment, or a very limited production environment, you could use a feature available in recent versions of docker-compose (1.29.2) and Docker.
Add this at the end of your docker-compose file:
networks:
  mynetwork:
    driver: bridge
Add this to all of your containers:
networks:
  - mynetwork
And if a container needs the host's IP, use host.docker.internal instead of the IP:
environment:
  - DATABASE_HOST=host.docker.internal
  - API_BASE_URL=host.docker.internal:8020/api
Finally, in the containers that use host.docker.internal, add this:
extra_hosts:
  - "host.docker.internal:host-gateway"
Note: This was tested on Ubuntu, not on Mac or Windows, because nobody deploys their real applications on those operating systems.
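Putting the pieces above together, a minimal docker-compose.yml for this approach could look like the following sketch (the service name, image, and ports are just examples):

version: "3.8"
services:
  api-php:
    image: my-php-api                          # example image
    ports:
      - "8020:80"
    environment:
      - DATABASE_HOST=host.docker.internal     # resolves to the host machine
    extra_hosts:
      - "host.docker.internal:host-gateway"    # needed on Linux so the name resolves
    networks:
      - mynetwork

networks:
  mynetwork:
    driver: bridge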
#3 Environment variables
In my opinion, Docker links or networks are a kind of illusion or deceit, because they only work on one machine (development or staging), hiding from us the dependencies and other complex topics that are required when your apps leave your laptop and go to your real servers, ready to be used by your users.
Anyway, if you are going to use docker-compose for development or real purposes, these steps will help you manage the IPs between your containers:
- Get the local IP of your machine and store it in a variable like $MACHINE_HOST in a script such as startup.sh.
- Remove links or networks from your docker-compose.yml.
- Use $MACHINE_HOST to refer to another container from inside your container.
Example:
db:
  image: mysql:5.7.22
  container_name: db_ecommerce
  ports:
    - "5003:3306"
  environment:
    MYSQL_DATABASE: lumen
    MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD}

api-php:
  container_name: api_ecommerce
  ports:
    - "8020:80"
    - "445:443"
  environment:
    - DATABASE_HOST=$MACHINE_HOST
    - DATABASE_USER=$DATABASE_USER
    - DATABASE_PASSWORD=$DATABASE_PASSWORD
    - ETC=$ETC

web-react:
  container_name: react_ecommerce
  ports:
    - 3001:3000
  environment:
    - API_BASE_URL=$MACHINE_HOST:8020/api
- Finally, just run your startup.sh, which exports the variables (a sketch is shown below), and then the classic:
docker-compose up -d
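For reference, a minimal startup.sh could look like this. The way the IP is detected and the variable values are assumptions; adapt them to your machine and your secrets management:

#!/bin/bash
# detect the local IP of this machine (one common way on Linux)
export MACHINE_HOST=$(hostname -I | awk '{print $1}')

# example values for the variables referenced in docker-compose.yml
export DATABASE_USER=root
export DATABASE_PASSWORD=changeme

# start all containers with the exported variables
docker-compose up -d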
Here you can find more detailed steps on how to use the MACHINE_HOST variable:
Miscellaneous
Also, in your React app, to read the URL of your API using a variable instead of the proxy setting in package.json, you can use:
process.env.REACT_APP_API_BASE_URL
Check this to learn how to read environment variables from a React app.
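For example, somewhere in your React code (a sketch; create-react-app bakes the value into the bundle at build time, and the /products endpoint is just an example):

// the variable must start with REACT_APP_ and be defined at build time
const apiBaseUrl = process.env.REACT_APP_API_BASE_URL;

fetch(`${apiBaseUrl}/products`)
  .then(response => response.json())
  .then(data => console.log(data));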
Advice
- Use variables instead of hardcoded values in your docker-compose.yml file (see the .env sketch at the end of this answer).
- Separate your environments: development, testing, and production.
- Build only in the development stage. In other words, don't use build in your docker-compose.yml; at most it can be an alternative for local development.
- For the testing and production stages, just run the containers that were built and uploaded during the development stage (to a Docker registry).
- If you use a proxy or an environment variable to read the URL of your API in your React app, your build will only work on one machine. If you need to move it between several environments (testing, staging, uat, etc.), you must perform a new build, because the proxy or environment variable in React is hardcoded inside your bundle.js.
- This is not a problem only for React; it also exists in Angular, Vue, etc. Check the "Limitation 1: Every environment requires a separate build" section on this page.
- You can evaluate https://github.com/utec/geofrontend-server to fix the previously explained problem (and others, like authentication), if it applies to you.
- If your plan is to show your web app to real users, the web and the API must have different domains, and of course use HTTPS. Example:
- ecomerce.zenit.com for your react app
- api.zenit.com or ecomerce-api.zenit.com for your php api
- Finally, if you want to avoid this headache of infrastructure complications and you don't have a team of devops and sysadmins, you can use Heroku, DigitalOcean, OpenShift, or other similar platforms. Almost all of them are Docker compatible, so you just need to perform a git push of each repo with its Dockerfile inside. The platform will interpret your Dockerfile, deploy it, and assign you a ready-to-use HTTP domain for testing, or a nicer domain for production (after you acquire the domain and certificate).
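Regarding the first advice, docker-compose automatically reads a .env file located next to docker-compose.yml, so the values don't have to be hardcoded. A minimal sketch (the names and values are examples):

# .env (read automatically by docker-compose)
MACHINE_HOST=192.168.1.50
DATABASE_USER=root
DATABASE_PASSWORD=changeme

In docker-compose.yml you then reference them as ${DATABASE_PASSWORD}, as in the example above.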