Using Docker for multiple PHP applications

I recently moved from Wamp (WampServer) to Docker (Windows host). While using Wamp, I was able to have multiple projects in the following file structure:

- wamp64
  - www/
    - project1/ 
    - project2/
    - ....

On Wamp's Apache I had defined a couple of virtual hosts, and all of the projects used Wamp's MySQL database, each with its own schema.

So it was quite common within the day to switch context from project1 to project2 to project3, etc., by visiting either a URL like http://localhost/projectX or the corresponding virtual host.
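
For reference, the virtual host entries on Wamp's Apache looked roughly like this (a minimal sketch; the hostnames and paths are illustrative):

# httpd-vhosts.conf on Wamp (hostnames and paths are illustrative)
<VirtualHost *:80>
    ServerName project1.local
    DocumentRoot "C:/wamp64/www/project1"
</VirtualHost>

<VirtualHost *:80>
    ServerName project2.local
    DocumentRoot "C:/wamp64/www/project2"
</VirtualHost>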

This does not seem so straightforward with Docker, as far as I have seen. My first approach was to have a distinct Docker setup for each project:

- www/
  - project1/
       - dockerfile & docker-compose
  - project2/
       - dockerfile & docker-compose
  - projectX/
       - dockerfile & docker-compose
- data/ // this is where the MySQL data live

This does not seem very efficient compared to what I was used to with Wamp, since every time I want to change context I have to run docker-compose stop on the project I am currently working on and docker-compose up on the project I want to switch to, and vice versa.
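
In practice the context switch looks something like this (assuming one compose file per project directory, as in the structure above):

# stop the project I am currently working on
cd www/project1
docker-compose stop

# bring up the project I want to switch to
cd ../project2
docker-compose up -d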

I tried another approach: running all projects in a single Apache-PHP container (the entire www folder):

- www/
    dockerfile & docker-compose
    - project1/
    - project2/

which would let me have all projects available at once, but with this approach I face two serious issues:

  1. docker build takes too long, probably because of the increased number of files compared to a single project.
  2. I could not have more than one DB schema initialized in MySQL, so even though I managed to get 2 or 3 projects running, only one could communicate with its corresponding DB (one possible workaround, sketched below, is an init script that creates several schemas).
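
For the second issue, the official mysql image runs any *.sql or *.sh files mounted into /docker-entrypoint-initdb.d the first time the data directory is initialized, so one possible workaround is a small init script that creates one schema per project (the file and schema names below are illustrative):

-- init/01-schemas.sql, mounted into the mysql service with:  - ./init:/docker-entrypoint-initdb.d
-- creates one schema per project and lets the existing 'docker' user reach them
CREATE DATABASE IF NOT EXISTS project1_db;
CREATE DATABASE IF NOT EXISTS project2_db;
GRANT ALL PRIVILEGES ON project1_db.* TO 'docker'@'%';
GRANT ALL PRIVILEGES ON project2_db.* TO 'docker'@'%';

Note that these scripts only run when the MySQL data directory is empty, so an already-initialized data folder would need to be reset (or the schemas created manually) first.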

My docker-compose file for the first approach looks like this:

version: '3'

services:

  project1:
    build:
      context: .  # contents of the specific project directory
      dockerfile: .docker/Dockerfile

    image: project1

    ports:
      - 80:80

    volumes:
      - .:/app/project1

    links:
      - mysql

  mysql:

    image: mysql:5.7

    ports:
      - 13306:3306

    environment:

      MYSQL_DATABASE: docker
      MYSQL_USER: docker
      MYSQL_PASSWORD: docker
      MYSQL_ROOT_PASSWORD: docker

    volumes:
      - ../data:/var/lib/mysql

while my docker-compose file for the second approach looks like this:

version: '3'

services:

  web-project:
    build:
      context: .  # contents of the www directory
      dockerfile: .docker/Dockerfile

    image: web-project

    ports:
      - 80:80

    volumes:
      - ./project1:/app/project1
      - ./project2:/app/project2
      - ./projectX:/app/projectX

    links:
      - mysql

  mysql:

    image: mysql:5.7

    ports:
      - 13306:3306

    volumes:
      - ../data:/var/lib/mysql

Reference for persisting MySQL data: Docker-Compose persistent data MySQL
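
The linked reference essentially comes down to either a host bind mount (as above) or a named volume; a named-volume variant would look roughly like this:

version: '3'

services:
  mysql:
    image: mysql:5.7
    volumes:
      - mysql-data:/var/lib/mysql

volumes:
  mysql-data: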

Burbot answered 25/11, 2019 at 22:18 Comment(5)
Are you rebuilding your Docker images for every code change? If so, you can use volumes in development to mitigate this.Kelleekelleher
Why are you having to stop one Docker project and start another one - port conflict?Kelleekelleher
What specific benefits are you hoping to get from using Docker? Can you use your working non-Docker setup for day-to-day development, and only use the Docker setup for production deployment?Dight
@DavidMaze I originally migrated from Wamp to Docker because I faced an incompatibility issue between the memcache extension and PHP 7.3. Since my production environment is a LAMP stack, I think it is more appropriate to keep my local setup as close as possible to production's setup.Burbot
@Kelleekelleher it seems that getting to know Docker for a single project differs from using it across different projects. I am obviously doing something wrong, which is why I started this thread, because there are probably ways to improve how I am using Docker.Burbot

I think the best solution for you would be to run each project in its own container. Since containers are (or should be) lightweight and easy to bring up and down, the overhead of doing this should be minimal.

The difference between what I will show and your first approach is that the docker-compose file is going to orchestrate your containers for you. As a result, it should allow all of your containers (projects) to communicate with your database at the same time (provided your projects do not constantly overwrite each other and cause deadlocks).

Folder Structure:

- www/
    docker-compose.yml
    - project1/
      Dockerfile
    - project2/
      Dockerfile

Docker Compose

version: '3'
services:
  project1:
    build:
      context: ./project1  # automatically finds the Dockerfile
    container_name: project1
    ports:
      - 8081:80
    volumes:
      - ./project1:/app/project1
    links:
      - mysql
  project2:
    build:
      context: ./project2  # automatically finds the Dockerfile
    container_name: project2
    ports:
      - 8082:80
    volumes:
      - ./project2:/app/project2
    links:
      - mysql
  ...
  mysql:
    image: mysql:5.7
    ports:
      - 13306:3306
    volumes:
      - ./data:/var/lib/mysql

Then when you run docker-compose up, it will bring up the two project containers and a database container within the same network. Note that each project runs on its own port, so you will need to remember which port is linked to which container.
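
With the compose file above, a quick check could look like this (the ports come from the mappings above):

docker-compose up -d --build

# project1 is mapped to 8081, project2 to 8082, MySQL to 13306
curl http://localhost:8081/
curl http://localhost:8082/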

Deering answered 25/11, 2019 at 23:17 Comment(6)
The setup above can be expanded using a Traefik proxy with labels on each of your web project services to expose them on specific hostnames of your choosing. I have this running on my own local setup. It's almost midnight here right now, so I'll come back in the morning and send you an idea of how to do this, so you don't have to remember ports and can just use specific hostnames as you require.Colossal
I am getting an error message "Cannot locate specified Dockerfile: Dockerfile" even if I explicitly specify the .docker/Dockerfile location.Burbot
I included the dockerfile directive and the build worked OK!Burbot
Reading back through your problem statement, I did miss the fact that you were keeping your Dockerfile in .docker/Dockerfile. You can use a shared Dockerfile for each project; I would still use the context directive to make it clear that each container volume is in its own directory.Deering
I agree; although it can automatically find its own Dockerfile, I think it is clearer to use the directive, I guess for maintenance reasons etc.Burbot
Suppose I have an office management system with 5 clients; then I need to create the same project 5 times with different configurations, and I am also having trouble managing Git. How should I handle such a situation? Please help.Equivoque

Create a top-level folder and put this Dockerfile in it:

FROM webdevops/php-apache-dev:7.2

# Add microsoft SQL support to PHP

# add microsoft packages to apt sources
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list
# install needed system packages as well as a few nice to have utils
RUN DEBIAN_FRONTEND=noninteractive apt-get update && \
  DEBIAN_FRONTEND=noninteractive apt-get -y upgrade && \
  ACCEPT_EULA=Y apt-get -y install msodbcsql17 unixodbc-dev less joe iputils-ping traceroute telnet && \
  apt-get purge -y --auto-remove && \
  rm -rf /var/lib/apt/lists/*
# install sqlsrv php extensions
RUN pecl install sqlsrv pdo_sqlsrv
# load the pdo extension 
RUN echo extension=pdo_sqlsrv.so >> `php --ini | grep "Scan for additional .ini files" | sed -e "s|.*:\s*||"`/30-pdo_sqlsrv.ini
# load the mssql extension 
RUN echo extension=sqlsrv.so >> `php --ini | grep "Scan for additional .ini files" | sed -e "s|.*:\s*||"`/20-sqlsrv.ini

# our docroot
WORKDIR /app

In the top level folder run the following command:

docker build -t myphpdev:latest . -f Dockerfile

Now in the same top level folder create a docker-compose.yml and put the following in it:

version: '3'
services:
  web_debug:
    image: myphpdev:latest
    ports:
      - 1080:80
    volumes:
      - .:/app
    environment:
      - PHP_XDEBUG_ENABLED=1
      - PHP_DATE_TIMEZONE="America/Los_Angeles"
      - PHP_MEMORY_LIMIT="512M"
      - PHP_MAX_EXECUTION_TIME="600"
      - PHP_MAX_INPUT_TIME="60"
      - PHP_POST_MAX_SIZE="512M"
      - PHP_UPLOAD_MAX_FILESIZE="512M"
      - PHP_ERROR_REPORTING="E_ALL & ~E_DEPRECATED & ~E_STRICT"
      - PHP_DISPLAY_ERRORS="1"
      - PHP_DISPLAY_STARTUP_ERRORS="1"
      - PHP_DEBUGGER="xdebug"
      - XDEBUG_CONFIG=remote_host=host.docker.internal

  mysql:
    image: mysql:5.7
    ports:
      - 13306:3306
    volumes:
      - ./data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=password

# if you get errors about invalid or needing to use absolute paths
# you may need to down your stack then run the next line then up the stack
# SET COMPOSE_CONVERT_WINDOWS_PATHS=1

In that same top-level folder, create a folder named data for your database persistence and one folder for each of your sites. Each site will be available at http://localhost:1080/site_folder
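
Assuming two sites named site1 and site2 (illustrative names), the preparation described above could look like this:

mkdir data site1 site2
docker-compose up -d

# each site is then reachable under its folder name:
# http://localhost:1080/site1
# http://localhost:1080/site2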

Adjust or add any PHP_ environment variables as needed; the image will update php.ini based on the env vars when it comes up.

When running this way, each site will have to be set up with relative links instead of absolute ones, and they will all share the same DB instance.

The least resource-intensive configuration is for every site to share the same DB instance, with each site having its own database on that instance. You could also use table prefixing if for some reason all the sites had to use the same database name.
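
For that least resource-intensive option, creating one database per site on the shared instance could look roughly like this (database, user, and password values are illustrative):

-- connect to the shared instance, e.g.: mysql -h 127.0.0.1 -P 13306 -u root -p
CREATE DATABASE site1_db;
CREATE DATABASE site2_db;
CREATE USER 'site1'@'%' IDENTIFIED BY 'change-me';
CREATE USER 'site2'@'%' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON site1_db.* TO 'site1'@'%';
GRANT ALL PRIVILEGES ON site2_db.* TO 'site2'@'%';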

If you don't mind the extra resource usage, you could copy the mysql block, making sure you have a unique port number and data folder for each, and bring up a totally separate DB server for each site.

Mounting the host folder inside the container will allow you to edit the contents of the site folders from the host side, as well as debug with Xdebug in real time, without having to rebuild your Docker environment every time you want to iterate or switch context.

You can also set up virtual hosts by adding some Apache configs, if you don't want to use relative links in your sites, by mounting a config into /opt/docker/etc/httpd/vhost.common.d/

Check out the docs for the image referenced in the Dockerfile. They have tons of variations in terms of PHP version, web server, and base OS, as well as dev vs. production setups. I have included a link to their docs below.

WebDevOps ApachePHP Docker Docs

Emersion answered 26/11, 2019 at 1:9 Comment(6)
Hi Paul, where is my .docker/Dockerfile in this setup? I don't see it referenced anywhere.Burbot
This setup allows you to use raw folders with an IDE. No need for a docker file rebuild to iterate. That’s what makes it so fast. If you do require a custom docker env you can build and tag your docker image then replace the image referenced in the compose file with yours and still mount the folders onto your image for fast iteration without having to rebuild every time you edit a file. Does that make sense?Emersion
I didn't mention in my initial question that in my Dockerfile I install some extra modules for PHP, like pdo_mysql, memcache, etc. How are these going to be initialized if no Dockerfile is included?Burbot
The image in the compose file has just about every PHP module enabled. If it is missing one you need, then you will have to use the build-and-tag method I mentioned in my previous comment. If you are not clear on how that would work, I can update my answer to demonstrate how that would look/work.Emersion
I would appreciate it if you would update your response, Paul; it might be useful for me or other people facing similar concerns.Burbot
Omg, is this an accurate solution? Is this the way Docker "simplifies" things? Unbelievable.Diphase

I'd be super careful with Kyle's answer. Consider this example:

./project1/Dockerfile
./project2/Dockerfile
./shared.php

By using his suggestion you will not be able to use shared.php in either ./project1/Dockerfile or ./project2/Dockerfile.

So instead of

build:
  context: ./project1  # automatically finds the Dockerfile

Just do

build:
  context: .
  dockerfile: ./project1/Dockerfile

This tells the build that the context folder, which may contain shared files, is different from the folder where the Dockerfile resides, which is a very useful separation.
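
To make the separation concrete, with context: . a Dockerfile in ./project1 can copy the shared file; the base image below is only an assumption for illustration:

# ./project1/Dockerfile, built with context: . and dockerfile: ./project1/Dockerfile
FROM php:7.3-apache
# shared.php sits at the root of the build context, so COPY can see it
COPY shared.php /var/www/html/shared.php
COPY project1/ /var/www/html/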

More details about context here https://mcmap.net/q/53554/-how-to-include-files-outside-of-docker-39-s-build-context

Numbing answered 12/2, 2021 at 4:28 Comment(0)
