Why is the postgres container ignoring /docker-entrypoint-initdb.d/* in GitLab CI?

GitLab CI keeps ignoring the SQL files in /docker-entrypoint-initdb.d/* in this project.

Here is the docker-compose.yml:

version: '3.6'

services:

  testdb:
    image: postgres:11
    container_name: lbsn-testdb
    restart: always
    ports:
      - "65432:5432"
    volumes:
      - ./testdb/init:/docker-entrypoint-initdb.d

Here is the .gitlab-ci.yml:

stages:
  - deploy

deploy:
  stage: deploy
  image: debian:stable-slim
  script:
    - bash ./deploy.sh

The deployment script basically uses rsync to deploy the content of the repository to the server via SSH:

rsync -rav --chmod=Du+rwx,Dgo-rwx,u+rw,go-rw -e "ssh -l gitlab-ci" --exclude=".git" --delete ./ "gitlab-ci@$DEPLOY_SERVER:test/"

and then ssh's into the server to stop and restart the container:

ssh "gitlab-ci@$DEPLOY_SERVER" "cd test && docker-compose down && docker-compose up --build --detach"

This all goes well, but when the container starts up, it is supposed to run all the files in /docker-entrypoint-initdb.d/*, as documented for the official Postgres image.

But instead, when doing docker logs -f lbsn-testdb on the server, I can see it stating

/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*

and I have no clue why that happens. When I run this container locally, or even when I SSH to that server, clone the repo, and bring up the containers manually, everything works and the SQL files are run. It just doesn't work when GitLab CI does it.

Any ideas on why that is?

Deen answered 8/5, 2019 at 17:20 Comment(7)
Weird, but here's my shot in the dark: this line indicates that there doesn't seem to be any file that could be run. Could you add something like docker exec your_db_container ls -l /docker-entrypoint-initdb.d/ to your CI config, just to make sure that the SQL file is really there? – Stypsis
Yes, it is really there. – Deen
Did you check whether there are still volumes around by running docker volume ls? If so, try to delete them (if there is nothing important in them, of course; if there is, just rename the service in the docker-compose.yml). If this helps, try to change docker-compose down in your script to docker-compose down -v. – Stypsis
Also, is there a specific reason why you've added --build to your command? You didn't specify any build part in your docker-compose.yml, so... – Stypsis
Sorry, the --build parameter was a leftover from breaking this down to a minimal example. But that's not what went wrong; it was something really stupid, see my own answer. – Deen
One note: your file extension should be .sql, not anything else such as .ddl. Check out the link @Stypsis sent to see why. – Magnificat
bellackn gave good advice. docker exec your_db_container ls -l /docker-entrypoint-initdb.d/ showed me the dir was empty; I forgot I had moved its location in my repo. – Obstinate

This was easier than I expected, and it ultimately had nothing to do with GitLab CI but with file permissions.

I passed --chmod=Du+rwx,Dgo-rwx,u+rw,go-rw to rsync, which looked really secure because only the owning user can do anything. I confess that I probably copy-pasted it from somewhere on the internet. But the files are then mounted into the Docker container, and in there they have those permissions as well:

-rw------- 1 1005 1004 314 May  8 15:48 100-create-database.sql

On the host, my gitlab-ci user owns those files; inside the container they therefore appear as owned by a user with UID 1005, and no permissions are granted to anyone else.

Inside the container, however, the entrypoint runs as the postgres user, which can't read those files. Instead of complaining about that, it just ignores them. That might be something to create an issue about…
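
The entrypoint's behavior follows from how it iterates over the init directory. Here is a simplified sketch of that loop (not the actual script, which handles more file types and invokes psql differently):

for f in /docker-entrypoint-initdb.d/*; do
  case "$f" in
    *.sh)  echo "running $f"; . "$f" ;;        # shell scripts are sourced
    *.sql) echo "running $f"; psql -f "$f" ;;  # SQL files are fed to psql
    *)     echo "ignoring $f" ;;               # anything else is ignored
  esac
done

If the directory is not readable by the postgres user, the glob never expands, $f stays the literal string /docker-entrypoint-initdb.d/*, and execution falls through to the ignoring branch, which is exactly the log line above.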

Now that I pass --chmod=D755,F644, it looks like this:

-rw-r--r--  1 1005 1004  314 May  8 15:48 100-create-database.sql

and the docker logs say

/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/100-create-database.sql

Too easy to think of in the first place :-/
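
For reference, the full rsync invocation with the corrected flags would be (same placeholders as in the question):

rsync -rav --chmod=D755,F644 -e "ssh -l gitlab-ci" --exclude=".git" --delete ./ "gitlab-ci@$DEPLOY_SERVER:test/"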

Deen answered 9/5, 2019 at 11:51 Comment(0)

If you have already run the postgres service before, the init files will be ignored when you restart it (the entrypoint only runs them against an empty data directory), so try to use --build to build the image again:

docker-compose up --build -d

And before you run it again:

Check the existing volumes with

docker volume ls

Then remove the one that you are using for your pg service with

docker volume rm {volume_name}

Make sure that the volume is not used by a container; if it is, remove the container as well.
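
Alternatively, docker-compose can take the volumes down together with the containers, which achieves the same as the manual steps above:

docker-compose down -v       # also removes the volumes belonging to this compose project
docker-compose up --build -d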

Summons answered 13/4, 2020 at 8:30 Comment(0)

I found this topic while investigating a similar problem with a PostgreSQL installation using the docker-compose tool.

The solution is basically the same. For the provided configuration:

version: '3.6'

services:

  testdb:
    image: postgres:11
    container_name: lbsn-testdb
    restart: always
    ports:
      - "65432:5432"
    volumes:
      - ./testdb/init:/docker-entrypoint-initdb.d

Your deployment script should set 0755 permissions on your postgres container volume, e.g. chmod -R 0755 ./testdb in this case. It is important to make all subdirectories visible, so the -R option is required.

The official Postgres Alpine image runs under an internal postgres user with UID 70 (the Debian-based images use UID 999). Your application user on the host most likely has a different UID, like 1000 or something similar. That is why the postgres init script skips installation steps: it runs into permission errors. This issue has existed for several years and is still present in the latest PostgreSQL version (currently 12.1).
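
If you want to verify which UID an image actually uses, you can ask the image itself (output abbreviated):

docker run --rm postgres:12.1-alpine id postgres
# uid=70(postgres) gid=70(postgres)
docker run --rm postgres:11 id postgres
# uid=999(postgres) gid=999(postgres)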

Please be aware of the security implications of world-readable init files. It is good practice to use shell environment variables to pass secrets into the init script.

Here is a docker-compose example:

 postgres:
    image: postgres:12.1-alpine
    container_name: app-postgres
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - APP_POSTGRES_DB
      - APP_POSTGRES_SCHEMA
      - APP_POSTGRES_USER
      - APP_POSTGRES_PASSWORD
    ports:
      - '5432:5432'
    volumes:
      - $HOME/app/conf/postgres:/docker-entrypoint-initdb.d
      - $HOME/data/postgres:/var/lib/postgresql/data

The corresponding create-users.sh script for creating users may look like this:

#!/bin/bash

set -o nounset
set -o errexit
set -o pipefail

POSTGRES_USER="${POSTGRES_USER:-postgres}"
POSTGRES_PASSWORD="${POSTGRES_PASSWORD}"
APP_POSTGRES_DB="${APP_POSTGRES_DB:-app}"
APP_POSTGRES_SCHEMA="${APP_POSTGRES_SCHEMA:-app}"
APP_POSTGRES_USER="${APP_POSTGRES_USER:-appuser}"
APP_POSTGRES_PASSWORD="${APP_POSTGRES_PASSWORD:-app}"

DATABASE="${APP_POSTGRES_DB}"

# Create single database.
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "CREATE DATABASE ${DATABASE}"

# Create app user.
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "CREATE USER ${APP_POSTGRES_USER} SUPERUSER PASSWORD '${APP_POSTGRES_PASSWORD}'"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "GRANT ALL PRIVILEGES ON DATABASE ${DATABASE} TO ${APP_POSTGRES_USER}"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --dbname "${DATABASE}" --command "CREATE SCHEMA ${APP_POSTGRES_SCHEMA} AUTHORIZATION ${APP_POSTGRES_USER}"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "ALTER USER ${APP_POSTGRES_USER} SET search_path = ${APP_POSTGRES_SCHEMA},public"
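
Note that the compose file above expects the referenced variables in the calling shell's environment (or an .env file next to it); the values below are placeholders:

export POSTGRES_USER=postgres
export POSTGRES_PASSWORD=changeme
export APP_POSTGRES_DB=app
export APP_POSTGRES_SCHEMA=app
export APP_POSTGRES_USER=appuser
export APP_POSTGRES_PASSWORD=changeme
docker-compose up -d
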
Seaddon answered 12/1, 2020 at 22:38 Comment(0)

I had the same error, but I used MongoDB as a database. The initialization principle is the same, so I will describe my solution.

The solution I found is to create a Dockerfile that copies the database initialization file directly into the folder the database entrypoint reads. In this case, I reference the Dockerfile directly in docker-compose and run the docker compose build command in .gitlab-ci.yml.

The project structure is shown below. I have omitted the details and left only what is necessary.

# Project structure
|-pkg/
|-tests/
| |-docker/
| | |-mongo_init_data/
| | | |-test_data.js # script for DB
| | |-compose.yml # docker-compose file that will be launched in the pipeline
| | |-Dockerfile.mongo # Dockerfile for creating a DB container 
| |-unit_tests/
|-auth_module.py
|-Dockerfile

The Dockerfile contains the base image and file copying. You can specify more settings here, but this was enough for me.

# Dockerfile.mongo
FROM mongo:latest

COPY ./mongo_init_data /docker-entrypoint-initdb.d

The mongo_init_data folder contains scripts that the database must execute during initialization. IMPORTANT: relative paths in the docker-compose file are resolved from the directory where the file is located in the project structure. For this reason, you sometimes have to go up several directories for the build context.

Below you can see that I am building both the mongo image and the application image from Dockerfiles.

# compose.yml
name: auth_module_test

services:
  mongo:
    build: 
      dockerfile: ./Dockerfile.mongo #<-----------------------
    container_name: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    command: mongod --port ${MONGO_PORT}
    restart: always
    ports:
      - ${MONGO_PORT}:${MONGO_PORT}
    
  auth_module:
    image: auth_module:test
    build:
      context: ../../. #<-----------------------
    container_name: auth_module_test
    environment:
      MONGO_PASSWORD: ${MONGO_PASSWORD}
      MONGO_USERNAME: ${MONGO_USERNAME}
      MONGO_PORT: ${MONGO_PORT}
      MONGO_HOST: ${MONGO_HOST}
    command: >
      bash -c "sleep 5 && python3 -m unittest discover ./tests/unit_tests/"

Again omitting implementation details, this is how running the docker-compose file looks. The most important thing here is to destroy the volumes after the pipeline has run; the -v flag is used for this. If you do not, runners can reuse old volumes when starting containers; the database will then be considered initialized and the startup scripts will not be executed.

--abort-on-container-exit stops all containers if any container stops.

# .gitlab-ci.yml
stages:
  - Unittests

unittest:
  stage: Unittests
  script: 
    - docker compose -f ./tests/docker/compose.yml down -v
    - docker compose -f ./tests/docker/compose.yml build #<-----------------------
    - docker compose -f ./tests/docker/compose.yml up --abort-on-container-exit
  after_script:
    - docker compose -f ./tests/docker/compose.yml down -v

This way I managed to initialize the database with the initial data.

Cozza answered 12/9 at 19:46 Comment(0)
