I have a Django project that is managed by multiple repositories:

- main repo - the Django project (incl. `manage.py`)
- multiple submodules - each represents an app that is installed in the Django project.
I adopted this structure using this tutorial, which allows me to manage my code in a cleaner way. However, using submodules turns out to be a major issue when it comes to GitLab's CI/CD, whose documentation on this topic is quite limited.
I managed to get this to work for another, similar project, but only for the PyPI package-building job:

GitLab CI job for PyPI packaging for a repo with submodules

```yaml
build-pypi-pkg:
  stage: build
  image: python:latest
  variables: !reference [.git_vars, variables]
  before_script:
    # Store the job token as a Git credential so the submodules can be fetched
    - git config --global credential.helper store
    - echo "Login URL https://${CI_REGISTRY_USER}:${CI_JOB_TOKEN}@gitlab.example.com"
    - echo "https://${CI_REGISTRY_USER}:${CI_JOB_TOKEN}@gitlab.example.com" > ~/.git-credentials
    - git submodule sync --recursive
    - git submodule update --init --recursive
    #- mkdir dist
  script:
    - echo Installing Twine for publishing PyPI package
    - pip install build twine
    - echo Building PyPI packages
    # Build a package for every submodule listed in .gitmodules
    - |
      for i in $(git submodule foreach --quiet 'echo "$name"')
      do
        echo "Building PyPI package for submodule '$i'"
        python -m build "components/$i"
      done
    - echo Publishing packages in ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/pypi
    # Upload every submodule's distribution files to the project's PyPI registry
    - |
      for i in $(git submodule foreach --quiet 'echo "$name"')
      do
        echo "Publishing PyPI package '$i'"
        TWINE_PASSWORD=${CI_JOB_TOKEN} TWINE_USERNAME=gitlab-ci-token python -m twine upload --verbose --repository-url ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/pypi "components/$i/dist/*"
      done
```
The code above allows me to handle a theoretically unlimited number of submodules, as long as they provide a valid packaging structure (a `setup.py` in the root and so on) and use the same credentials (it can of course be adapted for multiple credentials).
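For reference, this is roughly what each submodule needs to ship in its root for the loop above to work; a minimal sketch, where the package name and version are purely illustrative:

```python
# Minimal setup.py expected in the root of each submodule.
# The name and version are illustrative placeholders.
from setuptools import setup, find_packages

setup(
    name="my-django-app",
    version="0.1.0",
    packages=find_packages(),
)
```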
Now I am trying to adapt this to an image-building job (using either Docker-in-Docker or Kaniko). The final image needs to contain the full project, including all installed apps (submodules).

So far I have been unable to find a way of providing the required credentials when the submodules are being cloned (I tried with both static and relative URLs). I would end up with the following error for every submodule:

```
fatal: could not read Username for 'https://gitlab.example.com': No such device or address
```
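For context, the job I am trying to get working looks roughly like this; a sketch only, with the executor image and flags taken from GitLab's standard Kaniko example and the paths being illustrative:

```yaml
build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    # Have the runner clone the submodules as well
    # (assumption: this is the step that fails with the error above)
    GIT_SUBMODULE_STRATEGY: recursive
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```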
I have control over the following things:

- main repo
- all submodule repos - however, a possible solution should assume that I am e.g. referencing at least one submodule that I do not own (but can get a CI job token in order to execute CI/CD jobs for it)
- cluster - microk8s (Canonical's flavour of Kubernetes). Here I have created a secret that I used with `helm` when deploying my other projects (see the sketch after this list). I can add more if required.
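The secret was created along these lines; a sketch, where the secret name, server and token values are all placeholders:

```sh
# Registry-pull secret used by the helm deployments; all values are illustrative
microk8s kubectl create secret docker-registry gitlab-registry \
  --docker-server=registry.gitlab.example.com \
  --docker-username=<deploy-token-username> \
  --docker-password=<deploy-token>
```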
One possible solution is to package all submodules as PyPI packages in a separate job and then install those in my Dockerfile using `pip` (see the sketch below). However, this has numerous issues, including setting up the testing environment, running tests (I'd prefer not to package tests) and the ease of trying things out once the project is deployed (interactive shell in the running pod).
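That variant would boil down to something like the following Dockerfile; a sketch, where the index URL and package names are placeholders for whatever the packaging job publishes:

```dockerfile
FROM python:latest
# PYPI_INDEX_URL would point at the project's GitLab PyPI registry;
# the package names are illustrative placeholders
ARG PYPI_INDEX_URL
RUN pip install --index-url "${PYPI_INDEX_URL}" my-app-one my-app-two
COPY . /app
WORKDIR /app
```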
The other one is to create a custom image that contains both Kaniko and git, clone my repos (I can use e.g. `sed` to temporarily append credentials to the Dockerfile) and then clean up (uninstall git, remove credentials etc.) before I build the final image. This sounds like too much work.
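For completeness, the moving parts of that workaround would be roughly the following; a sketch that assumes the custom image already ships git and Kaniko, with the host, group and repo names being illustrative:

```sh
# Make git use the job token for every clone from the GitLab instance
git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/".insteadOf "https://gitlab.example.com/"
# Clone the main repo and all submodules into a build context
git clone --recurse-submodules "https://gitlab.example.com/group/main-repo.git" /build
# Build the final image from the prepared context
# (the cleanup of git and the credentials would have to happen before this)
/kaniko/executor --context /build --dockerfile /build/Dockerfile --destination "${CI_REGISTRY_IMAGE}:latest"
```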