How to pass a Google Cloud application credentials file to a Docker container

I would like to pass my Google Cloud Platform service account JSON credentials file to a Docker container so that the container can access a Cloud Storage bucket. So far I have tried to pass the file as an environment variable on the run command, like this:

  • Using the --env flag: docker run -p 8501:8501 --env GOOGLE_APPLICATION_CREDENTIALS="/Users/gcp_credentials.json" -t -i image_name
  • Using the -e flag and even exporting the same env variable in the command line: docker run -p 8501:8501 -e GOOGLE_APPLICATION_CREDENTIALS="/Users/gcp_credentials.json" -t -i image_name

But nothing worked, and I always get the following error when running the docker container:

W external/org_tensorflow/tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.".

How do I pass the Google credentials file to a container running locally on my personal laptop?

Breastsummer answered 22/5, 2022 at 22:8 Comment(8)
If you are running on Compute Engine, use a volume mount. Then you can specify GOOGLE_APPLICATION_CREDENTIALS=/volume/mount/path as a normal environment variable inside your container.Detachment
Does this answer your question? Add a file in a docker imageCarnay
@JohnHanley It rather seems the situation is connecting from a local container to GCS in order to run TF2, even though the question doesn't literally state that.Carnay
@MartinZeitler - Hi Martin, I am not sure what you mean. The only Google service that supports running Docker is Compute Engine. That is why I said, "If you are running on Compute Engine".Detachment
@JohnHanley The question does not say where the container runs, but the docker command seemingly was issued in a local shell ...that's why I'd assume this scenario. It probably doesn't even matter where it runs, since the task is to add a config file into it.Carnay
Does my article help? medium.com/google-cloud/…Wake
Hi @JohnHanley, like Martin said, I'm running the container on my personal laptop. Sorry for forgetting to mention it in my post, which I have just edited accordingly. Thank youBreastsummer
Docker volume mounts work on your laptop as well.Detachment

I log into gcloud in my local environment, then share that JSON file as a volume at the same location in the container.

Here is a great post on how to do it, with the relevant extract below: Use Google Cloud user credentials when testing containers locally

Log in locally

To get your default user credentials in your local environment, you have to use the gcloud SDK. There are two commands for authentication:

  • gcloud auth login to get authenticated for all subsequent gcloud commands
  • gcloud auth application-default login to create your Application Default Credentials (ADC) locally, in a "well-known" location
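
For example, in a local terminal:

# Authenticate the gcloud CLI for subsequent gcloud commands
gcloud auth login

# Create the Application Default Credentials (ADC) file locally
gcloud auth application-default login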

Note location of credentials

The Google auth library tries to get valid credentials by performing checks in this order:

  • Look at the environment variable GOOGLE_APPLICATION_CREDENTIALS value. If it exists, use it, else…
  • Look at the metadata server (only on Google Cloud Platform). If it returns the correct HTTP codes, use it, else…
  • Look at the "well-known" location for a user credential JSON file.

The "well-known" locations are:

  • On Linux: ~/.config/gcloud/application_default_credentials.json
  • On Windows: %appdata%/gcloud/application_default_credentials.json
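
To confirm the file exists after logging in (Linux/macOS path taken from the list above):

# Should list the ADC file created by "gcloud auth application-default login"
ls -l ~/.config/gcloud/application_default_credentials.json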

Share volume with container

Therefore, you have to run your local docker run command like this:

ADC=~/.config/gcloud/application_default_credentials.json \
docker run \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
  -v ${ADC}:/tmp/keys/FILE_NAME.json:ro \
  <IMAGE_URL>

NB: this is only for local development; on Google Cloud Platform, the credentials for the service are inserted automatically for you.

Zoophilous answered 26/12, 2022 at 18:51 Comment(0)

You cannot "pass" an external path: GOOGLE_APPLICATION_CREDENTIALS is resolved inside the container, so you have to add the JSON file into the container itself, either by copying it into the image or by mounting it as a volume.
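
A minimal sketch of both approaches, reusing the image name and host path from the question:

# Option 1: add the file at build time, e.g. with these Dockerfile lines:
#   COPY gcp_credentials.json /app/gcp_credentials.json
#   ENV GOOGLE_APPLICATION_CREDENTIALS=/app/gcp_credentials.json

# Option 2: bind-mount the file at run time and point the variable at the
# path inside the container
docker run -p 8501:8501 \
  -v /Users/gcp_credentials.json:/tmp/gcp_credentials.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/gcp_credentials.json \
  -t -i image_name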

Carnay answered 22/5, 2022 at 22:29 Comment(0)

Two ways to do it:

Secrets - these work with Docker Swarm mode:

  • create Docker secrets
  • use the secret with a container via --secret (a docker service flag)

The advantage is that secrets are encrypted at rest and are decrypted only when mounted into containers; see the sketch below.
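
A minimal sketch, assuming Swarm mode is enabled (docker swarm init) and reusing the credentials path and image name from the question; the service name my_service is just a placeholder. Swarm mounts each secret at /run/secrets/<name> inside the container:

# Create a secret from the local credentials file
docker secret create gcp_credentials /Users/gcp_credentials.json

# --secret is a "docker service create" flag, so run the image as a service
docker service create \
  --name my_service \
  --publish 8501:8501 \
  --secret gcp_credentials \
  -e GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/gcp_credentials \
  image_name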

Encapsulate answered 23/5, 2022 at 9:9 Comment(0)

For Windows OS, the following works:

Create and run a new container from the latest image named "data-platform-pub":

  • Option -v bind-mounts the Google SDK folder as a volume
  • Option -i keeps STDIN open even if not attached
  • Option -t allocates a pseudo-TTY

docker run -it -v "%appdata%//gcloud"://root/.config/gcloud data-platform-pub:latest

NOTE: The command below is more minimal and also works:

docker run -it -v "%appdata%//gcloud//application_default_credentials.json"://root/.config/gcloud/application_default_credentials.json data-platform-pub:latest

I have a complete, detailed example of how to do this in this free and public article: https://medium.com/@markwkiehl/containerization-using-docker-469a0fa9dd69

DO NOT set the environment variable GOOGLE_APPLICATION_CREDENTIALS! It overrides the ADC flow. You want the ADC flow to ultimately resolve the required credentials by using the metadata server and the service account impersonation you have configured.
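
To verify that ADC resolves without that variable, one option (a sketch that assumes the gcloud SDK is installed in the image) is to run inside the container:

# Prints an access token obtained via the ADC flow; success means the
# mounted credentials are being picked up
gcloud auth application-default print-access-token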

My articles also show how to run a single Python script that uses the Google SDK and Google services both locally in a Docker container (as just described) and as a Google Cloud Run Job from the Docker image uploaded to Google Artifact Registry.

Jeremy answered 4/10 at 19:42 Comment(0)
