Access Google Container Registry without the gcloud client
I have a CoreOS Docker host that I want to start running containers on, but when I try to use the docker command to fetch an image from the Google Container Registry private registry (https://cloud.google.com/tools/container-registry/), I get a 403. I did some searching, but I'm not sure how to attach authentication (or where to generate the user+pass bundle to use with the docker login command).

Has anybody had any luck pulling from the Google private registry? I can't install the gcloud command because CoreOS doesn't come with Python, which gcloud requires.

docker run -p 80:80 gcr.io/prj_name/image_name
Unable to find image 'gcr.io/prj_name/image_name:latest' locally
Pulling repository gcr.io/prj_name/image_name
FATA[0000] HTTP code: 403

Update: after getting answers from @mattmoor and @Jesse:

The machine that I'm pulling from does have devstorage access:

curl -H 'Metadata-Flavor: Google' http://metadata.google.internal./computeMetadata/v1/instance/service-accounts/default/scopes
https://www.googleapis.com/auth/bigquery
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/datastore
----> https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.admin
https://www.googleapis.com/auth/sqlservice.admin
https://www.googleapis.com/auth/taskqueue
https://www.googleapis.com/auth/userinfo.email

Additionally, I tried the _token login method:

jenkins@riskjenkins:/home/andre$ ACCESS_TOKEN=$(curl -H 'Metadata-Flavor: Google' 'http://metadata.google.internal./computeMetadata/v1/instance/service-accounts/default/token' | cut -d'"' -f 4)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   142  100   142    0     0  14686      0 --:--:-- --:--:-- --:--:-- 15777
jenkins@riskjenkins:/home/andre$ echo $ACCESS_TOKEN
**************(redacted, but looks valid)
jenkins@riskjenkins:/home/andre$ docker login -e [email protected] -u _token -p $ACCESS_TOKEN http://gcr.io
Login Succeeded
jenkins@riskjenkins:/home/andre$ docker run gcr.io/prj_name/image_name
Unable to find image 'gcr.io/prj_name/image_name:latest' locally
Pulling repository gcr.io/prj_name/image_name
FATA[0000] HTTP code: 403
Porthole answered 27/3, 2015 at 0:58 Comment(0)

The Google Container Registry authentication scheme is to simply use:

username: '_token'
password: {oauth access token}

On Google Compute Engine you can login without gcloud with:

$ METADATA=http://metadata.google.internal./computeMetadata/v1
$ SVC_ACCT=$METADATA/instance/service-accounts/default
$ ACCESS_TOKEN=$(curl -H 'Metadata-Flavor: Google' $SVC_ACCT/token \
    | cut -d'"' -f 4)
$ docker login -e [email protected] -u '_token' -p $ACCESS_TOKEN https://gcr.io
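The cut -d'"' -f 4 step above assumes access_token is the first quoted value in the metadata response, which holds for the current JSON shape but is position-dependent. Here is a sketch of the same extraction against a made-up sample response (the token and expiry values are placeholders, not real metadata-server output), plus a sed variant that keys on the field name instead of its position:

```shell
# Hypothetical metadata response; a real one comes from $SVC_ACCT/token.
RESPONSE='{"access_token":"ya29.sample-token","expires_in":3599,"token_type":"Bearer"}'

# The cut-based extraction: split on double quotes, take field 4.
# Fields: 1={  2=access_token  3=:  4=ya29.sample-token
ACCESS_TOKEN=$(printf '%s' "$RESPONSE" | cut -d'"' -f 4)
echo "$ACCESS_TOKEN"

# A field-name-based alternative using sed, which still works if the
# JSON key order ever changes (no Python needed, so it suits CoreOS):
ACCESS_TOKEN2=$(printf '%s' "$RESPONSE" \
  | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
```

Either value can then be passed to docker login as the -p argument.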

Update on {asia,eu,us,b}.gcr.io

To access a repository hosted on one of these regional registries, log in to the corresponding hostname in the docker login command above.

Update on quotes around _token

As of Docker version 1.8, docker login requires the -u value to be quoted or to start with a letter.

Some diagnostic tips...

Check that you have the Cloud Storage scope via:

$ curl -H 'Metadata-Flavor: Google' $SVC_ACCT/scopes
...
https://www.googleapis.com/auth/devstorage.full_control
https://www.googleapis.com/auth/devstorage.read_write
https://www.googleapis.com/auth/devstorage.read_only
...

NOTE: "docker pull" requires "read_only", but "docker push" requires "read_write".
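The pull/push distinction above can be checked mechanically against the scopes listing. A small sketch, run here against a hypothetical scopes string rather than the live metadata server (full_control is treated as satisfying both operations):

```shell
# Hypothetical scope listing, standing in for $SVC_ACCT/scopes output.
SCOPES='https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.admin'

# "docker pull" needs at least read_only:
if printf '%s\n' "$SCOPES" | grep -qE 'devstorage\.(read_only|read_write|full_control)'; then
  echo 'pull: scope present'
fi

# "docker push" needs read_write (or full_control):
if printf '%s\n' "$SCOPES" | grep -qE 'devstorage\.(read_write|full_control)'; then
  echo 'push: scope present'
else
  echo 'push: scope missing'
fi
```

With this sample listing, pull is permitted but push is not, matching the 403 you would see on push from a read_only VM.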

To give this robot access to a bucket in another project, there are a few steps.

First, find out the VM service account (aka robot)'s identity via:

$ curl -H 'Metadata-Flavor: Google' $SVC_ACCT/email
[email protected]

Next, there are three important ACLs to update:

1) Bucket ACL (needed to list objects, etc)

PROJECT_ID=correct-answer-42
[email protected]
gsutil acl ch -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com

2) Bucket Default ACL (template for future #3)

gsutil defacl ch -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com

3) Object ACLs (only needed when the bucket is non-empty)

gsutil -m acl ch -R -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com
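The three ACL updates can be collected into one small script. This is only a sketch: the project ID is the placeholder from the steps above, the robot address is a made-up stand-in for your VM's service account email, and the gsutil invocations are echoed rather than executed so you can review them before running:

```shell
#!/bin/sh
# Placeholder values -- substitute your own project and robot email.
PROJECT_ID="correct-answer-42"
ROBOT="robot@example.iam.gserviceaccount.com"
BUCKET="gs://artifacts.${PROJECT_ID}.appspot.com"

# 1) bucket ACL  2) bucket default ACL  3) existing object ACLs
echo "gsutil acl ch -u ${ROBOT}:R ${BUCKET}"
echo "gsutil defacl ch -u ${ROBOT}:R ${BUCKET}"
echo "gsutil -m acl ch -R -u ${ROBOT}:R ${BUCKET}"
# Drop the echo prefixes once the commands look right.
```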

Part of why this isn't in our official documentation yet is that we want a better high-level story for it, but tl;dr we respect GCS ACLs.

Photometry answered 27/3, 2015 at 22:36 Comment(10)
This looks promising, I'll give it a shot tomorrowPorthole
FWIW, we require the GCS read scope for this to work. You can check this with: curl -H 'Metadata-Flavor: Google' metadata.google.internal./computeMetadata/v1/instance/… /default/scopesPhotometry
The GCE instance does have devstorage read-only capabilities, and although login succeeded, I was unable to pull the image. I edited the original question with my attempts at both yours and @Jesse's suggestions. Let me know if you have any other suggestionsPorthole
It looks like you have "http ://gcr.io" can you try https?Photometry
I tried doing the login with https but I got the same errorPorthole
I just tested this on the coreos-beta-633-1-0-v20150401 image on GCE. Aside from a line-break that stackoverflow added to my copy/paste, it worked for me. I get a 403 before the login, I log in, I then pull successfully. A few things to bear in mind: 1) access tokens expire 2) the VM must be in the same project (or the ACLs must have been updated accordingly) Feel free to reach out to [email protected] and I'm happy to discuss this a bit more synchronously to get this resolved.Photometry
Thanks Matt. I'll contact you guys via emailPorthole
Matt. I was able to get it to work. I spawned machines on an alternate project and was trying to access the project across ACLs. Thank you for the details -- I was wondering if this is anywhere in the public documentation?Porthole
Ah, that makes sense now. Not in our docs, but we respect Google Cloud Storage ACLs. As the formatting of these comments sucks, I'll amend my answer with some instructions. :)Photometry
it worked for me, but this login token is short-lived; you have to log in again for every session.Kindergartner

The answers here deal with accessing docker from within a Google Compute Engine instance.

If you want to work with the Google Container Registry from a machine outside Google Compute Engine (i.e. a local machine) using vanilla Docker, you can follow Google's instructions.

The two main methods are using an access token or a JSON key file.

Note that _token and _json_key are the actual values you provide for the username (-u).

Access Token

$ docker login -e [email protected] -u _token -p "$(gcloud auth print-access-token)" https://gcr.io

JSON Key File

$ docker login -e [email protected] -u _json_key -p "$(cat keyfile.json)" https://gcr.io
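The -p "$(cat keyfile.json)" form works because the double quotes keep the multi-line file content together as a single argument. A minimal sketch with a made-up key file (the fields below are placeholders, not a real service-account key; the docker login line is left commented since it needs real credentials):

```shell
# Write a made-up key file; a real one is downloaded from the Credentials page.
cat > /tmp/keyfile.json <<'EOF'
{
  "type": "service_account",
  "project_id": "placeholder-project"
}
EOF

# Quoted command substitution keeps the whole file as one docker argument:
KEY_CONTENT="$(cat /tmp/keyfile.json)"
# docker login -e [email protected] -u _json_key -p "$KEY_CONTENT" https://gcr.io
```

Without the quotes around the substitution, the shell would split the file on whitespace and docker login would only see the first word.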

To create a key file you can follow these instructions:

  1. Open the Credentials page.
  2. To set up a new service account, do the following:
    • Click Add credentials > Service account.
    • Choose whether to download the service account's public/private key as a standard P12 file, or as a JSON file that can be loaded by a Google API client library.
    • Your new public/private key pair is generated and downloaded to your machine; it serves as the only copy of this key. You are responsible for storing it securely.

You can view Google's documentation on generating a key file here.

Derward answered 18/11, 2015 at 4:21 Comment(2)
this answer you provided gave me some hope that I could get things working, after flushing a day or more down the drain trying to do something so simple as pushing an image up to the repo, but still no luck.Jackstay
I DID IT!!! I think part of the problem is that gcr.io is so similar to grc.com, Steve Gibson/Security Now's website. Probably typo'd that many many times. When I finally got it right, the Access Token method worked for me.Jackstay

There are two official ways:

  1. $ docker login -e [email protected] -u oauth2accesstoken -p "$(gcloud auth print-access-token)" https://gcr.io
  2. $ docker login -e [email protected] -u _json_key -p "$JSON_KEY" https://gcr.io

Note: The e-mail is not used, so you can put whatever you want in it.

Change gcr.io to whatever is your domain shown in your Google Container Registry (e.g. eu.gcr.io).

Option (1) only gives a temporary token, so you probably want option (2). To get that $JSON_KEY:

  1. Go to API Manager > Credentials
  2. Click "Create credentials" > Service account key:
    • Service account: New service account
      • Name: Anything you want, like Docker Registry (read-only)
      • Role: Storage (scroll down) > Storage Object Viewer
    • Key type: JSON
  3. Download as keyfile.json
  4. JSON_KEY=$(cat keyfile.json | tr '\n' ' ')
  5. Now you can use it.
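Step 4's tr '\n' ' ' flattens the key file to a single line so $JSON_KEY can be passed safely through tooling that mangles embedded newlines. A quick sketch with a hypothetical two-line file standing in for the real keyfile.json:

```shell
# Hypothetical two-line key file, standing in for the real keyfile.json.
printf '{"type": "service_account",\n "project_id": "demo"}\n' > /tmp/keyfile.json

# Step 4: replace every newline with a space so $JSON_KEY is one line.
JSON_KEY=$(cat /tmp/keyfile.json | tr '\n' ' ')

# $JSON_KEY now contains no newline characters and can be used as a
# single -p argument to docker login.
printf '%s' "$JSON_KEY" | wc -l   # counts 0 newlines
```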

Once logged in you can just run docker pull. You can also copy the updated ~/.dockercfg to preserve the settings.

Sos answered 31/3, 2017 at 14:27 Comment(2)
unknown shorthand flag: 'e' in -e See 'docker login --help'.Basic
You saved my time, Able to login with first command.Euphemie

When you created your VM did you give it the necessary scopes in order to be able to read from the registry?

gcloud compute instances create INSTANCE \
    --scopes https://www.googleapis.com/auth/devstorage.read_write

If you did so no further authentication is required.

Humic answered 29/3, 2015 at 15:44 Comment(1)
I don't think this is true, because "If this flag is not provided, the following scopes are used: googleapis.com/auth/devstorage.read_only, googleapis.com/auth/logging.write" I'm pulling from the repo, so read_only should be enough, but I still get a 403Porthole

There is an official Google Container Registry Auth Plugin published. You are welcome to try it and leave feedback/report issues.

Haematocryal answered 27/6, 2015 at 17:28 Comment(2)
Thanks, I'll try it out. I was able to get it to a good enough state by logging in to both the gcr and gcr endpoints with the access tokens. For some reason I needed to login to bothPorthole
@Andre: Are you using Google Container Registry Auth Plugin with Docker Build Step plugin or other plugin? I am also a bit confused by your statement "logging into both the gcr and gcr endpoints". Can you clarify? If you are using Docker Build Step plugin with GCR auth plugin, then you need to add the provided credentials to each individual Docker Build Step command.Haematocryal

I have developed a jenkins plugin that allows a slave running on GCE to login into google's registry using @mattmoor's solution. It might be useful to others. :)

It's available at https://github.com/Byclosure/gcr.io-login-plugin.

Affirmatory answered 12/5, 2015 at 15:44 Comment(1)
Please don't post link-only answers, as links may breakHarbor

© 2022 - 2024 — McMap. All rights reserved.