Since the Mesos docs are pretty poor on this, I'm going to answer this wiki-style and update this answer as I go.
Strategies that should work
Host it on S3 (with networking-based access restrictions)
Host the .dockercfg file on S3. For better security, consider putting it in its own bucket, or at least in a bucket dedicated to storing secrets. Writing a security policy that actually locks the S3 bucket down so that only Mesos can see it presents some interesting challenges, but it can be done.
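Getting the file into the bucket might look like this (bucket name and region are the placeholder values used throughout this answer, and this assumes you have the AWS CLI configured):

```shell
# Upload the .dockercfg to the dedicated secrets bucket.
# "my-s3-bucket-name" and "eu-west-1" are placeholders from this answer.
aws s3 cp ~/.dockercfg s3://my-s3-bucket-name/.dockercfg --region eu-west-1
```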
Mesos task configuration:
{
  ...
  "uris": ["https://s3-eu-west-1.amazonaws.com/my-s3-bucket-name/.dockercfg"]
  ...
}
S3 bucket policy (using a VPC Endpoint):
Note: this policy lets the allowed principal do anything, which is too sloppy for production, but should help when debugging in a test cluster.
{
  "Id": "Policy123456",
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "Stmt123456",
    "Action": "s3:*",
    "Effect": "Allow",
    "Resource": [
      "arn:aws:s3:::my-s3-bucket-name",
      "arn:aws:s3:::my-s3-bucket-name/*"
    ],
    "Condition": {
      "StringEquals": {
        "aws:sourceVpce": "vpce-my-mesos-cluster-vpce-id"
      }
    },
    "Principal": "*"
  }]
}
You'll also need a VPC Endpoint configured, to give you a VPCE ID to plug into the S3 bucket condition above. (If you don't use VPC endpoints, you could presumably match on a VPC ID with the aws:sourceVpc condition key instead.)
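Creating the endpoint with the AWS CLI looks roughly like this; the VPC and route table IDs are placeholders for your own, and this is a sketch rather than a complete walkthrough:

```shell
# Create a gateway VPC endpoint for S3 in the cluster's VPC.
# vpc-xxxxxxxx and rtb-xxxxxxxx are placeholders for your own IDs.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-xxxxxxxx \
  --service-name com.amazonaws.eu-west-1.s3 \
  --route-table-ids rtb-xxxxxxxx
```

The command prints the endpoint description, including the vpce- ID you need for the bucket policy.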
You can check whether this is working by going to the Mesos UI (if you are using DCOS, note this is not the pretty DCOS UI) and seeing whether tasks with your app's name appear in either the Active Tasks or Completed Tasks lists.
Tempting strategies that don't work (yet)
Host it on S3 (with signed URLs)
In this S3 variant, rather than use networking-based access restrictions, we use a signed URL to the .dockercfg file instead.
The Mesos task config should look like:
{
  ...
  "uris": ["https://my-s3-bucket/.dockercfg?AWSAccessKeyId=foo&Expires=bar&Signature=baz"]
  ...
}
Unfortunately the above S3 signed URL strategy does not work, due to MESOS-1686: any downloaded file keeps the remote filename exactly, including the query string, leading to a filename like ".dockercfg?AWSAccessKeyId=foo&Expires=bar&Signature=baz". Since the Docker client only recognises a file named exactly ".dockercfg", it never sees the auth credentials.
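You can see the filename problem locally: the fetcher derives the local filename from the URI, query string and all, which a quick basename simulation makes obvious:

```shell
# Simulate how the fetcher names the downloaded file: it keeps
# everything after the last '/', including the query string.
URI='https://my-s3-bucket/.dockercfg?AWSAccessKeyId=foo&Expires=bar&Signature=baz'
basename "$URI"
# prints ".dockercfg?AWSAccessKeyId=foo&Expires=bar&Signature=baz",
# which the Docker client does not recognise as a .dockercfg
```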
Transfer the .dockercfg file directly to each slave
One could SCP the .dockercfg to each Mesos slave. While this is a quick fix, it:
- requires knowing all the slaves in advance
- does not scale as new slaves are added to the cluster
- requires SSH access to the slaves, which are provisioned inside their own VPC (hence their IP addresses are often in the 10.0.[blah] range).
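The manual version, assuming you know the slave IPs (the addresses below are hypothetical), would look like:

```shell
# Copy the .dockercfg to every known slave. The IP list is a
# hypothetical example; in practice you would derive it from your
# cluster inventory.
SLAVES="10.0.0.11 10.0.0.12 10.0.0.13"
for host in $SLAVES; do
  scp ~/.dockercfg "core@${host}:/home/core/.dockercfg"
done
```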
This could be turned into a more viable production approach if automated with a Configuration Management tool like Chef, which would run on the slaves and pull the .dockercfg file into the right place.
This will lead to a config like:
{
  ...
  "uris": ["file:///home/core/.dockercfg"]
  ...
}
Here 'core' is the default user on the CoreOS-based Mesos slaves, and by convention Docker expects the .dockercfg to live in the home directory of the user invoking it.
Update: this should be the most reliable approach, but I have not found a way to make it work yet. The app stays eternally stuck in the 'Deploying' phase as far as Marathon is concerned.
Use a keystore service
As we are dealing with usernames and passwords, something like the AWS Key Management Service (or even CloudHSM at the extreme) seems like it should be a good fit. But AFAIK Mesos has no built-in support for this, and we are handling a whole file rather than individual variables.
Troubleshooting
After you have set up your solution of choice, you may find that the .dockercfg file is being pulled down OK but your app is still stuck in the 'Deploying' phase. Check these things...
Ensure your .dockercfg is the right format for the Mesos Docker version
At some point, the format for the 'auth' field was changed. If the .dockercfg you supply doesn't match this format then the docker pull will silently fail. The format that the Mesos Docker version on the cluster slaves expects is:
{
  "https://index.docker.io/v1/": {
    "auth": "[base64 of the username:password]",
    "email": "[your registry email address]"
  }
}
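You can generate a file in this format from the command line; the credentials below are obviously placeholders for your own:

```shell
# Build a .dockercfg with the base64-encoded "username:password" auth
# field. "myuser"/"mypassword" and the email are placeholder values.
AUTH=$(printf '%s' 'myuser:mypassword' | base64)
cat > .dockercfg <<EOF
{
  "https://index.docker.io/v1/": {
    "auth": "${AUTH}",
    "email": "user@example.com"
  }
}
EOF
```

Note the printf '%s' form: piping with echo would include a trailing newline in the encoded value, which breaks the auth field.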
Do not use port 80 for your app
If you are trying to deploy a Web app, make sure you did not use host port 80. It's not written anywhere in the docs, but the Mesos Web services claim port 80 for themselves, and if you try to take 80 for your own app it will just hang forever. The astute reader will notice that, among other reasons, this is why the Mesosphere "Oinker" Web app binds to the slightly unusual choice of port 0 (which asks Mesos to allocate a random available port) instead.
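In a Marathon app definition that means mapping to host port 0; a sketch follows, in which the app id, image name, and container port are hypothetical values:

```json
{
  "id": "/my-web-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "example/my-web-app",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0 }
      ]
    }
  },
  "uris": ["https://s3-eu-west-1.amazonaws.com/my-s3-bucket-name/.dockercfg"]
}
```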