How do I deploy updated Docker images to Amazon ECS tasks?
U

12

237

What is the right approach to make my Amazon ECS tasks update their Docker images, once said images have been updated in the corresponding registry?

Universe answered 17/1, 2016 at 15:36 Comment(0)
U
5

I created a script for deploying updated Docker images to a staging service on ECS, so that the corresponding task definition refers to the current versions of the Docker images. I don't know for sure if I'm following best practices, so feedback would be welcome.

For the script to work, you need either a spare ECS instance or a deploymentConfiguration.minimumHealthyPercent value so that ECS can steal an instance to deploy the updated task definition to.

My algorithm is like this:

  1. Tag Docker images corresponding to containers in the task definition with the Git revision.
  2. Push the Docker image tags to the corresponding registries.
  3. Deregister old task definitions in the task definition family.
  4. Register new task definition, now referring to Docker images tagged with current Git revisions.
  5. Update service to use new task definition.

My code pasted below:

deploy-ecs

#!/usr/bin/env python3
import subprocess
import sys
import os.path
import json
import re
import argparse
import tempfile

_root_dir = os.path.abspath(os.path.normpath(os.path.dirname(__file__)))
sys.path.insert(0, _root_dir)
from _common import *


def _run_ecs_command(args):
    run_command(['aws', 'ecs', ] + args)


def _get_ecs_output(args):
    return json.loads(run_command(['aws', 'ecs', ] + args, return_stdout=True))


def _tag_image(tag, qualified_image_name, purge):
    log_info('Tagging image \'{}\' as \'{}\'...'.format(
        qualified_image_name, tag))
    log_info('Pulling image from registry in order to tag...')
    run_command(
        ['docker', 'pull', qualified_image_name], capture_stdout=False)
    # 'docker tag -f' was removed in Docker 1.12; plain 'docker tag'
    # overwrites an existing tag.
    run_command(['docker', 'tag', qualified_image_name, '{}:{}'.format(
        qualified_image_name, tag), ])
    log_info('Pushing image tag to registry...')
    run_command(['docker', 'push', '{}:{}'.format(
        qualified_image_name, tag), ], capture_stdout=False)
    if purge:
        log_info('Deleting pulled image...')
        run_command(
            ['docker', 'rmi', '{}:latest'.format(qualified_image_name), ])
        run_command(
            ['docker', 'rmi', '{}:{}'.format(qualified_image_name, tag), ])


def _register_task_definition(task_definition_fpath, purge):
    with open(task_definition_fpath, 'rt') as f:
        task_definition = json.loads(f.read())

    task_family = task_definition['family']

    tag = run_command([
        'git', 'rev-parse', '--short', 'HEAD', ], return_stdout=True).strip()
    for container_def in task_definition['containerDefinitions']:
        image_name = container_def['image']
        _tag_image(tag, image_name, purge)
        container_def['image'] = '{}:{}'.format(image_name, tag)

    log_info('Finding existing task definitions of family \'{}\'...'.format(
        task_family
    ))
    existing_task_definitions = _get_ecs_output(['list-task-definitions', ])[
        'taskDefinitionArns']
    for existing_task_definition in [
        td for td in existing_task_definitions if re.match(
            r'arn:aws:ecs:[^:]+:[^:]+:task-definition/{}:\d+'.format(
                re.escape(task_family)),
            td)]:
        log_info('Deregistering task definition \'{}\'...'.format(
            existing_task_definition))
        _run_ecs_command([
            'deregister-task-definition', '--task-definition',
            existing_task_definition, ])

    with tempfile.NamedTemporaryFile(mode='wt', suffix='.json') as f:
        task_def_str = json.dumps(task_definition)
        f.write(task_def_str)
        f.flush()
        log_info('Registering task definition...')
        result = _get_ecs_output([
            'register-task-definition',
            '--cli-input-json', 'file://{}'.format(f.name),
        ])

    return '{}:{}'.format(task_family, result['taskDefinition']['revision'])


def _update_service(service_fpath, task_def_name):
    with open(service_fpath, 'rt') as f:
        service_config = json.loads(f.read())
    services = _get_ecs_output(['list-services', ])[
        'serviceArns']
    for service in [s for s in services if re.match(
        r'arn:aws:ecs:[^:]+:[^:]+:service/{}$'.format(
            re.escape(service_config['serviceName'])),
        s
    )]:
        log_info('Updating service with new task definition...')
        _run_ecs_command([
            'update-service', '--service', service,
            '--task-definition', task_def_name,
        ])


parser = argparse.ArgumentParser(
    description="""Deploy latest Docker image to staging server.
The task definition file is used as the task definition, whereas
the service file is used to configure the service.
""")
parser.add_argument(
    'task_definition_file', help='Your task definition JSON file')
parser.add_argument('service_file', help='Your service JSON file')
parser.add_argument(
    '--purge_image', action='store_true', default=False,
    help='Purge Docker image after tagging?')
args = parser.parse_args()

task_definition_file = os.path.abspath(args.task_definition_file)
service_file = os.path.abspath(args.service_file)

os.chdir(_root_dir)

task_def_name = _register_task_definition(
    task_definition_file, args.purge_image)
_update_service(service_file, task_def_name)

_common.py

import sys
import subprocess


__all__ = ['log_info', 'handle_error', 'run_command', ]


def log_info(msg):
    sys.stdout.write('* {}\n'.format(msg))
    sys.stdout.flush()


def handle_error(msg):
    sys.stderr.write('* {}\n'.format(msg))
    sys.exit(1)


def run_command(
        command, ignore_error=False, return_stdout=False, capture_stdout=True):
    if not isinstance(command, (list, tuple)):
        command = [command, ]
    command_str = ' '.join(command)
    log_info('Running command {}'.format(command_str))
    try:
        if capture_stdout:
            stdout = subprocess.check_output(command)
        else:
            subprocess.check_call(command)
            stdout = None
    except subprocess.CalledProcessError as err:
        if not ignore_error:
            handle_error('Command failed: {}'.format(err))
    else:
        return stdout.decode() if return_stdout and stdout is not None else None
Universe answered 20/1, 2016 at 10:25 Comment(4)
This is overkill. Should be possible to deploy via terraform or just single ecs-cli line.Tsuda
@Tsuda I'm using Terraform to update the ECS task image. That's as overkill as the above python-code. The steps required are as complicated.Quoit
Really overkill; I put a simple script in my answer that does what the highest-rated answers propose. Have a look.Pulchia
github.com/silinternational/ecs-deploy looks like overkill that is being maintained. :)Coarsegrained
B
218

If your task is running under a service you can force a new deployment. This forces the task definition to be re-evaluated and the new container image to be pulled.

aws ecs update-service --cluster <cluster name> --service <service name> --force-new-deployment
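If your tooling is Python rather than the CLI, the same call is available through boto3; a sketch (the `client` argument is only there so the function can be exercised without AWS credentials):

```python
def force_new_deployment(cluster, service, client=None):
    """Ask ECS for a fresh deployment so the task definition is
    re-evaluated and the container image re-pulled (the same effect
    as the --force-new-deployment flag above)."""
    if client is None:
        import boto3  # assumed installed and configured with credentials
        client = boto3.client("ecs")
    return client.update_service(
        cluster=cluster,
        service=service,
        forceNewDeployment=True,
    )
```

boto3 also provides a `services_stable` waiter you can call afterwards to block until the rollout settles.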
Bustard answered 1/2, 2018 at 21:48 Comment(15)
I think for this to work you need to make sure that there are enough resources on your ECS instances to deploy an additional task of the same size. I assume that AWS tries to essentially perform a hot swap, waiting for a new task instance to be pre-booted before terminating the old one. If you don't, it just keeps adding "deployments" entries with 0 running instances.Fogle
@AlexFedulov, yep, I think you are correct. In order to not incur downtime when creating a new deployment you can either 1) Provision enough instances to deploy the new version alongside the old version. This can be achieved with autoscaling. 2) Use the Fargate deployment type. You can avoid allocating extra resources by setting the service's "minimum healthy percent" parameter to 0 to allow ECS to remove your old service before deploying the new one. This will incur some downtime, though.Bustard
Unknown options: --force-new-deploymentPatti
Unknown options: --force-new-deployment: upgrade awscliGowon
You might also need to add --region <region> flagParaboloid
Searched for half of internet, that's all I really needed, just updating to latest docker image, nothing else. Btw if someone is interested run-task works similar way you can override even task definition command without creating new taskTsuda
I tried this command, but it does not update the container with the new image; it spins up another container with the same old image. So I end up having two containers running even though in the service I have specified desired count = 1.Accomplice
For me this command is creating a new deployment, however, it is not picking the new image with same tag.Lexicostatistics
@Accomplice you need to wait for the old container to be terminated (or you can terminate it yourself), for you to see changes that came with the new image. Otherwise if you still have 2 containers running, the old one will still be handling trafficMonorail
You can set the "minimum healthy percent" param to 0 and ECS will swap out the old containers for the new ones.Bustard
Bear in mind this only applies if you are overwriting the image, not if you tag it with a new version.Disprize
Works for me when I have AWS::ECS::Service with DeploymentConfiguration: MinimumHealthyPercent: 50 MaximumPercent: 200Scroll
In recent versions of the awscli, you'll need to use --no-cli-pager to prevent the command from prompting the user for input.Dexamethasone
Is there any way to do this from AWS console ? And I don't understand, what good is ECS if this is what it takes to deploy new changes every time. I mean I want to push my code on prod almost every day, then this doesn't look so good. '--force-new-deployment' does not look good to do each and every time I want to push my code on prod. Is there any better way for CI CD on ECS ? Any links will be helpful.Mcarthur
Agreed, @TusharJDudhatra, I thought this was pretty annoying as well. Haven't used ECS since 2018 so maybe there's a better way now. Let me know if you find something and I will edit the answer.Bustard
M
89

Every time you start a task (either through the StartTask and RunTask API calls or that is started automatically as part of a Service), the ECS Agent will perform a docker pull of the image you specify in your task definition. If you use the same image name (including tag) each time you push to your registry, you should be able to have the new image run by running a new task. Note that if Docker cannot reach the registry for any reason (e.g., network issues or authentication issues), the ECS Agent will attempt to use a cached image; if you want to avoid cached images from being used when you update your image, you'll want to push a different tag to your registry each time and update your task definition correspondingly before running the new task.
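Updating the task definition to point at a fresh tag is just a dict transformation on the containerDefinitions list. A minimal sketch of that step (function names are mine, not part of any ECS tooling):

```python
import copy

def _strip_tag(image):
    # Treat text after the last ':' as a tag only if it follows the last
    # '/', so registry ports (registry.example.com:5000/app) survive intact.
    head, sep, tail = image.rpartition(":")
    if sep and "/" not in tail:
        return head
    return image

def retag_task_definition(task_def, new_tag):
    """Return a copy of a task definition dict with every container image
    re-pointed at new_tag (e.g. a short git SHA), leaving the input as-is."""
    out = copy.deepcopy(task_def)
    for container in out["containerDefinitions"]:
        container["image"] = "{}:{}".format(
            _strip_tag(container["image"]), new_tag)
    return out
```

Feed the result to register-task-definition (via `--cli-input-json` or boto3's `register_task_definition`) and then point the service at the new revision.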

Update: This behavior can now be tuned through the ECS_IMAGE_PULL_BEHAVIOR environment variable set on the ECS agent. See the documentation for details. As of the time of writing, the following settings are supported:

The behavior used to customize the pull image process for your container instances. The following describes the optional behaviors:

  • If default is specified, the image is pulled remotely. If the image pull fails, then the container uses the cached image on the instance.

  • If always is specified, the image is always pulled remotely. If the image pull fails, then the task fails. This option ensures that the latest version of the image is always pulled. Any cached images are ignored and are subject to the automated image cleanup process.

  • If once is specified, the image is pulled remotely only if it has not been pulled by a previous task on the same container instance or if the cached image was removed by the automated image cleanup process. Otherwise, the cached image on the instance is used. This ensures that no unnecessary image pulls are attempted.

  • If prefer-cached is specified, the image is pulled remotely if there is no cached image. Otherwise, the cached image on the instance is used. Automated image cleanup is disabled for the container to ensure that the cached image is not removed.
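On the EC2 launch type this variable lives in /etc/ecs/ecs.config on each container instance; a small helper (my own, not AWS tooling) that sets it idempotently in that file's KEY=VALUE format:

```python
_VALID_BEHAVIORS = ("default", "always", "once", "prefer-cached")

def set_image_pull_behavior(config_text, behavior):
    """Return /etc/ecs/ecs.config content with ECS_IMAGE_PULL_BEHAVIOR
    set to behavior, replacing any existing setting."""
    if behavior not in _VALID_BEHAVIORS:
        raise ValueError("unknown pull behavior: {}".format(behavior))
    kept = [line for line in config_text.splitlines()
            if not line.startswith("ECS_IMAGE_PULL_BEHAVIOR=")]
    kept.append("ECS_IMAGE_PULL_BEHAVIOR={}".format(behavior))
    return "\n".join(kept) + "\n"
```

The agent reads this file at startup, so restart the ECS agent (or the instance) after changing it.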

Manipular answered 17/1, 2016 at 22:59 Comment(4)
Are you sure? I've seen instances where old docker images get run even after I've pushed a new image to Dockerhub (using the same tag name). I guess perhaps I should just bump the tag name each time a new image is built. However, this has been pretty rare in my experience, so maybe it was just momentary network issues. (I'm aware that you work on ECS, so you're the best person to answer this, but this isn't exactly what I've experienced. Apologies if this comes off as rude, not my intention!)Fructificative
Yes, the current behavior is that it will attempt a pull every time. If the pull fails (network issues, lack of permissions, etc), it will attempt to use a cached image. You can find more details in the agent log files which are usually in /var/log/ecs.Manipular
I agree with @Ibrahim, in many cases the new image (even if properly loaded into ECR) will not be pulled and used, when called with a run_task() from Lambda. CloudWatch logs show no errors; it just insists on using the old image. Very frustrating indeed!Mendiola
It's worth noting that Fargate doesn't cache images at all and will always pull from the registry, so no configuration is required: AWS DocsEpistaxis
P
52

Registering a new task definition and updating the service to use the new task definition is the approach recommended by AWS. The easiest way to do this is to:

  1. Navigate to Task Definitions
  2. Select the correct task
  3. Choose create new revision
  4. If you're already pulling the latest version of the container image with something like the :latest tag, then just click Create. Otherwise, update the version number of the container image and then click Create.
  5. Expand Actions
  6. Choose Update Service (twice)
  7. Then wait for the service to be restarted
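The same "create new revision" flow can be scripted: describe the current task definition, strip the response-only fields, optionally bump the image, and re-register. A sketch of the payload cleanup (helper names are mine; the field list follows the DescribeTaskDefinition response shape):

```python
import copy

# Fields returned by DescribeTaskDefinition that RegisterTaskDefinition
# rejects as input.
_READ_ONLY_FIELDS = {
    "taskDefinitionArn", "revision", "status", "requiresAttributes",
    "compatibilities", "registeredAt", "registeredBy",
}

def to_register_payload(described, new_image=None):
    """Build a RegisterTaskDefinition payload from a described task
    definition, optionally swapping in a new image for every container."""
    payload = copy.deepcopy(
        {k: v for k, v in described.items() if k not in _READ_ONLY_FIELDS})
    if new_image is not None:
        for container in payload.get("containerDefinitions", []):
            container["image"] = new_image
    return payload
```

Registering the cleaned payload bumps the revision, after which update-service moves the service to it, exactly as the console steps do.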

This tutorial has more detail and describes how the above steps fit into an end-to-end product development process.

Full disclosure: This tutorial features containers from Bitnami and I work for Bitnami. However the thoughts expressed here are my own and not the opinion of Bitnami.

Phlegm answered 26/1, 2017 at 2:2 Comment(7)
This works, but you may have to alter your service min/max values. If you only have one EC2 instance you have to set the min healthy percent to zero, otherwise it will never kill the task (making your service temporarily offline) in order to deploy the updated container.Bracci
@Bracci Good point! In the ECS setup section of the tutorial, I describe exactly that. Here's the recommended configuration from that section: Number of tasks - 1, Minimum healthy percent - 0, Maximum percent - 200.Phlegm
@Phlegm I tried your approach as stated here...still no joyAuricular
@Auricular If you need help to get this figured out, you should describe how far you got and what error you hit.Phlegm
This only works for services, not tasks without services.Comedietta
For tasks without services: after step 4 you stop the current task and start another one, choosing the new task definition (choose correct revision).Ufo
Actually, I noticed that if the Docker image was updated (using the same version), you just need to stop the task and start a new one with the same task definition (no need to create another task definition if no parameters need to be changed).Ufo
D
14

There are two ways to do this.

First, use AWS CodeDeploy. You can configure a blue/green deployment section in the ECS service definition. This includes a CodeDeployRoleForECS role, a second TargetGroup to switch to, and an optional test listener. AWS ECS will create the CodeDeploy application and deployment group and link these CodeDeploy resources with your ECS cluster/service and your ELB/TargetGroups for you. You can then use CodeDeploy to initiate a deployment, supplying an AppSpec that specifies which task/container to use to update which service; this is where you specify your new task/container. You will then see new instances spin up in the new TargetGroup, the old TargetGroup gets disconnected from the ELB, and soon the old instances registered to the old TargetGroup are terminated.

This sounds very complicated. Actually, if you have enabled auto scaling on your ECS service, a much simpler way is to just force a new deployment from the console or the CLI, as a gentleman here pointed out:

aws ecs update-service --cluster <cluster name> --service <service name> --force-new-deployment

In this way you can still use the "rolling update" deployment type, and ECS will simply spin up new instances and drain the old ones with no downtime of your service if everything is OK. The downside is that you lose fine-grained control over the deployment: you cannot roll back to the previous version if there is an error, and a bad image will break the ongoing service. But this is a really simple way to go.

BTW, don't forget to set proper numbers for Minimum healthy percent and Maximum percent, like 100 and 200.
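To see why those two numbers matter, it helps to compute the task-count window they define. Per the ECS deployment-configuration rounding rules (minimum rounds up, maximum rounds down), a sketch:

```python
import math

def rolling_update_bounds(desired_count, min_healthy_pct, max_pct):
    """Task-count window ECS respects during a rolling update:
    minimumHealthyPercent sets the floor, maximumPercent the ceiling."""
    lower = math.ceil(desired_count * min_healthy_pct / 100)
    upper = math.floor(desired_count * max_pct / 100)
    return lower, upper
```

With 100/200 and a desired count of 1, ECS must start the replacement before stopping the old task (so you need spare capacity); with 0/200 it may stop the old task first, trading brief downtime for headroom.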

Diplegia answered 7/10, 2019 at 19:18 Comment(5)
Is there a way to do this without having to change the IP? In mine when I ran this it worked but it changed the Private IP I was runningBitt
@Bitt I had a similar issue when needing a proxy NLB. In short the only way to keep an EC2 instance IP the same is to use either elastic IP addresses or use a different approach. I do not know your use case but linking Global Accelerator to the ECS linked ALB provided me with static IP addresses, this solved my use case. If you want to know dynamic internal IPs you will need to query the ALB with a lambda. This was a lot of effort. Link below: aws.amazon.com/blogs/networking-and-content-delivery/…Heimdall
aws ecs update-service --cluster <cluster name> --service <service name> --force-new-deployment worked for me!Grayback
--force-new-deployment closes all current connections and takes my API down until the new version starts.Enunciate
@Enunciate sounds like you have your task configured to only ever run a single instance?Manns
U
13

I ran into the same issue. After spending hours, I arrived at these simplified steps for automated deployment of an updated image:

1. ECS task definition changes: for a better understanding, let's assume you have created a task definition with the details below (note: these numbers will change according to your task definition):

launch_type = EC2

desired_count = 1

Then you need to make the following changes:

deployment_minimum_healthy_percent = 0  //this does the trick: if not set to zero, the forced deployment won't happen, because ECS won't allow the currently running task to be stopped

deployment_maximum_percent = 200  //allows the rolling update

2. Tag your image as <your-image-name>:latest. The latest tag takes care of the image being pulled by the respective ECS task.

sudo docker build -t imageX:master .   //build your image with some tag
sudo -s eval $(aws ecr get-login --no-include-email --region us-east-1)  //login to ECR
sudo docker tag imageX:master <your_account_id>.dkr.ecr.us-east-1.amazonaws.com/<your-image-name>:latest    //tag your image with latest tag

3. Push the image to ECR

sudo docker push  <your_account_id>.dkr.ecr.us-east-1.amazonaws.com/<your-image-name>:latest

4. Apply a forced deployment

sudo aws ecs update-service --cluster <your-cluster-name> --service <your-service-name> --force-new-deployment --region us-east-1

Note: all the commands above assume the region is us-east-1. Replace it with your own region when implementing.

Unobtrusive answered 11/6, 2020 at 16:48 Comment(2)
I noticed the parameters are terraform parameters; Any ideas how to achieve the same for CloudFormation: I have my AutoScalingGroup MinSize: 0 and MaxSize: 1; what else needs to be set?Scroll
sharing my two cents here about using latest all the time...Hydrometallurgy
L
11

The following worked for me in the case where the Docker image tag is the same:

  1. Go to cluster and service.
  2. Select service and click update.
  3. Set number of tasks as 0 and update.
  4. After deployment is finished, re-scale number of tasks to 1.

The following API call works as well:

aws ecs update-service --cluster <cluster_name> --service <service_name> --force-new-deployment
Lexicostatistics answered 24/4, 2020 at 5:31 Comment(0)
H
4

If you use an IaC tool such as Terraform to set up your ECS tasks, you can always do this by updating the image version in your task definition. Terraform will replace the old task definition with a new one, and the ECS service will start using the new task definition with the updated image.

The other approach is to have an aws ecs update command in the pipeline that builds the image used by your ECS tasks: as soon as the image is built, just force a new deployment.

aws ecs update-service --cluster clusterName --service serviceName --force-new-deployment
Home answered 20/1, 2022 at 10:34 Comment(0)
P
2

Since there has not been any progress on the AWS side, here is a simple Python script that performs exactly the steps described in the highly rated answers of Dima and Samuel Karp.

First push your image into your AWS registry ECR then run the script:

import boto3, time

client = boto3.client('ecs')
cluster_name = "Example_Cluster"
service_name = "Example-service"
reason_to_stop = "obsolete deployment"

# Create a new deployment; the ECS service forces a pull from the Docker registry and creates a new task in the service
response = client.update_service(cluster=cluster_name, service=service_name, forceNewDeployment=True)

# Wait for the ECS agent to start the new task
time.sleep(10)

# Get all Service Tasks
service_tasks = client.list_tasks(cluster=cluster_name, serviceName=service_name)

# Get meta data for all Service Tasks
task_meta_data = client.describe_tasks(cluster=cluster_name, tasks=service_tasks["taskArns"])

# Extract creation date
service_tasks = [(task_data['taskArn'], task_data['createdAt']) for task_data in task_meta_data["tasks"]]

# Sort according to creation date
service_tasks = sorted(service_tasks, key= lambda task: task[1])

# Get obsolete task arn
obsolete_task_arn = service_tasks[0][0]
print("stop ", obsolete_task_arn)

# Stop obsolete task
stop_response = client.stop_task(cluster=cluster_name, task=obsolete_task_arn, reason=reason_to_stop)

This code does:

  1. create a new task with the new image in the service
  2. stop the obsolete old task with the old image in the service
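The selection logic in the script boils down to picking the earliest createdAt; as a standalone helper (my naming) over DescribeTasks-shaped dicts:

```python
def oldest_task_arn(tasks):
    """Given dicts with 'taskArn' and a comparable 'createdAt',
    return the ARN of the oldest task."""
    return min(tasks, key=lambda t: t["createdAt"])["taskArn"]
```

Note that the fixed time.sleep(10) is a race: if the new task takes longer than ten seconds to appear, the script may stop the wrong one. boto3's services_stable waiter is a more robust way to wait for the deployment to settle.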
Pulchia answered 10/9, 2020 at 8:10 Comment(1)
Nicely done. Python makes it much more readable and modifiable. I went with a bash script of similar steps for my own deployment.Finger
O
1

AWS CodePipeline.

You can set ECR as a source, and ECS as a target to deploy to.

Overshine answered 8/3, 2019 at 16:17 Comment(1)
can you link to any documentation for this?Kylix
G
0

Using the AWS CLI, I tried aws ecs update-service as suggested above, but it did not pick up the latest Docker image from ECR. In the end, I re-ran the Ansible playbook that created the ECS cluster. The task definition version is bumped when ecs_taskdefinition runs, and then all is good: the new Docker image is picked up.

Truthfully, I'm not sure whether the task definition version change forces the redeploy, or whether the playbook's use of ecs_service causes the task to reload.

If anyone is interested, I'll get permission to publish a sanitized version of my playbook.

Gap answered 23/5, 2018 at 2:6 Comment(1)
I believe task definition revision is required only when you update actual task definition config. in this case if you're using image with a tag latest, there's no need to modify config? Of course having commit id as a tag is nice, and having separate task definition revision too so you could rollback, but then your CI will see all credentials you're using for container which is not the way I want to implement things.Tsuda
B
-4

The following commands worked for me

docker build -t <repo> . 
docker push <repo>
ecs-cli compose stop
ecs-cli compose start
Burchett answered 19/2, 2018 at 13:31 Comment(2)
What are these ecs-cli lines even from?Mycetozoan
@ramzi-c docs.aws.amazon.com/AmazonECS/latest/developerguide/…Weekender

© 2022 - 2024 — McMap. All rights reserved.