Is it possible to trigger an AWS Fargate task upon an item being added to an SQS queue?
For clarification, what I'm trying to do is fire off a Fargate task when there's an item in a specific queue. I've used this tutorial to get pretty much where I am. That worked fine, but the problem I ran into was that every file upload (the structure of the S3 bucket is s3_bucket_name/{unknown_name}/known_file_names) resulted in a task being triggered, and I only want/need it to trigger once per {unknown_name}. I've since changed my configuration to add an item to a queue when it detects a test_file.txt file. Is it possible to trigger a Fargate task from a queue like this? If so, how?

Pullet answered 26/2, 2021 at 15:26 Comment(1)
You can make use of an SQS-triggered Lambda function, which will then trigger the Fargate task. – Hexose
SQS doesn't trigger or "push" messages to anything. As mentioned in the comments, AWS Lambda has an SQS integration that can automatically poll SQS for you and trigger a Lambda function with new messages, which you could use to create your Fargate tasks.

However, I would recommend refactoring your Fargate task like this:

  • Reconfigure the code running in your container to poll the SQS queue for messages.
  • Run your task as an ECS service.
  • Configure ECS service autoscaling to spin up instances of your task based on the depth of the SQS queue.
Elbert answered 26/2, 2021 at 16:15 Comment(6)
But then the ECS task can't poll the queue message, because the Lambda function returns "success", which deletes the SQS message. As @Shitij Mathur mentioned, a better architecture is: the event triggers SNS, and both Lambda and SQS subscribe to that SNS topic. – Sands
@Jo.TLV my answer does not suggest using AWS Lambda at all. – Elbert
The SQS queue depth is not suitable for autoscaling, since it can go very low even when the application(s) are busy, and your scaling will bounce between very low and very high. Instead, use the number of SQS messages sent in a given period. Divide this by the running task count to get a backlog-per-instance number that you can use for auto scaling. – Edana
@Edana would you like to show how to build that into a CloudWatch metric that can be used to trigger auto scaling? Also, I think your messages-sent-per-time-period solution would not take into account any sort of downtime that caused messages to build up in the queue. – Elbert
@Edana also, auto scaling doesn't bounce between very low and very high if you properly configure step scaling and cooldown: docs.aws.amazon.com/autoscaling/application/userguide/… – Elbert
AWS documentation on auto scaling based on SQS: aws.amazon.com/blogs/containers/… – Bresnahan
  1. Firstly, if I understand correctly from your question, initially you wanted:
  • Object="known_file_names" is put into 's3_bucket_name/{unknown_name}' --> EVENT NOT TRIGGERED

  • Object="unknown_name" is put into 's3_bucket_name' --> EVENT TRIGGERED

You want an event to be triggered when a file is put into an S3 Bucket without a prefix and not be triggered if there is a prefix. However, this is not permitted by S3. You can do the exact opposite. You can restrict the event to be triggered only when it matches the prefix.
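Since S3 only matches on prefix and suffix, one workaround for the question's layout is to filter on the known file name as a suffix, so the event fires once per {unknown_name}. A sketch of the notification configuration (the queue ARN is a hypothetical placeholder) that you would pass to `s3.put_bucket_notification_configuration`:

```python
# Hypothetical ARN; S3 notification filters only support prefix/suffix
# matching, so we match the known test_file.txt suffix instead of the
# unknown {unknown_name} prefix.
notification_config = {
    "QueueConfigurations": [
        {
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:upload-queue",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "suffix", "Value": "test_file.txt"},
                    ]
                }
            },
        }
    ]
}

# Applied (sketch) with:
#   boto3.client("s3").put_bucket_notification_configuration(
#       Bucket="s3_bucket_name",
#       NotificationConfiguration=notification_config,
#   )
```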

  2. Secondly, you can't automatically trigger a Fargate task through SQS, but you can trigger a Lambda function that will send a request to run your Fargate task. [Reference]

Here's a template python lambda function that sends the run task request: https://github.com/shitijkarsolia/ecs-Fargate-tasks-fatlambda/blob/master/lambda/run_fat_lambda.py

Note: There is one flaw with this architecture, though. On message arrival in SQS, the Lambda function runs (sends a Fargate run-task request) and returns success to SQS. As a result, the message is deleted from the queue. So if processing in the Fargate container fails for any reason, the message is lost forever and cannot be retried.

A better architecture would be: the S3 put event publishes a message to an SNS topic. An SQS queue and a separate Lambda function are subscribers of that topic. The Lambda function sends the run-task request to Fargate. The Fargate container polls the SQS queue and deletes the message explicitly on successful processing.

Dreeda answered 1/3, 2021 at 18:10 Comment(0)
I've just come across this problem and am about to solve it like this:

  • S3 events (and other events) are posted to SNS.
  • Two SQS queues ("trigger queue" and "task queue") subscribe to the SNS topic.
  • An EventBridge pipe consumes the "trigger queue" and triggers the Fargate task to start.
  • The Fargate task consumes the message on the "task queue" to figure out what to do.

[architecture diagram]

Aircrewman answered 13/6, 2023 at 9:29 Comment(2)
Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center. – Demolish
Just implemented it as described, and it works like a charm. The art of avoiding yet another lambda :D – Aircrewman

© 2022 - 2024 — McMap. All rights reserved.