For clarification, what I'm trying to do is fire off a Fargate task when there's an item in a specific queue. I've used this tutorial to get pretty much where I am. This worked fine, but the problem I ran into was that every file upload (the structure of the S3 bucket is s3_bucket_name/{unknown_name}/known_file_names) resulted in a task being triggered, and I only want/need it to trigger once per {unknown_name}. I've since changed my configuration to add an item to a queue when it detects a test_file.txt file. Is it possible to trigger a Fargate task on a queue like this? If so, how?
SQS doesn't trigger or "push" messages to anything. As mentioned in the comments, AWS Lambda has an SQS integration that can automatically poll SQS for you and trigger a Lambda function with new messages, which you could use to create your Fargate tasks.
However, I would recommend refactoring your Fargate task like this:
- Reconfigure the code running in your container to poll the SQS queue for messages (see the sketch after this list).
- Run your task as an ECS service.
- Configure ECS service autoscaling to spin up instances of your task based on the depth of the SQS queue.
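A minimal sketch of what that in-container polling loop could look like, using boto3. The `QUEUE_URL` environment variable is an assumption here (you would set it on the task definition), and `process` is a placeholder for your actual work:

```python
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]  # assumed to be set on the task definition

def process(body: str) -> None:
    # Placeholder for your actual per-{unknown_name} processing.
    print(f"Processing message: {body}")

while True:
    # Long-poll for up to 20 seconds to cut down on empty receives.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        # Delete only after successful processing, so a failed message
        # reappears on the queue once its visibility timeout expires.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```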
- Firstly, if I understand correctly from your question, initially you wanted:
Object "known_file_names" is put into 's3_bucket_name/{unknown_name}' --> EVENT NOT TRIGGERED
Object "unknown_name" is put into 's3_bucket_name' --> EVENT TRIGGERED
You want an event to be triggered when a file is put into the root of the S3 bucket, and not when it is put under a prefix. However, S3 does not support this. You can only do the exact opposite: restrict the event so it fires only when the object key matches a given prefix (or suffix).
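Since you are detecting a known file name (test_file.txt), a suffix filter on the bucket notification gives you exactly one event per {unknown_name}. A sketch with boto3; the bucket name and queue ARN below are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Fire an event only when the object key ends in the known file name,
# i.e. once per {unknown_name}/test_file.txt upload.
s3.put_bucket_notification_configuration(
    Bucket="s3_bucket_name",  # placeholder
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:my-queue",  # placeholder
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "suffix", "Value": "test_file.txt"}
                        ]
                    }
                },
            }
        ]
    },
)
```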
- Secondly, you can't automatically trigger a Fargate task from SQS, but you can trigger a Lambda function that sends a request to run your Fargate task. [Reference]
Here's a template Python Lambda function that sends the run task request: https://github.com/shitijkarsolia/ecs-Fargate-tasks-fatlambda/blob/master/lambda/run_fat_lambda.py
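The core of such a function boils down to a single `run_task` call. A minimal sketch (not taken from the linked template; the cluster, task definition, subnets, and security group below are all placeholders):

```python
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    # One SQS-triggered invocation -> one Fargate task.
    # All identifiers below are placeholders.
    ecs.run_task(
        cluster="my-cluster",
        launchType="FARGATE",
        taskDefinition="my-task-def",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",
            }
        },
    )
    return {"statusCode": 200}
```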
Note: there is one flaw with this architecture, though. When a message arrives in SQS, the Lambda function runs (sends a Fargate run task request) and returns success to SQS, so the message is deleted from the queue. If the processing in the Fargate container then fails for any reason, the message is lost forever and processing cannot be retried.
A better architecture would be: the S3 put event publishes a message to an SNS topic. An SQS queue and a separate Lambda function both subscribe to that topic. The Lambda function sends the run task request to Fargate, and the Fargate container polls the SQS queue, deleting the message explicitly only on successful processing.
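Wiring that fan-out up comes down to two `subscribe` calls on the topic. A rough sketch with boto3 (all ARNs are placeholders); note that the queue also needs a queue policy allowing the topic to send to it, and the Lambda needs an `add_permission` grant for SNS, both omitted here:

```python
import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:s3-put-topic"  # placeholder

# The SQS queue holds the message until the Fargate container deletes it.
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:task-queue",  # placeholder
)

# The Lambda function fires the Fargate run task request.
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="lambda",
    Endpoint="arn:aws:lambda:us-east-1:123456789012:function:run-fargate",  # placeholder
)
```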
I've just come across this problem and am about to solve it like this:
- S3 events (and other events) are posted to SNS.
- Two SQS queues ("trigger queue" and "task queue") subscribe to the SNS topic.
- An EventBridge pipe consumes the "trigger queue" and triggers the Fargate task to start (see the sketch after this list).
- The Fargate task consumes the message on the "task queue" to figure out what to do.
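A hedged sketch of creating such a pipe with boto3 (every ARN and network value below is a placeholder, and the role is assumed to allow `sqs:ReceiveMessage` on the trigger queue plus `ecs:RunTask` on the task definition):

```python
import boto3

pipes = boto3.client("pipes")

# Consume the "trigger queue" and start one Fargate task per batch.
# All ARNs, subnets, and security groups below are placeholders.
pipes.create_pipe(
    Name="trigger-fargate-task",
    RoleArn="arn:aws:iam::123456789012:role/pipe-role",
    Source="arn:aws:sqs:us-east-1:123456789012:trigger-queue",
    Target="arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster",
    TargetParameters={
        "EcsTaskParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/my-task",
            "LaunchType": "FARGATE",
            "TaskCount": 1,
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],
                    "SecurityGroups": ["sg-0123456789abcdef0"],
                    "AssignPublicIp": "ENABLED",
                }
            },
        }
    },
)
```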