How to limit parallel execution of serverless lambda function

I am using AWS with the Serverless Framework. My Lambda function is triggered by an event; it then talks to a database, and there is a limit on the number of connections I can open to the DB.

So I want to run at most 5 Lambda function instances at a time and queue the other events. I know there is:

    provisionedConcurrency: 3 # optional, Count of provisioned lambda instances
    reservedConcurrency: 5 # optional, reserved concurrency limit for this function. By default, AWS uses account concurrency limit

So in this case, the specified number of long-running instances will be kept around and they will serve the events.

But rather than that, what I want is event queuing, with functions triggered such that at most 5 of them are running at a time.

I am wondering whether this notion of event queuing is supported in AWS?

Actinic answered 13/1, 2021 at 21:47 Comment(3)
You can use SQS. – Tegucigalpa
I don't think SQS is the solution to this. Here I am trying to limit the number of parallel executions of the Lambda function. – Actinic
Which events trigger your Lambda function? – Academy

In AWS Lambda, a concurrency limit determines how many function invocations can run simultaneously in one region. You can set this limit through the AWS Lambda console or through the Serverless Framework.

AWS Lambda Concurrency

If your account limit is 1000 and you reserved 100 concurrent executions for a specific function and 100 concurrent executions for another, the rest of the functions in that region will share the remaining 800 executions.

If you reserve concurrent executions for a specific function, AWS Lambda assumes that you know how many to reserve to avoid performance issues. Functions with allocated concurrency can’t access unreserved concurrency.

The right way to set the reserved concurrency limit in the Serverless Framework is the one you shared:

functions:
  hello:
    handler: handler.hello # required, handler set in AWS Lambda
    reservedConcurrency: 5 # optional, reserved concurrency limit for this function. By default, AWS uses account concurrency limit

I would suggest using SQS to manage your queue. One of the common architectural reasons for using a queue is to limit the pressure on a different part of your architecture. This could mean preventing overloading a database or avoiding rate limits on a third-party API when processing a large batch of messages.

For example, consider your case, where the SQS processing logic needs to connect to a database and you want your workers to hold no more than 5 open connections at a time. With concurrency control you can set that limit and keep your architecture healthy (a handler sketch illustrating this follows the configuration example below).

In your case you could have a function, hello, that receives your requests and puts them in an SQS queue. On the other side, the function compute consumes those SQS messages and processes them, with the number of concurrent invocations limited to 5.
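
As a minimal sketch (not from the original answer), the hello handler could simply forward the incoming event to the queue with boto3; the QUEUE_URL environment variable and the payload shape here are assumptions for illustration:

    import json
    import os

    import boto3

    # Hypothetical: the queue URL is injected through an environment variable.
    QUEUE_URL = os.environ["QUEUE_URL"]
    sqs = boto3.client("sqs")


    def hello(event, context):
        # Forward the incoming event to SQS instead of touching the database directly.
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps(event),
        )
        return {"statusCode": 202, "body": "queued"}

This keeps hello cheap and unthrottled, while all database work happens behind the queue.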

You can even set a batch size, that is, the number of SQS messages that can be included in a single Lambda invocation.

functions:
  hello:
    handler: handler.hello

  compute:
    handler: handler.compute
    reservedConcurrency: 5
    events:
      - sqs:
          arn: arn:aws:sqs:region:XXXXXX:myQueue
          batchSize: 10 # how many SQS messages can be included in a single Lambda invocation
          maximumBatchingWindow: 60 # maximum amount of time in seconds to gather records before invoking the function
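
To make the connection-limiting point concrete, here is a sketch of a compute handler, assuming a MySQL database reached through pymysql and hypothetical DB_* environment variables; since each invocation opens one connection and reservedConcurrency is 5, at most 5 connections are open at any time:

    import json
    import os

    import pymysql  # assumption: a MySQL database accessed with pymysql


    def compute(event, context):
        # One connection per invocation; reservedConcurrency: 5 caps this at 5 open connections.
        connection = pymysql.connect(
            host=os.environ["DB_HOST"],        # hypothetical environment variables
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASSWORD"],
            database=os.environ["DB_NAME"],
        )
        try:
            with connection.cursor() as cursor:
                # Up to `batchSize` SQS messages arrive in a single invocation.
                for record in event["Records"]:
                    payload = json.loads(record["body"])
                    cursor.execute(
                        "INSERT INTO jobs (payload) VALUES (%s)",  # placeholder query/table
                        (json.dumps(payload),),
                    )
            connection.commit()
        finally:
            connection.close()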
Puerperal answered 22/1, 2021 at 15:0 Comment(1)
This is the theory, but it seems it does not always work as expected, as reported in various posts like zaccharles.medium.com/… or foxy.io/blog/… – Wisp

Have you considered a proxy endpoint (acting like a connection pool) instead of limiting the concurrency of the Lambda? Also, I think the Lambda <-> SQS communication happens via a pool of event pollers, and setting the concurrency lower than the number of pollers running will leave you having to handle lost messages.

https://aws.amazon.com/rds/proxy/
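
For example (a rough sketch, again assuming a MySQL database and pymysql; the proxy endpoint and credentials are hypothetical), the handler only has to point at the proxy endpoint instead of the database host, and the proxy pools and reuses the underlying connections:

    import os

    import pymysql  # assumption: a MySQL database accessed with pymysql


    def handler(event, context):
        # Connect to the (hypothetical) RDS Proxy endpoint rather than the database itself;
        # the proxy multiplexes these client connections over a shared server-side pool.
        connection = pymysql.connect(
            host=os.environ["PROXY_ENDPOINT"],  # hypothetical, e.g. myproxy.proxy-xxxx.<region>.rds.amazonaws.com
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASSWORD"],
            database=os.environ["DB_NAME"],
        )
        try:
            with connection.cursor() as cursor:
                cursor.execute("SELECT 1")      # placeholder query
                cursor.fetchone()
        finally:
            connection.close()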

Lutyens answered 22/2, 2022 at 20:45 Comment(0)
