In an AWS CodeBuild action that is part of an AWS CodePipeline, I deploy resources created with the Serverless Framework to a "UAT" (user acceptance testing) stage. The pipeline runs in its own tooling AWS account, first deploying cross-account into a separate "UAT" account, then deploying cross-account into a separate "Production" account.
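For reference, a minimal sketch of the kind of stage-aware configuration this setup uses (the service name, account IDs, region, and role ARN below are placeholders, not my actual values); the CodeBuild actions then run `serverless deploy --stage uat` and `serverless deploy --stage prod`:

```yaml
# Illustrative sketch only; IDs, names, and region are placeholders.
service: my-service

custom:
  stage: ${opt:stage, 'uat'}
  # Target AWS account per stage (placeholder IDs).
  accountId:
    uat: "111111111111"
    prod: "222222222222"

provider:
  name: aws
  runtime: nodejs14.x
  region: eu-west-1
  stage: ${self:custom.stage}
  # CloudFormation assumes this role in the target account, which is
  # how the single tooling-account pipeline deploys cross-account.
  cfnRole: arn:aws:iam::${self:custom.accountId.${self:custom.stage}}:role/serverless-deployment-role
```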
The first deployment to "UAT" completes successfully, whereas the subsequent deployment to "Production" fails with the following error:
```
Serverless Error ----------------------------------------

  An error occurred: <some name>LambdaFunction - Resource handler returned message: "Code uncompressed size is greater than max allowed size of 272629760. (Service: Lambda, Status Code: 400, Request ID: <some request id>, Extended Request ID: null)" (RequestToken: <some request token>, HandlerErrorCode: InvalidRequest).

Get Support --------------------------------------------
   Docs:          docs.serverless.com
   Bugs:          github.com/serverless/serverless/issues
   Issues:        forum.serverless.com

Your Environment Information ---------------------------
   Operating System:          linux
   Node Version:              14.17.2
   Framework Version:         2.68.0 (local)
   Plugin Version:            5.5.1
   SDK Version:               4.3.0
   Components Version:        3.18.1
```
This started to happen once I introduced a private Lambda Layer. The total size of all files seems to be far below the maximum allowed size.
This question isn't so much about the error itself (a similar question already exists). Rather, I wonder why the behavior is inconsistent across deployment targets, since the limits for the Lambda function package size (including Lambda Layers) should be the same in every environment.
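For what it's worth, 272629760 bytes is exactly 260 MiB, and as far as I understand that check applies to the uncompressed size of the function package plus all referenced layers combined. The layer is wired up roughly like this (function name, region, account ID, and layer ARN are placeholders):

```yaml
# Illustrative only; the layer ARN and function name are placeholders.
functions:
  someName:
    handler: handler.main
    layers:
      # Lambda appears to sum the uncompressed size of this function's
      # package and every referenced layer version against the
      # 272629760-byte (260 MiB) limit at create/update time.
      - arn:aws:lambda:eu-west-1:111111111111:layer:my-private-layer:1
```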
Running terraform apply, I got the error "Code uncompressed size is greater than max allowed size of 272629760" only when the layer was in play. Had to deploy a dummy version of the lambda with no dependencies, add the layer, then redeploy. Annoyingly misleading error message, as it made it seem like the size of the layer was the problem. – Flaherty
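In serverless.yml terms, the two-step workaround described in that comment might look roughly like this (purely illustrative names; the comment itself concerned Terraform):

```yaml
# Sketch of the workaround, with placeholder names.
# Deploy 1: ship a small stub package with no bundled dependencies
# and no layer reference, so the initial package stays well under the limit.
functions:
  someName:
    handler: handler.main
    # Deploy 2: uncomment the layer reference and redeploy, so the
    # dependencies come from the layer instead of the function package.
    # layers:
    #   - arn:aws:lambda:eu-west-1:111111111111:layer:my-private-layer:1
```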