Dealing with AWS Lambda deployment package size limits
I had an interesting conversation at work today around package size limits for AWS Lambda, and it got me thinking about a number of strategies for dealing with these limits.
The limits
First, here are the limits. Per the AWS documentation, your Lambda function, including all of its source code, layers, and any custom runtime, must fit within the size limits listed below:
- In the AWS Console Editor - 3MB
- Zipped for direct upload - 50MB
- Unzipped, including layers (packages over 50MB can be uploaded via an S3 bucket) - 250MB
- Container image - 10GB
How to deal with the limits
Here are a few ideas I came up with for dealing with these limitations.
- First, optimize the code as much as possible to shrink the dependency footprint, and make sure you aren't bundling unnecessary libraries. You can do this by building dependencies from source, and by making sure you only ship and import the parts of a library you actually need vs. a global import (see the packaging sketch after this list).
- Consider splitting the Lambda into separate steps and using AWS Step Functions to orchestrate them. For example, one Lambda that fetches data from MySQL using PyMySQL, and a second that processes the data using Pandas, means the package size is split across two functions, each loading only the libraries it requires (sketched after this list).
- Use a container image - this raises the size limit from 250MB to 10GB. That's a big difference! (See the boto3 snippet after this list.)
- Use Amazon Elastic File System (EFS) - you will need a VPC and an EC2 instance to write the initial data to the EFS volume, but this configuration can give your Lambda plenty of room to breathe (see the import-from-EFS example below).
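To make the first point concrete, here is a minimal packaging sketch that zips a build directory while leaving out files the function never needs at runtime (tests, bytecode caches, package metadata). The directory names `build/` and `function.zip` are assumptions for illustration, not fixed conventions:

```python
import os
import zipfile

# Directories and file suffixes that are usually safe to leave out of the
# deployment package. Note: a few packages read their own .dist-info
# metadata at runtime, so test the function after trimming.
SKIP_DIRS = {"__pycache__", "tests", "test"}
SKIP_FILE_SUFFIXES = (".pyc", ".pyo")

def build_package(src_dir: str = "build", out_path: str = "function.zip") -> None:
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, dirs, files in os.walk(src_dir):
            # Prune unwanted directories in place so os.walk skips them.
            dirs[:] = [
                d for d in dirs
                if d not in SKIP_DIRS and not d.endswith(".dist-info")
            ]
            for name in files:
                if name.endswith(SKIP_FILE_SUFFIXES):
                    continue
                full_path = os.path.join(root, name)
                zf.write(full_path, os.path.relpath(full_path, src_dir))

if __name__ == "__main__":
    build_package()
```

Along the same lines, the Lambda Python runtimes already ship with boto3 and botocore, so you usually don't need to bundle those yourself.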
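For the Step Functions split, the two functions might look like the following pair of handlers; the table, columns, and event fields are made up for illustration. Each function is packaged with only its own dependency, and Step Functions passes the first state's output as the second state's input:

```python
# fetch_handler.py -- deployed with PyMySQL only
import pymysql

def handler(event, context):
    conn = pymysql.connect(
        host=event["db_host"],
        user=event["db_user"],
        password=event["db_password"],
        database=event["db_name"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, value FROM metrics")  # hypothetical table
            rows = cur.fetchall()
    finally:
        conn.close()
    # This return value becomes the input of the next state.
    return {"rows": [list(row) for row in rows]}


# process_handler.py -- deployed with pandas only
import pandas as pd

def handler(event, context):
    df = pd.DataFrame(event["rows"], columns=["id", "value"])
    return {"row_count": len(df), "mean_value": float(df["value"].mean())}
```

One caveat: Step Functions caps state input/output payloads at 256KB, so for larger result sets you would pass an S3 key between states instead of the rows themselves.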
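For the container route, the function code ships as an image in Amazon ECR, typically built on one of the AWS Lambda base images (for Python, `public.ecr.aws/lambda/python`). Here's a sketch of pointing an existing image-based function at a freshly pushed image with boto3; the function name, account, region, and tag are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# Repoint an image-based function (created with PackageType="Image")
# at a new image in ECR. FunctionName and ImageUri are hypothetical.
response = lambda_client.update_function_code(
    FunctionName="my-data-pipeline",
    ImageUri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-lambda:latest",
)
print(response["LastUpdateStatus"])
```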
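And for EFS, a common pattern is to pip-install the heavy libraries onto the volume once (for example, from that EC2 instance) and have the function pick them up from the mount path at import time. The mount path `/mnt/lambda` below is whatever you configure on the function, so treat it as an assumption:

```python
import sys

# The EFS access point is mounted at /mnt/lambda (hypothetical path,
# configured on the function). Dependencies were installed there once,
# e.g. from EC2: pip install --target /mnt/lambda/packages pandas
sys.path.insert(0, "/mnt/lambda/packages")

import pandas as pd  # resolved from EFS, not from the deployment package

def handler(event, context):
    df = pd.DataFrame(event.get("rows", []), columns=["id", "value"])
    return {"row_count": len(df)}
```

The trade-off is cold-start latency: importing large libraries from EFS tends to be slower than importing them from a bundled package, so it's worth measuring before committing to this setup.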