I intend to perform some memory-intensive operations on a very large CSV file stored in S3 using Python, with the intention of moving the script to AWS Lambda. I know I can read the whole CSV into memory, but I will definitely run into Lambda's memory and storage limits with such a large file. Is there any way to stream in, or read in chunks of, a CSV at a time into Python using boto3/botocore, ideally by specifying row numbers to read in?
Here are some things I've already tried:
1) Using the `Range` parameter of `S3.Client.get_object` to specify the range of bytes to read. Unfortunately this means the last row gets cut off mid-line, since there's no way to specify the number of rows to read. There are messy workarounds, like scanning for the last newline character, recording its index, and using that as the starting point of the next byte range, but I'd like to avoid this clunky solution if possible.
2) Using S3 Select to write SQL queries that selectively retrieve data from S3 objects. Unfortunately the `row_number` SQL function isn't supported, and there doesn't appear to be any way to read in a subset of rows.
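For reference, this is roughly what the S3 Select attempt looked like. The helper name `select_rows_kwargs` is mine; the limitation is that S3 Select's SQL dialect supports `LIMIT` but not `OFFSET`, so only the *first* N rows can be targeted:

```python
def select_rows_kwargs(bucket: str, key: str, limit: int) -> dict:
    """Build keyword arguments for s3.select_object_content().
    Only a LIMIT can be expressed -- there is no OFFSET or
    row_number() in S3 Select's SQL, so arbitrary row ranges
    can't be requested."""
    return {
        "Bucket": bucket,
        "Key": key,
        "ExpressionType": "SQL",
        "Expression": f"SELECT * FROM s3object s LIMIT {limit}",
        "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
        "OutputSerialization": {"CSV": {}},
    }

# Hypothetical usage (requires boto3 and real credentials):
# s3 = boto3.client("s3")
# resp = s3.select_object_content(**select_rows_kwargs("my-bucket", "big.csv", 100))
# for event in resp["Payload"]:
#     if "Records" in event:
#         chunk = event["Records"]["Payload"]  # raw CSV bytes
```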
`body = s3.get_object(Bucket=bucket, Key=key).read()` needs to be replaced by `body = s3.get_object(Bucket=bucket, Key=key)['Body']`
– Colchicum
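Building on that comment: the `'Body'` value is a botocore `StreamingBody`, which can be iterated line by line without downloading the whole object, and `itertools.islice` can then pick out a row range. A minimal sketch, where the helper name `rows_between` is mine:

```python
import csv
from itertools import islice


def rows_between(line_iter, start: int, stop: int):
    """Yield parsed CSV rows with 0-based indices start..stop-1
    from an iterator of raw byte lines (e.g. StreamingBody.iter_lines())."""
    for raw in islice(line_iter, start, stop):
        yield next(csv.reader([raw.decode("utf-8")]))


# Hypothetical usage (requires boto3 and real credentials):
# body = s3.get_object(Bucket=bucket, Key=key)["Body"]  # StreamingBody
# for row in rows_between(body.iter_lines(), 1000, 2000):
#     process(row)
```

Note that skipped rows are still read over the network up to `start`; this saves memory, not bandwidth.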