What I want to achieve
To scrape a website using AWS Lambda and save the data to S3.
The issues I'm having
When I execute the Lambda, the following error message appears.
{ "errorMessage": "Unable to import module 'lambda_function': cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' (/opt/python/urllib3/util/ssl_.py)", "errorType": "Runtime.ImportModuleError", "requestId": "fb66bea9-cbad-4bd3-bd4d-6125454e21be", "stackTrace": [] }
Code
The minimal Lambda code is as follows.
import requests
import boto3
def lambda_handler(event, context):
    s3 = boto3.client('s3')
    # No leading slash in the key: a leading '/' would become part of the
    # object key in S3 instead of pointing at the existing raw folder.
    upload_res = s3.put_object(Bucket='horserace-dx', Key='raw/a.html', Body='testtext')
    return event
A layer was added to the Lambda. Files were saved in a python
folder using the commands below, compressed into a zip file, and uploaded to AWS Lambda as a layer.
!pip install requests -t ./python --no-user
!pip install pandas -t ./python --no-user
!pip install beautifulsoup4 -t ./python --no-user
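For reference, the zip step might look like this (a sketch; the file name layer.zip is arbitrary, and the archive must contain the top-level python folder that Lambda layers expect):

# run from the directory that contains the python folder
!zip -r layer.zip ./python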
- The bucket horserace-dx exists.
- The folder raw exists.
- The role of the Lambda is properly set. It can read from and write to S3.
- The runtime of the Lambda is Python 3.9. The Python version on the local computer is 3.9.13.
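To confirm the S3 side independently of the layer, one option is to detach the layer temporarily and run a stripped-down handler. This relies only on boto3 being preinstalled in the Lambda Python runtime; it is a diagnostic sketch, not a fix:

# run with the layer detached; boto3 ships with the runtime,
# so success here isolates the layer as the source of the error
import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    s3.put_object(Bucket='horserace-dx', Key='raw/a.html', Body='testtext')
    return event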
What I did so far
I google "cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'" and found some suggestions. I made the layer with the following code and tried again in vain.
!pip install requests -t ./python --no-user
!pip install pandas -t ./python --no-user
!pip install beautifulsoup4 -t ./python --no-user
!pip install urllib3==1.26.15 -t ./python --no-user
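One suggestion I found (which I have not been able to verify) is that DEFAULT_CIPHERS was removed in urllib3 2.0, and that pip's -t option does not replace packages already present in the target folder, so installing urllib3==1.26.15 after requests has already pulled in urllib3 2.x may leave the 2.x copy in ./python. If that is right, a rebuild into a clean folder with the pin in a single command might look like this:

# start from an empty target so no urllib3 2.x copy survives,
# and let pip resolve everything against the urllib3<2 constraint at once
!rm -rf ./python
!pip install "urllib3<2" requests pandas beautifulsoup4 -t ./python --no-user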
So what should I do to achieve this? Any suggestions would be greatly appreciated.
Comment from Lymphadenitis: urllib3<2