AWS Lambda Alpine Python Container shows IMAGE Launch error exec format error

I am writing a Lambda function to convert an Excel file to PDF using unoconv with LibreOffice; for this I am using an Alpine base image. The Dockerfile is as follows.

# Define global args
ARG FUNCTION_DIR="/home/app/"
ARG RUNTIME_VERSION="3.9"
ARG DISTRO_VERSION="3.12"

# Stage 1 - bundle base image + runtime
# Grab a fresh copy of the image and install GCC
FROM python:${RUNTIME_VERSION}-alpine${DISTRO_VERSION} AS python-alpine
# Alpine uses musl; install libstdc++, which the compiled dependencies link against
RUN apk add --no-cache \
    libstdc++

# Stage 2 - build function and dependencies
FROM python-alpine AS build-image
# Install aws-lambda-cpp build dependencies
RUN apk add --no-cache \
    build-base \
    libtool \
    autoconf \
    automake \
    libexecinfo-dev \
    make \
    cmake \
    libcurl
# Include global args in this stage of the build
ARG FUNCTION_DIR
ARG RUNTIME_VERSION
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy handler function
COPY app.py ${FUNCTION_DIR}
COPY requirements.txt ${FUNCTION_DIR}
# Optional – Install the function's dependencies
RUN python${RUNTIME_VERSION} -m pip install -r ${FUNCTION_DIR}requirements.txt --target ${FUNCTION_DIR}
# Install Lambda Runtime Interface Client for Python
RUN python${RUNTIME_VERSION} -m pip install awslambdaric --target ${FUNCTION_DIR}

# Stage 3 - final runtime image
# Grab a fresh copy of the Python image
FROM python-alpine
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# Copy in the built dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}

#
ARG PUID=1000
ARG PGID=1000
#
RUN set -xe \
    && apk add --no-cache --purge -uU \
        curl icu-libs unzip zlib-dev musl \
        mesa-gl mesa-dri-swrast \
        libreoffice libreoffice-base libreoffice-lang-uk \
        ttf-freefont ttf-opensans ttf-ubuntu-font-family ttf-inconsolata \
    ttf-liberation ttf-dejavu \
        libstdc++ dbus-x11 \
    && echo "http://dl-cdn.alpinelinux.org/alpine/edge/main" >> /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories \
    && apk add --no-cache -U \
    ttf-font-awesome ttf-mononoki ttf-hack \
    && rm -rf /var/cache/apk/* /tmp/*

RUN pip install unoconv

# (Optional) Add Lambda Runtime Interface Emulator and use a script in the ENTRYPOINT for simpler local runs
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
COPY entry.sh /
RUN chmod 755 /usr/bin/aws-lambda-rie /entry.sh
ENTRYPOINT [ "/entry.sh" ]
CMD [ "app.handler" ]

The entry.sh file content is as follows.

#!/bin/sh
if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
    exec /usr/bin/aws-lambda-rie /usr/local/bin/python -m awslambdaric "$1"
else
    exec /usr/local/bin/python -m awslambdaric "$1"
fi

The requirements.txt file content is as follows.

unotools
unoconv
boto3

The app.py file content is as follows.

import sys
import boto3
import subprocess
import json

def handler(event, context):       
    bucketname = "somebucket"
    filename = "Sample/example.xlsx"
    outputfilename = filename.rsplit('.', 1)[0] + '.pdf'

    s3 = boto3.client('s3')

    try:
        s3.download_file(bucketname, filename, "file.xlsx")
    except Exception as e:
        return str(e)

    try:
        result = subprocess.run(['unoconv', '-f', 'pdf', "file.xlsx"], stdout=subprocess.PIPE)
    except Exception as e:
        return str(e)

    try:
        with open("file.pdf", "rb") as f:
            s3.upload_fileobj(f, bucketname, outputfilename)
    except Exception as e:
        return str(e)

    body = {
        "message": "Converted excel to pdf"        
    }
    response = {
        "statusCode": 200,
        "event": json.dumps(event),
        "body": json.dumps(body),
        "path": "app.py"
    }
    return response

I built this container and ran it locally, and it works without problems. However, when I push the image to ECR, update the function with the new image, and run a test, it shows this error.

{
  "errorMessage": "RequestId: SOME_ID_HERE Error: fork/exec /usr/local/bin/python awslambdaric: no such file or directory",
  "errorType": "Runtime.InvalidEntrypoint"
}
IMAGE   Launch error: fork/exec /usr/local/bin/python awslambdaric: no such file or directory   Entrypoint: [/usr/local/bin/python awslambdaric]    Cmd: [app.handler]  WorkingDir: [/home/app/]

Looking at the error, I assume this is something related to the architecture. Can someone help me understand what is causing the issue?

Hypsometer answered 4/7, 2021 at 18:23 Comment(0)

The problem was the architecture: I was building my image on a Mac Mini M1. When I built the image with the --platform=linux/amd64 option, the error went away. And of course, the only writable folder in a Lambda function is /tmp, so I also had to change the file paths for it to work correctly.
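As a sketch of that /tmp change (local_paths is a hypothetical helper, reusing the file names from the question), the handler can derive all of its local paths under /tmp:

```python
import os

# Lambda's filesystem is read-only except for /tmp, so every file the
# handler downloads or creates has to live there.
TMP_DIR = "/tmp"

def local_paths(key):
    """Map an S3 key like 'Sample/example.xlsx' to /tmp paths for the
    downloaded workbook and the converted PDF."""
    base = os.path.basename(key)                      # 'example.xlsx'
    stem = base.rsplit('.', 1)[0]                     # 'example'
    xlsx_path = os.path.join(TMP_DIR, base)           # '/tmp/example.xlsx'
    pdf_path = os.path.join(TMP_DIR, stem + '.pdf')   # '/tmp/example.pdf'
    return xlsx_path, pdf_path
```

The s3.download_file call, the unoconv invocation (unoconv's -o option selects the output file), and the upload would then all point at these /tmp paths instead of the working directory.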

Hypsometer answered 5/7, 2021 at 14:27 Comment(3)
Had to waste 2 hours of my life before getting to this answer... Thank you! – Lakin
Or you can use the same image and change the architecture in Lambda: go to Lambda -> Functions -> Deploy new image -> select the architecture as arm64. – Turtleback
This helped me in my first steps on dockerized Lambdas. I am using an M2 chip; the same trick works like a charm. – Carthy

I had the same issue on an M1 Pro and had to build with docker buildx to solve the problem.

docker buildx build --platform linux/amd64 -f ./Dockerfile -t mydockertag .
Eoin answered 8/12, 2021 at 13:9 Comment(1)
Ty, saved me some mental trauma... – Ezzell

For anyone visiting this: the problem can also be that the image was built for a different CPU architecture than the one selected on AWS. For example, if you built your image on an M1, you have to choose the arm64 option in the Lambda image dialog.
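A small standard-library check (arch_info is a hypothetical helper, not part of any answer here) can make such a mismatch visible when running the image locally:

```python
import platform

def arch_info():
    # platform.machine() reports 'x86_64' on an amd64 runtime and
    # 'aarch64' on an arm64 one; logging this from the handler (or when
    # running the container locally) shows which architecture the image
    # was actually built for.
    return {"machine": platform.machine(), "system": platform.system()}
```

If the value logged inside the container does not match the architecture configured on the Lambda function, you are in exactly the exec-format-error scenario described above.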

Grau answered 21/7, 2022 at 14:53 Comment(0)

Thanks a ton! These answers really pointed me in the right direction.

Since the M1 uses ARM64, another option is to let it build arm64 by default and have the Lambda function use that architecture:

docker build -t mydockertag:latest .

Then, when creating the function with the AWS CLI (note that the flag is --architectures, plural):

aws lambda create-function --region <region_name> --architectures arm64 --function-name <function_name> --package-type Image --code ImageUri=<ECR Image URI> --role <iam_role_url>
Qnp answered 26/2, 2022 at 13:50 Comment(0)

If you're using the Serverless Framework, you can solve this by setting the platform parameter:

provider:
  name: aws
  ecr:
    images:
      my_example_image:
        path: ./
        platform: linux/amd64

Docs: https://www.serverless.com/framework/docs/providers/aws/guide/functions#referencing-container-image-as-a-target

Grouping answered 9/6, 2023 at 23:23 Comment(0)