FastAPI UploadFile is slow compared to Flask

I have created an endpoint, as shown below:

from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/report/upload")
def create_upload_files(files: UploadFile = File(...)):
    try:
        with open(files.filename, 'wb+') as wf:
            wf.write(files.file.read())
    except Exception as e:
        return {"error": str(e)}

It is launched with uvicorn:

../venv/bin/uvicorn test_upload:app --host=0.0.0.0 --port=5000 --reload

I am performing some tests of uploading a file of around 100 MB using Python requests, and that takes around 128 seconds:

import sys
import time
import binascii
import requests

f = open(sys.argv[1], "rb").read()
hex_convert = binascii.hexlify(f)
items = {"files": hex_convert.decode()}
start = time.time()
r = requests.post("http://192.168.0.90:5000/report/upload",files=items)
end = time.time() - start
print(end)

I tested the same upload script with an API endpoint using Flask, and it takes around 0.5 seconds:

from flask import Flask, request
app = Flask(__name__)


@app.route('/uploader', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        f = request.files['file']
        f.save(f.filename)
        return 'file uploaded successfully'

if __name__ == '__main__':
    app.run(host="192.168.0.90", port=9000)

Is there anything I am doing wrong?

Coston asked 17/12, 2020 at 14:42

You could write the file(s) using synchronous writing, after defining the endpoint with a normal def, as shown in this answer, or using asynchronous writing (using aiofiles), after defining the endpoint with async def. UploadFile methods are async methods, and thus, you would need to await them; examples are given below. For more details on def vs async def, and how choosing one over the other may affect your API's performance (depending on the nature of the tasks performed inside an endpoint), please have a look at this answer.
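
For the first option, a minimal sketch of the synchronous variant is given below (the /upload-sync path is just for illustration). Since the endpoint is defined with a normal def, FastAPI runs it in an external threadpool, so the blocking file I/O does not block the event loop:

from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/upload-sync")
def upload_sync(file: UploadFile = File(...)):
    try:
        # Blocking read/write is acceptable here: a normal `def` endpoint
        # runs in FastAPI's external threadpool, not on the event loop.
        contents = file.file.read()
        with open(file.filename, 'wb') as f:
            f.write(contents)
    except Exception:
        return {"message": "There was an error uploading the file"}
    finally:
        file.file.close()

    return {"message": f"Successfully uploaded {file.filename}"}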

Upload Single File

app.py

from fastapi import File, UploadFile
import aiofiles

@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    try:
        contents = await file.read()
        async with aiofiles.open(file.filename, 'wb') as f:
            await f.write(contents)
    except Exception:
        return {"message": "There was an error uploading the file"}
    finally:
        await file.close()

    return {"message": f"Successfuly uploaded {file.filename}"}
Read the File in chunks

As explained in this answer, FastAPI/Starlette, under the hood, uses a SpooledTemporaryFile with the max_size attribute set to 1 MB, meaning that data are spooled in memory until the file size exceeds 1 MB, at which point the contents are written to a temporary file on disk; hence, calling await file.read() would actually read the data from disk into memory (if the uploaded file was larger than 1 MB). Thus, you might want to read and write the file in chunks, to avoid loading the entire file into memory, which may cause issues; if, for example, you have 8 GB of RAM, you can't load a 50 GB file (not to mention that the available RAM will always be less than the total amount installed, as the OS and other applications running on your machine will use some of it). Hence, in that case, you should rather load the file into memory in chunks and process the data one chunk at a time. This method, however, may take longer to complete, depending on the chunk size you choose; below, that is 1024 * 1024 bytes (= 1 MB). You can adjust the chunk size as desired.

from fastapi import File, UploadFile
import aiofiles

@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    try:
        async with aiofiles.open(file.filename, 'wb') as f:
            while contents := await file.read(1024 * 1024):
                await f.write(contents)
    except Exception:
        return {"message": "There was an error uploading the file"}
    finally:
        await file.close()

    return {"message": f"Successfuly uploaded {file.filename}"}

Alternatively, you could use shutil.copyfileobj(), which copies the contents of a file-like object to another file-like object (see this answer as well). By default, the data is read in chunks, with the default buffer (chunk) size being 1 MB (i.e., 1024 * 1024 bytes) for Windows and 64 KB for other platforms (see the source code here). You can specify the buffer size by passing the optional length parameter. Note: if a negative length value is passed, the entire contents of the file will be read; see the f.read() documentation as well, which .copyfileobj() uses under the hood. The source code of .copyfileobj() can be found here; there isn't really anything different from the previous approach in reading/writing the file contents. However, .copyfileobj() uses blocking I/O operations behind the scenes, which would block the event loop (if used directly inside an async def endpoint). Thus, to avoid that, you could use Starlette's run_in_threadpool() to run all the needed functions in a separate thread (that is then awaited), ensuring that the main thread (where coroutines are run) does not get blocked. The same function is used by FastAPI internally when you call the async methods of the UploadFile object, i.e., .write(), .read(), .close(), etc.; see the source code here. Example:

from fastapi import File, UploadFile
from fastapi.concurrency import run_in_threadpool
import shutil
        
@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    try:
        f = await run_in_threadpool(open, file.filename, 'wb')
        await run_in_threadpool(shutil.copyfileobj, file.file, f)
    except Exception:
        return {"message": "There was an error uploading the file"}
    finally:
        if 'f' in locals(): await run_in_threadpool(f.close)
        await file.close()

    return {"message": f"Successfuly uploaded {file.filename}"}

test.py

import requests

url = 'http://127.0.0.1:8000/upload'
file = {'file': open('images/1.png', 'rb')}
r = requests.post(url=url, files=file) 
print(r.json())

For an HTML <form> example, see here.

Upload Multiple Files

app.py

from typing import List
from fastapi import File, UploadFile
import aiofiles

@app.post("/upload")
async def upload(files: List[UploadFile] = File(...)):
    for file in files:
        try:
            contents = await file.read()
            async with aiofiles.open(file.filename, 'wb') as f:
                await f.write(contents)
        except Exception:
            return {"message": "There was an error uploading the file(s)"}
        finally:
            await file.close()

    return {"message": f"Successfuly uploaded {[file.filename for file in files]}"}  
Read the Files in chunks

To read the file(s) in chunks instead, see the approaches described earlier in this answer.
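
For instance, a minimal sketch that combines the loop over the uploaded files with the chunked reading/writing shown earlier (the 1 MB chunk size is, again, adjustable):

from typing import List
from fastapi import File, UploadFile
import aiofiles

@app.post("/upload")
async def upload(files: List[UploadFile] = File(...)):
    for file in files:
        try:
            async with aiofiles.open(file.filename, 'wb') as f:
                # read/write 1 MB at a time, so no file is fully loaded into memory
                while contents := await file.read(1024 * 1024):
                    await f.write(contents)
        except Exception:
            return {"message": "There was an error uploading the file(s)"}
        finally:
            await file.close()

    return {"message": f"Successfully uploaded {[file.filename for file in files]}"}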

test.py

import requests

url = 'http://127.0.0.1:8000/upload'
files = [('files', open('images/1.png', 'rb')), ('files', open('images/2.png', 'rb'))]
r = requests.post(url=url, files=files) 
print(r.json())

For an HTML <form> example, see here.

Update

Digging into the source code, it seems that the latest versions of Starlette (which FastAPI uses underneath) use a SpooledTemporaryFile (for the UploadFile data structure) with the max_size attribute set to 1 MB (1024 * 1024 bytes) - see here - in contrast to older versions, where max_size was set to the default value, i.e., 0 bytes, such as the one here.

The above means that, in the past, data used to be fully loaded into memory regardless of file size (which could lead to issues if a file couldn't fit into RAM), whereas, in the latest version, data is spooled in memory until the file size exceeds max_size (i.e., 1 MB), at which point the contents are written to disk; more specifically, to the OS's temporary directory (Note: this also means that the maximum size of file you can upload is bound by the storage available to the system's temporary directory. If enough storage (for your needs) is available on your system, there's nothing to worry about; otherwise, please have a look at this answer on how to change the default temporary directory; a minimal sketch is also given below). Thus, the process of writing the file multiple times—that is, initially loading the data into RAM, then, if the data exceeds 1 MB in size, writing the contents to a temporary file on disk, then reading the file from the temporary directory (using file.read()) and, finally, writing the file to a permanent directory—is what makes uploading a file slow compared to the Flask framework, as the OP noted in their question (though the difference in time is not that big, but rather a few seconds, depending on the file size).
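
On that last point, a minimal sketch of changing where the spooled data rolls over to disk: Starlette's SpooledTemporaryFile relies on Python's tempfile module, which honours tempfile.tempdir (the path below is purely hypothetical):

import tempfile

# Must be set before any upload is processed; Starlette's SpooledTemporaryFile
# uses Python's tempfile module, so rollover files will land in this directory.
tempfile.tempdir = '/mnt/big_disk/tmp'  # hypothetical path with enough free space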

Solution

The solution (if one needs to upload files considerably larger than 1 MB, and the uploading time is important to them) would be to access the request body as a stream. As per the Starlette documentation, if you access .stream(), the byte chunks are provided without storing the entire body in memory (and later in the temporary directory, if the body contains file data that exceeds 1 MB). An example is given below, where the upload time is recorded on the client side, and ends up being about the same as when using the Flask framework with the example given in the OP's question.

app.py

from fastapi import Request
import aiofiles

@app.post('/upload')
async def upload(request: Request):
    try:
        filename = request.headers['filename']
        async with aiofiles.open(filename, 'wb') as f:
            async for chunk in request.stream():
                await f.write(chunk)
    except Exception:
        return {"message": "There was an error uploading the file"}
     
    return {"message": f"Successfuly uploaded {filename}"}

In case your application does not require saving the file to disk, and all you need is for the file to be loaded directly into memory, you can just use the example below (make sure your RAM has enough space available to accommodate the accumulated data):

from fastapi import Request

@app.post('/upload')
async def upload(request: Request):
    body = b''
    try:
        filename = request.headers['filename']
        async for chunk in request.stream():
            body += chunk
    except Exception:
        return {"message": "There was an error uploading the file"}
    
    #print(body.decode())
    return {"message": f"Successfuly uploaded {filename}"}

test.py

import requests
import time

with open("images/1.png", "rb") as f:
    data = f.read()
   
url = 'http://127.0.0.1:8000/upload'
headers = {'filename': '1.png'}

start = time.time()
r = requests.post(url=url, data=data, headers=headers)
end = time.time() - start

print(f'Elapsed time is {end} seconds.', '\n')
print(r.json())

In case you had to upload a rather large file that wouldn't fit into your client's RAM (if, for instance, you had 2 GB of available RAM on the client's device and attempted to load a 4 GB file), you should rather use a streaming upload on the client side as well, which would allow you to send large streams or files without reading them into memory. It might take a bit more time to upload, depending on the chunk size; you may customise that by reading the file in chunks yourself and setting the chunk size as desired (a sketch of that is given after the two examples below). Examples are given for both Python requests and httpx (which might yield better performance than requests).

test.py (using requests)

import requests
import time

url = 'http://127.0.0.1:8000/upload'
headers = {'filename': '1.png'}

start = time.time()

with open("images/1.png", "rb") as f:
    r = requests.post(url=url, data=f, headers=headers)
   
end = time.time() - start

print(f'Elapsed time is {end} seconds.', '\n')
print(r.json())

test.py (using httpx)

import httpx
import time

url = 'http://127.0.0.1:8000/upload'
headers = {'filename': '1.png'}

start = time.time()

with open("images/1.png", "rb") as f:
    r = httpx.post(url=url, data=f, headers=headers)
   
end = time.time() - start

print(f'Elapsed time is {end} seconds.', '\n')
print(r.json())
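
As for controlling the chunk size mentioned above, a minimal sketch using a generator with requests is given below; requests sends a generator body using chunked transfer encoding, and both the 1 MB chunk size and the gen_file_chunks() helper are illustrative:

test.py (using requests with a custom chunk size)

import requests
import time

url = 'http://127.0.0.1:8000/upload'
headers = {'filename': '1.png'}

def gen_file_chunks(path, chunk_size=1024 * 1024):
    # yield the file in 1 MB chunks, so only one chunk is held in memory at a time
    with open(path, 'rb') as f:
        while chunk := f.read(chunk_size):
            yield chunk

start = time.time()
r = requests.post(url=url, data=gen_file_chunks('images/1.png'), headers=headers)
end = time.time() - start

print(f'Elapsed time is {end} seconds.', '\n')
print(r.json())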

For more details and code examples (on uploading multiple Files and Form/JSON data) based on the approach above (i.e., using request.stream() method), please have a look at this answer.
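
As a rough illustration of that approach (parsing multipart data from the stream without buffering the whole body), below is a minimal sketch that assumes the third-party streaming-form-data package; the endpoint path, the output filename and the 'file'/'data' field names are all illustrative:

from fastapi import Request
from streaming_form_data import StreamingFormDataParser
from streaming_form_data.targets import FileTarget, ValueTarget

@app.post('/upload-multipart-stream')
async def upload_multipart_stream(request: Request):
    # register a target per expected multipart field: the file part is written
    # straight to disk, while the 'data' form field is collected in memory
    parser = StreamingFormDataParser(headers=request.headers)
    file_target = FileTarget('uploaded_file.bin')   # hypothetical output path
    data_target = ValueTarget()
    parser.register('file', file_target)
    parser.register('data', data_target)

    # feed the request body to the parser chunk by chunk
    async for chunk in request.stream():
        parser.data_received(chunk)

    return {"message": "Upload complete", "data": data_target.value.decode()}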

Trinitytrinket answered 11/1, 2022 at 13:20
