I am trying to download a large file (.tar.gz) from a FastAPI backend. On the server side, I simply validate the file path and then use Starlette's FileResponse to return the whole file, just like in many related questions on StackOverflow.
Server side:
return FileResponse(path=file_name, media_type='application/octet-stream', filename=file_name)
After that, I get the following error:
File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 149, in serialize_response
return jsonable_encoder(response_content)
File "/usr/local/lib/python3.10/dist-packages/fastapi/encoders.py", line 130, in jsonable_encoder
return ENCODERS_BY_TYPE[type(obj)](obj)
File "pydantic/json.py", line 52, in pydantic.json.lambda
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
(Note that 0x1f 0x8b is the gzip magic number, so the 0x8b at position 1 means the encoder is trying to JSON-serialize the raw binary file content as UTF-8 text.) I also tried using StreamingResponse, but got the same error. Is there any other way to do it?
The StreamingResponse in my code:
@x.post("/download")
async def download(file_name=Body(), token: str | None = Header(default=None)):
file_name = file_name["file_name"]
# should be something like xx.tar
def iterfile():
with open(file_name,"rb") as f:
yield from f
return StreamingResponse(iterfile(),media_type='application/octet-stream')
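For completeness, here is a client-side sketch for consuming this endpoint (the host, token value, and file name are placeholders, not from my real setup); stream=True together with iter_content() writes the archive to disk without holding it all in memory:

import requests

# hypothetical host and token; adjust to your deployment
with requests.post(
    "http://localhost:8000/download",
    json={"file_name": "xx.tar"},
    headers={"token": "secret"},
    stream=True,  # fetch the body lazily instead of buffering it
) as resp:
    resp.raise_for_status()
    with open("xx.tar", "wb") as out:
        for chunk in resp.iter_content(chunk_size=1024 * 1024):
            out.write(chunk)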
OK, here is an update on this problem. I found that the error did not occur in this API, but in the API that forwards requests to it:
@("/")
def f():
req = requests.post(url ="/download")
return req.content
And here, when the /download API returned a StreamingResponse with the .tar file, returning req.content led to (maybe) encoding problems: FastAPI tries to JSON-encode the returned bytes, which means decoding them as UTF-8, and that fails on binary data.
When using requests like this, remember to set the same media type. Here it is media_type='application/octet-stream'. And it works!
return StreamingResponse(iterfile(), media_type='application/octet-stream')
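For reference, a minimal sketch of the whole forwarding endpoint; the upstream URL and the 1 MiB chunk size are assumptions for illustration. stream=True keeps requests from buffering the entire archive in memory before re-sending it:

import requests
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.post("/")
def forward():
    # fetch the upstream body lazily instead of loading it all at once
    upstream = requests.post("http://backend:8000/download", stream=True)
    return StreamingResponse(
        upstream.iter_content(chunk_size=1024 * 1024),  # re-yield 1 MiB chunks
        media_type='application/octet-stream',
    )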
Comments:

And I still got the error No json object could be decoded when downloading a tar file. – Wilheminawilhide

Have you tried setting media_type='application/octet-stream' for the StreamingResponse to indicate that it's binary data? Do you have the example code that fails? – Wishbone

I found that yield from f could use a large amount of CPU. How can I solve it? Maybe the reason is that the chunk size is small, which leads to a massive number of file operations? Can I increase the chunk size here? – Wilheminawilhide
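One way to address that, sketched under the assumption that the overhead comes from many small reads: iterating a file object (which is what yield from f does, even in binary mode) yields line by line, so binary data can produce lots of tiny chunks. Reading fixed, larger blocks avoids that; CHUNK_SIZE is a made-up tuning knob, not a FastAPI setting:

CHUNK_SIZE = 1024 * 1024  # 1 MiB per read; tune to taste

def iterfile():
    with open(file_name, "rb") as f:
        # read fixed-size blocks instead of iterating "lines"
        while chunk := f.read(CHUNK_SIZE):
            yield chunk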