How do I display output from Python application running inside Docker container?
So, I've got some Python code running inside a Docker container. I started up my local environment using Google's gcloud script. I'm seeing basic access-style logs and health-check info, but I'm not sure how to pass the log messages my Python app writes through to the console. Is there a parameter I can set to accomplish this with my gcloud script, or is there something I can set in the Dockerfile that can help?

Warrant answered 4/12, 2014 at 18:37 Comment(4)
You can attach to the running container, or you can use docker logs. You can also attach when starting a container with docker run -a. I hope this information helps you.Unchain
Share the Dockerfile to get more support. Where do the logs go inside the container right now? Generally, print the logs to the console (stdout/stderr) inside the container; then you can use docker logs outside. You can always use docker exec to jump inside and check the logs like a normal app.Deter
Thanks for the help, guys. "docker logs" was what I was looking for. The part I was missing was how to get the IDs of the running containers (docker ps) so I could feed one to the logs command; there's a sketch of that workflow below. If either of you writes this up as an answer, I'll mark it as correct.Warrant
Possible duplicate of Python app does not print anything when running detached in dockerAssumptive
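
A rough sketch of the workflow described in the comments above (the container ID is a placeholder; use whatever docker ps reports for your container):

docker ps                           # list running containers with their IDs and names
docker logs -f <container_id>       # stream that container's stdout/stderr
docker exec -it <container_id> sh   # open a shell inside the container to poke around

Note that docker logs only shows what the application writes to stdout/stderr, so the Python app needs to log there rather than to a file inside the container.
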
For Python to log to your terminal/command line/console when it runs inside a Docker container, you should have this variable set in your docker-compose.yml:

  environment:
    - PYTHONUNBUFFERED=0

This is also a valid solution if you're using print to debug.
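
If you're not using a docker-compose file, a rough equivalent (sketch only; the image and script names here are made up) is to pass the variable on the docker run command line, or to run the interpreter unbuffered from the Dockerfile:

docker run -e PYTHONUNBUFFERED=1 my-python-image

or, in the Dockerfile:

CMD ["python", "-u", "main.py"]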

Fascinate answered 12/6, 2015 at 0:35 Comment(5)
Glad I could be of help :)Fascinate
I'm using docker-compose and one of the containers is running Django. Even though the messages from "manage.py runserver" were being printed (all GET and POST requests were showing), my print messages did not show up until I added this environment variable. Thanks!!Lathan
Thanks, very helpful. You'd think that when you run docker-compose up it would default to showing the same output your code produced before it was Dockerized, but maybe I'm missing something.Charpoy
@Fascinate And if you don't use a docker-compose file, just a single run command?Maynor
- PYTHONUNBUFFERED=1Abroach
(Answer based on the comments)

You don't need to know the container ID if you wrap the app in docker-compose. Just add a docker-compose.yml alongside your Dockerfile. It might sound like an extra level of indirection, but for a simple app it's as trivial as this:

version: "3.3"

services:
  build: .
  python_app:
    environment:
      - PYTHONUNBUFFERED=1

That's it. The benefit of having it is that you don't need to pass a lot of the flags that docker run requires, because they are added automatically. It also simplifies working with volumes and env vars if they become necessary later.

You can then view logs by service name:

docker-compose logs python_app

By the way, I'd rather set PYTHONUNBUFFERED=1 when testing something locally. It disables buffering, which makes logging more deterministic. I ran into a lot of logging problems, for example, when I tried to spin up a gRPC server in my Python app: the logs flushed before the server started were not all of the init logs I wanted to see, and once the server starts you won't see the remaining init logs because logging reattaches to a different/spawned process.
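
As a side note (a sketch, not part of the original answer; the messages and format are just examples), you can also make the Python side explicit about where output goes and when it is flushed, so it shows up in docker logs or docker-compose logs regardless of buffering:

import logging
import sys

# Send log records to stdout so the Docker logging driver captures them.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

logging.info("container is starting")  # visible via docker logs
print("debug message", flush=True)     # flush=True bypasses stdout buffering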

Anastasio answered 22/6, 2021 at 8:18 Comment(0)
