Google Cloud Logging shows Python INFO messages as errors

I have an application running in Kubernetes on Google Cloud. The application is written in Python using FastAPI. Logs from that application are visible via Google Cloud Logging, but their severity appears to be incorrectly translated: while FastAPI's access logs are correctly written with "INFO" severity, any messages written from custom loggers appear as errors, even if they come from a logger.info call.

I pass the following logging config to uvicorn via the --log-config command line option:

{
  "version": 1,
  "disable_existing_loggers": false,
  "formatters": {
    "simple": {
      "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    },
    "default": {
      "()": "uvicorn.logging.DefaultFormatter",
      "datefmt": "%Y-%m-%dT%H:%M:%S",
      "format": "[%(asctime)s.%(msecs)04dZ] %(name)s %(levelprefix)s %(message).400s"
    },
    "access": {
      "()": "uvicorn.logging.AccessFormatter",
      "datefmt": "%Y-%m-%dT%H:%M:%S",
      "format": "[%(asctime)s.%(msecs)04dZ] %(name)s %(levelprefix)s %(message)s"
    }
  },
  "handlers": {
    "default": {
      "formatter": "default",
      "class": "logging.StreamHandler",
      "stream": "ext://sys.stderr"
    },
    "access": {
      "formatter": "access",
      "class": "logging.StreamHandler",
      "stream": "ext://sys.stdout"
    }
  },
  "loggers": {
    "uvicorn.error": {
      "level": "INFO",
      "handlers": ["default"],
      "propagate": false
    },
    "uvicorn.access": {
      "level": "INFO",
      "handlers": ["access"],
      "propagate": false
    },
    "uvicorn": {
      "level": "INFO",
      "handlers": ["default"],
      "propagate": false
    },
    "jsonrpc": {
      "level": "INFO",
      "handlers": ["default"],
      "propagate": false
    },
    "api": {
      "level": "INFO",
      "handlers": ["default"],
      "propagate": false
    }
  },
  "root": {
    "level": "INFO",
    "handlers": ["default"]
  }
}

All the uvicorn* loggers are handled correctly, but the jsonrpc and api loggers always show as "ERROR" in Google Cloud.

Following the documentation for google-cloud-logging, I use the following to set up Cloud Logging:

import google.cloud.logging

# setup_logging() attaches a Cloud Logging handler to the root logger
client = google.cloud.logging.Client()
client.setup_logging()

What am I missing? Is anything wrong with my configuration file? Is this expected or known behavior?

Edit: It appears that part of the issue here is which streams the console handlers write to. Switching the default handler to stdout and adding an error handler that writes to stderr seems to solve the problem; a sketch of that change follows.
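
A minimal sketch of what that change might look like in the handlers section of the config above (the error handler's name and its ERROR threshold are illustrative, not spelled out above):

"handlers": {
  "default": {
    "formatter": "default",
    "class": "logging.StreamHandler",
    "stream": "ext://sys.stdout"
  },
  "error": {
    "formatter": "default",
    "class": "logging.StreamHandler",
    "level": "ERROR",
    "stream": "ext://sys.stderr"
  },
  "access": {
    "formatter": "access",
    "class": "logging.StreamHandler",
    "stream": "ext://sys.stdout"
  }
}

Each logger's "handlers" list would then presumably reference both "default" and "error".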

Delirium answered 11/7, 2022 at 9:4 Comment(5)
Have you tried this solution?Wagonlit
Thanks @JavierRoger, but that doesn't really do what I want. For one, I would need to set up stackdriver and fluentd to get this to work and secondly, it seems to select severity based on stderr/stdout, meaning that warnings would still be either info or error depending on where they get sent.Delirium
Also having this issue: import logging; logging.basicConfig(level=logging.INFO); logger = logging.getLogger(__name__); logging.info("informational message")  # shows as error in google cloud logs. Even though it's clearly an info message, not an error: "INFO:root:informational message"Fr
@AlexChoi it's a year later, did you fix this? I'm thinking of using print() over logger...Arciniega
@bmiller, it's working now. I am not entirely sure what I did, but I made a change a few months back on the default logger to go to stdout and I added an extra error logger that goes to stderr. The date of that commit fits well with when it started working and I stopped thinking about it. Sorry for not reporting back.Delirium

Posting here as I had a similar issue with all logs being shown with ERROR severity for a Docker container deployed behind an endpoint in Vertex AI, and none of the existing answers solved it for me. The following code fixed the issue:

import logging
import sys

# Send INFO-and-above records to stdout; Cloud Logging treats stdout as INFO
info_stream_handler = logging.StreamHandler(stream=sys.stdout)
info_stream_handler.setLevel(logging.INFO)

cloud_logger = logging.getLogger('cloudLogger')
cloud_logger.setLevel(logging.INFO)
cloud_logger.addHandler(info_stream_handler)
cloud_logger.propagate = False  # don't pass records up to the root logger

cloud_logger.info("Info Log Message")

To fix the issue, I created a StreamHandler, specified that INFO logs are sent to stdout instead of stderr, and added this handler to a dedicated cloudLogger logger using the addHandler() method.

After this, one final issue I had was duplicate logs on GCP: one with the correct INFO level, and a duplicate of the same log with ERROR severity.

The reason for this is that the logs were being passed to the INFO StreamHandler, which correctly wrote them to stdout, but due to the nature of the Python logging module, they were also passed up the logger "ancestry" to the root logger, which wrote the same logs to stderr, causing the duplication. The Python docs for the logging module explain this: when a logger's propagate attribute is true, events logged to it are passed to the handlers of its ancestor loggers in addition to any handlers attached to the logger itself.

The fix for this was to set cloud_logger.propagate to False, so logs are not passed to ancestor loggers.
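
A tiny self-contained demo of that propagation behavior (the logger and handler names are just for illustration):

import logging
import sys

root = logging.getLogger()
root.addHandler(logging.StreamHandler(sys.stderr))   # root writes to stderr

child = logging.getLogger("cloudLogger")
child.addHandler(logging.StreamHandler(sys.stdout))  # child writes to stdout

child.warning("hello")  # emitted twice: by the child's handler and, via propagation, by the root's
child.propagate = False
child.warning("again")  # emitted once: propagation to the root logger is disabled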

Tacho answered 21/11, 2023 at 10:22 Comment(2)
Hi @melcon, that looks good. Can you explain a bit what you are doing here? I haven't tried it yet (may not do so anytime soon), but would like to understand the logic here.Delirium
Hi @Ingo, updated my answer with additional context/explanation, hope this helps.Tacho

The documentation says:

Severities: By default, logs written to the standard output are on the INFO level and logs written to the standard error are on the ERROR level. Structured logs can include a severity field, which defines the log's severity.
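
In other words, when stdout is collected by the Cloud Logging agent, printing one JSON object per line with a severity field is already enough to set the level explicitly; a minimal sketch:

import json

# A single JSON line on stdout with a "severity" field is parsed as a
# structured log at that level by the Cloud Logging agent.
print(json.dumps({"severity": "WARNING", "message": "structured log with explicit severity"}))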

So @melcon's solution does not actually fix the severities: it just makes every log show up as INFO (because the StreamHandler writes to stdout). Instead, you can emit structured logs with an explicit severity field, for example via a custom StructuredLogHandler:

import logging
import sys
import json
from google.cloud import logging as cloud_logging
from google.cloud.logging_v2.handlers import CloudLoggingHandler

class StructuredLogHandler(logging.StreamHandler):
    def emit(self, record):
        # Emit one JSON object per line so the logging agent reads the
        # "severity" field instead of inferring severity from the stream.
        log_entry = {
            'message': record.getMessage(),
            'severity': record.levelname
        }
        self.stream.write(json.dumps(log_entry) + '\n')
        self.flush()

# Initialize the Cloud Logging client
client = cloud_logging.Client()
cloud_handler = CloudLoggingHandler(client)

# Create a logger
logger = logging.getLogger('cloudLogger')
logger.setLevel(logging.DEBUG)  # Capture all levels of logging

# Create a structured log handler for stdout
structured_handler = StructuredLogHandler(stream = sys.stdout)
structured_handler.setLevel(logging.DEBUG)  # Capture all levels of logging

# Add the handlers to the logger
logger.addHandler(structured_handler)
logger.addHandler(cloud_handler)
logger.propagate = False  # avoid duplicate entries via the root logger's handlers

# Sample log messages with different severities
logger.debug("This is a DEBUG message")
logger.info("This is an INFO message")
logger.warning("This is a WARNING message")
logger.error("This is an ERROR message")
logger.critical("This is a CRITICAL message")

So the logs will appear in Google Cloud as:

[Screenshot: the log entries in the Logs Explorer, each with its correct severity]

Steffie answered 27/5 at 13:56 Comment(0)
