Google Cloud Functions Python Logging issue

I'm not sure how to say this, but I feel like something was changed under the hood by Google without me knowing about it. I used to get the logs from my Python Cloud Functions in the Google Cloud Console, within the Logging dashboard. And now it has just stopped working.

So after investigating for a long time, I boiled it down to a log hello-world Python Cloud Function:

import logging

def cf_endpoint(req):
    logging.debug('log debug')
    logging.info('log info')
    logging.warning('log warning')
    logging.error('log error')
    logging.critical('log critical')
    return 'ok'

This is the main.py that I deploy as a Cloud Function with an HTTP trigger.

Since I had a log-ingestion exclusion filter on all "debug"-level logs, I wasn't seeing anything in the Logging dashboard. But when I removed it, I discovered this:

logging dashboard screenshot

So it seems that whatever was parsing the Python built-in log records into Stackdriver has stopped parsing the log severity parameter! I'm sorry if I look stupid, but that's the only thing I can think of :/

Do you have any explanation or solution for this? Am I doing it the wrong way?

Thank you in advance for your help.

UPDATE 2022/01:

The output now looks, for example, like:

[INFO]: Connecting to DB ... 

And the drop-down menu for the severity looks like:

severity drop-down screenshot

With "Default" as the filter that is needed to show the Python logging logs, which means to show just any log available, and all of the Python logs are under "Default", the severity is still dropped.

Osburn answered 11/1, 2019 at 13:26 Comment(3)
Same issue here, the level is not set anymore - it used to work. A print() does not set the level (severity) to INFO either, though it should per cloud.google.com/functions/docs/monitoring/logging#writing_logs. Did you report it to the issue tracker already?Starch
Me too; I even tried json-format log payloads with "severity":"{INFO,ERROR,etc.}", to no avail. All show up with the useless "Any" level.Stealth
FYI - issuetracker.google.com/issues/124403972 + I created a GCP support case as this is clearly a bug to me.Starch

Stackdriver Logging severity filters are no longer supported when using the Python native logging module.

However, you can still create logs with a certain severity by using the Stackdriver Logging client libraries. Check this documentation for a reference to the Python libraries, and this one for some use-case examples.

Note that in order to file the logs under the correct resource, you will have to configure it manually; see this list for the supported resource types. Each resource type also has some required labels that need to be present in the log structure.

As an example, the following code writes a log with ERROR severity to the Cloud Function resource in Stackdriver Logging:

from google.cloud import logging
from google.cloud.logging.resource import Resource

log_client = logging.Client(project="YOUR-PROJECT-ID")

# The name of the log to write to
log_name = 'cloudfunctions.googleapis.com%2Fcloud-functions'

# Inside the resource, nest the required labels specific to the resource type
res = Resource(type="cloud_function",
               labels={
                   "function_name": "YOUR-CLOUD-FUNCTION-NAME",
                   "region": "YOUR-FUNCTION-LOCATION"
               },
              )
logger = log_client.logger(log_name)
logger.log_struct(
    {"message": "message string to log"}, resource=res, severity='ERROR')

return 'Wrote logs to {}.'.format(logger.name)  # Return the Cloud Function response

Note that the strings YOUR-CLOUD-FUNCTION-NAME, YOUR-FUNCTION-LOCATION and YOUR-PROJECT-ID need to be replaced with values specific to your project/resource.

Tibetan answered 11/2, 2019 at 10:27 Comment(5)
Thanks for the answer Joan. Frankly, I'm a bit disappointed with Google on this one. This change is very developer unfriendly and a big step back for basic severity support in CF logs. The documentation has not been updated, and a fundamental feature of cloud functions has been made much more verbose, unintuitive, and cumbersome to configure. Any inside information regarding if this is a temporary change or just how we should expect it will be going forward?Stealth
Aside from the logging - which is upsetting - printing to the standard or error console does not affect the severity either. Is G removing that functionality too, and only for Python?? Using the Logging library is not only cumbersome but it also requires monitoring the SD logs when testing/developing the function locally, which is absurd.Starch
Thanks for saving my time. Btw, I found using the reserved environment variables handier: labels={"function_name": os.getenv('FUNCTION_NAME'), "region": os.getenv('FUNCTION_REGION')}. Also, I tried without .format("YOUR-PROJECT-ID") and it still worksPolyploid
Note that this doesn't work with google-cloud-logging==2.0.0; it works with google-cloud-logging<2.0.0. from google.cloud.logging.resource import Resource is not a module in version 2 anymoreCapricorn
@Capricorn True, but it seems I cannot import Resource anymore even from google-cloud-logging<=2.0.0. This is despite this doc still importing Resource in 2.6.0 (latest)Ruvolo

I encountered the same issue.

In the link that @Joan Grau shared, I also see there is a way to integrate the cloud logger with the Python logging module, so that you can use the Python root logger as usual, and all logs will be sent to StackDriver Logging.

https://googleapis.github.io/google-cloud-python/latest/logging/usage.html#integration-with-python-logging-module

...

I tried it and it works. In short, you can do it in two ways.

One simple way is to bind the cloud logger to the root logging module:

from google.cloud import logging as cloudlogging
import logging

lg_client = cloudlogging.Client()
# Attach the handler to the root Python logger, so that e.g. a plain
# logging.warning() call is sent to Stackdriver Logging, as well as calls
# from any other loggers created.
lg_client.setup_logging(log_level=logging.INFO)

Alternatively, you can set up a logger with more fine-grained control:

from google.cloud import logging as cloudlogging
import logging
lg_client = cloudlogging.Client()

lg_handler = lg_client.get_default_handler()
cloud_logger = logging.getLogger("cloudLogger")
cloud_logger.setLevel(logging.INFO)
cloud_logger.addHandler(lg_handler)
cloud_logger.info("test out logger carrying normal news")
cloud_logger.error("test out logger carrying bad news")
Torgerson answered 8/5, 2019 at 20:35 Comment(4)
The second solution is helpful, thank you. However, the logs do not differentiate between warning and error.Quyenr
this also does not log exceptions and tracebacksArdellaardelle
the second method worked for me, even if it's not perfect. I could only get info and error levels to workRuvolo
This doesn't map levels properly. Warning is being considered as error, which is not a good idea as my code started failing on warning.Jarlath

Not wanting to deal with the cloud logging libraries, I created a custom Formatter that emits a structured log with the right fields, as cloud logging expects:

import json
import logging

class CloudLoggingFormatter(logging.Formatter):
    """Produces messages compatible with google cloud logging"""
    def format(self, record: logging.LogRecord) -> str:
        s = super().format(record)
        return json.dumps(
            {
                "message": s,
                "severity": record.levelname,
                "timestamp": {"seconds": int(record.created), "nanos": 0},
            }
        )

Attaching this handler to a logger results in logs being parsed and shown properly in the Logging console. In Cloud Functions I would configure the root logger to send to stdout and attach the formatter to it:

import logging
import sys

# setup logging
root = logging.getLogger()
handler = logging.StreamHandler(sys.stdout)
formatter = CloudLoggingFormatter(fmt="[%(name)s] %(message)s")
handler.setFormatter(formatter)
root.addHandler(handler)
root.setLevel(logging.DEBUG)
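To sanity-check the formatter without touching GCP at all, you can point the handler at an in-memory buffer and inspect the JSON line it would emit (the logger name "demo" below is just for illustration):

```python
import io
import json
import logging

class CloudLoggingFormatter(logging.Formatter):
    """Produces messages compatible with google cloud logging"""
    def format(self, record: logging.LogRecord) -> str:
        s = super().format(record)
        return json.dumps(
            {
                "message": s,
                "severity": record.levelname,
                "timestamp": {"seconds": int(record.created), "nanos": 0},
            }
        )

# Write into an in-memory buffer instead of sys.stdout so we can inspect the output
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(CloudLoggingFormatter(fmt="[%(name)s] %(message)s"))
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.warning("disk almost full")

# Each record becomes one JSON object with the fields Cloud Logging parses
entry = json.loads(buf.getvalue())
print(entry["severity"])  # WARNING
print(entry["message"])   # [demo] disk almost full
```

In a deployed function the handler would write to sys.stdout instead, and the Cloud Functions log agent picks the severity out of the structured line.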
Snowblink answered 29/10, 2021 at 10:38 Comment(2)
This looks like a nice solution, thank you. How would you differentiate between local/dev envs which shouldn't log to GCP with production envs?Abvolt
This doesn't log to GCP, it logs to stdout. GCP picks up the logs properly because of the structured format. What you describe may be a problem if you use cloud logging libraries, not this.Snowblink

From the Python 3.8 runtime onwards, you can simply print a JSON structure with severity and message properties. For example:

import json

print(
    json.dumps(
        dict(
            severity="ERROR",
            message="This is an error message",
            custom_property="I will appear inside the log's jsonPayload field",
        )
    )
)
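To see the exact line this emits, you can capture stdout locally (the redirect is purely for demonstration; in a Cloud Function you would just print):

```python
import io
import json
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):
    print(
        json.dumps(
            dict(
                severity="ERROR",
                message="This is an error message",
                custom_property="I will appear inside the log's jsonPayload field",
            )
        )
    )

# One JSON object per line, one log entry per object
entry = json.loads(buf.getvalue())
print(entry["severity"])         # ERROR
print(entry["custom_property"])  # I will appear inside the log's jsonPayload field
```

Because json.dumps escapes embedded newlines, multi-line messages still arrive as a single structured entry rather than being split across log lines.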

Official documentation: https://cloud.google.com/functions/docs/monitoring/logging#writing_structured_logs

Inhambane answered 16/9, 2021 at 3:5 Comment(1)
Just want to bump this answer :)Ester

To use the standard Python logging module on GCP (tested on Python 3.9), you can do the following:

import google.cloud.logging
logging_client = google.cloud.logging.Client()
logging_client.setup_logging()

import logging
logging.warning("A warning")

See also: https://cloud.google.com/logging/docs/setup/python

Balcer answered 1/6, 2022 at 9:25 Comment(3)
But 'logging.debug("A debug")' still doesn't seem to work.Bait
you probably need to set the level, the default level is info. Maybe this helps: https://mcmap.net/q/64563/-set-logging-levelsBalcer
logging.getLogger().setLevel(logging.DEBUG) should workFafnir

I use a very simple custom logging function to log to Cloud Logging:

import json

def cloud_logging(severity, message):
    print(json.dumps({"severity": severity, "message": message}))

cloud_logging(severity="INFO", message="Your logging message")
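A sketch of how this behaves, with stdout captured only to show the emitted lines (the severity strings follow the LogSeverity names Cloud Logging accepts, e.g. DEBUG, INFO, WARNING, ERROR, CRITICAL):

```python
import io
import json
from contextlib import redirect_stdout

def cloud_logging(severity, message):
    print(json.dumps({"severity": severity, "message": message}))

buf = io.StringIO()
with redirect_stdout(buf):
    cloud_logging(severity="INFO", message="Your logging message")
    cloud_logging(severity="ERROR", message="Something went wrong")

# Each call produced one structured log line
lines = buf.getvalue().splitlines()
print(json.loads(lines[0])["severity"])  # INFO
print(json.loads(lines[1])["severity"])  # ERROR
```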
Balcer answered 25/5, 2022 at 15:48 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.