Log 'jsonPayload' in Firebase Cloud Functions

TL;DR:

Does anyone know if it's possible to use console.log in a Firebase/Google Cloud Function to log entries to Stackdriver with the jsonPayload property, so my logs are searchable? Currently anything I pass to console.log gets stringified into textPayload.


I have a multi-module project with some code running on Firebase Cloud Functions and some running in other environments like Google Compute Engine. Simplifying things a little, I essentially have a 'core' module, and then I deploy a 'cloud-functions' module to Cloud Functions, a 'backend-service' module to GCE, and so on, all of which depend on 'core'.

I'm using bunyan for logging throughout my 'core' module, and when deployed to GCE the logger is configured using '@google-cloud/logging-bunyan' so my logs go to Stackdriver.
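
For reference, in 'core' the GCE-side logger is wired up roughly like this (a simplified sketch following the @google-cloud/logging-bunyan docs; the logger name is illustrative):

const bunyan = require('bunyan');
const { LoggingBunyan } = require('@google-cloud/logging-bunyan');

// Stream that ships bunyan records to Stackdriver Logging
const loggingBunyan = new LoggingBunyan();

const logger = bunyan.createLogger({
  name: 'core', // illustrative
  streams: [
    { stream: process.stdout, level: 'info' }, // local stdout
    loggingBunyan.stream('info'),              // Stackdriver
  ],
});

logger.info({ id: 1, score: 100 }, 'example structured log');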

Aside: Using this configuration in Google Cloud Functions is causing issues with Error: Endpoint read failed which I think is due to functions not going cold and trying to reuse dead connections, but I'm not 100% sure what the real cause is.

So now I'm trying to log using console.log(arg) where arg is an object, not a string. I want this object to appear in Stackdriver under jsonPayload, but it's being stringified and put into the textPayload field.
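
For illustration, this is all I'm doing (the object is just an example):

// The logged object gets stringified and lands in textPayload,
// not in a structured jsonPayload.
console.log({ id: 1, score: 100 });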

Brawl answered 9/3, 2018 at 2:45 Comment(3)
Did you go through this? cloud.google.com/logging/docs/setup/nodejs – Derayne
Yes, that's what my "Aside: ..." is referring to. I could try to debug why the Error: Endpoint read failed is happening, but that would probably take just as long as following their documentation on using console.log, as described here: cloud.google.com/functions/docs/monitoring/logging – Brawl
It seems console.log(arg) doesn't support jsonPayload. You should stick with Bunyan and debug the error. – Harrier

It took me a while, but I finally came across this example in the firebase/functions-samples repository. In the end I settled on something a bit like this:

const Logging = require('@google-cloud/logging');
// Note: newer versions of the library export the class by name instead:
// const { Logging } = require('@google-cloud/logging');
const logging = new Logging();
const log = logging.log('my-func-logger');
const logMetadata = {
  resource: {
    type: 'cloud_function',
    labels: {
      function_name: process.env.FUNCTION_NAME,
      project: process.env.GCLOUD_PROJECT,
      region: process.env.FUNCTION_REGION
    },
  },
};
const logData = { id: 1, score: 100 };
const entry = log.entry(logMetadata, logData);
log.write(entry);

You can add a string severity property to logMetadata (e.g. "INFO" or "ERROR"); see the LogSeverity reference in the Cloud Logging docs for the list of possible values.
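
For example, a minimal sketch building on the snippet above (the payload fields are just illustrative):

// Same log/logMetadata as above; severity goes on the entry metadata.
const errorMetadata = { ...logMetadata, severity: 'ERROR' };
const errorEntry = log.entry(errorMetadata, { event: 'payment-failed', id: 1 });
log.write(errorEntry);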


Update for available node 10 env vars. These seem to do the trick:

labels: {
  function_name: process.env.FUNCTION_TARGET,
  project: process.env.GCP_PROJECT,
  region: JSON.parse(process.env.FIREBASE_CONFIG).locationId
}

UPDATE: Looks like for Node 10 runtimes they want you to set env values explicitly during deploy. I guess there has been a grace period in place because my deployed functions are still working.

Aneroidograph answered 25/5, 2018 at 19:17 Comment(10)
This is the route I ended up taking. I'm also using GKE, where Kubernetes collects logs from stdout as textPayload, and I was hoping I could get away with using console.log and not have to deal with logging frameworks. But as you've also figured out, the only way to get jsonPayload is to use the logging library. – Brawl
Great answer, thanks. Now all that remains for me is getting the labels['execution_id'] in there too (the same one you'd get if you were to use console.log on Firebase). – Underpants
PS: it looks like @google-cloud/logging-winston (and perhaps the bunyan alternative) adds your logMetadata automatically. Still, the execution_id evades me. I'll probably have to do my own tracing with github.com/googleapis/… – Underpants
Do you keep your instance of Logging in the global context, or do you create a new one with every request? Curious if you have had any problems with dead connections when the cloud function gets CPU-throttled while unused. – Lions
New instance on each request. I believe an already-loaded cloud function can be reused, so you need to make sure each call has its own logging instance. – Aneroidograph
The execution_id is available in the context passed to background functions (event.context.eventId in the Node 6 runtime: cloud.google.com/functions/docs/writing/…). – Syringa
@AndrewHaines the eventId from the context looks like a UUID, whereas the execution_id is an integer. The hunt for execution_id carries on... – Underpants
@Underpants any progress? Same issue: I can catch the eventId but not the trace. – Manrope
@Aneroidograph the update for Node 10 will not work, as GCP_PROJECT and FIREBASE_CONFIG are no longer accessible in process.env; see more about that here: cloud.google.com/functions/docs/env-var – Villain
Thanks @foufrix, looks like they need to be explicitly set during deploy... not sure why my stuff running on Node 10 has been working, though. – Aneroidograph

I ran into the same problem, and following the comments on @wtk's answer, I would like to add a snippet that replicates all of the default Cloud Function logging behavior I could find, including execution_id.

At least for Cloud Functions with the HTTP trigger option, the following produced correct logs for me. I have not tested it with Firebase Cloud Functions.

// global
const { Logging } = require("@google-cloud/logging");
const logging = new Logging();
const Log = logging.log("cloudfunctions.googleapis.com%2Fcloud-functions");
const LogMetadata = {
  severity: "INFO",
  type: "cloud_function",
  labels: {
    function_name: process.env.FUNCTION_NAME,
    project: process.env.GCLOUD_PROJECT,
    region: process.env.FUNCTION_REGION
  }
};

// per request (inside the request handler, where `req` is available)
const data = { foo: "bar" };
const traceId = req.get("x-cloud-trace-context").split("/")[0];
const metadata = {
  ...LogMetadata,
  trace: `projects/${process.env.GCLOUD_PROJECT}/traces/${traceId}`,
  labels: {
    ...LogMetadata.labels,                          // keep function_name/project/region
    execution_id: req.get("function-execution-id")  // per-invocation id
  }
};
Log.write(Log.entry(metadata, data));
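
For context, here is roughly how the per-request part sits inside a plain HTTP-triggered function (a sketch only; the export name, payload, and response handling are illustrative):

exports.myHttpFunc = (req, res) => {
  const traceId = req.get("x-cloud-trace-context").split("/")[0];
  const metadata = {
    ...LogMetadata,
    trace: `projects/${process.env.GCLOUD_PROJECT}/traces/${traceId}`,
    labels: {
      ...LogMetadata.labels,                          // keep function_name/project/region
      execution_id: req.get("function-execution-id")  // per-invocation id
    }
  };
  // Log.write() returns a promise, so respond once the entry has been written
  Log.write(Log.entry(metadata, { foo: "bar" }))
    .then(() => res.status(200).send("ok"))
    .catch((err) => res.status(500).send(err.message));
};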
Thithia answered 11/4, 2019 at 22:50 Comment(1)
Does this still work for you? It's so annoying to have to carry the trace id like that. – Manrope

The GitHub link in @wtk's answer should be updated to:

https://github.com/firebase/functions-samples/blob/2f678fb933e416fed9be93e290ae79f5ea463a2b/stripe/functions/index.js#L103

It points to the repository as it was when the question was answered, and contains the following function:

// To keep on top of errors, we should raise a verbose error report with Stackdriver rather
// than simply relying on console.error. This will calculate users affected + send you email
// alerts, if you've opted into receiving them.
// [START reporterror]
function reportError(err, context = {}) {
  // This is the name of the StackDriver log stream that will receive the log
  // entry. This name can be any valid log stream name, but must contain "err"
  // in order for the error to be picked up by StackDriver Error Reporting.
  const logName = 'errors';
  const log = logging.log(logName);

  // https://cloud.google.com/logging/docs/api/ref_v2beta1/rest/v2beta1/MonitoredResource
  const metadata = {
    resource: {
      type: 'cloud_function',
      labels: {function_name: process.env.FUNCTION_NAME},
    },
  };

  // https://cloud.google.com/error-reporting/reference/rest/v1beta1/ErrorEvent
  const errorEvent = {
    message: err.stack,
    serviceContext: {
      service: process.env.FUNCTION_NAME,
      resourceType: 'cloud_function',
    },
    context: context,
  };

  // Write the error log entry
  return new Promise((resolve, reject) => {
    log.write(log.entry(metadata, errorEvent), (error) => {
      if (error) {
        return reject(error);
      }
      resolve();
    });
  });
}
// [END reporterror]
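
For completeness, a typical call site might look something like the sketch below, where doWork is a placeholder for the function's real logic and the context fields are illustrative:

const functions = require('firebase-functions');

exports.demo = functions.https.onRequest((req, res) => {
  return doWork(req)
    .then(() => res.status(200).send('ok'))
    .catch((err) => {
      // Write the structured error entry first, then respond; returning the
      // promise keeps the function alive until the log write finishes.
      return reportError(err, { httpRequest: { method: req.method, url: req.originalUrl } })
        .then(() => res.status(500).send('Something went wrong'));
    });
});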

Slipover answered 13/10, 2021 at 3:24 Comment(0)
