Output from Dataproc Spark job in Google Cloud Logging

Is there a way to have the output from Dataproc Spark jobs sent to Google Cloud Logging? As explained in the Dataproc docs, the output from the job driver (the master for a Spark job) is available under Dataproc -> Jobs in the console. There are two reasons I would like to have the logs in Cloud Logging as well:

  1. I'd like to see the logs from the executors. Often the master log just says "executor lost" with no further detail, and it would be very useful to have more information about what the executor was doing.
  2. Cloud Logging has nice filtering and search

Currently the only output from Dataproc that shows up in Cloud Logging is log items from yarn-yarn-nodemanager-* and container_*.stderr. Output from my application code is shown in Dataproc->Jobs but not in Cloud Logging, and it's only the output from the Spark master, not the executors.

Swish answered 9/12, 2015 at 18:38 Comment(2)
I'll also share that we (the Cloud Dataproc team) intend to release a feature to pipe driver output to Cloud Logging in the next 1-2 months.Paradisiacal
Any updates on getting log information from executors? I have print(..) statements in my pyspark executors and I'm not able to see their output anywhere. I can see print output from the master, but any output from inside my map function seems to be lost.Jovitajovitah

tl;dr

This is not natively supported today, but it will be in a future version of Cloud Dataproc. In the interim, there is a manual workaround.

Workaround

Cloud Dataproc clusters use fluentd to collect and forward logs to Cloud Logging. The fluentd configuration is why you see some logs forwarded and not others. Therefore, the simple workaround (until Cloud Dataproc supports job details in Cloud Logging) is to modify the fluentd configuration. The configuration file for fluentd on a cluster is at:

/etc/google-fluentd/google-fluentd.conf

The two easiest ways to gather additional details are:

  1. Add a new fluentd plugin based on your needs
  2. Add a new file to the list of existing files collected (line 56 has the files on my cluster); see the sketch below
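
As an illustration of option 2, here is a minimal sketch of a tail source you could append to the config so fluentd also picks up YARN container stdout. The path, pos_file, and tag are assumptions about my setup, not something Dataproc ships; check where YARN actually writes container logs on your cluster (yarn.nodemanager.log-dirs) before using it:

<source>
  @type tail
  # Assumed location of YARN container logs; verify on your cluster.
  path /var/log/hadoop-yarn/userlogs/*/*/stdout
  pos_file /var/tmp/google-fluentd.spark-container-stdout.pos
  # Hypothetical tag; pick whatever makes the entries easy to find in Cloud Logging.
  tag spark.container.stdout
  format none
</source>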

Once you edit the configuration, you'll need to restart the google-fluentd service:

/etc/init.d/google-fluentd restart

Finally, depending on your needs, you may or may not need to do this across all nodes on your cluster. Based on your use case, it sounds like you could probably just change your master node and be set.
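
If it turns out you do need the change on every node (for example, to capture executor output, which lands on the workers), something along these lines could push an edited config out and restart the agent. This is only a rough sketch; the cluster name, worker count, and the default <cluster>-w-<N> worker naming are assumptions about your setup:

#!/bin/bash
# Rough sketch: copy an edited google-fluentd.conf to each worker and restart the agent.
# CLUSTER and NUM_WORKERS are placeholders for your own cluster.
CLUSTER=my-cluster
NUM_WORKERS=2

for i in $(seq 0 $((NUM_WORKERS - 1))); do
  node="${CLUSTER}-w-${i}"
  gcloud compute scp google-fluentd.conf "${node}:/tmp/google-fluentd.conf"
  gcloud compute ssh "${node}" --command \
    "sudo mv /tmp/google-fluentd.conf /etc/google-fluentd/google-fluentd.conf && sudo /etc/init.d/google-fluentd restart"
done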

Paradisiacal answered 9/12, 2015 at 21:11 Comment(0)

You can use the Dataproc initialization action for Stackdriver for this:

gcloud dataproc clusters create <CLUSTER_NAME> \
    --initialization-actions gs://<GCS_BUCKET>/stackdriver.sh \
    --scopes https://www.googleapis.com/auth/monitoring.write
Custodian answered 15/1, 2018 at 11:15 Comment(0)
