Monitoring and alerting on pod status or restart with Google Container Engine (GKE) and Stackdriver
Is there a way to monitor the pod status and restart count of pods running in a GKE cluster with Stackdriver?

While I can see CPU, memory, and disk usage metrics for all pods in Stackdriver, there seems to be no way to get metrics about crashing pods, or about pods in a replica set being restarted after crashes.

I'm using a Kubernetes replica set to manage the pods, so they are respawned under a new name when they crash. As far as I can tell, Stackdriver reports metrics per pod name (which is unique only for the lifetime of the pod), which doesn't seem very sensible.

Alerting on pod failures seems like such a natural requirement that it's hard to believe it isn't supported. As they stand, the monitoring and alerting capabilities Stackdriver offers for Google Container Engine seem rather useless, since they are all bound to pods whose lifetime can be very short.

So if this doesn't work out of the box, are there known workarounds or best practices for monitoring continuously crashing pods?

Graven answered 4/5, 2017 at 17:28 Comment(2)
I'm working on a similar solution as well. So far I haven't found much on what you ask, or on other similar metrics that would be interesting. If I have any updates I'll let you know!Labour
Agreed that this is a glaring hole in the GKE / Stackdriver stack. Pretty amazed that I can't find a way to set up alerts on when a pod restarts or gets evicted, or when a deployment is added, etc. Will probably end up writing my own python-based daemon to do this. (using this: github.com/kubernetes-client/python )Botsford
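The daemon idea in the comment above can be sketched with the official Python client (github.com/kubernetes-client/python). This is a minimal sketch, not Robusta's or anyone's actual implementation; `restarted_containers` and `watch_restarts` are hypothetical helper names, and the watch loop requires cluster credentials:

```python
def restarted_containers(pod):
    """Return names of containers in `pod` whose restart_count is positive."""
    statuses = pod.status.container_statuses or []
    return [s.name for s in statuses if s.restart_count > 0]

def watch_restarts():
    """Stream pod events from the API server and print restart reports.

    Requires the `kubernetes` package and cluster credentials, so the
    import lives here rather than at module level.
    """
    from kubernetes import client, config, watch
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    for event in watch.Watch().stream(v1.list_pod_for_all_namespaces):
        pod = event["object"]
        names = restarted_containers(pod)
        if names:
            print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                  f"restarted containers: {names}")
```

From here you would forward the report to your alerting channel of choice instead of printing it.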

There is a built-in metric now, so it's easy to build a dashboard and/or alert on it without setting up custom metrics:

Metric: kubernetes.io/container/restart_count
Resource type: k8s_container
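A hedged sketch of an alerting query on this metric in Cloud Monitoring's MQL; the `metadata.system_labels.state` filter is meant to suppress alerts for terminating pods, and the label path, window, and threshold are illustrative and may need adjusting for your project:

```
fetch k8s_container
| metric 'kubernetes.io/container/restart_count'
| filter (metadata.system_labels.state == 'ACTIVE')
| align delta(5m)
| every 1m
| group_by [resource.namespace_name, resource.container_name],
    [restarts: sum(value.restart_count)]
| condition val() > 0
```

Since restart_count is cumulative, the delta alignment turns it into restarts per window, which is what you actually want to alert on.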
Infract answered 1/12, 2020 at 22:23 Comment(2)
This should be the way to do it now!Guarnerius
Something changed since this comment was published. Now the alert often triggers for pods that are being terminated. Add a filter by state=ACTIVE to avoid this and only be alerted for container restarts in pods that are active.Cathryncathy

You can achieve this manually as follows:

  1. In the Logs Viewer, create the following filter:

    resource.labels.project_id="<PROJECT_ID>"
    resource.labels.cluster_name="<CLUSTER_NAME>"
    resource.labels.namespace_name="<NAMESPACE, or default>"
    jsonPayload.message:"failed liveness probe"
    
  2. Create a metric by clicking the Create Metric button above the filter input and filling in the details.

  3. You may now track this metric in Stackdriver.
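The steps above can also be done from the CLI; a sketch using `gcloud logging metrics create` (the metric name `pod-liveness-failures` is illustrative, and the exact message text may vary by GKE version):

```shell
gcloud logging metrics create pod-liveness-failures \
  --description="Containers failing their liveness probe" \
  --log-filter='resource.type="k8s_pod"
resource.labels.cluster_name="<CLUSTER_NAME>"
resource.labels.namespace_name="<NAMESPACE>"
jsonPayload.message:"failed liveness probe"'
```

Newlines inside a log filter are treated as AND, so this is equivalent to the filter in step 1 with the resource-type restriction added.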

I'd be happy to learn of a built-in metric that replaces this.

Darnell answered 4/1, 2019 at 6:31 Comment(5)
For the payload you probably want ("Killing container" AND "Container failed liveness probe"), otherwise you'll also match the autoscaler terminating pods when load drops.Infract
Do you know how to automatically resolve an alert based on this method?Eurystheus
Now it seems to be "Container product failed liveness probe, will be restarted"Elyn
You should filter on resource type too (resource.type="k8s_pod"), otherwise your metric is going to scan every single log message in your cluster namespace.Infract
I also find it useful to add a metric label on the container name as grouping by transient pod name is not so useful. Field: jsonPayload.message RegEx: Container ([^\s\]*)Infract

In my cluster (a bare-metal Kubernetes cluster) I use kube-state-metrics (https://github.com/kubernetes/kube-state-metrics) to do what you want. The project lives in the Kubernetes organization and is quite easy to use. Once it's deployed, you can use the kube_pod_container_status_restarts metric to tell whether a container has restarted.
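If Prometheus scrapes kube-state-metrics, a restart alert can be sketched as a rule like the following; the threshold and window are illustrative, and note that newer kube-state-metrics releases expose the metric as kube_pod_container_status_restarts_total:

```yaml
groups:
- name: pod-restarts
  rules:
  - alert: PodRestartingTooOften
    # more than 3 restarts of any container in the last 15 minutes
    expr: increase(kube_pod_container_status_restarts_total[15m]) > 3
    labels:
      severity: warning
    annotations:
      summary: >-
        Container {{ $labels.container }} in
        {{ $labels.namespace }}/{{ $labels.pod }} keeps restarting
```

Using increase() over a window, rather than the raw counter, means the alert clears on its own once the container stops crashing.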

Scenography answered 11/7, 2017 at 3:57 Comment(2)
I just installed kube-state-metrics on my dev cluster and this stat is missing. No other useful stats re Pod state seem available, actually. The words "restart", "terminate", "evict", "image", nor "backoff" are nowhere to be seen in the returned 12k metrics. :facepalm:Botsford
Weird, I can see the restart metric in the repo. github.com/kubernetes/kube-state-metrics/blob/…Sensitometer

Others have commented on how to do this with metrics, which is the right solution if you have a very large number of crashing pods.

An alternative approach is to treat crashing pods as discrete events, or even log lines. You can do this with Robusta (disclaimer: I wrote this) with YAML like this:

triggers:
  - on_pod_update: {}
actions:
  - restart_loop_reporter:
      restart_reason: CrashLoopBackOff
  - image_pull_backoff_reporter:
      rate_limit: 3600
sinks:
  - slack

Here we're triggering an action named restart_loop_reporter whenever a pod updates. The data stream comes from the APIServer.

The restart_loop_reporter is an action which filters out non-crashing pods. Above, it's configured to report only on CrashLoopBackOffs, but you could remove that filter to report all crashes.

A benefit of doing it this way is that you can gather extra data about the crash automatically. For example, the above will fetch the pod's logs and forward them along with the crash report.

I'm sending the result here to Slack, but you could just as well send it to a structured output like Kafka (already builtin) or Stackdriver (not yet supported, but I can fix that if you like).

Aten answered 30/12, 2021 at 0:48 Comment(0)

Remember that you can always raise a feature request if the available options are not enough.

Currin answered 29/7, 2020 at 1:28 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.