Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew

I've created and pushed a cron job to deployment, but when I see it running in OpenShift, I get the following error message:

Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.

From what I understand, this means a job failed to run. But I don't understand why it is failing. Why isn't that logged somewhere? Or if it is, where can I find it?

The CronJob controller keeps trying to start a job according to the most recent schedule, but it keeps failing, and it has evidently done so more than 100 times.

I've checked the syntax of my cron job, and it doesn't give any errors. Besides, if there were any syntax errors, I wouldn't even be allowed to push.

Anyone know what's wrong?

my Cron Job:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cjob
  labels:
    job-name: my-cjob
spec:
  schedule: "*/5 * * * *"
  # activeDeadlineSeconds: 180 # 3 min <<- should this help and why?
  jobTemplate:
    spec:
      template:
        metadata:
          name: my-cjob
          labels:
            job-name: my-cjob
        spec:
          containers:
          - name: my-cjob
            image: my-image-name
          restartPolicy: OnFailure

Or should I be using startingDeadlineSeconds? Has anyone hit this error message and found a solution?

Update, as requested in the comments

When running kubectl get cronjob I get the following:

NAME           SCHEDULE      SUSPEND   ACTIVE    LAST SCHEDULE   AGE
my-cjob        */5 * * * *   False     0         <none>          2d

When running kubectl logs my-cjob I get the following:

Error from server (NotFound): pods "my-cjob" not found

When running kubectl describe cronjob my-cjob I get the following:

Error from server (NotFound): the server could not find the requested resource

When running kubectl logs <cronjob-pod-name> I get many lines of output... very difficult for me to understand and sort out.

When running kubectl describe pod <cronjob-pod-name> I also get a lot, but this is much easier to sort through. Anything specific?

Running kubectl get events gives me a lot, but I think this is the relevant entry:

LAST SEEN   FIRST SEEN   COUNT     NAME                                            KIND                    SUBOBJECT                                 TYPE      REASON              SOURCE                                      MESSAGE
1h          1h           2         xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx             Pod                     spec.containers{apiproxy}                 Warning   Unhealthy           kubelet, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx   Liveness probe failed: Get http://xxxx/xxxx: dial tcp xxxx:8080: connect: connection refused
Acrocarpous answered 5/3, 2020 at 7:18 Comment(8)
You can check the logs of a cronjob like any other object in the cluster. Use kubectl logs <cronjob-pod-name> and kubectl describe pod <cronjob-pod-name>. Can you update your question with the results of those commands? – Felon
Was your cronjob somehow suspended, or did you shut down the cluster? Can you test this with .spec.concurrencyPolicy set to Forbid? – Felon
@Felon I've updated the question - I can't run the commands you specify, as you can see. I haven't tested the concurrencyPolicy option yet. How can I find the cronjob pod name? If it's the one I think it is, it says: Readiness probe failed: Get http://xxxxxx:8080\xxx: net/http: request canceled (Client.Timeout exceeded while awaiting headers) – Acrocarpous
Should I try adding initialDelaySeconds? Trying different long shots, I guess. – Acrocarpous
If you can't see it, it means it never actually got to run. What does your cronjob do? What kind of image is that? Can you check kubectl get events? – Felon
@Felon I've updated the question again. And sorry, I could run the commands after all, as you can see, but the logs part gives me a lot of information that I cannot understand. The describe part gives better, clearer information - is there anything specific you want from it? It spits out the events too, stating that the Liveness probe failed and the Readiness probe failed. – Acrocarpous
Do you have any liveness/readiness probes set up in your cronjob? Can you post the full CronJob spec? To exclude issues with the image, can you run it as a pod (kubectl run pod-test --image=my-image-name ...)? – Felon
@Felon I don't know what you mean - I don't have anything else in my cron job than what is specified above. I've tested my image, and it gets created and runs, so that doesn't seem to be the issue, unfortunately. – Acrocarpous

Setting startingDeadlineSeconds to 180 fixed the problem, combined with removing spec.template.metadata.labels.
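
For reference, a sketch of how that fix looks when applied to the spec from the question (the field goes at .spec.startingDeadlineSeconds, next to schedule, and the job-name labels are dropped):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cjob
spec:
  schedule: "*/5 * * * *"
  startingDeadlineSeconds: 180  # skip a run that can't start within 3 minutes; also bounds how far back the controller counts missed runs
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-cjob
            image: my-image-name
          restartPolicy: OnFailure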

Acrocarpous answered 6/3, 2020 at 17:38 Comment(1)
Here is a great explanation of why setting startingDeadlineSeconds can fix this error. – Trevelyan

I suspended my workload, then resumed it after quite a while, and saw the same error. Isn't this a bug? I triggered the suspend action on purpose, so any time between suspend and resume should NOT be counted as missed starts.
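
For context, the suspend/resume cycle that triggers this looks roughly like the following - a sketch assuming a CronJob named my-cjob, as in the question:

kubectl patch cronjob my-cjob -p '{"spec":{"suspend":true}}'    # pause scheduling
# ...days pass while the workload is suspended...
kubectl patch cronjob my-cjob -p '{"spec":{"suspend":false}}'   # resume; schedules missed while suspended still count

Setting .spec.startingDeadlineSeconds bounds how far back the controller looks for missed runs, which avoids the error on resume.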

Fascinate answered 17/5, 2022 at 15:35 Comment(0)

The root cause of this issue:

For every CronJob, the CronJob controller checks how many schedules were missed between the last scheduled time and now. If there are more than 100 missed schedules, it does not start the job and logs this error.

A scheduled run counts as missed if its job failed to be created at the scheduled time. For example, if concurrencyPolicy is set to Forbid and a run was attempted while a previous run was still in progress, that run counts as missed.
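
To put a number on it for the schedule in the question: */5 * * * * fires every 5 minutes, so the 100-miss threshold is reached after roughly 100 × 5 min ≈ 8 hours 20 minutes without a successful start. A CronJob that is 2 days old with LAST SCHEDULE <none>, as in the output above, is far past that.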

The simplest solution I can think of is recreating the CronJob to clear the missed schedules, as sketched below.
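
A sketch of that recreation, assuming the name from the question (deleting the CronJob discards its status, including the lastScheduleTime the controller counts from):

kubectl get cronjob my-cjob -o yaml > my-cjob.yaml   # save the current spec
kubectl delete cronjob my-cjob                       # discards status, including lastScheduleTime
kubectl apply -f my-cjob.yaml                        # recreate from the saved spec

You may need to strip server-set fields (status, metadata.resourceVersion, metadata.uid) from the saved file before re-applying.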

Hugmetight answered 12/3, 2023 at 11:27 Comment(0)
