kubernetes-jobs
Pods stuck in PodInitializing state indefinitely
To figure this out, I would run: kubectl get pods (add the namespace param if required). Then copy the pod name and run: kubectl describe pod {POD_NAME}. That should give you some information as to why it's stuck in the initializing state.
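As a sketch of those steps (the namespace, pod, and init-container names below are placeholders, not values from the question):

```shell
# List pods in the relevant namespace and find the stuck one.
kubectl get pods -n my-namespace

# The Events section at the bottom of the describe output usually names
# the cause (failing init container, unpullable image, missing volume, ...).
kubectl describe pod my-pod -n my-namespace

# If an init container is the culprit, its logs often say why:
kubectl logs my-pod -n my-namespace -c my-init-container
```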
Wait for kubernetes job to complete on either failure/success using command line
Run the first wait condition as a subprocess and capture its PID. If the condition is met, this process will exit with an exit code of 0. kubectl wait --for=condition=complete job/myjob & completion_pid=$! Do the same for the failure wait condition. The trick here is to add && exit 1 so that the subprocess returns …
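Put together, the pattern could look like the following sketch, assuming a job named myjob and bash 4.3+ for `wait -n`:

```shell
#!/usr/bin/env bash
# Wait for success in the background; exits 0 when the Complete condition is met.
kubectl wait --for=condition=complete job/myjob &
completion_pid=$!

# Wait for failure in the background; the `&& exit 1` makes this subshell
# return non-zero when the Failed condition is met.
{ kubectl wait --for=condition=failed job/myjob && exit 1; } &
failure_pid=$!

# `wait -n` returns as soon as EITHER watcher exits, propagating its exit code.
wait -n "$completion_pid" "$failure_pid"
exit_code=$?

# Clean up whichever watcher is still running.
kill "$completion_pid" "$failure_pid" 2>/dev/null

if [ "$exit_code" -eq 0 ]; then
  echo "Job succeeded"
else
  echo "Job failed"
fi
exit "$exit_code"
```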
Kubernetes – how to run job only once
This is now possible by setting backoffLimit: 0, which tells the controller to perform no retries (the default is 6).
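A minimal sketch of such a run-once Job (the name and image are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: run-once
spec:
  backoffLimit: 0          # default is 6; 0 means the controller never retries
  template:
    spec:
      restartPolicy: Never # also prevents kubelet-level container restarts
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo doing work exactly once"]
EOF
```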
How to run containers sequentially as a Kubernetes job?
After a few attempts, I did this, and it solved the basic problem (similar to what the OP has posted). This configuration ensures that job-1 completes before job-2 begins; if job-1 fails, the job-2 container is not started. I still need to work on retries and failure handling, but the basics work. Hopefully, this will …
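One way to get this ordering is with init containers, which run to completion, in order, before the main container starts. A sketch, with hypothetical job-1/job-2 commands standing in for real workloads:

```shell
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: sequential-job
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:       # runs first; if it fails, job-2 never starts
      - name: job-1
        image: busybox
        command: ["sh", "-c", "echo running job-1"]
      containers:           # runs only after all init containers succeed
      - name: job-2
        image: busybox
        command: ["sh", "-c", "echo running job-2"]
EOF
```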
Tell when Job is Complete
Since version 1.11, you can do: kubectl wait --for=condition=complete job/myjob and you can also set a timeout: kubectl wait --for=condition=complete --timeout=30s job/myjob
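In a script, the command's exit code lets you gate a follow-up step on completion. A sketch, assuming a job named myjob:

```shell
if kubectl wait --for=condition=complete --timeout=30s job/myjob; then
  echo "job finished in time"
else
  # Non-zero exit covers both a timeout and a failure to reach the cluster.
  echo "job did not complete within 30s" >&2
  exit 1
fi
```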
How to automatically remove completed Kubernetes Jobs created by a CronJob?
You can now set history limits, or disable history altogether, so that failed or successful CronJobs are not kept around indefinitely. See my answer here. Documentation is here. To set the history limits: The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they …
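A sketch of a CronJob with trimmed history (the name, schedule, and image are illustrative); setting either limit to 0 disables keeping that kind of Job entirely:

```shell
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup-demo
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 1   # keep only the most recent successful Job
  failedJobsHistoryLimit: 1       # keep only the most recent failed Job
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: busybox
            command: ["sh", "-c", "echo periodic task"]
EOF
```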