To apply a new configuration, a new pod has to be created (the old one will be removed).
- If your pod was created automatically by a Deployment or DaemonSet resource, this will happen automatically each time you update the resource's YAML. It will not happen if your resource has spec.updateStrategy.type=OnDelete.
- If the problem was caused by an error inside the Docker image and you have since fixed it, you need to update the pods manually; the rolling-update feature exists for this purpose. If the new image has the same tag, you can simply delete the broken pod (see below).
- In case of node failure, the pod will be recreated on a new node after some time; the old pod will be removed once the broken node fully recovers. Note that this will not happen if your pod was created by a DaemonSet or StatefulSet.
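As a minimal sketch of the OnDelete case mentioned above (all names and the image are hypothetical), a DaemonSet whose pods are only replaced when you delete them manually looks like this:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemonset            # hypothetical name
spec:
  updateStrategy:
    type: OnDelete              # pods are NOT replaced automatically on manifest updates
  selector:
    matchLabels:
      app: my-daemonset
  template:
    metadata:
      labels:
        app: my-daemonset
    spec:
      containers:
      - name: app
        image: myregistry/app:latest   # hypothetical image
```

After you update and re-apply such a manifest, each pod keeps running the old spec until you delete it, at which point the controller recreates it from the new template.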
In any case, you can manually remove a crashed pod:
kubectl delete pod <pod_name>
Or all pods in the CrashLoopBackOff state:
kubectl delete pod `kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}'`
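To illustrate what the awk filter above selects, here is a sketch that runs it against a captured listing instead of a live cluster (the pod names are made up):

```shell
# Simulated "kubectl get pods" output; in practice you would pipe the real command.
pods='NAME                        READY   STATUS             RESTARTS   AGE
web-7d4b9c6f-abcde          0/1     CrashLoopBackOff   12         1h
db-0                        1/1     Running            0          2d
worker-5f6d7c8-fghij        0/1     CrashLoopBackOff   7          30m'

# Column 3 is STATUS; print column 1 (NAME) only for crashing pods.
echo "$pods" | awk '$3 == "CrashLoopBackOff" {print $1}'
# Output:
# web-7d4b9c6f-abcde
# worker-5f6d7c8-fghij
```

The header line is skipped naturally because its third column is the literal word STATUS, which does not match.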
If the node is completely dead, you can add the --grace-period=0 --force options to remove just the pod's record from Kubernetes:
kubectl delete pod <pod_name> --grace-period=0 --force