How to restart Kubernetes Pods with kubectl
Anyone who has used Kubernetes for an extended period of time will know that things don’t always go as smoothly as you’d like. In production, unexpected things happen, and Pods can crash or fail in some unforeseen way. When this happens, you need a reliable way to restart the Pods.
Restarting a pod is not the same as restarting a container, as a Pod is not a process but an environment for running container(s). A Pod persists until it finishes execution, is deleted, is evicted for lack of resources, or its host node fails.
This article will list 4 scenarios where you might want to restart a Kubernetes Pod and walk you through methods to restart Pods with kubectl.
4 scenarios where you might want to restart a Pod
There are several scenarios where you need to restart a Pod. The following are 4 of them:
- Unexpected errors, such as Pods stuck in an inactive state (e.g., Pending) or "Out of Memory" errors (which occur when Pods try to exceed the memory limits set in your manifest file).
- To upgrade a Pod with a newly pushed container image, if you previously set the PodSpec imagePullPolicy to Always.
- To update configurations and secrets.
- To clear a corrupted internal state in the application running in the Pod.
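For the first scenario, it helps to confirm which Pods are actually stuck before restarting anything. As a quick check (assuming the default namespace), you can list Pods in the Pending phase with a field selector:
$ kubectl get pods --field-selector=status.phase=Pending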
Now you’ve seen some scenarios where you might want to restart a Pod. Next, you will learn how to restart Pods with kubectl.
Restarting Kubernetes Pods with kubectl
kubectl, by design, doesn’t have a direct command for restarting Pods. Because of this, to restart Pods with kubectl, you have to use one of the following methods:
- Restarting Kubernetes Pods by changing the number of replicas with the kubectl scale command
- Downtimeless restarts with the kubectl rollout restart command
- Automatic restarts by updating the Pod's environment variables
- Restarting Pods by deleting them
Prerequisites
Before you learn how to use each of the above methods, ensure you have the following prerequisites:
- A Kubernetes cluster. The demo in this article was done using minikube — a single Node Kubernetes cluster.
- The kubectl command-line tool configured to communicate with the cluster.
For demo purposes, in any desired directory, create an httpd-deployment.yaml file with replicas set to 2, using the following YAML configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
  labels:
    app: httpd
spec:
  replicas: 2
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
        - name: httpd-pod
          image: httpd:latest
In your terminal, change to the directory where you saved the deployment file, and run:
$ kubectl apply -f httpd-deployment.yaml
The above command will create the httpd deployment with two Pods. To verify the number of Pods, run:
$ kubectl get pods
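You should see output similar to the following (the generated name suffixes will differ in your cluster):
NAME                                READY   STATUS    RESTARTS   AGE
httpd-deployment-7f9c6bd44b-5x2mq   1/1     Running   0          30s
httpd-deployment-7f9c6bd44b-9kqzt   1/1     Running   0          30s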
Now you have the Pods of the httpd deployment running. Next, you will use each of the earlier methods to restart the Pods.
Restarting Kubernetes Pods by changing the number of replicas
In this method, you scale the deployment's replica count down to zero, which stops and terminates all the Pods. You then scale it back up to the desired number, which initializes new Pods.
Note: When you set the number of replicas to zero, all Pods stop running, so there will be some application downtime.
To scale down the httpd deployment replicas you created, run the following kubectl scale command:
$ kubectl scale deployment httpd-deployment --replicas=0
The above command will show output indicating that the deployment has been scaled, as shown in the image below.
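On recent kubectl versions, that confirmation is a single line:
deployment.apps/httpd-deployment scaled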
To confirm that the Pods were stopped and terminated, run kubectl get pods, and you should get a "No resources found in default namespace" message.
To scale the replicas back up, run the same kubectl scale command, but this time with --replicas=2:
$ kubectl scale deployment httpd-deployment --replicas=2
After running the above command, to verify the number of pods running, run:
$ kubectl get pods
And you should see each Pod back up and running after restarting, as in the image below.
Downtimeless restarts with Rollout restart
In the previous method, you scaled the number of replicas down to zero to restart the Pods; doing so caused an outage and downtime of the application. To restart without any downtime, use the kubectl rollout restart command, which restarts the Pods one by one without impacting the deployment.
To use rollout restart on your httpd deployment, run:
$ kubectl rollout restart deployment httpd-deployment
Now to view the Pods restarting, run:
$ kubectl get pods
Notice in the image below that Kubernetes creates a new Pod before Terminating each of the previous ones: as soon as a new Pod reaches the Running status, the old one is terminated. Because of this approach, there is no downtime in this restart method.
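You can also follow the restart until it finishes with the kubectl rollout status command, which blocks until the new Pods are ready:
$ kubectl rollout status deployment httpd-deployment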
Automatic restarts by updating the Pod’s environment variable
So far, you've learned two ways of restarting Pods in Kubernetes: one by changing the replica count and the other by rollout restart. Both methods work, but with each of them you explicitly restarted the Pods.
In this method, once you update the Pod’s environment variable, the change will automatically restart the Pods.
To update the environment variables of the Pods in your httpd deployment, run:
$ kubectl set env deployment httpd-deployment DATE=$()
After running the above command, which adds a DATE environment variable to the Pods with a null value (=$()), run kubectl get pods and you will see the Pods restarting, similar to the rollout restart method.
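The exact value does not matter; any change to the variable triggers the same rolling restart. If you prefer a non-empty value, one common variant is to set the variable to the current timestamp, so every invocation changes it:
$ kubectl set env deployment httpd-deployment DATE="$(date)"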
You can verify that each Pod's DATE environment variable is null with the kubectl describe command.
$ kubectl describe pod <pod_name>
After running the above command, you will see that the DATE variable is empty (null), as in the image below.
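Alternatively, you can list the environment variables set on the deployment directly, without describing an individual Pod:
$ kubectl set env deployment httpd-deployment --list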
Restarting Pods by deleting them
Because the Kubernetes API is declarative, it automatically creates a replacement when you delete a Pod that’s part of a ReplicaSet or Deployment. The ReplicaSet will notice the Pod is no longer available as the number of container instances will drop below the target replica count.
To delete a Pod, use the following command:
$ kubectl delete pod <pod_name>
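To watch the ReplicaSet create the replacement in real time, run the following in a second terminal before deleting the Pod (press Ctrl+C to stop watching):
$ kubectl get pods --watch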
Though this method works quickly, it is not recommended unless you have a failed or misbehaving Pod or set of Pods. For regular restarts, such as updating configurations, it is better to use the kubectl scale or kubectl rollout commands designed for those use cases.
To delete all failed Pods for this restart technique, use this command:
$ kubectl delete pods --field-selector=status.phase=Failed
Cleaning up
Clean up the entire setup by deleting the deployment with the command below:
$ kubectl delete deployment httpd-deployment
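To confirm that the deployment and its Pods are gone, run:
$ kubectl get deployments,pods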
Conclusion
This article discussed 4 scenarios where you might want to restart Kubernetes Pods and walked you through 4 methods with kubectl. There is more to learn about kubectl; check out the kubectl commands reference.