Your pods are crashlooping, so you jump into the logs for your pod and see… nothing? Chances are your pods are crashlooping because an init container is failing. In this guide we’ll go over how to debug init containers in Kubernetes: first by figuring out which init container is failing on your pod, and then by investigating why it’s failing.
Set up an example environment (optional)
If you want to follow along with the examples for debugging Kubernetes init containers, you’re welcome to follow the steps below, but don’t feel obligated if your production environment is currently on fire and you’re just looking for some quick commands to run.
1.) First, verify that Minikube is installed and running.
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
timeToStop: Nonexistent
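If `minikube status` shows the cluster stopped instead, bringing it up is a single command (assuming a standard local install):
$ minikube start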
2.) Next, create the following YAML file on your system. We’ll name it pod.yaml for simplicity. You’ll notice it has a primary container and two init containers.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo I am alive! && sleep 3600']
  initContainers:
  - name: init-secret-sync
    image: busybox:1.28
    command: ['sh', '-c', 'echo starting pretend secret-sync']
  - name: init-db
    image: busybox:1.28
    command: ['sh', '-c', 'echo starting pretend database; service mysql start']
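One thing to keep in mind with this spec: Kubernetes runs init containers sequentially, and each one has to exit successfully before the next starts, so a single failing init container holds the entire pod in the Init phase. If you want the field-level documentation straight from the API, kubectl can print it for you:
$ kubectl explain pod.spec.initContainers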
3.) Apply your pod.yaml file. The apply will appear to succeed, but if we run `kubectl get pods` you’ll notice the pod is crashlooping.
$ kubectl apply -f pod.yaml
pod/myapp created
$ kubectl get pods
NAME    READY   STATUS                  RESTARTS   AGE
myapp   0/1     Init:CrashLoopBackOff   1          13s
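The STATUS column packs a lot in here: while init containers are still progressing you’ll see values like Init:0/2 or Init:1/2, and a failing one flips to Init:Error or Init:CrashLoopBackOff. If you want to watch those transitions live, the -w flag keeps the listing open:
$ kubectl get pods -w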
4.) Not to worry. I’ll just check the logs of my Kubernetes pod to see why it’s not working and fix the problem.
$ kubectl logs myapp
Error from server (BadRequest): container "myapp-container" in pod "myapp" is waiting to start: PodInitializing
“myapp” is waiting to start. What a great error message. Well, maybe if I just wait ten minutes and check the logs again…
$ kubectl logs myapp
Error from server (BadRequest): container "myapp-container" in pod "myapp" is waiting to start: PodInitializing
How incredibly helpful. Nothing has changed. Well, now that we’re happily stuck in a broken state with no helpful error message to go on, you should be set up and ready to start debugging your Kubernetes init containers!
Which init containers are failing on my pod?
To get more information about why an init container is failing, we’ll run a describe on the k8s pod.
$ kubectl describe pod myapp
This will show you a lot of information about your pod, but the section we want to pay attention to is ‘Init Containers’. We can see there are two init containers (init-secret-sync and init-db). In the output below, look at each container’s State field to figure out which one is having issues starting up.
Init Containers:
  init-secret-sync:
    Container ID:  docker://f9b2b411abd919187579184f1e70e9ffec8bcc254f68de735fb0070b4f347640
    Image:         busybox:1.28
    Image ID:      docker-pullable://busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      echo starting pretend secret-sync
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 17 Jan 2021 11:08:07 -0700
      Finished:     Sun, 17 Jan 2021 11:08:07 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hldqz (ro)
  init-db:
    Container ID:  docker://dcf804525028ff9f334b471758969e3f6bddb0071dcd453a6bd13fe69fb83d3c
    Image:         busybox:1.28
    Image ID:      docker-pullable://busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      echo starting pretend database; service mysql start
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    127
      Started:      Sun, 17 Jan 2021 11:13:50 -0700
      Finished:     Sun, 17 Jan 2021 11:13:50 -0700
    Ready:          False
    Restart Count:  6
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hldqz (ro)
You’ll notice that the init-secret-sync container is in the state ‘Terminated’ with the reason ‘Completed’ and exit code 0, so that one ran successfully. The init-db container, on the other hand, is in the state ‘Waiting’ with the reason ‘CrashLoopBackOff’, and its last state is ‘Terminated’ with the reason ‘Error’ and exit code 127 (the shell’s convention for “command not found”). We’ve found our perpetrator!
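If you’d rather not eyeball the full describe output, the same information lives in the pod’s status and can be pulled with a jsonpath query. Here’s a sketch using the standard initContainerStatuses fields, printing each init container’s name and current state:
$ kubectl get pod myapp -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'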
Why is the init container failing?
It’s Kubernetes debug time!
1.) The first place to look is the ‘Events’ section at the very bottom of your `kubectl describe pod myapp` output. We see the following.
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  19m                    default-scheduler  Successfully assigned default/myapp to minikube
  Normal   Pulled     19m                    kubelet            Container image "busybox:1.28" already present on machine
  Normal   Created    19m                    kubelet            Created container init-secret-sync
  Normal   Started    19m                    kubelet            Started container init-secret-sync
  Normal   Started    18m (x4 over 19m)      kubelet            Started container init-db
  Normal   Pulled     17m (x5 over 19m)      kubelet            Container image "busybox:1.28" already present on machine
  Normal   Created    17m (x5 over 19m)      kubelet            Created container init-db
  Warning  BackOff    4m11s (x70 over 19m)   kubelet            Back-off restarting failed container
Looking at the ‘Type’ column tells us whether each event is Normal or not. In this case we have one event that isn’t: a Warning telling us that a container keeps failing and restarting. Not super useful information on its own. However, if your pod were failing because it couldn’t pull its image down or couldn’t be scheduled onto a Kubernetes node, those errors would appear here.
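Also note that events are short-lived (the cluster default TTL is one hour), so if describe comes up empty you can query events directly and filter them down to the pod in question:
$ kubectl get events --field-selector involvedObject.name=myapp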
2.) If the Events don’t give you any additional information, you can safely assume the error is happening inside the container itself. To figure that out, we can look at the logs.
$ kubectl logs myapp
Error from server (BadRequest): container "myapp-container" in pod "myapp" is waiting to start: PodInitializing
That’s not super helpful! The reason is that kubectl logs only shows you the logs of your primary container by default. To get the logs for an init container, append the name of the init container to your kubectl logs command (or pass it with the -c flag) like so.
$ kubectl logs myapp init-db
sh: service: not found
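One more flag worth knowing here: since a crashlooping init container has usually restarted several times, --previous fetches the logs from the last terminated attempt, which helps when the current instance hasn’t logged anything yet (shown using the -c flag form):
$ kubectl logs myapp -c init-db --previous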
Okay, it looks like it’s failing because ‘service’ can’t be found. Let’s take a look at our init container’s definition again.
- name: init-db
  image: busybox:1.28
  command: ['sh', '-c', 'echo starting pretend database; service mysql start']
In the command section I’m trying to start the mysql service, but I’m using the busybox image to do it, and busybox doesn’t ship a `service` command (let alone MySQL). We’ve located the issue. I can now go fix up the image I’m using and get back to my day.
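For what it’s worth, here’s a sketch of one common fix for this kind of init container: instead of trying to start the database from inside busybox, have the init container wait until the database’s Service is resolvable, using tools busybox actually has. This assumes a hypothetical Service named mysql, which doesn’t exist in the example above:
- name: init-db
  image: busybox:1.28
  # Hypothetical: poll until the 'mysql' Service name resolves in cluster DNS.
  command: ['sh', '-c', 'until nslookup mysql; do echo waiting for mysql; sleep 2; done']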
Wrap up
Hopefully your production fire has been extinguished and your team was able to safely get their new code out with the help of this article on debugging Kubernetes init containers. If you enjoyed this and want to check out some other Kubernetes resources, take a look at the related articles and guides on this site.