
There is a Pod called faulty in the challenge namespace. It's desperately trying to start up but can't get far enough in its lifecycle. Apparently, a new container was added to the Pod recently, and after that, the Pod stopped working. Can you identify what causes the Pod to fail and fix it? Note that you are not allowed to change the Pod images or their contents, but you can freely change the rest of the Pod spec.

Can you make all of the challenge's checks pass?

Hint 1 πŸ’‘

It's a good idea to start by reviewing the Pod spec. Take a close look at the .spec.containers and .spec.initContainers lists, and make sure you understand the .status.containerStatuses and .status.initContainerStatuses fields. You can fetch the full object with kubectl get pod faulty -n challenge -o yaml.
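If the raw YAML feels noisy, kubectl can also summarize the same information. Both commands below are standard kubectl:

    # The full manifest, spec and status included:
    kubectl get pod faulty -n challenge -o yaml

    # A condensed, human-readable view of per-container states
    # plus recent events at the bottom:
    kubectl describe pod -n challenge faulty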

Hint 2 πŸ’‘

Try looking at container logs. You can use kubectl logs -n challenge faulty -c <container-name> for that. It may shed some light on why some of the containers are failing.
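A couple of variations that often help here (standard kubectl flags; <container-name> is a placeholder, as in the hint above):

    # Logs of the current (or most recent) run of one container:
    kubectl logs -n challenge faulty -c <container-name>

    # If a container is crash-looping, the previous attempt's logs
    # usually contain the actual error:
    kubectl logs -n challenge faulty -c <container-name> --previous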

Hint 3 πŸ’‘

While it's not always possible to peek inside a containerized application, in this case you can. All containers run simple Python scripts, and you can see their source code by running kubectl exec -n challenge faulty -c <container-name> -- cat /app/<script-name>.py.
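For example, you can discover the script names first and then print one (this assumes ls and cat are available in the images; <container-name> and <script-name> are placeholders as above):

    # List the files shipped with the app:
    kubectl exec -n challenge faulty -c <container-name> -- ls /app

    # Then print a script's source:
    kubectl exec -n challenge faulty -c <container-name> -- cat /app/<script-name>.py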

Hint 4 πŸ’‘

Remember that init containers are started before the regular containers. Does it look like the new init container may depend on a regular container? Should you try moving that regular container to the init list?
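Schematically, the move looks like this. A minimal sketch with hypothetical container names and images, not the actual challenge spec:

    spec:
      initContainers:
      - name: dependency        # hypothetical: previously listed under .spec.containers
        image: dependency-image
      - name: new-init          # hypothetical: the recently added init container
        image: new-init-image
      containers:
      - name: app
        image: app-image

Keep in mind that the order of the initContainers list matters: init containers run top to bottom.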

Hint 5 πŸ’‘

Traditional init containers don't just start in order; each one must also run to completion before the next can begin. If the second init container expects the first one to still be running, maybe it's time to try the new restartPolicy: Always attribute? 😉
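Here is a minimal sketch of that sidecar-style setup, again with hypothetical names and images (restartPolicy: Always on init containers is available since Kubernetes 1.28 and enabled by default since 1.29):

    spec:
      initContainers:
      - name: dependency            # hypothetical: the container others depend on
        image: dependency-image
        restartPolicy: Always       # turns this init container into a native sidecar:
                                    # it starts first and keeps running instead of
                                    # having to exit before the next init container
      - name: new-init              # hypothetical: runs while "dependency" stays up
        image: new-init-image
      containers:
      - name: app
        image: app-image

With restartPolicy: Always, the kubelet only waits for the sidecar to start, not to complete, before moving on to the next init container, and the sidecar then keeps running for the Pod's whole lifetime.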

