Challenge · Medium · Kubernetes, Containers

There is a Pod called faulty in the challenge namespace. It's desperately trying to start up but can't get far enough in its lifecycle. Apparently, a new container was added to the Pod recently, and after that, the Pod stopped working. Can you identify what causes the Pod to fail and fix it? Note that you are not allowed to change the Pod images or their contents, but you can freely change the rest of the Pod spec.

Can you make all checks below pass?

Hint 1 💡

It's a good idea to start by reviewing the Pod spec. Take a close look at the .spec.containers and .spec.initContainers lists. Make sure you understand the .status.containerStatuses and .status.initContainerStatuses fields. You can use either kubectl get pod faulty -n challenge -o yaml or the "Show me the Pod" button above.
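If it helps to see the shape of things, a Pod manifest with both lists looks roughly like this. This is a minimal sketch with made-up container names, images, and commands, not the actual ones from the challenge:

  apiVersion: v1
  kind: Pod
  metadata:
    name: faulty
    namespace: challenge
  spec:
    initContainers:            # started one by one; each must finish before the next
      - name: init-setup       # hypothetical name
        image: busybox:1.36
        command: ["sh", "-c", "echo initializing"]
    containers:                # started only after every init container has completed
      - name: app              # hypothetical name
        image: busybox:1.36
        command: ["sh", "-c", "sleep 3600"]

In the Pod status, .status.initContainerStatuses and .status.containerStatuses mirror these two lists one-to-one, so you can see which container is stuck, its last state, and its restart count.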

Hint 2 💡

Try looking at container logs. You can use kubectl logs -n challenge faulty -c <container-name> for that. It may shed some light on why some of the containers are failing.

Hint 3 💡

While it's not always possible to peek inside a containerized application, in this case you can. All containers are simple Python scripts, and you can see their source code by running kubectl exec -n challenge faulty -c <container-name> -- cat /app/<script-name>.py.

Hint 4 💡

Remember that init containers are started before the regular containers. Does it look like the new init container may depend on a regular container? Should you try moving that regular container to the init list?
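For illustration, such a move is purely a spec change. Assuming the dependency really does point from the new init container to a regular one, the edit might look like this (all names and images below are hypothetical):

  spec:
    initContainers:
      - name: helper                 # hypothetical: used to live under .spec.containers,
        image: example.com/helper    # moved up so it runs before the container that needs it
      - name: new-init               # hypothetical: the recently added init container
        image: example.com/new-init
    containers:
      - name: app
        image: example.com/app

Keep in mind, though, that an ordinary init container must run to completion before the next one starts, so this alone may not be enough if the dependency is on a running process rather than on a finished task.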

Hint 5 💡

Traditional init containers don't just start in order; each one must also run to completion before the next begins. If the second init container expects the first one to still be running, maybe it's time to try the new restartPolicy: Always attribute? 😉
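A minimal sketch of that idea, assuming your cluster is recent enough to support native sidecar containers (the restartPolicy: Always field on init containers, introduced in Kubernetes v1.28 and enabled by default from v1.29). Again, the names and images are hypothetical:

  spec:
    initContainers:
      - name: helper                 # hypothetical: a long-running dependency
        image: example.com/helper
        restartPolicy: Always        # turns this init container into a sidecar:
                                     # it starts first but is not expected to exit
      - name: new-init               # hypothetical: starts once helper is up
        image: example.com/new-init
    containers:
      - name: app
        image: example.com/app

With restartPolicy: Always, the kubelet moves on to the next init container as soon as this one has started (and passed its startup probe, if one is defined) instead of waiting for it to exit, and the sidecar keeps running for the Pod's whole lifetime.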
