Lesson in Kubernetes the (Very) Hard Way

Conclusion

Final Words

Congratulations!

You now have something most Kubernetes users never build by hand: a cluster whose internals you can actually explain.

Over the course of this journey, we went from plain Linux machines to a functioning Kubernetes cluster, one component at a time. No magic installers. No invisible defaults. Just the building blocks and the interfaces between them.

What You Built

| Module | Building blocks | Why it matters |
| --- | --- | --- |
| Worker Node | containerd, runc, CNI plugins, kubelet | This is where Pods stop being YAML and become running containers |
| Control Plane | etcd, kube-apiserver, kube-scheduler, kube-controller-manager | This is where desired state is stored, exposed, scheduled, and reconciled |
| Cluster | TLS bootstrapping, Flannel, kube-proxy, CoreDNS | This is where separate machines start behaving like one Kubernetes cluster |

What You Learned

  • Kubernetes is not a monolith. It is a set of focused components that communicate through well-defined APIs
  • containerd and kubelet are the worker-side bridge between Kubernetes objects and real Linux processes
  • etcd is the source of truth, but almost nobody talks to it directly. The API server stands in front of it and makes the rest of the system possible
  • The scheduler and controller manager do very different jobs: one decides placement, the other keeps actual state aligned with desired state
  • Cluster networking is not one feature. It is a stack of concerns: Pod-to-Pod reachability, Service routing, and DNS-based discovery
  • Certificates, kubeconfigs, bootstrap tokens, and RBAC are not ceremony. They are how trust is established between components
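That last point about trust can be made concrete with nothing but `openssl`. The sketch below is hypothetical (the names `kubernetes-ca` and `worker-0` are illustrative stand-ins, not the course's exact commands): it mints a throwaway CA, signs a kubelet-style client certificate whose subject carries the `system:node:...` username and `system:nodes` group that RBAC sees, and then verifies the chain, which is essentially what the API server does before trusting a kubelet.

```shell
# Hypothetical sketch: component trust is ordinary X.509 under the hood.
# All names (kubernetes-ca, worker-0) are illustrative stand-ins.
cd "$(mktemp -d)"

# 1. A throwaway cluster CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=kubernetes-ca"

# 2. A kubelet client key and CSR. The CN/O pair is what RBAC sees:
#    user "system:node:worker-0" in group "system:nodes".
openssl req -newkey rsa:2048 -nodes \
  -keyout kubelet.key -out kubelet.csr \
  -subj "/CN=system:node:worker-0/O=system:nodes"

# 3. The CA signs the CSR -- the role the controller manager's signer
#    plays during TLS bootstrapping.
openssl x509 -req -in kubelet.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out kubelet.crt -days 1

# 4. Chain verification is all the API server's client-cert auth does
#    before mapping the subject to a user and its groups.
openssl verify -CAfile ca.crt kubelet.crt   # prints: kubelet.crt: OK
```

If step 4 fails here, it fails for the same reasons it fails in a real cluster: wrong CA file, expired certificate, or a subject that was signed by a different authority, which is why so many "cluster auth" problems are really certificate problems.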

Here's a fun fact: many "Kubernetes problems" turn out to be very ordinary Linux, networking, or certificate problems once you know where to look.

That is the real payoff of doing things the hard way. When a node turns NotReady, a Service stops routing, or DNS fails inside a Pod, you now have a much better idea which layer to inspect first.

No need to treat Kubernetes as a black box anymore.

🐛 Reporting issues

If you encounter any issues throughout the course, please report them here.

Where to Go Next

Now comes the fun part: higher-level Kubernetes tools will make a lot more sense because you know what they are hiding.

Good Next Steps

| Topic | Why bother? | Explore next |
| --- | --- | --- |
| Cluster bootstrapping | Building everything by hand is great for learning, but automation is how clusters are usually provisioned | kubeadm, managed Kubernetes offerings |
| Networking | The networking layer shapes connectivity, policy, and performance | Cilium, Calico, NetworkPolicies, Gateway API |
| Production operations | Real clusters need upgrades, backups, certificate rotation, audit logs, and observability | Backup strategies, upgrade planning, metrics, logs, tracing |
| Platform engineering | Once one cluster is not enough, lifecycle management becomes the next challenge | Cluster API, GitOps, multi-cluster workflows |
| Extensibility | Kubernetes gets even more interesting when you teach it new APIs and control loops | Custom Resource Definitions, Operators |

A Few Practical Exercises

  1. Build the cluster again, but this time from memory or with your own automation
  2. Compare your manual setup to kubeadm and identify what it generates for you
  3. Replace one subsystem: for example, try Cilium instead of Flannel + kube-proxy
  4. Add production-minded pieces that this course intentionally skipped: a highly available control plane, backups, tighter security, and monitoring

Try this at home (not at work!): stop one component at a time and predict the symptom before you inspect the logs.

No need to memorize every flag or every config file. What matters is that you know what each component is responsible for, what depends on it, and what failure looks like when it disappears.

That is a useful kind of understanding whether you use kubeadm or a managed control plane.

Congratulations again on finishing Kubernetes The Very Hard Way.
