# Conclusion

## Final Words
Congratulations!
You now have something most Kubernetes users never build by hand: a cluster whose internals you can actually explain.
Over the course of this journey, we went from plain Linux machines to a functioning Kubernetes cluster, one component at a time. No magic installers. No invisible defaults. Just the building blocks and the interfaces between them.
## What You Built
| Module | Building blocks | Why it matters |
|---|---|---|
| Worker Node | containerd, runc, CNI plugins, kubelet | This is where Pods stop being YAML and become running containers |
| Control Plane | etcd, kube-apiserver, kube-scheduler, kube-controller-manager | This is where desired state is stored, exposed, scheduled, and reconciled |
| Cluster | TLS bootstrapping, Flannel, kube-proxy, CoreDNS | This is where separate machines start behaving like one Kubernetes cluster |
## What You Learned
- Kubernetes is not a monolith. It is a set of focused components that communicate through well-defined APIs
- containerd and kubelet are the worker-side bridge between Kubernetes objects and real Linux processes
- etcd is the source of truth, but almost nobody talks to it directly. The API server stands in front of it and makes the rest of the system possible
- The scheduler and controller manager do very different jobs: one decides placement, the other keeps actual state aligned with desired state
- Cluster networking is not one feature. It is a stack of concerns: Pod-to-Pod reachability, Service routing, and DNS-based discovery
- Certificates, kubeconfigs, bootstrap tokens, and RBAC are not ceremony. They are how trust is established between components
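The scheduler/controller split above can be summed up as the reconciliation pattern: compare desired state with actual state, then act to close the gap. Here is a minimal illustrative sketch of that loop — all names and state shapes are invented for this example; real controllers watch the API server and work with typed objects, informers, and work queues.

```python
# Minimal sketch of the reconciliation pattern used by Kubernetes controllers.
# The data shapes here are invented for illustration only.

def reconcile(desired: dict, actual: dict) -> list[str]:
    """Compare desired state with actual state and return corrective actions."""
    actions = []
    for name in desired.keys() - actual.keys():
        actions.append(f"create {name}")      # missing: bring it into existence
    for name in actual.keys() - desired.keys():
        actions.append(f"delete {name}")      # unwanted: remove it
    for name in desired.keys() & actual.keys():
        if desired[name] != actual[name]:
            actions.append(f"update {name}")  # drifted: converge it
    return actions

# One pass of the loop: desired state wants two Pods, actual has one stale
# Pod and one stray leftover.
desired = {"web-1": {"image": "nginx:1.27"}, "web-2": {"image": "nginx:1.27"}}
actual = {"web-1": {"image": "nginx:1.25"}, "old-1": {"image": "nginx:1.20"}}
print(sorted(reconcile(desired, actual)))
# → ['create web-2', 'delete old-1', 'update web-1']
```

The real system runs this loop continuously, which is why Kubernetes self-heals: any drift between etcd's desired state and the world is eventually detected and corrected.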
Here's a fun fact: many "Kubernetes problems" turn out to be very ordinary Linux, networking, or certificate problems once you know where to look.
That is the real payoff of doing things the hard way.
When a node turns NotReady, a Service stops routing, or DNS fails inside a Pod, you now have a much better idea which layer to inspect first.
No need to treat Kubernetes as a black box anymore.
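As a rough mental checklist, the symptoms above map to layers roughly like this. The groupings are my own summary of what this course built, not an exhaustive troubleshooting guide:

```python
# A hand-made symptom -> "inspect first" map, summarizing the layers assembled
# in this course. Illustrative only; real debugging rarely stops at one layer.
FIRST_SUSPECTS = {
    "node NotReady": ["kubelet", "containerd", "node TLS certificates"],
    "Service not routing": ["kube-proxy", "Endpoints", "Flannel / CNI"],
    "DNS failing in a Pod": ["CoreDNS", "kubelet DNS config", "Pod resolv.conf"],
}

for symptom, layers in FIRST_SUSPECTS.items():
    print(f"{symptom}: start with {layers[0]}")
```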
## 🐛 Reporting issues
If you encounter any issues throughout the course, please report them here.
## Where to Go Next
Now comes the fun part: higher-level Kubernetes tools will make a lot more sense because you know what they are hiding.
### Good Next Steps
| Topic | Why bother? | Explore next |
|---|---|---|
| Cluster bootstrapping | Building everything by hand is great for learning, but automation is how clusters are usually provisioned | kubeadm, managed Kubernetes offerings |
| Networking | The networking layer shapes connectivity, policy, and performance | Cilium, Calico, NetworkPolicies, Gateway API |
| Production operations | Real clusters need upgrades, backups, certificate rotation, audit logs, and observability | Backup strategies, upgrade planning, metrics, logs, tracing |
| Platform engineering | Once one cluster is not enough, lifecycle management becomes the next challenge | Cluster API, GitOps, multi-cluster workflows |
| Extensibility | Kubernetes gets even more interesting when you teach it new APIs and control loops | Custom Resource Definitions, Operators |
### A Few Practical Exercises
- Build the cluster again, but this time from memory or with your own automation
- Compare your manual setup to `kubeadm` and identify what it generates for you
- Replace one subsystem: for example, try Cilium instead of Flannel + `kube-proxy`
- Add production-minded pieces that this course intentionally skipped: a highly available control plane, backups, tighter security, and monitoring
Try this at home (not at work!): stop one component at a time and predict the symptom before you inspect the logs.
No need to memorize every flag or every config file. What matters is that you know what each component is responsible for, what depends on it, and what failure looks like when it disappears.
That is a useful kind of understanding whether you use kubeadm or a managed control plane.
Congratulations again on finishing Kubernetes The Very Hard Way.