Kubernetes Pod Scheduling: Topology Spread Constraints
topologySpreadConstraints gives you fine-grained control over how pods distribute across failure domains. Unlike pod anti-affinity, which enforces at most one pod per domain, maxSkew sets the maximum allowed difference in pod count between any two domains.
Topology Spread Constraints - Kubernetes Docs
Verify the cluster - the nodes are labeled with zone information:
kubectl get nodes --show-labels
cplane-01 and node-01 are in zone-a. node-02 is in zone-b.
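For a narrower view than --show-labels, kubectl's standard -L flag prints a single label as its own column:

```shell
# Show only the zone label as a column (less noisy than --show-labels)
kubectl get nodes -L topology.kubernetes.io/zone
```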
Task 1 - Node-Level Spread
Deploy a web Deployment with 4 replicas spread across all 3 nodes. With maxSkew: 1, no node can have more than one extra pod compared to any other node, so the pods must land 2/1/1.
Steps:
- Create a Deployment named web, image nginx:alpine, 4 replicas, label app: web
- Add topologySpreadConstraints with maxSkew: 1, topologyKey: kubernetes.io/hostname, whenUnsatisfiable: DoNotSchedule
- Verify the spread:
kubectl get pods -l app=web -o wide
Hint: topologySpreadConstraints structure
```yaml
topologySpreadConstraints:
- maxSkew: <number>
  topologyKey: <topology-key>
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      <label-key>: <label-value>
```
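Putting the hint together, a full Task 1 manifest might look like the sketch below (field values follow the task description; it assumes the control-plane node is schedulable, i.e. carries no NoSchedule taint):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      # Spread pods per node: with maxSkew 1, the 4 replicas
      # must land 2/1/1 across the 3 nodes.
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: web
      containers:
      - name: nginx
        image: nginx:alpine
```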
Task 2 - Zone-Level Spread
Deploy a zone-web Deployment with 4 replicas spread across availability zones instead of individual nodes. The topology key changes from hostname to zone, so the scheduler now treats each zone as a single domain: with maxSkew: 1 and two zones, the pods must split 2/2.
Steps:
- Create a Deployment named zone-web, image nginx:alpine, 4 replicas, label app: zone-web
- Add topologySpreadConstraints with maxSkew: 1, topologyKey: topology.kubernetes.io/zone, whenUnsatisfiable: DoNotSchedule
- Verify the spread:
kubectl get pods -l app=zone-web -o wide
Hint: Zone topology key
```yaml
topologySpreadConstraints:
- maxSkew: <number>
  topologyKey: <zone-label-key>
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      <label-key>: <label-value>
```
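Assembled into a full manifest for Task 2, assuming the same Deployment shape as Task 1 with only the name, label, and topology key changed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zone-web
spec:
  replicas: 4
  selector:
    matchLabels:
      app: zone-web
  template:
    metadata:
      labels:
        app: zone-web
    spec:
      # Spread pods per zone: each zone counts as one domain,
      # so 4 replicas across 2 zones must split 2/2.
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: zone-web
      containers:
      - name: nginx
        image: nginx:alpine
```

Within zone-a (which has two nodes), the scheduler is free to place its two pods on either node; the constraint only counts pods per zone.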
This challenge is part of the Kubernetes Pod Scheduling skill path.