Kubernetes Pod Scheduling: Taints, Tolerations and Node Affinity - Why You May Need Both
Taints and tolerations repel unwanted pods from nodes. Node affinity attracts pods to specific nodes. Each solves only half of the dedicated-node problem: a pod with a toleration may still land on a neutral node, and a neutral pod may still land on a node intended for affinity workloads. This challenge demonstrates both gaps and shows how combining the two mechanisms closes them.
Verify the cluster - three nodes, one labeled as dedicated:
kubectl get nodes --show-labels
node-01 has label node-type=dedicated. No taint yet.
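If the label is missing in your environment, it can be applied manually. This is a sketch assuming the node names used throughout the challenge (cplane-01, node-01, node-02):

```shell
# Label node-01 so node affinity rules can select it later
kubectl label node node-01 node-type=dedicated

# Confirm only node-01 carries the label
kubectl get nodes -l node-type=dedicated
```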
Scenario A - Taint and Toleration Only
Taint node-01 and deploy pods with a toleration. Watch where the scheduler places them.
Task 1 - Taint the dedicated node
Steps:
- Taint node-01 with key dedicated, value true, effect NoExecute
- Create pod named neutral-pod, image nginx:alpine, no toleration
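The steps above can be run imperatively; the taint key, value, and effect come straight from the task, and the pod name and image are as specified:

```shell
# Apply the NoExecute taint to the dedicated node
kubectl taint nodes node-01 dedicated=true:NoExecute

# Launch a pod with no toleration; the scheduler must avoid node-01
kubectl run neutral-pod --image=nginx:alpine
```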
kubectl get pod neutral-pod -o wide
neutral-pod should be Running on cplane-01 or node-02 - the taint blocks it from node-01.
Task 2 - Tolerated pods drift
Create a Deployment of tolerated pods with topologySpreadConstraints forcing even distribution across all 3 nodes, then check where each pod lands.
Steps:
- Create Deployment named tolerated, image nginx:alpine, 3 replicas, label app: tolerated
- Toleration for dedicated=true:NoExecute
- topologySpreadConstraints with maxSkew: 1, topologyKey: kubernetes.io/hostname, whenUnsatisfiable: DoNotSchedule
kubectl get pods -l app=tolerated -o wide
All 3 nodes should have a pod - including node-01. The toleration allows pods onto the tainted node but does not restrict them to it.
Hint: Toleration + topologySpreadConstraints
tolerations:
- key: <taint-key>
  operator: Equal
  value: "<value>"
  effect: NoExecute
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: <your-label>
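Putting the hint together with the values from the steps above, a complete manifest for this task might look like the following sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tolerated
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tolerated
  template:
    metadata:
      labels:
        app: tolerated
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
      # Allows scheduling onto the tainted node - but does not require it
      tolerations:
      - key: dedicated
        operator: Equal
        value: "true"
        effect: NoExecute
      # Forces one replica per node so the drift is visible
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: tolerated
```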
Scenario B - Node Affinity Only
Remove the dedicated=true:NoExecute taint from node-01 and rely on required node affinity instead.
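Removing a taint uses the same kubectl taint syntax with a trailing hyphen:

```shell
# The trailing "-" removes the matching taint from the node
kubectl taint nodes node-01 dedicated=true:NoExecute-
```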
Task 3 - Required affinity lands on dedicated node
Steps:
- Remove the dedicated=true:NoExecute taint from node-01
- Create pod named affinity-pod, image nginx:alpine
- requiredDuringSchedulingIgnoredDuringExecution node affinity to node-type=dedicated
kubectl get pod affinity-pod -o wide
affinity-pod should land on node-01 - the only node with the node-type=dedicated label.
Hint: Required node affinity structure
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: <label-key>
          operator: In
          values:
          - <label-value>
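With the placeholders filled in for this task, a minimal affinity-pod manifest could look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity-pod
spec:
  containers:
  - name: nginx
    image: nginx:alpine
  # Required affinity: the pod only schedules onto a node-type=dedicated node
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-type
            operator: In
            values:
            - dedicated
```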
Task 4 - Neutral pods land on the dedicated node too
Deploy neutral pods with no affinity and no toleration. Use topologySpreadConstraints with 3 replicas to distribute them across nodes and observe where they land.
Steps:
- Create Deployment named neutral-01, image nginx:alpine, 3 replicas, label app: neutral-01
- topologySpreadConstraints with maxSkew: 1, topologyKey: kubernetes.io/hostname, whenUnsatisfiable: DoNotSchedule
- No affinity, no toleration
kubectl get pods -l app=neutral-01 -o wide
One of the neutral pods should land on node-01 - no taint means nothing is blocking them.
Hint: topologySpreadConstraints structure
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: <your-label>
The Combined Solution
Re-apply the dedicated=true:NoExecute taint to node-01. Then deploy with all three: taint + node affinity + toleration.
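Re-applying the taint is the same command used in Task 1:

```shell
# Restore the NoExecute taint on the dedicated node
kubectl taint nodes node-01 dedicated=true:NoExecute
```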
Task 5 - Taint + Affinity + Toleration
Steps:
- Create pod named dedicated-pod, image nginx:alpine
- requiredDuringSchedulingIgnoredDuringExecution node affinity to node-type=dedicated
- Toleration for dedicated=true:NoExecute
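Combining both mechanisms in one pod spec, a sketch of dedicated-pod with the values taken from the steps above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dedicated-pod
spec:
  containers:
  - name: nginx
    image: nginx:alpine
  # Affinity pins the pod to the labeled node...
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-type
            operator: In
            values:
            - dedicated
  # ...and the toleration lets it past the taint once it gets there
  tolerations:
  - key: dedicated
    operator: Equal
    value: "true"
    effect: NoExecute
```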
Also deploy neutral pods without a toleration and observe whether they can reach node-01.
Steps:
- Create Deployment named neutral-02, image nginx:alpine, 3 replicas, label app: neutral-02
- topologySpreadConstraints with maxSkew: 1, topologyKey: kubernetes.io/hostname, whenUnsatisfiable: DoNotSchedule
- No toleration
kubectl get pod dedicated-pod -o wide
kubectl get pods -l app=neutral-02 -o wide
dedicated-pod should be on node-01. Neutral pods should be on cplane-01 and node-02 only - the taint blocks them from node-01.
This challenge is part of the Kubernetes Pod Scheduling skill path.