Kubernetes Pod Scheduling: nodeSelector and Node Affinity
nodeSelector targets a node by label - add a label to a node, add the same key-value to the pod spec, done. Node affinity does the same thing but with more options: match multiple values, exclude nodes with NotIn, check whether a label exists with Exists, and a preferred mode that still schedules the pod even if no node matches.
Verify the cluster - two worker nodes, labeled by size:
kubectl get nodes --show-labels
node-01 has size=large and tier=fast. node-02 has size=medium and tier=fast.
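If you are recreating this setup on your own cluster, the labels could be applied with kubectl label (a sketch, assuming worker nodes named node-01 and node-02 as above):

```shell
# Label the two worker nodes to match the lab setup
kubectl label nodes node-01 size=large tier=fast
kubectl label nodes node-02 size=medium tier=fast
```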
Task 1 - nodeSelector
nodeSelector is a map of key-value pairs. The pod is scheduled only to nodes that have all of them.
Steps:
- Create a pod named nodeselector-pod with image nginx:alpine
- Add a nodeSelector that targets nodes with the size=large label
kubectl get pod nodeselector-pod -o wide
The pod lands on node-01 - the only node with size=large.
Hint: nodeSelector structure
spec:
  nodeSelector:
    size: large
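Putting the hint into a complete manifest, the pod could look like this (a sketch - the container name nginx is an arbitrary choice; the pod name, image, and label come from the task):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-pod
spec:
  nodeSelector:
    size: large        # only nodes labeled size=large are candidates
  containers:
  - name: nginx
    image: nginx:alpine
```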
Task 2 - Required Node Affinity
requiredDuringSchedulingIgnoredDuringExecution works like nodeSelector - the pod does not schedule unless a matching node is found. The difference is operators: In checks if the label value is in a list you define. One value in the list gives you the same result as nodeSelector, but you can add more.
Steps:
- Create a pod named required-pod with image nginx:alpine
- Add required node affinity targeting nodes with the size=large label, using the In operator
kubectl get pod required-pod -o wide
Same result as Task 1 - node-01 - but now the rule can do things nodeSelector cannot: match multiple values, exclude nodes, or check if a label key exists at all.
Hint: Required node affinity structure
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - large
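As a full manifest, the same rule could be written like this (a sketch - container name is an arbitrary choice; adding more entries under values would match additional nodes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: required-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In   # label value must be in the list below
            values:
            - large
  containers:
  - name: nginx
    image: nginx:alpine
```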
Task 3 - Target node-01 with NotIn
Task 2 got to node-01 by selecting it. You can reach the same node the other way - by excluding every node that is not node-01. node-02 has size=medium and cplane-01 has no size label at all.
Steps:
- Create a pod named notin-pod with image nginx:alpine
- Add required node affinity with two matchExpressions:
  - Use NotIn on the size label to exclude nodes with size=medium
  - Use Exists on the size label to exclude nodes that have no size label at all
kubectl get pod notin-pod -o wide
notin-pod lands on node-01 - not because it was chosen, but because the other two nodes were excluded.
Hint: NotIn + Exists operators
nodeSelectorTerms:
- matchExpressions:
  - key: size
    operator: NotIn
    values:
    - medium
  - key: size
    operator: Exists
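A complete manifest could look like this (a sketch - both expressions sit in one matchExpressions list, so they are ANDed: the node must have a size label and its value must not be medium):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: notin-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: NotIn   # excludes node-02 (size=medium)
            values:
            - medium
          - key: size
            operator: Exists  # excludes nodes with no size label at all
  containers:
  - name: nginx
    image: nginx:alpine
```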
Task 4 - Preferred Affinity and Weight Scoring
Preferred rules are scored: each node gets a score based on which rules it satisfies and the weight of each. The scheduler picks the node with the highest total.
Tasks 1 to 3 all landed on node-01. With preferred rules you can steer to node-02 instead - not by excluding node-01, but by giving node-02 a higher score. Both nodes have tier=fast so both score something, but the higher-weight rule decides the winner.
Steps:
- Create a pod named preferred-pod with image nginx:alpine
- Add two preferred rules:
  - weight 60: prefer nodes with size=medium
  - weight 40: prefer nodes with tier=fast
kubectl get pod preferred-pod -o wide
node-02 scores 60 + 40 = 100. node-01 scores 0 + 40 = 40. The pod lands on node-02 - not because node-01 was excluded, but because node-02 scored higher.
Hint: Multiple preferred rules structure
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 60
        preference:
          matchExpressions:
          - key: size
            operator: In
            values:
            - medium
      - weight: 40
        preference:
          matchExpressions:
          - key: tier
            operator: In
            values:
            - fast
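As a full manifest, the weighted rules could be written like this (a sketch - container name is an arbitrary choice; weights between 1 and 100 are summed per node, matching the 60 + 40 = 100 scoring described above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: preferred-pod
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 60
        preference:
          matchExpressions:
          - key: size
            operator: In   # node-02 matches size=medium, earning 60
            values:
            - medium
      - weight: 40
        preference:
          matchExpressions:
          - key: tier
            operator: In   # both nodes match tier=fast, earning 40
            values:
            - fast
  containers:
  - name: nginx
    image: nginx:alpine
```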
This challenge is part of the Kubernetes Pod Scheduling skill path.