Kubernetes Pod Scheduling: LimitRange and ResourceQuota
Any pod that omits resource specs gets BestEffort QoS and is evicted first under node pressure. Any pod without a limit can consume an entire node. LimitRange solves both: it injects defaults into pods that omit specs, and, when limitranges.spec.limits[].max is defined, rejects at admission any pod that exceeds it.
ResourceQuota takes it further - it caps the total resources a namespace can consume, so a single team or application cannot starve the rest of the cluster.
Task 1 - LimitRange
Steps:
- Create LimitRange named pod-defaults in namespace quota-demo
- Default cpu request: 250m, default memory request: 128Mi
- Default cpu limit: 500m, default memory limit: 256Mi
- Create pod named auto-pod, image nginx:alpine, no resource spec, namespace quota-demo
kubectl describe limitrange pod-defaults -n quota-demo
kubectl get pod auto-pod -n quota-demo -o jsonpath='{.spec.containers[0].resources}'
Hint: LimitRange structure
apiVersion: v1
kind: LimitRange
metadata:
  name: <name>
  namespace: <namespace>
spec:
  limits:
  - type: Container
    default:
      cpu: <cpu-limit>
      memory: <memory-limit>
    defaultRequest:
      cpu: <cpu-request>
      memory: <memory-request>
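The hint above, with Task 1's values filled in, might look like this (assuming the quota-demo namespace already exists):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: pod-defaults
  namespace: quota-demo
spec:
  limits:
  - type: Container
    # Injected as the container's limits when none are set
    default:
      cpu: 500m
      memory: 256Mi
    # Injected as the container's requests when none are set
    defaultRequest:
      cpu: 250m
      memory: 128Mi
```

After applying it, any pod created in quota-demo without a resources block, such as auto-pod, should come back from the API server with these values filled in.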
Task 2 - ResourceQuota
Steps:
- Create ResourceQuota named ns-quota in namespace quota-demo
- requests.cpu: 500m
- requests.memory: 256Mi
kubectl describe resourcequota ns-quota -n quota-demo
Hint: ResourceQuota structure
apiVersion: v1
kind: ResourceQuota
metadata:
  name: <name>
  namespace: <namespace>
spec:
  hard:
    requests.cpu: <cpu>
    requests.memory: <memory>
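Filled in with Task 2's values, the quota manifest might look like:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ns-quota
  namespace: quota-demo
spec:
  hard:
    # Caps the SUM of requests across all pods in the namespace,
    # not the size of any individual pod
    requests.cpu: 500m
    requests.memory: 256Mi
```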
Task 3 - Exhaust the Quota
Steps:
- Create pod named worker, image nginx:alpine, no resource spec, namespace quota-demo
- Try creating a pod named overflow, image nginx:alpine, namespace quota-demo - observe what happens
kubectl describe resourcequota ns-quota -n quota-demo
kubectl run overflow --image=nginx:alpine -n quota-demo
The rejection message shows exactly which quota was exceeded and by how much.
Note: If you deploy a Deployment instead of a bare pod, kubectl apply succeeds and the Deployment and ReplicaSet are created - but no pods appear. The ReplicaSet tries to create pods, quota rejects them, and the error never surfaces in your terminal. Run kubectl get events -n quota-demo --sort-by='.lastTimestamp' to find the rejection.
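A minimal Deployment to reproduce this silent failure might look like the sketch below (the name silent-deploy is hypothetical, not part of the task):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: silent-deploy
  namespace: quota-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: silent-deploy
  template:
    metadata:
      labels:
        app: silent-deploy
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        # No resources block: the LimitRange injects default requests,
        # which (with the quota already exhausted by earlier pods) the
        # ResourceQuota rejects. The Deployment itself still applies
        # cleanly; only the events show the rejection.
```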
Task 4 - Quota Without LimitRange
quota-demo works because the LimitRange injects cpu requests before the quota check sees the pod. Without a LimitRange, a namespace with a cpu ResourceQuota rejects any pod that arrives without requests or limits set - nothing is there to inject defaults.
Steps:
- Create namespace quota-strict
- Create ResourceQuota named strict-quota in quota-strict, requests.cpu: 500m, no LimitRange
- Create pod named no-request, image nginx:alpine, no resource spec, namespace quota-strict - observe what happens
- Create pod named with-request, image nginx:alpine, requests.cpu: 100m only, namespace quota-strict - observe what happens
kubectl get pod with-request -n quota-strict
The quota defines requests.cpu - so pods must have requests.cpu set. If the quota defined limits.cpu instead, pods would need limits.cpu set.
Hint: ResourceQuota with requests.cpu only
apiVersion: v1
kind: ResourceQuota
metadata:
  name: <name>
  namespace: <namespace>
spec:
  hard:
    requests.cpu: <cpu>
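One way to create with-request is a pod manifest with an explicit cpu request (names and values from the task):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-request
  namespace: quota-strict
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    resources:
      requests:
        cpu: 100m   # counted against the quota's requests.cpu: 500m
```

Removing the resources block turns this into the no-request variant, which should be rejected because the quota tracks requests.cpu and no LimitRange supplies a default.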
Task 5 - Object Count Quota
ResourceQuota can cap the number of any Kubernetes object in a namespace, not just resource consumption. count/pods: 3 limits the namespace to 3 pods total regardless of their resource requests - no requests or limits needed on the pods.
Steps:
- Create namespace quota-count
- Create ResourceQuota named pods-quota in quota-count, count/pods: "3"
- Try to create 4 pods named pod-1, pod-2, pod-3, pod-4, image nginx:alpine, no resource spec, namespace quota-count - observe the results
kubectl describe resourcequota pods-quota -n quota-count
The rejection shows count/pods was exceeded - no resource math involved, just a counter.
Hint: object count quota
apiVersion: v1
kind: ResourceQuota
metadata:
  name: <name>
  namespace: <namespace>
spec:
  hard:
    count/pods: "<number>"
Task 6 - LimitRange Max
When a pod explicitly sets a cpu request, LimitRange defaults are ignored - the explicit value is used as-is. The only way to enforce a ceiling on explicit requests is limitranges.spec.limits[].max. Any pod requesting more than max is rejected at admission.
Steps:
- Create namespace limitrange-max
- Create LimitRange named cpu-max in namespace limitrange-max with max.cpu: 500m
- Try creating a pod named over-max, image nginx:alpine, requests.cpu: 2, namespace limitrange-max - observe what happens
- Create a pod named within-max, image nginx:alpine, requests.cpu: 400m, namespace limitrange-max - observe what happens
kubectl get pod within-max -n limitrange-max
Hint: LimitRange with max
apiVersion: v1
kind: LimitRange
metadata:
  name: <name>
  namespace: <namespace>
spec:
  limits:
  - type: Container
    max:
      cpu: <max-cpu>
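The two test pods differ only in their cpu request; a sketch of within-max with the task's values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: within-max
  namespace: limitrange-max
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    resources:
      requests:
        cpu: 400m   # under max.cpu: 500m, so admission accepts it
```

Changing the name to over-max and the request to 2 produces the variant that admission should reject.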
This challenge is part of the Kubernetes Pod Scheduling skill path.