Kubernetes: Admission Control
Kubernetes admission control is a middleware mechanism that operates within the Kubernetes API request processing pipeline. It acts as a gateway between incoming API requests and the storage layer (etcd), allowing the cluster to intercept, validate, modify, or reject requests before they are persisted to storage.
In this tutorial, you will learn:
- The different types of admission control mechanisms
- Where they fit in the API request pipeline
- How to configure and use admission controllers
🐛 Reporting issues
If you encounter any issues throughout the tutorial, please report them here.
Lab environment
The lab environment in this tutorial is NOT a fully functioning Kubernetes cluster.
It's only a kube-apiserver instance with etcd, designed to demonstrate that admission control happens inside the kube-apiserver.
Pods will stay in Pending state forever, Deployments will not be turned into Pods, etc.
Understanding the Kubernetes API request pipeline
When a client (such as `kubectl`, a controller, or any application) sends a request to the Kubernetes API server, the request goes through several stages before being persisted to storage:
- Authentication - Verifying the identity of the requester
- Authorization - Checking if the authenticated user has permission to perform the action
- Decoding - Converting the request payload into a structured object, setting default values, etc
- Mutating Admission Control - Making additional modifications to the request payload
- Schema Validation - Verifying that the request conforms to the expected OpenAPI schema
- Validating Admission Control - Performing additional validation or policy enforcement
- Storage Version - Converting the object to the storage version
- Storage - Persisting the object to etcd
If any of these steps fail or explicitly reject the request, processing stops and the API server returns an error.
💡 Not all requests go through every step. Whether a request is processed by a particular stage depends on the type of request. For example, `get`, `list`, and `watch` requests bypass the admission control layer entirely.
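The flow above can be sketched as a chain of handlers where any stage may abort the request. This is an illustrative Python model, not the actual kube-apiserver code; the stage functions and the `AdmissionError` type are made up for the sketch:

```python
# Simplified model of the kube-apiserver request pipeline:
# each stage either transforms the object or raises to reject the request.

class AdmissionError(Exception):
    pass

def mutate(obj):
    # e.g. LimitRanger filling in default resource limits
    obj.setdefault("labels", {})
    return obj

def validate_schema(obj):
    if "name" not in obj:
        raise AdmissionError("schema validation failed: name is required")
    return obj

def validate_policy(obj):
    # validating admission sees the *mutated* object
    return obj

def store(obj):
    return {"persisted": obj}

def handle_request(obj):
    for stage in (mutate, validate_schema, validate_policy, store):
        obj = stage(obj)  # any stage may raise and abort processing
    return obj

print(handle_request({"name": "demo"}))
```

If any stage raises, processing stops and no later stage (including storage) runs, mirroring the fail-fast behavior described above.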
Kubernetes API request pipeline
Types of admission control
Kubernetes implements admission control through a two-phase process:
- Mutating Admission Control - Modifies incoming requests
- Validating Admission Control - Validates requests without making modifications
Admission controllers implement one or both admission control mechanisms (i.e., a controller may be validating, mutating, or both) and are built directly into the API server (controlled by API server flags).
💡 You can find the complete list of admission controllers in the official documentation.
Since admission controllers are compiled into the Kubernetes API server, you cannot create new custom admission controllers. However, there are two special controllers that allow you to customize admission control behavior:
- Webhooks (aka. Dynamic Admission Control) - External services called via HTTP requests
- Policies - Kubernetes objects that define rules using CEL (Common Expression Language) expressions
Both webhooks and policies are configured using Kubernetes objects, providing a flexible and extensible way to customize the admission control process.
💡 Policies are a relatively recent addition to Kubernetes:
- ValidatingAdmissionPolicy became stable in Kubernetes 1.30
- MutatingAdmissionPolicy is in beta as of Kubernetes 1.34
Since several important Kubernetes features require admission controllers, Kubernetes comes with a set of admission controllers enabled by default.
Kubernetes admission control overview
Mutating Admission Controllers
Mutating admission controllers run first in the API request pipeline. Their primary purpose is to modify or transform incoming requests before validation occurs.
Common use cases for mutating controllers include:
- Defaulting missing field values
- Injecting sidecar containers or volumes
- Modifying resource requests and limits
- Adding labels, annotations, or metadata
Validating Admission Controllers
Validating admission controllers run after mutating controllers (and schema validation) and focus solely on validation without making any modifications.
Validating controllers can take one of three actions:
- Accept the request (allow it to proceed)
- Reject the request (return an error)
- Warn about policy violations while allowing the request to proceed
These controllers ensure that the final request (after all mutations are applied) complies with cluster policies, security requirements, and business rules.
Schema validation
Schema validation occurs between mutating and validating admission controllers in the API request pipeline. During this phase, the Kubernetes API server validates that the request object (after all mutations have been applied) conforms to the expected OpenAPI schema for that resource type.
This validation is distinct from earlier deserialization errors. Deserialization failures occur when the API server cannot parse the incoming request into a valid Kubernetes object structure due to:
- Malformed YAML or JSON
- Invalid data types in fields
- Unknown fields (in case of strict field validation)
In contrast, schema validation ensures that a successfully parsed object meets the specific requirements for its resource type.
💡 Example: Deserialization would catch a malformed YAML file, while schema validation would catch a Pod that doesn't have any containers.
Schema validation occurs after mutating admission controllers because mutations often add required fields or modify values to meet schema requirements, so the validator needs to see the final object state.
If schema validation fails, the API server immediately returns an error, and the request never reaches validating admission controllers or storage. This fail-fast approach prevents invalid objects from entering the cluster and provides clear feedback about what needs to be corrected.
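To make the distinction concrete, here is an illustrative Python sketch, with `json.loads` standing in for deserialization and a hand-written check standing in for the real OpenAPI schema validation:

```python
import json

def deserialize(raw):
    # Deserialization: can we parse the bytes into an object at all?
    return json.loads(raw)  # raises on malformed JSON

def validate_pod_schema(pod):
    # Schema validation: does the parsed object satisfy
    # resource-specific requirements?
    containers = pod.get("spec", {}).get("containers", [])
    if not containers:
        raise ValueError("spec.containers: Required value")
    for c in containers:
        if not c.get("image"):
            raise ValueError("spec.containers[].image: Required value")

# Parses fine (valid JSON), but fails schema validation (empty image):
pod = deserialize('{"spec": {"containers": [{"name": "pause", "image": ""}]}}')
try:
    validate_pod_schema(pod)
except ValueError as err:
    print(err)  # spec.containers[].image: Required value
```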
Built-in admission controller: LimitRanger
Kubernetes operators have several tools at their disposal to manage, control, and limit resource usage in a cluster.
One such feature is defining Limit Ranges. These are policies that constrain the resource allocations (limits and requests) that you can specify for each applicable object (such as Pod or Container) in a namespace.
💡 Not to be confused with Resource Quotas, which limit aggregate resource consumption (and are also enforced by an admission controller).
Limit ranges are enforced by the LimitRanger admission controller, which is both a mutating (sets default resource limits when missing) and validating (enforces minimum and maximum resource limits) admission controller, making it an excellent example for this demonstration.
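The controller's dual role can be sketched like this (illustrative Python, not the real implementation; CPU values are in millicores and mirror the LimitRange used in this tutorial):

```python
# Illustrative sketch of LimitRanger's two phases for CPU, in millicores.
DEFAULT_CPU = 500
MIN_CPU, MAX_CPU = 100, 1000

def limit_ranger_mutate(container):
    # Mutating phase: fill in default CPU limit/request when missing.
    resources = container.setdefault("resources", {})
    resources.setdefault("limits", {"cpu": DEFAULT_CPU})
    resources.setdefault("requests", {"cpu": DEFAULT_CPU})
    return container

def limit_ranger_validate(container):
    # Validating phase: enforce min/max bounds on the (mutated) object.
    cpu = container["resources"]["limits"]["cpu"]
    if not MIN_CPU <= cpu <= MAX_CPU:
        raise ValueError(f"cpu limit {cpu}m is outside [{MIN_CPU}m, {MAX_CPU}m]")
    return container
```

A container submitted without resources gets the defaults and passes validation, while one asking for 2000m is rejected in the validating phase.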
Create a LimitRange (in the `default` namespace) with some limits and defaults for Containers:
kubectl apply -f ~/examples/builtin/limitrange.yaml
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default
spec:
  limits:
    - type: Container
      default: # Default limit
        cpu: 500m
      defaultRequest: # Default request
        cpu: 500m
      max: # Maximum limit
        cpu: "1"
      min: # Minimum limit
        cpu: 100m
```
💡 Using the `default` namespace is important here: the LimitRanger controller only applies defaults and limits in namespaces that contain a LimitRange object, and the LimitRange above was created in the `default` namespace.
Create a Pod with a single Container without resource limits to test the mutating behavior of the LimitRanger controller:
kubectl apply -f ~/examples/builtin/pod-no-resources.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-resources
spec:
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.10
```
If you are feeling adventurous
If you want to sharpen your Kubernetes skills, you can try communicating with the Kubernetes API server directly using `curl` instead of `kubectl`.
```sh
yq -o=json . ~/examples/builtin/pod-no-resources.yaml | curl -k \
  -X POST \
  -H "Authorization: Bearer iximiuz" \
  -H "Content-Type: application/json" \
  https://127.0.0.1:6443/api/v1/namespaces/default/pods \
  -d @-
```
If the LimitRanger controller is enabled (which it is by default) and working correctly, the Pod should be created with the default resource limits defined by the LimitRange.
Check the Pod to see if the resource limits were applied:
```sh
kubectl get pod no-resources \
  -o jsonpath='{.spec.containers[0].resources}' | jq
```
Another way to verify that the Pod has been mutated by the LimitRanger controller is to check its annotations:
```sh
kubectl get pod no-resources \
  -o jsonpath='{.metadata.annotations}' | jq
```
You should see something like this:
```json
{
  "kubernetes.io/limit-ranger": "LimitRanger plugin set: cpu request for container pause; cpu limit for container pause"
}
```
Now let's explore the validating aspect of the LimitRanger controller.
Create a Pod that violates the limits defined by the LimitRange:
kubectl apply -f ~/examples/builtin/pod-limit-exceeded.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limit-exceeded
spec:
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.10
      resources:
        limits:
          # Must be less than or equal to cpu limit of 1000m
          cpu: 2000m
        requests:
          cpu: 2000m
```
As expected, the API server (or more precisely, the LimitRanger admission controller) rejected the Pod because it violated the LimitRange.
💡 Since this validation occurs during admission, existing pods will be unaffected by adding a new LimitRange.
This highlights an important limitation of admission control: it only affects new (or updated) objects.
This concludes the demonstration of built-in admission controllers, but this example provides a great opportunity to explore how schema validation works.
The following Pod definition demonstrates how and when schema validation takes place:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-image
spec:
  containers:
    - name: pause
      # Image is required
      image: ""
      resources:
        limits:
          # Must be less than or equal to cpu limit of 1000m
          cpu: 2000m
        requests:
          cpu: 2000m
```
This Pod is designed to showcase the timing of schema validation in the API request pipeline:
- ✅ Syntactically valid - No deserialization errors
- ✅ Has resource limits - Won't be mutated by LimitRanger
- ❌ Violates LimitRange - Should be rejected by validating admission
- ❌ Missing container image - Should be caught by schema validation first
This demonstrates that schema validation occurs between mutations and validating admission control.
Applying this Pod should fail, but with a different error message this time:
kubectl apply -f ~/examples/builtin/pod-no-image.yaml
Dynamic admission control: Webhooks
As you've seen, admission control is a powerful mechanism for enforcing policies and transforming resources. What makes it truly flexible is the ability to customize admission control behavior using webhooks (also known as admission webhooks or dynamic admission control).
Like built-in admission controllers, webhooks can be mutating, validating, or both, and are invoked during the appropriate phase of the admission control pipeline.
Regardless of their behavior, all webhooks must implement a contract defined by the Kubernetes API to integrate with the API server:
- Accept HTTP POST requests at the configured endpoint
- Receive an AdmissionReview object (containing the request) encoded as JSON in the request body
- Respond with an AdmissionReview object (containing the response) encoded as JSON
The most critical field in the response is `allowed`, which determines whether the API server should accept or reject the request.
Beyond simple allow/deny decisions, webhooks can provide additional information in their responses:
- Messages explaining why a request was denied
- Warnings displayed to the user while still allowing the request to proceed
- JSON patches (for mutating webhooks) that modify the incoming object
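For example, a denying response could be built like this (a hand-rolled sketch; the field names follow the `admission.k8s.io/v1` AdmissionReview schema, and the uid is a placeholder that must echo the uid of the incoming request):

```python
import json

def make_review_response(request_uid, allowed, message=None):
    """Build an admission.k8s.io/v1 AdmissionReview response body."""
    response = {"uid": request_uid, "allowed": allowed}
    if message is not None:
        # An optional human-readable explanation for denials.
        response["status"] = {"message": message}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }

body = make_review_response(
    "705ab4f5-6393-11e8-b7cc-42010a800002",  # placeholder uid
    allowed=False,
    message="Production resources must have an owner",
)
print(json.dumps(body, indent=2))
```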
In order for the API server to call a webhook in the admission control pipeline, the webhook must be configured using one of the following Kubernetes resources:
- MutatingWebhookConfiguration (for mutating webhooks)
- ValidatingWebhookConfiguration (for validating webhooks)
These configurations specify:
- Which resources the webhook should intercept (using label selectors, resource types, etc.)
- Where to send requests (the webhook service endpoint)
- How to handle failures (fail the request or ignore the error)
- Security settings (TLS configuration, timeout values)
Always configure webhooks with appropriate selectors and constraints to limit their scope. Webhooks without proper filtering can intercept requests for critical system components in the `kube-system` namespace, potentially causing cluster outages if the webhook fails or becomes unavailable. Read more in the Good Practices guide in the official Kubernetes documentation.
Let's explore a practical example that demonstrates both mutating and validating webhooks.
Imagine you work at Acme Corp, where the engineering team has established the following policies:
- Environment labeling: Every resource must have an `environment` label (dev, staging, prod)
- Team ownership: Production resources must have an `owner` label identifying the responsible team
Policies like these can create friction for developers, so you want to be flexible when possible while still enforcing them in a controlled manner.
You can achieve both goals with webhooks:
- A mutating webhook that automatically adds `environment: dev` when the label is missing
- A validating webhook that enforces the ownership requirement for production resources
This approach reduces developer friction while maintaining accountability.
Since implementing webhooks from scratch is beyond the scope of this tutorial, you'll use a simple webhook implementation built with Caddy.
The behavior described above is already implemented in `/etc/caddy/Caddyfile`:
```
localhost:8443 {
    route /validate {
        k8s_admission validation {
            expression "has(object.metadata.labels.environment) && (object.metadata.labels.environment != 'prod' || has(object.metadata.labels.owner))"
            message "Production resources must have an owner"
        }
    }
    route /mutate {
        k8s_admission json_patch {
            op add
            path /metadata/labels/environment
            value "dev"
        }
    }
}
```
Configure the mutating webhook:
kubectl apply -f ~/examples/webhook/config-mutating.yaml
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: default-labels
webhooks:
  - name: mutator.labs.iximiuz.com
    clientConfig:
      url: https://localhost:8443/mutate
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["*"]
    objectSelector:
      matchLabels:
        # This selector makes sure that only resources with
        # the "mutator: webhook" label are mutated.
        mutator: webhook
      matchExpressions:
        # This selector makes sure that only resources without
        # the "environment" label are mutated.
        - key: environment
          operator: DoesNotExist
    admissionReviewVersions: ["v1"]
    sideEffects: None
```
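The combined effect of `matchLabels` and the `DoesNotExist` match expression can be illustrated in Python (a simplified stand-in for the API server's label-selector logic, not its actual implementation):

```python
def webhook_applies(labels):
    # matchLabels: only objects labeled mutator=webhook are intercepted...
    if labels.get("mutator") != "webhook":
        return False
    # ...and matchExpressions (DoesNotExist): only if "environment" is absent.
    return "environment" not in labels

assert webhook_applies({"mutator": "webhook"})                             # mutated
assert not webhook_applies({"mutator": "webhook", "environment": "prod"})  # skipped
assert not webhook_applies({"app": "web"})                                 # not selected
```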
💡 Since both the API server and Caddy are running on the same host, you can use `localhost` as the webhook URL. In a production environment, you would typically run the webhook as a service inside Kubernetes.
Create a resource (e.g. a Pod) that does not have the `environment` label set:
kubectl apply -f ~/examples/webhook/pod-no-env.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webhook-no-env
  labels:
    mutator: webhook
spec:
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.10
```
Check the Pod to verify that the `environment` label was added:
```sh
kubectl get pod webhook-no-env \
  -o jsonpath='{.metadata.labels}' | jq
```
Now configure the validating webhook to enforce the ownership requirement for production resources:
kubectl apply -f ~/examples/webhook/config-validating.yaml
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: production-owners
webhooks:
  - name: validator.labs.iximiuz.com
    clientConfig:
      url: https://localhost:8443/validate
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["*"]
    objectSelector:
      matchLabels:
        # This selector makes sure that only resources with
        # the "validator: webhook" label are validated.
        validator: webhook
    sideEffects: None
    admissionReviewVersions: ["v1"]
```
admissionReviewVersions: ["v1"]
Try to create a Pod that is marked as production but lacks the required `owner` label:
kubectl apply -f ~/examples/webhook/pod-no-owner.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webhook-no-owner
  labels:
    environment: prod
    validator: webhook
spec:
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.10
```
This time, the API server should reject the Pod creation because it lacks the required `owner` label for a production resource.
The new kid on the block: Policies
While webhooks have been the go-to solution for custom admission control for years, they can introduce challenges to the Kubernetes API server's performance and reliability:
- Latency: Each webhook call adds network round-trip time to API requests
- Availability: Webhook failures can block cluster operations entirely
- Security: Webhooks require proper TLS configuration and secure network access
- Debugging: Troubleshooting webhook issues is often more complex than built-in controllers
To address these challenges, Kubernetes introduced admission policies as a more lightweight alternative. Policies are defined as Kubernetes resources and evaluated directly by the API server, eliminating the need for external webhook services.
The two main types of admission policies are:
- Mutating Admission Policies - Transform incoming resources
- Validating Admission Policies - Validate incoming resources
Similar to webhooks, policies are configured using Kubernetes resources:
- MutatingAdmissionPolicy and MutatingAdmissionPolicyBinding
- ValidatingAdmissionPolicy and ValidatingAdmissionPolicyBinding
Policies use the Common Expression Language (CEL) to define rules, making them easier to write and maintain than full webhook implementations.
Key advantages of policies over webhooks:
- Performance: No network calls - policies run directly in the API server
- Reliability: No external dependencies that can fail
- Simplicity: CEL expressions are easier to write and debug than full webhook services
- Security: No need to manage TLS certificates or network configurations
While policies are still evolving, they represent the future direction for admission control in Kubernetes.
Let's explore the same example from the webhook section, but this time using policies instead.
Recall Acme Corp's policies:
- Environment labeling: Every resource must have an `environment` label (dev, staging, prod)
- Team ownership: Production resources must have an `owner` label identifying the responsible team
You can implement both requirements using admission policies:
- A mutating policy that automatically adds `environment: dev` when the label is missing
- A validating policy that enforces the ownership requirement for production resources
This approach provides the same benefits as webhooks but with better performance and simpler operations.
First, create the mutating admission policy:
kubectl apply -f ~/examples/policy/policy-mutating.yaml
```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingAdmissionPolicy
metadata:
  name: default-labels
spec:
  matchConstraints:
    resourceRules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["*"]
    objectSelector:
      matchExpressions:
        # This selector makes sure that only resources without
        # the "environment" label are mutated.
        - key: environment
          operator: DoesNotExist
  reinvocationPolicy: IfNeeded
  mutations:
    - patchType: "ApplyConfiguration"
      applyConfiguration:
        expression: >
          Object{
            metadata: Object.metadata{
              labels: Object.metadata.labels{
                environment: "dev"
              }
            }
          }
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingAdmissionPolicyBinding
metadata:
  name: default-labels
spec:
  policyName: default-labels
  matchResources:
    objectSelector:
      matchLabels:
        # This selector makes sure that only resources with
        # the "mutator: policy" label are mutated.
        mutator: policy
```
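The ApplyConfiguration mutation behaves like a structural merge of the patch object into the incoming resource. Roughly, in a simplified Python sketch that handles maps only and ignores Kubernetes' list-merge semantics:

```python
def apply_configuration(obj, patch):
    # Recursively merge the patch into the object (maps only; simplified).
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(obj.get(key), dict):
            apply_configuration(obj[key], value)
        else:
            obj[key] = value
    return obj

pod = {"metadata": {"name": "policy-no-env", "labels": {"mutator": "policy"}}}
patch = {"metadata": {"labels": {"environment": "dev"}}}
apply_configuration(pod, patch)
print(pod["metadata"]["labels"])  # {'mutator': 'policy', 'environment': 'dev'}
```

Existing labels are preserved; only the missing `environment` key is added.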
Create a Pod that does not have the `environment` label set:
kubectl apply -f ~/examples/policy/pod-no-env.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: policy-no-env
  labels:
    mutator: policy
spec:
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.10
```
Check the Pod to verify that the `environment` label was added:
```sh
kubectl get pod policy-no-env \
  -o jsonpath='{.metadata.labels}' | jq
```
Create the validating admission policy to enforce ownership requirements:
kubectl apply -f ~/examples/policy/policy-validating.yaml
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: production-owners
spec:
  matchConstraints:
    resourceRules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["*"]
  validations:
    - expression: "has(object.metadata.labels.environment) && (object.metadata.labels.environment != 'prod' || has(object.metadata.labels.owner))"
      message: "Production resources must have an owner"
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: production-owners
spec:
  policyName: production-owners
  validationActions: [Deny]
  matchResources:
    objectSelector:
      matchLabels:
        # This selector makes sure that only resources with
        # the "validator: policy" label are validated.
        validator: policy
```
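The ownership rule enforced here is the same one the webhook's Caddyfile expressed in CEL. Re-expressed in Python for illustration (`has()` in CEL corresponds to a key-presence check on the labels map):

```python
def production_has_owner(labels):
    # CEL: has(object.metadata.labels.environment)
    #      && (object.metadata.labels.environment != 'prod'
    #          || has(object.metadata.labels.owner))
    return "environment" in labels and (
        labels["environment"] != "prod" or "owner" in labels
    )

assert production_has_owner({"environment": "dev"})                   # allowed
assert production_has_owner({"environment": "prod", "owner": "team"}) # allowed
assert not production_has_owner({"environment": "prod"})              # rejected
```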
Try to create a production Pod that lacks the required `owner` label:
kubectl apply -f ~/examples/policy/pod-no-owner.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: policy-no-owner
  labels:
    environment: prod
    validator: policy
spec:
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.10
```
Just like with the webhook, the API server should reject the Pod creation because it lacks the required `owner` label.
What's next?
This tutorial has covered the fundamentals of Kubernetes admission control, but there is still plenty more to explore.
Both webhooks and policies offer a wide range of customization options, allowing you to tailor admission control to your specific needs. But with great power comes great responsibility: it's important to consider the operational aspects and configure admission control carefully.
Make sure to check out the linked references below for more information.
References
💡 To dive deeper into the concepts covered in this tutorial, check out the resources below.
- Admission Control
- kube-apiserver reference
- Dynamic Admission Control (aka Admission Webhooks)
- Policies
- CEL (Common Expression Language)
- Video: The Future of Kubernetes Admission Logic (by Marcus Noble)
Kubernetes resources
- MutatingWebhookConfiguration
- ValidatingWebhookConfiguration
- MutatingAdmissionPolicy (beta since Kubernetes 1.34)
- MutatingAdmissionPolicyBinding (beta since Kubernetes 1.34)
- ValidatingAdmissionPolicy (stable since Kubernetes 1.30)
- ValidatingAdmissionPolicyBinding (stable since Kubernetes 1.30)