Kubernetes - Build and Deploy a Production-Ready Container Image
This challenge combines five container image concepts into one pipeline. You will write a multi-stage Dockerfile that injects a version at build time and runs as a non-root user by default, then deploy the resulting image as a Pod that overrides both the user and the default command arguments.
Topics covered:
- Multi-stage builds - compile in one stage, ship a minimal runtime image
- Build-time arguments - `ARG` and `ENV` to inject values at build time
- Non-root users - creating users in the image and setting a default `USER`
- Pod security context - `runAsUser` overriding the image default at runtime
- command and args - overriding `ENTRYPOINT` and `CMD` from the Pod spec
Task 1 - Write the Source Code and Dockerfile
For this challenge we will use a simple demo app written in Go. It prints the version, the runtime mode (set via `CMD` in the Dockerfile or `args` in the Pod spec), and the UID it is running as. Save it as `~/main.go`:
```go
package main

import (
	"fmt"
	"os"
)

func main() {
	version := os.Getenv("VERSION")
	if version == "" {
		version = "unknown"
	}

	mode := "default"
	if len(os.Args) > 1 {
		mode = os.Args[1]
	}

	fmt.Printf("Version: %s, mode: %s, running as %d\n", version, mode, os.Getuid())
}
```
The binary reads the version from the `VERSION` environment variable (`os.Getenv("VERSION")`). The version is not hardcoded - it must be injected at build time via a Dockerfile `ARG`, then persisted as an `ENV` so the running container can read it.
Now create `~/Dockerfile` with two stages:

Builder stage (`golang:1.21-alpine`):

- Accept a build-time argument `APP_VERSION` (default: `unknown`) and bake it into the image as the `VERSION` environment variable so the binary can read it
- Copy `main.go` into the stage and compile it into a binary named `mydemoapp`
Runtime stage (`alpine:latest`):

- Repeat the `APP_VERSION` argument and `VERSION` environment variable so the runtime container can read it
- Create a system group named `mygroup` and add two system users to it: `user1000` with UID 1000 and `user1001` with UID 1001
- Copy the compiled binary from the builder stage
- Run as UID 1000 by default
- Set the binary as the entrypoint, with `default` as the default mode argument
Hint: Dockerfile instructions reference
| Instruction | Purpose |
|---|---|
| `FROM` | Sets the base image for the stage. A second `FROM` starts a new stage. |
| `ARG` | Declares a build-time variable passed via `--build-arg`. Not available at runtime. |
| `ENV` | Sets an environment variable baked into the image, available at runtime. |
| `WORKDIR` | Sets the working directory for subsequent instructions. |
| `COPY` | Copies files into the image. `COPY --from=<stage>` copies from a previous build stage. |
| `RUN` | Executes a shell command during the build (compile, install packages, etc.). |
| `USER` | Sets the default user the container runs as. |
| `ENTRYPOINT` | The executable that always runs. Not replaced by Pod `args`. |
| `CMD` | Default arguments passed to `ENTRYPOINT`. Replaced by Pod `args`. |
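Putting the instructions above together, one possible sketch of the Dockerfile follows. The stage name `builder`, the `/app` and `/usr/local/bin` paths, and the exact `addgroup`/`adduser` flags are assumptions (the `-S` flag creates system accounts in Alpine's BusyBox tooling); only the image bases, argument names, users, and binary name come from the task:

```dockerfile
# ---- Builder stage ----
FROM golang:1.21-alpine AS builder
ARG APP_VERSION=unknown
ENV VERSION=$APP_VERSION
WORKDIR /app
COPY main.go .
RUN go build -o mydemoapp main.go

# ---- Runtime stage ----
FROM alpine:latest
# ARG must be re-declared per stage; persist it as ENV for runtime
ARG APP_VERSION=unknown
ENV VERSION=$APP_VERSION
# BusyBox adduser/addgroup: -S = system account, -u = UID, -G = group
RUN addgroup -S mygroup && \
    adduser -S -u 1000 -G mygroup user1000 && \
    adduser -S -u 1001 -G mygroup user1001
COPY --from=builder /app/mydemoapp /usr/local/bin/mydemoapp
USER 1000
ENTRYPOINT ["/usr/local/bin/mydemoapp"]
CMD ["default"]
```

Note the split between `ENTRYPOINT` and `CMD`: the binary always runs, while `default` is only a default argument that a Pod's `args` field can later replace.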
Task 2 - Build the Image and Verify
Build the image tagged `prod-app:v1`, passing `APP_VERSION=2.0.0` as a build argument. Then run the image to confirm the output shows `Version: 2.0.0` and `running as 1000`.
Hint: build and run commands
```shell
cd ~ && docker build --build-arg APP_VERSION=2.0.0 -t prod-app:v1 .
docker run --rm prod-app:v1
```
Task 3 - Deploy with Security Context and Args Override
k3s uses containerd as its container runtime, not Docker. Images built with Docker are not automatically visible to k3s - they need to be explicitly imported. This is specific to this lab environment; in a real cluster you would push the image to a registry instead.
Import the image into k3s, then create a Pod named `prod-pod` that:

- Enforces non-root execution and overrides the image's default user to UID `1001`
- Passes `production` as the runtime argument to the container
- Uses `imagePullPolicy: Never`, since the image was imported locally, not pushed to a registry
Verify the logs show `Version: 2.0.0, mode: production, running as 1001`.
Hint: import the image into k3s
```shell
docker save prod-app:v1 | k3s ctr images import -
```
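With the image imported, a manifest satisfying the Pod requirements above might look like the following sketch. The container name `app` and the file path are assumptions; the Pod name, image, `args`, and security context values come from the task:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prod-pod
spec:
  containers:
  - name: app
    image: prod-app:v1
    imagePullPolicy: Never       # image was imported locally, never pull
    args: ["production"]         # replaces the Dockerfile CMD, keeps ENTRYPOINT
    securityContext:
      runAsNonRoot: true         # enforce non-root execution
      runAsUser: 1001            # overrides the image's default USER 1000
  restartPolicy: Never
```

Apply it with `kubectl apply -f pod.yaml` and check the output with `kubectl logs prod-pod`. Because the program prints once and exits, `restartPolicy: Never` keeps the completed Pod from being restarted in a crash loop.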