Nobody hides from the kernel
Seeing Inside Your App Without Changing Code
Ever wondered what's really going on inside your applications? You may have a large collection of systems with many different applications... how do you find/observe/track/understand such a dynamic landscape when one application is in Java, one in Go and one in Python?
Or, you may wish to implement the Zero Trust approach, which demands that all traffic be encrypted. Now, that is well and good: but how do you monitor encrypted traffic? You surely don't want your attackers to benefit from hiding inside the encrypted traffic...
No bees were harmed making this tutorial

Traditional methods often require adding logging statements, which means changing code and redeploying. Another option is using debuggers, which typically requires compiling differently and/or accepting heavy performance overheads.
So, what if we could X-ray the applications without cutting them open? Hiding from X-rays is very hard (heavy, more like, since you need to carry around dense objects like sheets of lead).

What do I see, and how is the contrast, when X-raying applications with eBPF? Can I see everything? -- No, I can only see as well as I understand what exactly I am looking for.
Now, in your distributed application landscape: imagine being able to attach trip-wires, collectors and traces underneath the applications, underneath the so-called user-space, in a place that all the applications need to share: the Linux kernel. These programs can observe, e.g., system calls, network events, and file-system access, all without modifying the kernel or the application source code. These programs can also modify and even block certain behaviours.
I personally see eBPF as an X-ray machine: we can scan our systems the way X-rays pass through matter, and what we see depends on the type of matter being X-rayed (or, in our case, on the symbols in the code being hooked into).
Key Benefits of eBPF
- No Application Code Changes: Observe applications without modifying their source code.
- Low Overhead: eBPF programs run efficiently in the kernel, minimizing performance impact.
- Real Time: eBPF programs can change kernel behaviour instantly.
- Programmability: You can write custom eBPF programs to collect the data you need.

For further reading, see ebpf.io
There are some caveats though, first off:
- Know your own skill level: someone like me should never write production eBPF code. Teaching yourself how it works by writing little program-snippets is one thing; putting it into prod is a very different game. Just as you might buy an X-ray machine and maybe even use it, you should never build one yourself.
- X-rays don't detect everything, and neither does eBPF: X-ray images only show structures that absorb in certain energy bands. For example, water is transparent to (soft) X-rays, and eBPF cannot hook into code that has been stripped of its symbols.

Watching from underneath, or how to trace apps in the kernel (credit)
eBPF is not just transparent to the apps, but (can be) also to the attacker: another important aspect in defensive security is that your attackers don't instantly see (and God forbid alter) your security controls. Here the implementation matters, but if well designed, it will require an attacker to be root to even see that the eBPF instrumentation is running. And ideally, by that time, we've detected them 🏴‍☠️.
Meet our ♥️ OpenSource Friends
We will be exclusively using open-source tooling, and Constanze expressly thanks the maintainers, creators and contributors of those projects1.

Pixie (left), Kubescape (middle) and Tetragon (right)
Pixie as The Tracker: understands protocols
- real time debugger
- introspects protocols (SQL, DNS, HTTP, KAFKA, REDIS...) in real time
- 100% sovereign -> collects data into an analysis cockpit without data ever being stored outside your datacenter
- manages real time distributed deployment of on-demand eBPF traces
🤝 You'll meet it when:
- in Chapter 1: we visualise network traffic
- in Chapter 2 (next week): we introspect database behaviour
Kubescape as The Scout: scans everything
- finds vulnerabilities and misconfigurations
- can learn baselines of applications (via Inspector Gadget)
- alerts on anomalies
🤝 You'll meet it when: we go anomaly hunting
Tetragon as The Sniper: specific targeting of identified objectives (in the kernel)
- abstracts eBPF into yaml
- useful if you know what you are looking for
- apart from detecting (tracing), it can also block
🤝 You'll meet it when: we find and block a vulnerable library
Who this course is (not) for
The way I've learnt almost everything is by diving in and discovering how something works by solving a problem I'm interested in solving. These labs will be released on a weekly schedule, and they are not Computer Science lectures, rather the opposite: I will assume that you know as much about 🐝 eBPF as I did about computers in 2005 🤓, and we will phenomenologically uncover how eBPF works by using it in many different ways - sometimes even by using it wrong:
When, in 2005, I got my first computer and the guy2 next door in the student dorm suggested I install the best operating system, which he claimed was Gentoo Linux3, I nodded.
So my first make menuconfig was very exciting: I picked words that sounded vaguely familiar and ended up booting into a blinking cursor on a black screen...
We all wish there was a fast-track to learning, but nothing beats deleting your partition table for your brain to remember what exactly that table is and why you will avoid repeating this particular exercise.
Linux is wonderful in that it lets you edit everything, and these labs don't mind being broken. You are invited to build your own mental model by poking at it from all sides, and if you break the lab, there's a reset button in the top right corner.
Fortunately, unlike X-rays, eBPF has no health hazards ☢️
If you are looking for technical material, please see anything Liz Rice has so wonderfully published4
Footnotes
- These projects can do a lot more than I'm mentioning here; they will be installed/running in the labs, so you are more than welcome to explore their functionality further. ↩
- Thanks Fabian 🙏 🤣 ↩
- Gentoo is a distribution of Linux in which everything was compiled from source, including the kernel ↩
- I cannot recommend Liz's books enough, she writes succinctly and yet never loses the reader. ↩
Protocol Tracing with Pixie - DNS and HTTP
Let's begin our anamnesis with diagnosing traffic protocols
The real-world problem:
I have a lot of apps on a few distributed servers, and there is lots of traffic. Some of these apps, I don't even know exactly what they do. I may have installed them for one specific reason, but they pulled in lots of dependencies and now I have even more apps.
Let's begin at the beginning and tackle the question: how do I see all of the traffic?

The solution:
Install tracers on each server
- that hook into common kernel functions to capture raw network data
- and use protocol inference to understand the traffic content
- at the source (in the kernel)
- without the apps being disturbed (no downtime)
- without the apps being changed
- without the system needing a large amount of extra resources
- without the system needing to change (ok, we can argue this point)
The minimum requirements:
- A reasonably recent kernel (the use-cases in this lab require different minimum versions: >4.4, sometimes >5.8)
- A set of kernel flags enabled (a quick way to check both is sketched just below this list)
- An eBPF tool that knows how different networking options work, so as to extract the protocols meaningfully 1
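If you want to verify those first two requirements up front, here is a minimal Python sketch. It assumes your distro exposes the kernel config at /boot/config-$(uname -r) or /proc/config.gz, and the flag list is an illustrative subset rather than Pixie's official requirement list (consult the Pixie docs for the authoritative one):

#!/usr/bin/env python3
"""Rough pre-flight check: kernel version plus a few BPF-related config flags."""
import gzip
import os
import platform

release = platform.release()  # e.g. "5.15.0-105-generic"
major, minor = (int(x) for x in release.split(".")[:2])
print(f"kernel {release}: {'ok' if (major, minor) > (4, 4) else 'probably too old'}")

# Illustrative subset of flags relevant for kprobe/uprobe-based tracing.
flags = ["CONFIG_BPF", "CONFIG_BPF_SYSCALL", "CONFIG_KPROBES", "CONFIG_UPROBES"]

for path in (f"/boot/config-{release}", "/proc/config.gz"):
    if not os.path.exists(path):
        continue
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as f:
        config = f.read()
    for flag in flags:
        state = "set" if flag + "=y" in config else "NOT set"
        print(flag, state)
    break
else:
    print("no kernel config found; check your distro's documentation")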
Our eBPF user-space agent: Pixie!
Please install Pixie with the commands in this box 💡
Please accept the defaults and the EULA in the dialogue; the installation will take about 5 minutes to equilibrate.
We will be using Pixie's cosmic.ai hosting of the UI, and to link your local cluster (here in this lab) with the remote UI, you will be asked by auth0 to input a valid email to receive a token.
sudo bash -c "$(curl -fsSL https://getcosmic.ai/install.sh)"
export PX_CLOUD_ADDR=getcosmic.ai
px auth login
px deploy -p=1Gi
We leverage a CNCF2 project, Pixie, which places so-called edge-modules on each Linux node to perform tracing via eBPF. Thus, a user gains access - from either the UI or via a PxL script - to all edge-nodes via queries. The UI, which can be understood as an analysis cockpit, manifests queries on demand from the in-memory tables on each node.
It should be noted that no data is stored outside your datacenter3.
Fancy! And very sovereign 👑

Pixie's architecture is inherently distributed and almost entirely on the edge. This allows compute and data to be colocated - a pattern often referred to as query-pushdown - which is not only resource-efficient but also imperative for privacy guarantees.
Pixie backstory
While we wait for the installer, here is some backstory on the Pixie project:
Pixie's origin story 💡
The company was co-founded by Zain Asgar (CEO), a former Google engineer working on Google AI and adjunct professor at Stanford, and Ishan Mukherjee (CPO), who led Apple’s Siri Knowledge Graph product team and also previously worked on Amazon’s Robotics efforts.
Pixie core was open-sourced by New Relic to the CNCF in 2021.
And here I'm quoting the blog:
"With ... Pixie ... you can get programmatic, unified access to application performance data. Pixie automatically harvests telemetry data, leveraging technology like the extended Berkeley Packet Filter (eBPF) at the kernel layer to capture application data. Pixie’s edge-machine intelligence system, designed for secure and scalable auto-telemetry, connects this data with Kubernetes metadata to provide visibility while maintaining data locality.
Pixie’s developer experience is driven by three fundamental technical breakthroughs:
No-instrumentation data collection: Pixie uses technologies such as eBPF to collect metrics, events, logs, and traces automatically from the application, infrastructure (Kubernetes), operating system, and network layers. For custom data, you can collect logs dynamically using eBPF or ingest existing telemetry.
Script-based analysis: Pixie scripts provide a code-based approach to enable efficient analysis, collaboration, and automation. You can use scripts from Pixie’s native debugging interfaces (web, mobile, and terminal); scripts are community-contributed, team-specific, or custom built. Additionally, you can analyze data from integrations with existing Infrastructure monitoring and observability platforms.
Kubernetes native edge compute: Pixie runs entirely inside Kubernetes as a distributed machine data system, meaning you don’t need to transfer any data outside the cluster. Pixie’s architecture gives you a secure, cost-effective, and scalable way to access unlimited data, deploy AI/ML models at source, and set up streaming telemetry pipelines."
Once the installer is happy, copy-paste the following into a terminal:
px run px/schemas

These are all in-memory tables that Pixie constantly populates via eBPF hooks.
Now, click on the link at the bottom, and in the UI, select px/cluster in the dropdown.

The so-called UI in Pixie is a visual query that displays the data - extracted with eBPF and fetched by the script via Apache Arrow. The script language is pronounced "Pixel" and spelled PxL.
You wanted traffic, here is the traffic!
Tracing DNS calls
DNS may be the most bang-for-your-buck protocol to trace for security observability.
So, either find the dns_flow_graph in the UI, or execute some so-called PxL script (pronounced 'Pixel') here in the shell:
px run px/dns_data
Let's generate some DNS traffic
To generate some traffic, we can use this HereDoc:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dnswizard
spec:
  containers:
  - name: dnswizard
    image: freelyit/nslookup
    command: ["/bin/bash", "-c"]
    args:
    - |
      while true; do
        nslookup dnswizardry.oracle.com
        dig the.domain.network.segment.is.confused.com
        dig @8.8.8.8 who.is.the.best.wizard.of.all.of.space.net
        nslookup why.did.the.programmer.quit.his.job.because.he.didnt.get.arrays.com
        dig i.used.to.think.the.brain.was.the.most.important.organ.then.i.thought.what.was.telling.me.that.it.was.com
        dig @8.8.8.8 what.do.you.call.a.fish.with.no.eyes.fsh.com
      done
EOF
Copy/paste the above into the terminal and press enter, then go to the UI or rerun the PxL script:
px run px/dns_data

The Pixie DNS Flow Graph is an example of a graphical query. If you press RUN, the data are manifested and displayed. You can manipulate the query both graphically, e.g. via the dropdowns, or by modifying the script tab.
Exercise: Find the DNS requests, both query and reply, in Pixie.
Pixie vs tcpdump comparison aka Packet Capture 💡
Some of you may say "Hey, that's a bit like Wireshark", and to some extent it is. Pixie is doing it without a pcap though, and across N nodes (N=1 in this lab). But it is fun to compare a few things:
In another tab:
sudo tcpdump -i any port 53 -w outputudp.pcap

Graphical summary of the DNS noise generated by our nonsense DNS queries
Now, you can use tcpdump -r outputudp.pcap -X to read the queries, after stopping (CTRL-C) the packet capture (pcap). Depending on how long (~3 min) you captured the traffic, you should have several MB:
ll -h
-rw-r--r-- 1 tcpdump tcpdump 12M Jul 4 09:28 outputudp.pcap
We can contrast this against Pixie by running a PxL query. If you re-run this after some time (~10 min), you will see an increasing amount of compacted data, because the maximum table size is capped.
import px
df=px._DebugTableInfo()
px.display(df)

Retrieving the sizes of the in-memory tables in Pixie, which hold the data extracted by eBPF
At this point, you may have a WOW 🤯 moment or, at least, see why people call it auto-instrumentation.
What happened just now is the X-ray effect: suddenly, there are traffic stats and the content of the packets, enriched with meta-data. And it looks as if the applications themselves were emitting the information.
What happened - how did this work?
Let's now X-ray the X-ray

We compare the two Linux machines in this lab and infer what the Pixie Edge Modules might be up to. Open shells on both machines, please:
sudo bpftool prog list
sudo bpftool map list
sudo bpftool perf
The output of the above commands on the cplane-01 node, where Pixie is running, is significantly longer than on the dev-machine. The reason is that the dev-machine has no eBPF running on it.
laborant@cplane-01:src$ sudo bpftool perf
pid 8205 fd 53: prog_id 278 kprobe func __x64_sys_connect offset 0
pid 8205 fd 55: prog_id 279 kretprobe func __x64_sys_connect offset 0
pid 8205 fd 57: prog_id 280 kprobe func __x64_sys_accept offset 0
pid 8205 fd 59: prog_id 281 kretprobe func __x64_sys_accept offset 0
pid 6664 fd 188: prog_id 198 raw_tracepoint sys_enter
pid 8205 fd 61: prog_id 282 kprobe func __x64_sys_accept4 offset 0
pid 8205 fd 63: prog_id 283 kretprobe func __x64_sys_accept4 offset 0
pid 8205 fd 163: prog_id 318 uprobe filename /app/src/vizier/services/agent/pem/pem.runfiles/px/pem offset 40582560
pid 8205 fd 183: prog_id 320 tracepoint sched_process_exit
pid 8205 fd 485: prog_id 362 uprobe filename /proc/542805/root/usr/lib/x86_64-linux-gnu/libssl.so.3 offset 222784
pid 8205 fd 486: prog_id 384 uretprobe filename /proc/542805/root/usr/lib/x86_64-linux-gnu/libssl.so.3 offset 222784
...
So, there are apparently programs (prog_id) injected into the kernel, which we can see via file-descriptors (fd) held by processes in user-space (pid) that are listening to the data streamed by the eBPF programs.
We can also see there are different types: kprobe, tracepoint, uprobe. And those seem to naturally occur in a normal way and a ret way, like kretprobe and uretprobe.
Then there are tracepoints; they are either raw or well done. ;)
And then there seems to be something with all those offsets.
We need more information
How about we start looking at things with familiar names, like send:
sudo bpftool prog lis |grep send -A 3
489: kprobe name syscall__probe_entry_send tag 94ffe35598032b0f gpl
loaded_at 2025-06-30T09:50:04+0000 uid 0
xlated 1216B jited 784B memlock 4096B map_ids 276,275,283,278
btf_id 309
490: kprobe name syscall__probe_ret_send tag 5c4947ace1e176d5 gpl
loaded_at 2025-06-30T09:50:04+0000 uid 0
xlated 44432B jited 27877B memlock 45056B map_ids 278,270,283,282,284,272,288,271,286
btf_id 309
491: kprobe name syscall__probe_entry_sendto tag 3de2f77cb59ef762 gpl
loaded_at 2025-06-30T09:50:04+0000 uid 0
xlated 1392B jited 895B memlock 4096B map_ids 276,275,283,280,278
btf_id 309
492: kprobe name syscall__probe_ret_sendto tag 02bb6cd5d560bf46 gpl
loaded_at 2025-06-30T09:50:04+0000 uid 0
xlated 47376B jited 29630B memlock 49152B map_ids 280,270,283,287,278,282,284,272,288,271,286
btf_id 309
Still rather cryptic, but we can use some pure logic here: if we have a program, especially one written in C, that is getting data out of the kernel by attaching these entry/ret probes, then the C compiler probably insisted on a data-structure being defined to pull the data out.
So, we have so far found that there are hook-points of different types, and we conjecture there must be data structs to funnel the data out.

Important: The eBPF programs define:
- hook points to be attached to, to listen for events
- data-structures to be extracted
Once the data is extracted, a userland application like the Pixie UI can consume it and produce all those beautiful graphs and tables.
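To make that pattern concrete, here is a minimal sketch using BCC's Python bindings. This is not Pixie's code: the probe names and the event_t struct are invented for illustration, and running it requires root and the bcc toolkit. It shows exactly the two ingredients above: a kprobe/kretprobe pair as hook points, and a C struct funnelled through a perf buffer into user-space.

#!/usr/bin/env python3
from bcc import BPF  # requires the bcc toolkit and root privileges

prog = r"""
#include <uapi/linux/ptrace.h>

// The data structure we funnel out of the kernel (names are illustrative).
struct event_t {
    u32 pid;
    u64 ts_ns;
    int ret;
};
BPF_PERF_OUTPUT(events);

// Entry probe: fires when sendto() is entered.
int probe_entry_sendto(struct pt_regs *ctx) {
    return 0;  // real tracers stash the call arguments in a map here
}

// Return probe: fires when sendto() returns; emit one event per call.
int probe_ret_sendto(struct pt_regs *ctx) {
    struct event_t e = {};
    e.pid   = bpf_get_current_pid_tgid() >> 32;
    e.ts_ns = bpf_ktime_get_ns();
    e.ret   = PT_REGS_RC(ctx);
    events.perf_submit(ctx, &e, sizeof(e));
    return 0;
}
"""

b = BPF(text=prog)
fn = b.get_syscall_fnname("sendto")  # resolves to e.g. __x64_sys_sendto
b.attach_kprobe(event=fn, fn_name="probe_entry_sendto")
b.attach_kretprobe(event=fn, fn_name="probe_ret_sendto")

def handle(cpu, data, size):
    e = b["events"].event(data)
    print(f"pid={e.pid} sendto() returned {e.ret} at t={e.ts_ns}ns")

b["events"].open_perf_buffer(handle)
print("tracing sendto()... Ctrl-C to stop")
try:
    while True:
        b.perf_buffer_poll()
except KeyboardInterrupt:
    pass

The application is never touched: the hook sits on the syscall, and the struct arrives in user-space via the perf buffer, just like Pixie's tables are fed.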
Can I see this data that is being collected?
We can use the map_ids from bpftool prog list and display what is in those "maps". So, pick some integers:
sudo bpftool map dump id 283
Example TCP 💡
The map with id 283 will likely have different content on your machine; please consult the output of sudo bpftool prog list for your own map_ids.
{
"key": 1046949818007559,
"value": {
"conn_id": {
"upid": {
"": {
"pid": 243762,
"tgid": 243762
},
"start_time_ticks": 1760046
},
"fd": 7,
"tsid": 17600461614827
},
"laddr": {
"sa": {
"sa_family": 255,
"": {
"sa_data_min": "",
"": {
"__empty_sa_data": {},
"sa_data": []
}
}
},
"in4": {
"sin_family": 255,
"sin_port": 0,
"sin_addr": {
"s_addr": 0
},
"__pad": [0,0,0,0,0,0,0,0
]
},
"in6": {
"sin6_family": 255,
"sin6_port": 0,
"sin6_flowinfo": 0,
"sin6_addr": {
"in6_u": {
"u6_addr8": [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
],
"u6_addr16": [0,0,0,0,0,0,0,0
],
"u6_addr32": [0,0,0,0
]
}
},
"sin6_scope_id": 0
}
},
"raddr": {
"sa": {
"sa_family": 255,
"": {
"sa_data_min": "",
"": {
"__empty_sa_data": {},
"sa_data": []
}
}
},
"in4": {
"sin_family": 255,
"sin_port": 0,
"sin_addr": {
"s_addr": 0
},
"__pad": [0,0,0,0,0,0,0,0
]
},
"in6": {
"sin6_family": 255,
"sin6_port": 0,
"sin6_flowinfo": 0,
"sin6_addr": {
"in6_u": {
"u6_addr8": [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
],
"u6_addr16": [0,0,0,0,0,0,0,0
],
"u6_addr32": [0,0,0,0
]
}
},
"sin6_scope_id": 0
}
},
"protocol": "kProtocolUnknown",
"role": "kRoleUnknown",
"ssl": false,
"ssl_source": "kSSLNone",
"wr_bytes": 8,
"rd_bytes": 0,
"last_reported_bytes": 0,
"app_wr_bytes": 0,
"app_rd_bytes": 0,
"protocol_match_count": 0,
"protocol_total_count": 2,
"prev_count": 4,
"prev_buf": "",
"prepend_length_header": false
}
}
Some of these maps will be empty, but you will surely find quite a range of data content in these maps.
Does it work for HTTP, too?
Navigate via the dropdown to the px/http_data script, or any other that has http in the name. You'll see the content, header, user-agent and other typical HTTP properties:

We will come back to HTTP, when we cover HTTPS/TLS. Feel free to explore Pixie at your leisure...
PxL script and visual Queries in Pixie 💡
Navigate to px/http_post_requests and click Edit on the top right of the UI; this allows you to change the display/query.

Familiarize yourself with the concept of df, aka a DataFrame, by modifying the PxL script, executing it, and receiving something meaningful. You may see similarities with pandas; the PxL script language is powerful (it has a learning curve, though).
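As a starting point, here is a minimal PxL sketch you could paste into the editor. It assumes the standard http_events table that Pixie populates via eBPF; the column names below match the upstream schema at the time of writing and may differ between versions:

# Minimal PxL sketch: HTTP POSTs from the last 5 minutes, pandas-style.
import px

df = px.DataFrame(table='http_events', start_time='-5m')
df = df[df.req_method == 'POST']  # filter like a pandas DataFrame
df = df[['time_', 'remote_addr', 'req_path', 'resp_status', 'latency']]
px.display(df, 'recent_posts')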
But but but, how does it (protocol inference) work?
Above, these eBPF-maps contained more or less readable data captured from the different kernel probes, e.g.
"protocol": "kProtocolDNS",
"role": "kRoleClient",
"direction": "kEgress",
"ssl": false,
"ssl_source": "kSSLNone",
"source_fn": "kSyscallSendMMsg"
},
"msg": [-33,114,1,0,0,1,0,0,0,0,0,0,3,119,104,121,3,100,105,100,3,116,104,101,10,112,114,111,103,114,97,109,109,101,114,4,113,117,105,116,3,104,105,115,3,106,111,98,7,98,101,99,97,117,115,101,2,104,101,5,...
You may have noticed protocol attributions in the above data-maps; their definitions are here:
//.../socket_tracer/bcc_bpf_intf/common.h
enum traffic_protocol_t {
kProtocolUnknown = 0,
kProtocolHTTP = 1,
kProtocolHTTP2 = 2,
kProtocolMySQL = 3,
kProtocolCQL = 4,
kProtocolPGSQL = 5,
kProtocolDNS = 6,
kProtocolRedis = 7,
kProtocolNATS = 8,
kProtocolMongo = 9,
kProtocolKafka = 10,
kProtocolMux = 11,
kProtocolAMQP = 12,
kProtocolTLS = 13,
};
struct protocol_message_t {
enum traffic_protocol_t protocol;
enum message_type_t type;
};
This still doesn't tell us how Pixie derives those protocols, though. If we look at the actual Pixie DNS Source code, we get more clues:
// Each protocol should define a struct called defining its protocol traits.
// This ProtocolTraits struct should define the following types:
// - frame_type: This is the low-level frame to which the raw data is parsed.
// Examples: http::Message, cql::Frame, mysql::Packet
// - state_type: This is state struct that contains any relevant state for the protocol.
// The state_type must have three members: global, send and recv.
// A convenience NoState struct is defined for any protocols that have no state.
// - record_type: This is the request response pair, the content of which has been interpreted.
// This struct will be passed to the SocketTraceConnector to be appended to the
// appropriate table.
//
// Example for HTTP protocol:
//
// namespace http {
// struct ProtocolTraits {
// using frame_type = Message;
// using record_type = Record;
// using state_type = NoState;
// };
// }
//
// Note that the ProtocolTraits are hooked into the SocketTraceConnector through the
// protocol_transfer_specs.
#include "src/stirling/source_connectors/socket_tracer/protocols/common/event_parser.h" // For FrameBase.
// Flags in the DNS header:
// +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
// |QR| Opcode |AA|TC|RD|RA| Z|AD|CD| RCODE |
// +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#define EXTRACT_DNS_FLAG(flags, pos, width) ((flags >> pos) & ((1 << width) - 1))
// A DNSRecord represents a DNS resource record
// Typically it is the answer to a query (e.g. from name->addr).
// Spec: https://www.ietf.org/rfc/rfc1035.txt
struct DNSRecord {
std::string name;
// cname and addr are mutually exclusive.
// Either a record provides a cname (an alias to another record), or it resolves the address.
std::string cname;
InetAddr addr;
};
struct Frame : public FrameBase {
DNSHeader header;
const std::vector<DNSRecord>& records() const { return records_; }
bool consumed = false;
void AddRecords(std::vector<DNSRecord>&& records) {
for (const auto& r : records) {
records_size_ += r.name.size() + r.cname.size() + sizeof(r.addr);
}
records_ = std::move(records);
}
size_t ByteSize() const override { return sizeof(Frame) + records_size_; }
private:
std::vector<DNSRecord> records_;
size_t records_size_ = 0;
};
For each event that comes over a socket, a protocol type is inferred. Given that characteristics such as the headers of the underlying protocols are very well known (e.g. RFC 1035 for DNS), these patterns are overlaid on the raw bytes, allowing the socket_tracer to translate the raw data into the interpreted protocol.
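To make "overlaying the pattern" concrete, here is a toy sketch in Python (not Pixie's actual C++ rule set) of the kind of heuristic a DNS inference rule can apply to a raw buffer. The real socket_tracer checks more conditions and classifies both requests and responses:

import struct

def looks_like_dns(payload: bytes) -> bool:
    """Toy heuristic: does this buffer plausibly start with a DNS header (RFC 1035)?"""
    if len(payload) < 12:  # a DNS header is always 12 bytes
        return False
    _txid, flags, qdcount, ancount, nscount, arcount = struct.unpack("!6H", payload[:12])
    opcode = (flags >> 11) & 0xF    # same bit-twiddling idea as EXTRACT_DNS_FLAG above
    zero_bit = (flags >> 6) & 0x1   # the Z bit must be 0
    if opcode not in (0, 1, 2) or zero_bit != 0:  # QUERY / IQUERY / STATUS only
        return False
    # Sane record counts: arbitrary caps, just to reject random binary data.
    return qdcount <= 10 and ancount <= 25 and nscount <= 25 and arcount <= 25

# The first bytes of the 'msg' dumped from the eBPF map above pass the check:
sample = bytes([0xDF, 0x72, 0x01, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03]) + b"why"
print(looks_like_dns(sample))  # True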
Does this always work? 💡
This does not always work, and there may be mis-attributions, which is why realistic test data is required to anticipate which conditions lead to non-identification or confusion.
Exercise
- Find a DNS request in an eBPF map
- How do you know it's DNS and not something else?
- Draw your understanding of how this data was extracted on a napkin or the back of an envelope
- Why is DNS such a valuable protocol for security defense?
- Advanced: Try spoofing DNS and catching yourself
Conclusions
🪄 Magic works 🪄 - sprinkling Pixie dust ✨ and there was the traffic

So far, we found out that eBPF is an event-driven method that allows attaching hooks to the kernel to extract data via maps. And, once in userspace, applications can use and display those data.
- We didn't make any changes to any applications; we deployed Pixie, specifically the Pixie Edge Module, and the traffic appeared.
- By comparing a naked Linux with the one Pixie was deployed on, we saw there were different types of probes, most notably kprobes, tracepoints and uprobes.
- The source code hinted that the type of traffic is inferred and overlaid like a pattern over a stream of data.
🌟 This week's special thanks 🌟:
...go to Dom Delnano for being so patient and always graceful 🙏
Next week:
Do you have the feeling we barely scratched the surface?
Next week, we will be X-raying databases to see inserts/deletions and connection-strings being passed, and we will listen to DBs chattering internally. While we decipher kafkaesque behaviour, we'll also explore what the different types of eBPF-probes are.
Footnotes
- See caveat about not writing production eBPF yourself, as there are hard problems like misattribution across protocols, keeping state and ordering frames ↩
- Cloud Native Compute Foundation ↩
- Depending on where you deploy the Pixie Cloud, you can also keep the analysis and display inside your datacenter ↩
X-raying Databases with Pixie - Redis, SQL and Kafka
Coming next week
🧙♂️ You shall not pass!
🚧 Work in progress... Please consider upgrading your account to Premium to help us finish this content faster.
References and Contact
If you'd like to show your appreciation -> consider leaving a star ⭐ at https://github.com/k8sstormcenter and at https://github.com/pixie-io/pixie
To give me feedback, stay in touch or find a conference where to meet me https://fusioncore.ai
This content is not sponsored, and comes with lots of ♥️ but no liability. If you find that I'm misrepresenting something, or spot any other errors, please do reach out in some way and help me improve.
- “eBPF Foundation and eBPF Foundation logo design are registered trademarks of eBPF Foundation.”
- https://github.com/pixie-io/pixie is part of the Cloud Native Compute Foundation
- https://docs.px.dev/tutorials/pixie-101/network-monitoring/
- Liz Rice, Container Security, https://www.oreilly.com/library/view/container-security/9781492056690/ ISBN: 9781492056706
- Liz Rice, Learning eBPF, https://www.oreilly.com/library/view/learning-ebpf/9781098135119/, ISBN: 9781098135126
FAQs
Common questions:
- How is it different from an LSM like SELinux or AppArmor? The effect at runtime is similar; however, an LSM doesn't change in realtime. LSMs are very well tested and serve rather broad use-cases, but their coverage, i.e. the changes they can effect, is limited to their design. This means you can't alter random locations in the kernel with LSMs. You can typically modify LSM-policies at runtime and probably even switch them on/off in production at will, but I have not heard of anyone doing this. People tend to test out their config going from audit -> enforce once confident, and then leave it at enforce.
- Can I use eBPF and LSMs at the same time? Yes, of course. And you should.
- How is it different from seccomp profiles? Blocking/allowing syscalls (which is what those secure computing mode profiles do to achieve sandboxing) is very much in the same line of idea. The differences are that these profiles are typically supplied at deploy-time and they only manage syscalls; eBPF has much broader applicability and flexibility.