eBPF: The Future of Linux Networking & Observability
eBPF is transforming Linux observability and networking. Companies like Netflix, Facebook, and Cloudflare use it for everything from security to load balancing. Here’s why it matters.
What is eBPF?
eBPF (extended Berkeley Packet Filter) allows running sandboxed programs in the Linux kernel without modifying kernel source code or loading kernel modules.
User Space
↓ (system calls)
┌─────────────────────────────┐
│ Linux Kernel │
│ ┌────────────────────────┐ │
│ │ eBPF Program │ │
│ │ (verified, JIT) │ │
│ └────────────────────────┘ │
│ ↓ (attach to) │
│ • Network stack │
│ • System calls │
│ • Tracepoints │
│ • Function calls │
└─────────────────────────────┘
Why eBPF Matters
Traditional Kernel Extension
Option 1: Modify kernel source → Rebuild → Reboot
Option 2: Load kernel module → Risk crashes → Security concerns
eBPF Approach
Load eBPF program → Kernel verifies → Safe execution → No reboot
Benefits:
- Safe: Verified before execution
- Fast: JIT compiled to native code
- Dynamic: Load/unload at runtime
- Portable: Compile once, run everywhere (CO-RE, using BTF)
Use Cases
Observability
Tracing
// Trace all file opens
SEC("tracepoint/syscalls/sys_enter_openat")
int trace_openat(struct trace_event_raw_sys_enter *ctx)
{
    /* bpf_get_current_pid_tgid() packs the tgid (userspace PID)
     * into the upper 32 bits */
    bpf_printk("File opened by PID %d\n",
               (int)(bpf_get_current_pid_tgid() >> 32));
    return 0;
}
Tools like bpftrace make this accessible:
# Count syscalls by process
bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'
# Trace file opens
bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s %s\n", comm, str(args->filename)); }'
Performance Analysis
BCC ships ready-made tools for common performance questions:
# CPU profiling via sampled stack traces
profile -F 99 -aK 10
# Block I/O latency histogram
biolatency
# TCP connection latency
tcpconnlat
Networking
Load Balancing (Cilium/Katran)
Incoming packets → eBPF → Direct routing to backend
↓
No iptables
Wire-speed performance
Facebook’s Katran handles millions of packets per second.
Network Security
// Drop packets from blocked IPs
// (get_ip_header/is_blocked are placeholders for bounds-checked
// parsing and a blocklist map lookup)
SEC("xdp")
int block_ip(struct xdp_md *ctx)
{
    struct iphdr *iph = get_ip_header(ctx);
    if (iph && is_blocked(iph->saddr)) {
        return XDP_DROP;
    }
    return XDP_PASS;
}
Security
Runtime Detection
// Detect process injection attempts
// (send_alert is a placeholder, e.g. a perf/ring buffer event
// consumed by a userspace agent)
SEC("tracepoint/syscalls/sys_enter_ptrace")
int detect_ptrace(struct trace_event_raw_sys_enter *ctx)
{
    // Log potential injection
    send_alert();
    return 0;
}
Falco and Tetragon use eBPF for security monitoring.
Key Components
XDP (eXpress Data Path)
Packet processing at the earliest point:
Network card → XDP → (drop/pass/redirect) → Network stack
| Level | Latency | Use Case |
|---|---|---|
| XDP | ~100ns | DDoS, load balancing |
| tc | ~1µs | Firewalling, shaping |
| iptables | ~10µs | Complex rules |
TC (Traffic Control)
# Attach eBPF to network interface
tc filter add dev eth0 ingress bpf da obj filter.o sec classifier
Tracing
Multiple attachment points:
- Tracepoints: Stable kernel events
- Kprobes: Kernel function entry/exit
- Uprobes: User-space function tracing
- fentry/fexit: Fast kernel function tracing
Tools Ecosystem
Development Tools
| Tool | Use |
|---|---|
| libbpf | C library for loading eBPF programs |
| BCC | Python bindings and tool collection |
| bpftrace | High-level tracing language |
Higher-Level
| Tool | Use |
|---|---|
| Cilium | Kubernetes networking |
| Falco | Runtime security |
| Pixie | Kubernetes observability |
| Tetragon | Security observability |
Getting Started
Prerequisites
# Check kernel version (5.4+ recommended)
uname -r
# Install tools
apt install bpftrace bpfcc-tools linux-tools-generic
First eBPF Program
// hello.bpf.c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
SEC("tracepoint/syscalls/sys_enter_write")
int hello(void *ctx)
{
bpf_printk("Hello, eBPF!\n");
return 0;
}
char LICENSE[] SEC("license") = "GPL";
# Compile (-g emits BTF, needed for CO-RE)
clang -O2 -g -target bpf -c hello.bpf.c -o hello.bpf.o
# Load, attach, and view output
# (autoattach needs a recent bpftool; otherwise attach manually)
sudo bpftool prog load hello.bpf.o /sys/fs/bpf/hello autoattach
sudo cat /sys/kernel/debug/tracing/trace_pipe
Using bpftrace
# One-liner: trace file opens
bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s\n", str(args->filename)); }'
# Script: syscall latency
bpftrace -e '
tracepoint:raw_syscalls:sys_enter { @start[tid] = nsecs; }
tracepoint:raw_syscalls:sys_exit / @start[tid] / {
@latency = hist(nsecs - @start[tid]);
delete(@start[tid]);
}'
eBPF and Kubernetes
Cilium
Replaces kube-proxy with eBPF:
# Pod-to-pod networking via eBPF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
name: allow-frontend
spec:
endpointSelector:
matchLabels:
app: backend
ingress:
- fromEndpoints:
- matchLabels:
app: frontend
Benefits:
- No iptables rules explosion
- Identity-based security
- Low latency
Observability
# Pixie: Auto-instrumented observability
px deploy
px run px/service_stats
eBPF captures metrics without application changes.
Limitations
Kernel Version Requirements
| Feature | Minimum Kernel |
|---|---|
| Basic eBPF | 4.4 |
| BTF (portable) | 5.2 |
| Bounded loops | 5.3 |
| Ring buffer | 5.8 |
Verification Limits
- Loop bounds checked
- Stack size limited (512 bytes)
- Instruction count limited
Learning Curve
- Kernel concepts required
- C programming for raw eBPF
- Debugging is harder
Final Thoughts
eBPF is the future of Linux systems programming. It enables:
- Observability without overhead
- Networking at wire speed
- Security at runtime
If you work with Linux in production, learn eBPF. It’s the superpower you didn’t know you needed.
The kernel, now programmable.