# Capturing Caddy Traffic

This guide shows you how to use Qtap to capture HTTP traffic flowing through Caddy, a modern web server with automatic HTTPS. You'll learn how to observe both incoming client requests and outgoing upstream connections from your Caddy server, all without proxies or code changes.
## What You'll Learn

- Capture Caddy ingress traffic (client requests)
- Capture Caddy egress traffic (upstream service requests)
- Monitor both sides of a reverse proxy simultaneously
- Apply conditional capture rules for specific routes
- Handle Caddy's automatic HTTPS with Qtap's TLS inspection
- Set up Caddy + Qtap in Docker for testing
- Deploy production-ready configurations
## Use Cases

Why capture Caddy traffic?

- **Reverse Proxy Visibility**: See both client requests and backend responses
- **API Gateway Monitoring**: Track all API calls through your Caddy gateway
- **Automatic HTTPS Inspection**: See inside TLS traffic without managing certificates
- **Microservices Debugging**: Debug issues between services
- **Performance Analysis**: Measure latency at each hop
- **Security Auditing**: Monitor for suspicious traffic patterns
- **Migration Planning**: Understand traffic patterns before infrastructure changes
## Prerequisites

- Linux system with kernel 5.10+ and eBPF support
- Docker installed (for this guide's examples)
- Root/sudo access
- Basic understanding of Caddy/Caddyfile syntax
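To confirm the kernel requirement before installing anything, a quick check like the following can help. This is a minimal sketch; the version parsing assumes a standard `uname -r` format such as `5.15.0-91-generic`.

```bash
# Check that the host kernel meets Qtap's 5.10+ requirement.
kernel=$(uname -r)
major=${kernel%%.*}
rest=${kernel#*.}
minor=${rest%%.*}

if [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 10 ]; }; then
  status="OK"
else
  status="too old"
fi
echo "kernel $kernel: $status"
```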
## Part 1: Simple Caddy Web Server

Let's start with a basic Caddy setup serving static content and reverse proxying to upstream services.

### Step 1: Create Caddy Configuration

Create a directory for our demo:

```bash
mkdir caddy-qtap-demo
cd caddy-qtap-demo
```

Create `Caddyfile`:
```
{
    # Global options
    auto_https off   # Disable for local testing (use HTTP)
    admin off        # Disable admin API for simplicity
}

:8080 {
    # Static response endpoint
    respond / "Hello from Caddy!" 200

    # Health check endpoint
    respond /health "OK" 200

    # JSON API endpoint
    handle /api/status {
        header Content-Type application/json
        respond `{"status": "healthy", "server": "caddy"}` 200
    }

    # Reverse proxy to httpbin.org
    handle_path /api/httpbin/* {
        reverse_proxy http://httpbin.org {
            header_up Host httpbin.org
            header_up X-Forwarded-Server {host}
        }
    }

    # Reverse proxy to example.com
    handle_path /example/* {
        reverse_proxy https://example.com {
            header_up Host example.com
        }
    }

    # File server for static content
    file_server /static/* {
        root /var/www
    }

    # Enable access logging
    log {
        output stdout
        format console
    }
}
```
### Step 2: Create Qtap Configuration

Create `qtap.yaml`:
```yaml
version: 2

# Storage Configuration
services:
  # Connection metadata (anonymized)
  event_stores:
    - type: stdout
  # HTTP request/response data (sensitive)
  object_stores:
    - type: stdout

# Processing Stack
stacks:
  caddy_capture:
    plugins:
      - type: http_capture
        config:
          level: full    # (none|summary|details|full) - Capture everything
          format: text   # (json|text) - Human-readable format

# Traffic Capture Settings
tap:
  direction: all            # (egress|ingress|all) - Capture BOTH directions
  ignore_loopback: false    # (true|false) - Capture localhost (Caddy often uses loopback)
  audit_include_dns: false  # (true|false) - Skip DNS for cleaner output
  http:
    stack: caddy_capture    # Use our caddy processing stack
  # Optional: Filter out noise
  filters:
    groups:
      - qpoint              # Don't capture Qtap's own traffic
```
**Key Configuration Points:**

- `direction: all` - Captures both client→Caddy AND Caddy→upstream traffic
- `ignore_loopback: false` - Important! Caddy often uses localhost
- `level: full` - Captures complete requests/responses including bodies
### Step 3: Create Docker Compose Setup

Create `docker-compose.yaml`:
```yaml
version: '3.8'

services:
  # Caddy web server
  caddy:
    image: caddy:latest
    container_name: caddy-demo
    ports:
      - "8082:8080"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
    networks:
      - demo-network

  # Qtap agent
  qtap:
    image: us-docker.pkg.dev/qpoint-edge/public/qtap:v0
    container_name: qtap-caddy
    privileged: true
    user: "0:0"
    cap_add:
      - CAP_BPF
      - CAP_SYS_ADMIN
    pid: host
    network_mode: host
    volumes:
      - /sys:/sys
      - /var/run/docker.sock:/var/run/docker.sock
      - ./qtap.yaml:/app/config/qtap.yaml
    environment:
      - TINI_SUBREAPER=1
    ulimits:
      memlock: -1
    command:
      - --log-level=warn
      - --log-encoding=console
      - --config=/app/config/qtap.yaml

networks:
  demo-network:
    driver: bridge
```
## Part 2: Running and Testing

### Step 1: Start the Services

```bash
# Start Caddy and Qtap
docker compose up -d

# Wait for Qtap to initialize (CRITICAL - must happen before traffic!)
sleep 6

# Verify Caddy is running
curl http://localhost:8082/
# Expected: "Hello from Caddy!"
```
### Step 2: Generate Test Traffic

```bash
# Test 1: Simple GET to Caddy (INGRESS only - static response)
curl http://localhost:8082/

# Test 2: Health check
curl http://localhost:8082/health

# Test 3: JSON API endpoint
curl http://localhost:8082/api/status

# Test 4: Reverse proxy to httpbin.org (INGRESS + EGRESS)
# You'll see TWO captures: client→caddy AND caddy→httpbin
curl http://localhost:8082/api/httpbin/get

# Test 5: POST with JSON through reverse proxy
curl -X POST http://localhost:8082/api/httpbin/post \
  -H "Content-Type: application/json" \
  -H "X-Request-ID: test-12345" \
  -d '{"username": "alice", "role": "admin"}'

# Test 6: Reverse proxy to example.com (HTTPS upstream)
curl http://localhost:8082/example/

# Test 7: Generate multiple requests to see patterns
for i in {1..5}; do
  curl -s http://localhost:8082/api/status
  sleep 1
done
```
### Step 3: View Captured Traffic

```bash
# View Qtap logs
docker logs qtap-caddy

# Filter for caddy process
docker logs qtap-caddy 2>&1 | grep -A 30 "caddy"

# Count captured transactions
docker logs qtap-caddy 2>&1 | grep -c "HTTP Transaction"
```

What you should see:
```
=== HTTP Transaction ===
Source Process: caddy (PID: 456, Container: caddy-demo)
Direction: INGRESS ← (client to caddy)
Method: POST
URL: http://localhost:8082/api/httpbin/post
Status: 200 OK
Duration: 335ms

--- Request Headers ---
Host: localhost:8082
User-Agent: curl/7.81.0
Content-Type: application/json
X-Request-ID: test-12345

--- Request Body ---
{"username": "alice", "role": "admin"}

--- Response Headers ---
Content-Type: application/json
Content-Length: 512

--- Response Body ---
{
  "args": {},
  "data": "{\"username\": \"alice\", \"role\": \"admin\"}",
  "headers": {
    "Host": "httpbin.org",
    "X-Forwarded-Server": "localhost:8082"
  },
  "json": {
    "username": "alice",
    "role": "admin"
  },
  "url": "http://httpbin.org/post"
}
========================

=== HTTP Transaction ===
Source Process: caddy (PID: 456, Container: caddy-demo)
Direction: EGRESS → (caddy to upstream)
Method: POST
URL: http://httpbin.org/post
Status: 200 OK
Duration: 320ms

--- Request Headers ---
Host: httpbin.org
X-Forwarded-Server: localhost:8082
Content-Type: application/json

--- Request Body ---
{"username": "alice", "role": "admin"}
========================
```
**Key indicators that it's working:**

- ✅ `"exe": "/usr/bin/caddy"` or similar - Caddy process identified
- ✅ `Direction: INGRESS` - Client to Caddy
- ✅ `Direction: EGRESS` - Caddy to upstream
- ✅ Two transactions for proxied requests (ingress + egress)
- ✅ Custom headers visible (`X-Request-ID`, `X-Forwarded-Server`)
- ✅ Full request/response bodies captured
- ✅ Latency tracked for both hops
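A quick way to sanity-check dual capture is to count ingress vs. egress transactions in the logs. The sketch below runs against a saved log excerpt (a hypothetical two-transaction sample); in live use, pipe `docker logs qtap-caddy 2>&1` into the same `grep` commands. The patterns assume the text output format shown above; adjust them if your Qtap version labels directions differently.

```bash
# Count ingress vs. egress captures in a Qtap log excerpt.
cat > sample-qtap.log <<'EOF'
=== HTTP Transaction ===
Direction: INGRESS ← (client to caddy)
========================
=== HTTP Transaction ===
Direction: EGRESS → (caddy to upstream)
========================
EOF

ingress=$(grep -c "Direction: INGRESS" sample-qtap.log)
egress=$(grep -c "Direction: EGRESS" sample-qtap.log)
echo "ingress=$ingress egress=$egress"
```

For purely proxied traffic you expect the two counts to match; a surplus of ingress captures usually means some requests were answered by Caddy directly (static responses, health checks).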
## Part 3: Advanced Configurations

### Configuration 1: Capture Only Errors

Reduce volume by capturing only failed requests:
```yaml
version: 2

services:
  event_stores:
    - type: stdout
  object_stores:
    - type: stdout

# Define reusable macros
rulekit:
  macros:
    - name: is_error
      expr: http.res.status >= 400 && http.res.status < 600
    - name: is_server_error
      expr: http.res.status >= 500 && http.res.status < 600

stacks:
  error_only:
    plugins:
      - type: http_capture
        config:
          level: none   # Don't capture by default
          format: json
          rules:
            # Capture all errors
            - name: "HTTP errors"
              expr: is_error()
              level: full
            # Capture server errors with extra detail
            - name: "Server errors"
              expr: is_server_error()
              level: full

tap:
  direction: all
  ignore_loopback: false
  http:
    stack: error_only
```
Test it:

```bash
# This should NOT be captured (200 OK)
curl http://localhost:8082/

# This SHOULD be captured (404)
curl http://localhost:8082/nonexistent

# This SHOULD be captured (httpbin returns a 500 via the proxy route)
curl http://localhost:8082/api/httpbin/status/500
```
### Configuration 2: Route-Specific Capture

Capture different levels for different Caddy routes using Rulekit:
```yaml
version: 2

services:
  event_stores:
    - type: stdout
  object_stores:
    - type: stdout

rulekit:
  macros:
    - name: is_api_route
      expr: http.req.path matches /^\/api\//
    - name: is_health_check
      expr: http.req.path == "/health"
    - name: is_static
      expr: http.req.path matches /^\/static\//
    - name: is_proxy_route
      expr: http.req.path matches /^\/api\/httpbin\//

stacks:
  selective_capture:
    plugins:
      - type: http_capture
        config:
          level: none   # Don't capture by default
          format: json
          rules:
            # Skip health checks entirely
            - name: "Skip health"
              expr: is_health_check()
              level: none
            # Capture API routes with full details
            - name: "API routes"
              expr: is_api_route() && !is_health_check()
              level: full
            # Capture static content metadata only
            - name: "Static content"
              expr: is_static()
              level: summary
            # Capture proxy errors in detail
            - name: "Proxy errors"
              expr: is_proxy_route() && http.res.status >= 400
              level: full

tap:
  direction: all
  ignore_loopback: false
  http:
    stack: selective_capture
```
### Configuration 3: HTTPS Upstream Monitoring

When Caddy proxies to HTTPS upstreams, Qtap can still see the traffic:

**Caddyfile:**
```
:8080 {
    # Proxy to HTTPS backend (Qtap will see decrypted traffic)
    reverse_proxy /secure/* {
        to https://api.github.com
        header_up Host api.github.com
        header_up User-Agent "Caddy-Proxy/1.0"
    }
}
```
**qtap.yaml:**
```yaml
version: 2

services:
  event_stores:
    - type: stdout
  object_stores:
    - type: stdout

stacks:
  https_capture:
    plugins:
      - type: http_capture
        config:
          level: full
          format: text

tap:
  direction: egress   # Focus on caddy→upstream HTTPS calls
  ignore_loopback: false
  http:
    stack: https_capture
  # Only capture traffic to specific HTTPS endpoints
  endpoints:
    - domain: 'api.github.com'
      http:
        stack: https_capture
```
**Why this works:** Qtap hooks into Caddy's TLS library (typically Go's crypto/tls) before encryption happens, so it sees plaintext even for HTTPS upstreams.
### Configuration 4: Production Setup with S3

For production, store sensitive data securely:
```yaml
version: 2

services:
  # Metadata to stdout (for monitoring)
  event_stores:
    - type: stdout
  # Sensitive data to S3 (never leaves your infrastructure)
  object_stores:
    - type: s3
      config:
        endpoint: https://s3.amazonaws.com
        region: us-east-1
        bucket: my-company-caddy-traffic
        access_key_id: ${AWS_ACCESS_KEY_ID}
        secret_access_key: ${AWS_SECRET_ACCESS_KEY}

rulekit:
  macros:
    - name: is_error
      expr: http.res.status >= 400

stacks:
  production_capture:
    plugins:
      - type: http_capture
        config:
          level: none   # Don't capture by default
          format: json
          rules:
            # Only capture errors in production
            - name: "Production errors"
              expr: is_error()
              level: full
            # Capture slow requests (> 2 seconds)
            - name: "Slow requests"
              expr: http.res.duration_ms > 2000
              level: details   # Headers only, no bodies

tap:
  direction: all
  ignore_loopback: false
  http:
    stack: production_capture
```
Update `docker-compose.yaml`:

```yaml
qtap:
  image: us-docker.pkg.dev/qpoint-edge/public/qtap:v0
  environment:
    - TINI_SUBREAPER=1
    - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
  # ... rest of config
```
## Part 4: Real-World Use Cases

### Use Case 1: API Gateway with Authentication

Monitor an API gateway with a focus on authentication and errors:

**Caddyfile:**
```
:8080 {
    # Authentication endpoint
    reverse_proxy /api/auth/* {
        to http://auth-service:3000
    }

    # Protected API endpoints
    reverse_proxy /api/v1/* {
        to http://backend-api:8000
    }

    # Public endpoints
    respond /api/public/* `{"message": "public endpoint"}` 200
}
```
**qtap.yaml:**
```yaml
version: 2

services:
  event_stores:
    - type: stdout
  object_stores:
    - type: s3
      config:
        endpoint: https://s3.amazonaws.com
        bucket: api-gateway-audit
        region: us-east-1

rulekit:
  macros:
    - name: is_auth_endpoint
      expr: http.req.path matches /^\/api\/auth\//
    - name: is_error
      expr: http.res.status >= 400
    - name: has_auth_header
      expr: http.req.header.authorization != ""
    - name: is_unauthorized
      expr: http.res.status == 401 || http.res.status == 403

stacks:
  api_gateway:
    plugins:
      - type: http_capture
        config:
          level: none
          format: json
          rules:
            # Capture all authentication attempts
            - name: "Auth attempts"
              expr: is_auth_endpoint()
              level: full
            # Capture unauthorized requests
            - name: "Unauthorized access"
              expr: is_unauthorized()
              level: full
            # Capture API errors
            - name: "API errors"
              expr: is_error() && !is_auth_endpoint()
              level: details   # Headers only
            # Capture requests without auth header (potential security issue)
            - name: "Missing auth"
              expr: http.req.path matches /^\/api\/v1\// && !has_auth_header()
              level: summary

tap:
  direction: all
  ignore_loopback: false
  http:
    stack: api_gateway
```
### Use Case 2: Microservices Mesh Monitoring

Monitor Caddy as a service mesh proxy:

**Caddyfile:**
```
:8080 {
    # Service A
    reverse_proxy /service-a/* {
        to http://service-a:9000
        header_up X-Mesh-Proxy Caddy
    }

    # Service B
    reverse_proxy /service-b/* {
        to http://service-b:9001
        header_up X-Mesh-Proxy Caddy
    }

    # Service C (external)
    reverse_proxy /service-c/* {
        to https://external-api.example.com
    }
}
```
**qtap.yaml:**

```yaml
version: 2

services:
  event_stores:
    - type: stdout
  object_stores:
    - type: s3
      config:
        bucket: microservices-traffic

stacks:
  # Summary-level stack for internal mesh traffic
  mesh_monitoring:
    plugins:
      - type: http_capture
        config:
          level: summary   # Just metadata for service mesh analytics
          format: json
  # Detailed stack for external services
  detailed_external:
    plugins:
      - type: http_capture
        config:
          level: full      # Full capture for external service calls
          format: json

tap:
  direction: all   # Capture both ingress and egress
  ignore_loopback: false
  http:
    stack: mesh_monitoring
  # Apply different stacks to different services
  endpoints:
    - domain: 'external-api.example.com'
      http:
        stack: detailed_external   # More detail for external calls
```
### Use Case 3: Static Site with CDN Backend

Monitor Caddy serving a static site with a CDN backend:

**Caddyfile:**
```
:8080 {
    # Static file server
    file_server / {
        root /var/www/html
    }

    # Proxy to CDN for media
    reverse_proxy /media/* {
        to https://cdn.example.com
        header_up Host cdn.example.com
    }
}
```
**qtap.yaml:**

```yaml
version: 2

services:
  event_stores:
    - type: stdout
  object_stores:
    - type: stdout

rulekit:
  macros:
    - name: is_media_request
      expr: http.req.path matches /^\/media\//
    - name: is_large_file
      expr: http.res.header.content-length > 1000000   # > 1MB

stacks:
  static_site:
    plugins:
      - type: http_capture
        config:
          level: none
          format: json
          rules:
            # Capture CDN errors
            - name: "CDN errors"
              expr: is_media_request() && http.res.status >= 400
              level: full
            # Capture large file transfers (metadata only)
            - name: "Large files"
              expr: is_large_file()
              level: summary
            # Skip successful static content
            - name: "Skip successful static"
              expr: !is_media_request() && http.res.status < 400
              level: none

tap:
  direction: all
  ignore_loopback: false
  http:
    stack: static_site
```
## Understanding the Output

### Dual Capture for Reverse Proxy

When Caddy proxies a request, Qtap captures two HTTP transactions:
**Transaction 1: INGRESS (Client → Caddy)**

```
Source Process: caddy
Direction: INGRESS ←
Method: GET
URL: http://localhost:8082/api/httpbin/users
```

**Transaction 2: EGRESS (Caddy → Upstream)**

```
Source Process: caddy
Direction: EGRESS →
Method: GET
URL: http://httpbin.org/users
```
This lets you:

- Measure total latency vs. backend latency
- See how Caddy transforms requests (headers, paths)
- Debug issues on either side of the proxy
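For example, subtracting the egress (upstream) duration from the ingress (client-side) duration gives a rough estimate of the time Caddy itself spends on a proxied request. A minimal sketch with hypothetical durations; in practice you would read the values from the paired INGRESS/EGRESS transactions in the Qtap output:

```bash
# Estimate Caddy's own overhead on a proxied request by comparing the
# two captured durations (hypothetical example values).
ingress_ms=335   # client -> Caddy (total, includes the upstream call)
egress_ms=320    # Caddy -> upstream (backend latency)

overhead_ms=$((ingress_ms - egress_ms))
echo "proxy overhead: ${overhead_ms}ms"
# prints: proxy overhead: 15ms
```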
### Caddy-Specific Details

**Process Identification:**

- Look for `exe` containing `caddy` (often `/usr/bin/caddy` or `/usr/local/bin/caddy`)
- Container name: `caddy-demo` (in Docker)

**Automatic HTTPS:**

- When Caddy uses automatic HTTPS, Qtap still sees plaintext via eBPF TLS hooks
- No certificate management needed
- Works with Let's Encrypt, ZeroSSL, or custom CAs
## Troubleshooting

### Not Seeing Caddy Traffic?

**Check 1: Is Qtap running before requests?**

```bash
docker logs qtap-caddy | head -20
# Should see startup messages
```

**Check 2: Is `ignore_loopback` correct?**

```yaml
# If Caddy uses localhost, set:
tap:
  ignore_loopback: false
```

**Check 3: Is Caddy processing requests?**

```bash
# Check Caddy logs
docker logs caddy-demo

# Test Caddy directly
curl http://localhost:8082/
```

**Check 4: Verify Qtap hooks Caddy**

```bash
docker logs qtap-caddy 2>&1 | grep -i caddy
# Should see logs about attaching to the caddy process
```
### Seeing `"l7Protocol": "other"`?

This means the connection was captured but HTTP wasn't parsed:

- Wait longer after starting Qtap (6+ seconds)
- Check if Caddy is using HTTP/3 (QUIC) - not yet supported
- Verify traffic is actually HTTP/HTTPS
### Caddy Using HTTP/3?

Qtap currently supports HTTP/1.x and HTTP/2. If Caddy negotiates HTTP/3 (QUIC), disable HTTP/3 in the Caddyfile:

```
{
    servers {
        protocols h1 h2   # Only HTTP/1 and HTTP/2
    }
}
```
### Too Much Traffic?

**Option 1: Conditional capture**

```yaml
stacks:
  reduced:
    plugins:
      - type: http_capture
        config:
          level: none
          rules:
            - name: "Errors only"
              expr: http.res.status >= 400
              level: full
```

**Option 2: Filter specific routes**

```yaml
rules:
  - name: "Skip health"
    expr: http.req.path != "/health"
    level: full
```

**Option 3: Summary level only**

```yaml
config:
  level: summary   # Metadata only
```
## Performance Considerations

### Caddy + Qtap Performance

Qtap operates out-of-band with minimal overhead:

- **CPU**: ~1-3% for typical traffic
- **Memory**: ~50-200MB depending on volume
- **Latency**: Zero additional latency (passive observation)

**Best practices for high-traffic Caddy:**

- Use `level: summary` or `details` for high volume
- Apply conditional rules to capture selectively
- Filter health checks and monitoring endpoints
- Send to S3 with batching (use Fluent Bit)
- Set TTL policies on storage (90 days recommended)
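One way to implement the 90-day TTL is an S3 lifecycle rule on the capture bucket. A sketch, assuming the `my-company-caddy-traffic` bucket from the production example (adjust the bucket name and rule ID to your setup):

```
{
  "Rules": [
    {
      "ID": "expire-caddy-captures",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 90 }
    }
  ]
}
```

Saved as `lifecycle.json`, this can be applied with `aws s3api put-bucket-lifecycle-configuration --bucket my-company-caddy-traffic --lifecycle-configuration file://lifecycle.json`.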
### Scaling Recommendations

| Traffic Volume | Recommended Level | Storage |
|---|---|---|
| < 100 req/sec | `full` | stdout or S3 |
| 100-1000 req/sec | `details` | S3 with batching |
| 1000-10000 req/sec | `summary` | S3 + Fluent Bit |
| > 10000 req/sec | conditional rules | S3 + Fluent Bit + aggressive filtering |
## Caddy vs NGINX: Key Differences

**Process Name:**

- Caddy: `/usr/bin/caddy`
- NGINX: `/usr/sbin/nginx`

**Configuration:**

- Caddy: Caddyfile (simpler, more human-readable)
- NGINX: nginx.conf (more complex, more options)

**HTTPS:**

- Caddy: Automatic by default (Qtap still works!)
- NGINX: Manual configuration

**Language:**

- Caddy: Written in Go (uses Go's crypto/tls)
- NGINX: Written in C (uses OpenSSL)

Both work well with Qtap's eBPF-based capture.
## Next Steps

**Learn More About Qtap:**

- Traffic Capture Settings - Complete `tap` configuration
- Traffic Processing with Plugins - All plugin options
- Complete Guide - Progressive tutorial

**Production Deployment:**

- Storage Configuration - S3 setup guide
- Capturing All HTTP Traffic with Fluent Bit - Batching for scale
- Kubernetes Manifest - Deploy in K8s

**Related Guides:**

- Capturing NGINX Traffic - Similar guide for NGINX
- Ingress Traffic Capture with Python - Application server capture
- HTTPS Header Capture Without Proxies - TLS inspection details

**Alternative: Cloud Management:**

- Qplane - Manage Qtap with visual dashboards
- POC Kick Off Guide - Quick start
## Cleanup

```bash
# Stop all services
docker compose down

# Remove containers and volumes
docker compose down -v

# Clean up files
rm Caddyfile qtap.yaml docker-compose.yaml
```
This guide uses validated configurations; all examples have been tested with Caddy and Qtap.