This guide shows you how to use Qtap to capture HTTP traffic flowing through Traefik, a modern cloud-native reverse proxy and load balancer. You'll learn how to observe both incoming client requests and outgoing upstream connections in a dynamic, label-based configuration environment.
What You'll Learn
Capture Traefik ingress traffic (client requests)
Capture Traefik egress traffic (backend service requests)
Monitor both sides of a reverse proxy simultaneously
Use Traefik's label-based configuration with Qtap
Leverage Traefik's automatic service discovery
Handle dynamic backend routing
Set up Traefik + Qtap in Docker for testing
Deploy production-ready configurations
Why capture Traefik traffic?
Dynamic Service Discovery: Monitor auto-discovered services in Docker/Kubernetes
API Gateway Monitoring: Track all API calls through your edge proxy
Container Traffic Visibility: See communication between microservices
Load Balancer Analytics: Understand traffic distribution across backends
Automatic HTTPS Inspection: See inside TLS traffic without certificate management
Debugging Service Routing: Verify Traefik routes traffic correctly
Performance Analysis: Measure latency at each routing hop
Prerequisites
Linux system with kernel 5.10+ and eBPF support
Docker installed (for this guide's examples)
Basic understanding of Traefik and Docker labels
Part 1: Traefik with Multiple Backends
Traefik differs from proxies like NGINX and Caddy in that routes are configured through Docker labels rather than static config files. Let's set up Traefik with multiple backend services.
Step 1: Create Project Directory
Step 2: Create Traefik Configuration
Create traefik.yaml:
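A minimal static configuration consistent with the label-based setup this guide uses. The port choices, log level, and insecure dashboard are demo assumptions:

```yaml
# traefik.yaml — Traefik static configuration (demo sketch)
api:
  dashboard: true
  insecure: true            # exposes the dashboard on :8080; demo only

entryPoints:
  web:
    address: ":80"          # all client traffic enters here

providers:
  docker:
    exposedByDefault: false # only route containers that opt in via labels

log:
  level: INFO
```

Setting `exposedByDefault: false` means every backend must explicitly set `traefik.enable=true`, which keeps unrelated containers out of the routing table.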
Step 3: Create Backend Services
We'll create two simple backend services to demonstrate routing.
Create backend-service.py:
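A minimal sketch of the backend: a stdlib HTTP server that reports which service handled the request, the path it received (after Traefik's middleware stripped the prefix), and the forwarded headers. The `SERVICE_NAME`/`PORT` environment variables and the env-var start guard are demo conventions, not requirements of Traefik or Qtap:

```python
#!/usr/bin/env python3
"""Minimal demo backend that identifies itself in every response."""
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Both demo services run this same script; the compose file sets
# SERVICE_NAME so each container answers with its own identity.
SERVICE_NAME = os.environ.get("SERVICE_NAME", "service-a")
PORT = int(os.environ.get("PORT", "8000"))


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Echo back which service handled the request, the path as
        # received (prefix already stripped by Traefik middleware),
        # and the headers Traefik added (X-Forwarded-*).
        body = json.dumps({
            "service": SERVICE_NAME,
            "path": self.path,
            "headers": dict(self.headers),
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep demo output quiet; Qtap captures the traffic anyway


def serve(port: int = PORT) -> None:
    """Run the backend (blocking)."""
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()


# Start only when SERVICE_NAME is set (the compose file provides it),
# so importing this module elsewhere does not block.
if __name__ == "__main__" and os.environ.get("SERVICE_NAME"):
    serve()
```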
Step 4: Create Qtap Configuration
Create qtap.yaml:
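A representative capture config. Only `ignore_loopback` and the `summary`/`full` capture levels are taken from this guide; the surrounding key names (`version`, `services`, `stacks`, `tap`) follow Qtap's published schema as best it can be sketched here — verify them against the Qtap version you run:

```yaml
# qtap.yaml — capture both sides of the proxy (sketch; verify key
# names against your Qtap version's documentation)
version: 2

services:
  event_stores:
    - type: stdout        # print captured transactions to stdout
  object_stores:
    - type: stdout        # payloads to stdout too; demo only

stacks:
  traefik_stack:
    plugins:
      - type: http_capture
        config:
          level: full     # headers + payloads; use "summary" at high volume
          format: json

tap:
  direction: all          # ingress (client→Traefik) and egress (Traefik→backend)
  ignore_loopback: false  # containerized traffic may traverse loopback paths
  http:
    stack: traefik_stack
```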
Step 5: Create Docker Compose Setup
Create docker-compose.yaml:
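A sketch wiring everything together. The image tags, the Qtap image name, and the Qtap flags/privileges are assumptions to check against Qtap's install docs; the Traefik label syntax is standard Docker-provider configuration:

```yaml
# docker-compose.yaml — Traefik, two backends, and Qtap (sketch;
# the qtap image name and flags are placeholders to verify)
services:
  traefik:
    image: traefik:v3.1
    ports:
      - "80:80"
      - "8080:8080"   # dashboard (insecure mode; demo only)
    volumes:
      - ./traefik.yaml:/etc/traefik/traefik.yaml:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

  service-a:
    image: python:3.12-slim
    command: ["python3", "/app/backend-service.py"]
    environment:
      SERVICE_NAME: service-a
    volumes:
      - ./backend-service.py:/app/backend-service.py:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.service-a.rule=PathPrefix(`/api/service-a`)"
      - "traefik.http.routers.service-a.middlewares=strip-a"
      - "traefik.http.middlewares.strip-a.stripprefix.prefixes=/api/service-a"
      - "traefik.http.services.service-a.loadbalancer.server.port=8000"

  service-b:
    image: python:3.12-slim
    command: ["python3", "/app/backend-service.py"]
    environment:
      SERVICE_NAME: service-b
    volumes:
      - ./backend-service.py:/app/backend-service.py:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.service-b.rule=PathPrefix(`/api/service-b`)"
      - "traefik.http.routers.service-b.middlewares=strip-b"
      - "traefik.http.middlewares.strip-b.stripprefix.prefixes=/api/service-b"
      - "traefik.http.services.service-b.loadbalancer.server.port=8000"

  qtap:
    image: qpoint/qtap:latest    # placeholder image name — see Qtap install docs
    privileged: true             # eBPF requires elevated privileges
    pid: "host"
    network_mode: "host"
    volumes:
      - ./qtap.yaml:/config/qtap.yaml:ro
    command: ["--config=/config/qtap.yaml"]
```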
Key Traefik Concepts:
Labels: Configure routing via Docker labels (not config files)
Routers: Define how to match incoming requests (PathPrefix, Host, etc.)
Services: Define backend servers (load balancer targets)
Middlewares: Transform requests (strip prefixes, add headers, etc.)
Automatic Discovery: Traefik watches Docker for new containers
Part 2: Running and Testing
Step 1: Start the Services
Step 2: Generate Test Traffic
Step 3: View Captured Traffic
What you should see:
Key indicators:
✅ "exe" contains traefik - Process identified
✅ Direction: INGRESS - Client → Traefik
✅ Direction: EGRESS - Traefik → Backend service
✅ Two transactions per proxied request
✅ Path transformation visible (prefix stripped)
✅ Headers added by Traefik (X-Forwarded-*)
Part 3: Advanced Configurations
Configuration 1: Capture Only Specific Services
Use Rulekit to capture only traffic to specific backend services:
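A sketch of what that might look like inside the `http_capture` plugin. The rules layout and the `http.req.host` field are assumptions — confirm the exact expression fields against the Rulekit reference:

```yaml
stacks:
  traefik_stack:
    plugins:
      - type: http_capture
        config:
          level: none          # capture nothing by default...
          rules:
            # ...except traffic Traefik forwards to the service-a backend
            # (field name is an assumption; check the Rulekit docs)
            - name: "service-a traffic only"
              expr: http.req.host == "service-a:8000"
              level: full
```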
Configuration 2: Monitor Service Discovery
Capture traffic as Traefik discovers and routes to new services:
This captures metadata about which backends Traefik routes to, useful for understanding service discovery behavior.
Configuration 3: API Gateway with Rate Limiting Detection
Monitor API gateway patterns and detect potential rate limiting:
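For example, a rule set that captures throttled and failing responses at full detail while summarizing everything else. As above, the rules layout is an assumed Rulekit sketch to verify against your version:

```yaml
          level: summary        # lightweight default for normal traffic
          rules:
            # HTTP 429 indicates throttling at the gateway or backend
            - name: "rate limited"
              expr: http.res.status == 429
              level: full
            # Keep full payloads for server errors as well
            - name: "server errors"
              expr: http.res.status >= 500
              level: full
```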
Configuration 4: Production Setup with S3
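For production, swap the stdout stores for durable storage. The `s3` store and its key names below are assumptions based on Qtap's store model — check the exact fields in the Qtap docs; bucket and region are examples:

```yaml
services:
  event_stores:
    - type: stdout              # or ship events via Fluent Bit
  object_stores:
    - type: s3                  # key names are assumptions to verify
      config:
        bucket: qtap-captures   # example bucket name
        region: us-east-1
        access_key: ${AWS_ACCESS_KEY_ID}
        secret_key: ${AWS_SECRET_ACCESS_KEY}
```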
Part 4: Real-World Use Cases
Use Case 1: Microservices Mesh Monitoring
Monitor all service-to-service communication through Traefik:
docker-compose.yaml (add more services):
qtap.yaml:
Use Case 2: Canary Deployment Monitoring
Monitor traffic distribution during canary deployments:
docker-compose.yaml:
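A canary sketch reusing the backend script from Part 1: both containers declare the same router rule and Traefik service name, so Traefik round-robins between them. Names and split mechanics here are demo assumptions (for weighted splits, Traefik's weighted round-robin is configured separately):

```yaml
  app-v1:
    image: python:3.12-slim
    command: ["python3", "/app/backend-service.py"]
    environment:
      SERVICE_NAME: app-v1      # version shows up in Qtap captures
    volumes:
      - ./backend-service.py:/app/backend-service.py:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=PathPrefix(`/app`)"
      - "traefik.http.services.app.loadbalancer.server.port=8000"

  app-v2:
    # Same router/service labels as app-v1: Traefik load balances
    # across both containers, giving a simple 50/50 canary.
    image: python:3.12-slim
    command: ["python3", "/app/backend-service.py"]
    environment:
      SERVICE_NAME: app-v2
    volumes:
      - ./backend-service.py:/app/backend-service.py:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=PathPrefix(`/app`)"
      - "traefik.http.services.app.loadbalancer.server.port=8000"
```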
Traefik will load balance between v1 and v2. Qtap captures which version served each request.
qtap.yaml:
Analyze logs to see v1 vs v2 traffic distribution.
Use Case 3: Multi-Tenant API Gateway
Route different tenants to different backends:
docker-compose.yaml:
qtap.yaml:
Understanding Traefik + Qtap
Dual Capture for Dynamic Routing
When Traefik routes a request, Qtap captures two transactions:
Transaction 1: INGRESS (Client → Traefik)
Transaction 2: EGRESS (Traefik → Backend)
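Schematically, the pair looks like this (an illustration of the flow, not literal Qtap output):

```
Transaction 1 — INGRESS (client → Traefik)
  GET /api/service-a/users        Host: localhost

Transaction 2 — EGRESS (Traefik → backend)
  GET /users                      Host: service-a:8000
                                  X-Forwarded-For: <client ip>
                                  X-Forwarded-Proto: http
```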
Notice:
Path transformation: /api/service-a/users → /users (middleware stripped prefix)
Container resolution: service-a:8000 (Docker DNS)
Headers added: X-Forwarded-*
Traefik-Specific Features
Process Identification:
Look for exe containing traefik
Typically /usr/local/bin/traefik
Label-Based Configuration:
Unlike NGINX and Caddy, Traefik defines routing via container labels
Qtap sees the result of routing decisions
Changes to labels automatically discovered (no restart needed)
Automatic Service Discovery:
Traefik watches Docker events
New containers auto-routed
Qtap captures new service traffic immediately
Troubleshooting
Not Seeing Traefik Traffic?
Check 1: Is Traefik routing correctly?
Check 2: Are services registered with Traefik?
Check 3: Is Qtap running before requests?
Check 4: Is ignore_loopback correct?
Seeing "l7Protocol": "other"?
Wait longer after starting Qtap (6+ seconds)
Check if Traefik is using HTTP/3 (not yet supported by Qtap)
Verify traffic is actually HTTP
Labels Not Working?
Common label mistakes:
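A few patterns that frequently break label-based routing (router/service names here are illustrative):

```yaml
labels:
  # Mistake 1: no "traefik.enable=true" while the static config sets
  # exposedByDefault: false — the container is never picked up.
  - "traefik.http.routers.app.rule=PathPrefix(`/app`)"

  # Mistake 2: quotes instead of backticks inside rule values:
  #   PathPrefix("/app")   ✗
  #   PathPrefix(`/app`)   ✓

  # Mistake 3: mismatched names across related labels — the router
  # "app" and the service "api" below never connect:
  #   traefik.http.routers.app.rule=PathPrefix(`/app`)
  #   traefik.http.services.api.loadbalancer.server.port=8000
```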
Too Much Traffic?
Apply conditional capture:
Expected overhead:
CPU: ~1-3% for typical traffic
Memory: ~50-200MB for Qtap
Latency: none added to requests (passive, out-of-band observation)
Best practices for high-traffic Traefik:
Use level: summary for high volume
Apply rules to capture selectively
Filter health checks and monitoring endpoints
Send to S3 with batching (Fluent Bit)
Set TTL policies on storage
Scaling Recommendations
At very high traffic volumes, capture errors and slow requests only
Traefik vs NGINX/Caddy
Configuration:
Traefik: Docker labels, dynamic discovery
NGINX: Static config files
Caddy: Static config files (but simpler)
Use Cases:
Traefik: Containerized apps, Kubernetes, dynamic environments
NGINX: Traditional deployments, high performance
Caddy: Simplicity, automatic HTTPS
Qtap Compatibility:
All three work perfectly with Qtap
Traefik's dynamic routing is fully observable
Same capture quality across all proxies
Learn More About Qtap:
Production Deployment:
Related Guides:
Alternative: Cloud Management:
Qplane - Manage Qtap with visual dashboards
This guide uses validated configurations that have been tested with Traefik and Qtap.