Complete Guide: Hello World to Production
Progressive 4-level tutorial that takes you from "hello world" to production-ready Qtap deployment. Perfect for teams wanting full control with YAML configuration.
Who This Is For
Use this guide if you want to:
Learn Qtap systematically from basics to advanced features
Deploy Qtap for long-term production monitoring
Understand how all Qtap features work together
Build self-managed infrastructure with version-controlled configs
Choose this guide for:
Production deployments with S3 storage
Rule-based selective capture (capture errors only, specific endpoints, etc.)
Air-gapped or isolated environments
GitOps workflows with infrastructure-as-code
Choose something else if you:
Need visibility RIGHT NOW for debugging → Production Debugging (30 seconds)
Want a quick preview → 5-Minute Quickstart (5 minutes)
Prefer managed dashboards → Qplane POC Guide (10 minutes)
Time to complete: 50 minutes (can pause between levels)
This guide takes you from a simple "hello world" setup to a production-ready configuration, with every example validated and tested. Follow along at your own pace - each level builds on the previous one.
What you'll learn:
Level 1: Dead simple setup - verify qtap is working (10 min)
Level 2: Basic filtering and selective capture (10 min)
Level 3: Conditional capture with rulekit expressions (15 min)
Level 4: Production storage with S3 (15 min)
Prerequisites:
Docker installed and running
Basic familiarity with YAML
curl or similar HTTP client
Level 1: Dead Simple - Verify It's Working
Goal: Get qtap running and see it capture HTTPS traffic in under 5 minutes.
What you'll learn:
Basic qtap configuration
How to verify qtap sees inside HTTPS
Understanding qtap output
Step 1: Create Your First Config
Create a file called qtap-level1.yaml:
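The sketch below shows what this config could look like, built only from the behavior described in this level (one full-capture stack, console output, loopback ignored). Key names such as services, stacks, http_capture, and tap are illustrative, not authoritative; confirm them against the configuration reference for your Qtap version.

```yaml
# qtap-level1.yaml - illustrative sketch; verify key names against the Qtap configuration reference
version: 2

services:
  event_stores:
    - type: stdout          # connection metadata to the console
  object_stores:
    - type: stdout          # captured HTTP transactions to the console

stacks:
  default_stack:
    plugins:
      - type: http_capture
        config:
          level: full       # headers + bodies
          format: text      # human-readable console output

tap:
  direction: egress         # outbound traffic only
  ignore_loopback: true     # skip localhost traffic
  http:
    stack: default_stack
```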
What this does:
Captures all outbound HTTP/HTTPS traffic
Shows full request/response details in your terminal
No filters - captures everything (except localhost)
Step 2: Start Qtap
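A sketch of the start command, using the container name qtap-level1 that the troubleshooting section below expects. The image reference and the config flag/mount path are placeholders; --privileged and --pid=host are typical requirements for an eBPF agent that observes host processes.

```bash
# <qtap-image> and the --config flag are placeholders - copy the exact invocation
# from the Qtap install docs
docker run -d --name qtap-level1 \
  --privileged --pid=host \
  -v "$(pwd)/qtap-level1.yaml:/config/qtap.yaml" \
  <qtap-image> --config=/config/qtap.yaml
```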
Wait for qtap to initialize (important!):
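The troubleshooting section below suggests roughly 6 seconds for the eBPF probes to attach:

```bash
sleep 6
```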
Step 3: Generate Test Traffic
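Any outbound HTTPS request will do; httpbin.org is the test target used throughout this guide:

```bash
curl -s https://httpbin.org/get > /dev/null
```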
Step 4: See What Qtap Captured
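Qtap prints captures to its own stdout, so the container logs are where to look (container name matches the run command above):

```bash
docker logs qtap-level1
```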
What you should see:
🎉 Success indicators:
✅ Clean formatted output (no internal logging clutter)
✅ You see the full URL despite HTTPS encryption
✅ All request and response headers are visible (including HTTP/2 pseudo-headers like :path, :authority)
✅ The response body (JSON) is captured in full
✅ Rich metadata: process info, duration, bytes sent/received
Understanding the Output
Let's break down what qtap showed you:
Metadata Section:
Transaction Time - When the request occurred
Duration - How long the request took (3356ms)
Direction - egress-external (outbound to an external IP)
Process - /usr/bin/curl - Qtap knows which process made the request
Bytes Sent/Received - Network traffic volume
Request Section:
Method & URL - GET request to https://httpbin.org/get
Headers - Including HTTP/2 pseudo-headers (:path, :scheme, :authority, :method)
Standard headers like User-Agent and Accept
Response Section:
Status - 200 OK
Headers - All response headers including server, content-type, etc.
Body - Complete JSON response payload
How does qtap see inside HTTPS?
Qtap uses eBPF to hook into system calls before TLS encryption happens. It doesn't break TLS or act as a proxy - it just observes what your application sends before OpenSSL encrypts it.
Troubleshooting
Don't see any output?
Make sure qtap was running BEFORE you generated traffic
Wait 6 seconds after starting qtap before testing
Check qtap is running:
docker ps | grep qtap-level1
See connection info but no HTTP details?
Look for "l7Protocol": "other" - this means HTTP parsing failed
Check that you're looking at the right traffic (search for your test domain)
Clean up when done:
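Assuming the container name from the run command above:

```bash
docker stop qtap-level1 && docker rm qtap-level1
```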
What's Next?
✅ Level 1 Complete! You now know:
How to configure qtap basics
How to verify qtap is capturing traffic
That qtap can see inside HTTPS
Next up: Level 2 - Basic Filtering and Selective Capture
Filter out noisy processes
Apply different capture levels by domain
Use multiple stacks for different traffic types
Level 2: Basic Filtering and Selective Capture
Goal: Control what traffic gets captured and apply different capture levels to different traffic.
What you'll learn:
Filter out noisy processes (curl, wget)
Apply different capture levels by domain
Use different capture levels for different endpoints
Why Filtering Matters
In real environments, you'll see a LOT of traffic. Level 1 captures everything, which means:
Kubernetes health checks flood your logs
Package managers create noise
Your own debugging with curl shows up
Monitoring agents add clutter
Level 2 teaches you to be selective.
Step 1: Create a Filtered Config
Create qtap-level2.yaml:
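An illustrative sketch built from the four points below (two stacks, a curl filter, an httpbin.org endpoint, lightweight default). The filters and endpoints key names are assumptions; check the Traffic Capture Settings reference for the exact schema.

```yaml
# qtap-level2.yaml - illustrative sketch; verify key names against the Qtap configuration reference
version: 2

services:
  event_stores:
    - type: stdout
  object_stores:
    - type: stdout

stacks:
  detailed_stack:              # full capture: headers + bodies
    plugins:
      - type: http_capture
        config:
          level: full
          format: text
  lightweight_stack:           # summary only: method, URL, status, duration
    plugins:
      - type: http_capture
        config:
          level: summary
          format: text

tap:
  direction: egress
  ignore_loopback: true
  http:
    stack: lightweight_stack   # default for everything else
  filters:
    custom:
      - exe: /usr/bin/curl     # ignore manual curl testing
        strategy: exact
  endpoints:
    - domain: httpbin.org      # full capture for this domain only
      http:
        stack: detailed_stack
```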
What's new here:
Two stacks: Detailed (full) vs lightweight (summary only)
Process filters: Ignore curl (filter out manual testing)
Domain-specific stacks: httpbin.org uses detailed_stack for full capture
Default lightweight: Everything else is summary only
Step 2: Start Level 2
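Same placeholders as Level 1, with the Level 2 config mounted instead:

```bash
docker run -d --name qtap-level2 \
  --privileged --pid=host \
  -v "$(pwd)/qtap-level2.yaml:/config/qtap.yaml" \
  <qtap-image> --config=/config/qtap.yaml
sleep 6
```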
Step 3: Test the Filters
Test 1: Filtered traffic (should NOT appear)
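curl on the host runs as /usr/bin/curl, which the filter ignores:

```bash
curl -s https://httpbin.org/get > /dev/null
```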
Check logs:
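Grep for the test domain (container name matches the run command above):

```bash
docker logs qtap-level2 | grep -i httpbin
```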
Expected: Nothing! curl is filtered.
Test 2: Domain-specific capture (should appear with FULL details)
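The results breakdown below shows this test coming from /bin/busybox, so a throwaway busybox container's wget is one way to send it without hitting the curl filter:

```bash
docker run --rm busybox wget -qO- https://httpbin.org/get > /dev/null
```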
Check logs:
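Pull some surrounding context so the headers and body are visible (adjust -A as needed):

```bash
docker logs qtap-level2 | grep -i -A 30 httpbin
```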
Expected: Full HTTP details including body because:
httpbin.org matches our endpoint rule
The endpoint rule applies detailed_stack for full capture
wget (/bin/busybox) is NOT filtered, so traffic is captured
Understanding the Results
What you should see:
Curl traffic: Filtered out
httpbin.org via wget: CAPTURED with full details
Method, URL, headers, body all visible
Because it matches the endpoint rule
Other domains: Would show summary only (if we tested them)
Basic info: method, URL, status, duration
No headers, no bodies
Why This Matters
This configuration gives you:
Less noise: Filters out development tools
Selective detail: Full capture only where you need it
Better performance: Summary-only for bulk traffic
Cost savings: Less data stored/transmitted
Advanced: Using Prefix Filters
Want to filter an entire directory?
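A minimal sketch of a prefix-strategy filter, reusing the same (assumed) filter keys as the Level 2 config:

```yaml
tap:
  filters:
    custom:
      - exe: /usr/bin/        # any executable under this directory
        strategy: prefix
```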
This would block /usr/bin/curl, /usr/bin/wget, /usr/bin/python3, etc.
Clean Up
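Remove the Level 2 container before moving on:

```bash
docker stop qtap-level2 && docker rm qtap-level2
```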
What's Next?
✅ Level 2 Complete! You now know:
How to filter noisy processes
How to apply different capture levels by domain
How to use multiple stacks for different traffic types
Next up: Level 3 - Conditional Capture with Rulekit
Use rulekit expressions for intelligent capture
Create reusable macros
Capture only errors and specific request types
Filter by container and pod labels
Level 3: Conditional Capture with Rulekit
Goal: Use rulekit expressions for intelligent, conditional traffic capture.
What you'll learn:
Capture only errors and POST requests
Use different capture levels based on conditions
Create reusable macros for complex logic
Filter by container and pod labels
Why Conditional Capture Matters
In high-traffic environments, capturing everything is expensive and noisy. You want:
Selective detail: Full capture only when needed
Error focus: Always capture failures
Method-based filtering: Capture mutating operations (POST, PUT, DELETE)
Smart sampling: Reduce volume without losing critical data
Step 1: Create a Rulekit-Based Config
Create qtap-level3.yaml:
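An illustrative sketch tying together the pieces explained in the next step: a reusable is_error() macro, a summary default, and per-rule capture levels. The placement of the macro block and the rule/regex syntax are assumptions; the Rulekit documentation is authoritative. Additional rules (for example, capturing POST/PUT/DELETE) follow the same pattern.

```yaml
# qtap-level3.yaml - illustrative sketch; verify macro placement and rule syntax against the Rulekit docs
version: 2

services:
  event_stores:
    - type: stdout
  object_stores:
    - type: stdout

rulekit:
  macros:
    - name: is_error
      expr: http.res.status >= 400

stacks:
  rules_stack:
    plugins:
      - type: http_capture
        config:
          level: summary                        # default: basic info only
          format: text
          rules:
            - name: capture errors in full
              expr: is_error()
              level: full
            - name: capture API paths with headers
              expr: http.req.path =~ /^\/api\//   # regex operator assumed
              level: details

tap:
  direction: egress
  ignore_loopback: true
  http:
    stack: rules_stack
```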
Step 2: Understanding Rulekit
Macros:
Reusable expressions defined once, used many times
Called like functions: is_error(), is_slow()
Make configs cleaner and easier to maintain
Capture Levels in Rules:
none: Skip capture entirely
summary: Basic info (method, URL, status, duration, process)
details: Include headers (no bodies)
full: Everything (headers + bodies)
Available Fields:
Request: http.req.method, http.req.path, http.req.host, http.req.url
Response: http.res.status
Headers: http.req.headers.<name>, http.res.headers.<name>
Source: src.container.name, src.container.labels.<key>, src.pod.name, src.pod.labels.<key>
Step 3: Start Level 3
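Same placeholder image and flags as before:

```bash
docker run -d --name qtap-level3 \
  --privileged --pid=host \
  -v "$(pwd)/qtap-level3.yaml:/config/qtap.yaml" \
  <qtap-image> --config=/config/qtap.yaml
sleep 6
```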
Step 4: Test Conditional Capture
Test 1: Success (should show SUMMARY - basic info only):
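A plain 200 response, which should only hit the summary default:

```bash
curl -s https://httpbin.org/get > /dev/null
```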
Check logs - should see basic metadata (method, URL, status) but no headers or body:
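Container name matches the run command above:

```bash
docker logs qtap-level3 | grep -i httpbin
```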
Test 2: Error (should show FULL details):
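httpbin can return any status code on demand:

```bash
curl -s https://httpbin.org/status/500 > /dev/null
```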
Check logs - should see full capture with all details:
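Pull extra context so the headers and bodies are visible:

```bash
docker logs qtap-level3 | grep -i -A 30 "status/500"
```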
Test 3: API path (should show DETAILS with headers):
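A request whose path starts with /api/ exercises the details rule. Note that httpbin.org returns 404 for unknown paths, so this request also matches is_error(); which level wins depends on how your Qtap version orders or prioritizes rules.

```bash
curl -s https://httpbin.org/api/users > /dev/null
docker logs qtap-level3 | grep -i "/api/"
```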
Understanding the Results
What you should see:
Success requests: Summary level (default)
Method, URL, status code
Duration, process info
No headers, no bodies
Error requests (404, 500): Full capture
Complete request and response headers
Full request/response bodies
Because they match the is_error() macro
API paths: Details level
Headers included
No bodies (saves space)
Because the path matches /^\/api\//
Advanced Rulekit Patterns
Container-based filtering:
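A sketch using the src.container.name field listed above; the container name is hypothetical:

```yaml
rules:
  - name: full capture for the payment service container
    expr: src.container.name == "payment-service"
    level: full
```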
Regex matching:
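A sketch matching versioned checkout paths with a regex (operator syntax assumed; see the Rulekit docs):

```yaml
rules:
  - name: capture checkout endpoints with headers
    expr: http.req.path =~ /^\/api\/v[0-9]+\/checkout/
    level: details
```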
Complex conditions:
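A sketch combining the fields above with boolean operators; the pod label value is hypothetical:

```yaml
rules:
  - name: failed writes from the api pods
    expr: is_error() and (http.req.method == "POST" or http.req.method == "PUT" or http.req.method == "DELETE") and src.pod.labels.app == "api"
    level: full
```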
Clean Up
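Remove the Level 3 container:

```bash
docker stop qtap-level3 && docker rm qtap-level3
```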
Level 4: Production Storage with S3
Goal: Store captured traffic in S3 for long-term retention and compliance.
What you'll learn:
Configure S3-compatible object storage
Use MinIO for local testing
Combine stdout (debugging) with S3 (persistence)
Production storage best practices
Why S3 Storage Matters
The primary benefit of S3 storage is keeping sensitive data within your network boundary.
When you use S3-compatible storage (MinIO, AWS S3, GCS), captured HTTP traffic containing sensitive information (API keys, tokens, PII, etc.) never leaves your infrastructure. This is critical for:
Data sovereignty: Keep sensitive data in your own network/region
Security: Prevent data exfiltration - traffic never goes to external services
Compliance: Meet regulatory requirements (GDPR, HIPAA, SOC2)
Control: You own and control access to all captured data
Additionally, S3 provides:
Forensics: Investigate security incidents weeks/months later
Analytics: Analyze traffic patterns over time
Debugging: Review full request/response data when issues occur
Durability: Long-term retention with lifecycle policies
stdout is great for development, but production requires secure, persistent storage within your control.
Step 1: Set Up MinIO (Local S3)
For this guide, we'll use MinIO - an S3-compatible storage you can run locally:
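A minimal local MinIO setup; the credentials and the bucket name (qtap-captures, matching the parameter examples later in this level) are throwaway values for local testing only:

```bash
# Start MinIO with its API on :9000 and console on :9001
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  quay.io/minio/minio server /data --console-address ":9001"

# Create the bucket qtap will write to (any S3 client works; the AWS CLI image is used here)
docker run --rm --network host \
  -e AWS_ACCESS_KEY_ID=minioadmin -e AWS_SECRET_ACCESS_KEY=minioadmin \
  amazon/aws-cli --endpoint-url http://localhost:9000 s3 mb s3://qtap-captures
```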
Step 2: Create S3-Enabled Config
Create qtap-level4-s3.yaml:
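An illustrative sketch matching the three points below; the s3 store keys follow the parameter list later in this level, but the exact schema should be checked against the Storage Configuration reference.

```yaml
# qtap-level4-s3.yaml - illustrative sketch; verify key names against the Storage Configuration reference
version: 2

services:
  event_stores:
    - type: stdout               # connection metadata still goes to the console
  object_stores:
    - type: s3
      endpoint: localhost:9000   # MinIO from the previous step
      bucket: qtap-captures
      region: us-east-1
      insecure: true             # HTTP is acceptable for local MinIO only
      access_key: minioadmin     # hardcoded for the local demo; use env vars in production
      secret_key: minioadmin

rulekit:
  macros:
    - name: is_error
      expr: http.res.status >= 400

stacks:
  error_stack:
    plugins:
      - type: http_capture
        config:
          level: none            # store nothing by default
          rules:
            - name: store errors only
              expr: is_error()
              level: full

tap:
  direction: egress
  ignore_loopback: true
  http:
    stack: error_stack
```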
What's different:
S3 object_store: Captured HTTP transactions go to MinIO
stdout event_store: Connection metadata still goes to console
Error-only capture: Only store failures (reduces costs)
Step 3: Start Qtap with S3
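Same placeholders as earlier levels; --network host is added so the config's localhost:9000 endpoint resolves to the MinIO container published on the host:

```bash
docker run -d --name qtap-level4 \
  --privileged --pid=host --network host \
  -v "$(pwd)/qtap-level4-s3.yaml:/config/qtap.yaml" \
  <qtap-image> --config=/config/qtap.yaml
sleep 6
```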
Step 4: Test S3 Storage
Generate an error (will be stored in S3):
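A 500 from httpbin matches the error-only rule:

```bash
curl -s https://httpbin.org/status/500 > /dev/null
```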
Check qtap logs for S3 upload confirmation:
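The exact wording of the upload log line depends on the Qtap version, so scan the recent output:

```bash
docker logs qtap-level4 2>&1 | tail -n 40
```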
List objects in MinIO:
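Any S3 client pointed at the MinIO endpoint works; the AWS CLI image is used here for consistency:

```bash
docker run --rm --network host \
  -e AWS_ACCESS_KEY_ID=minioadmin -e AWS_SECRET_ACCESS_KEY=minioadmin \
  amazon/aws-cli --endpoint-url http://localhost:9000 s3 ls s3://qtap-captures --recursive
```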
You should see files with SHA digest names - these are your captured HTTP transactions!
Step 5: Retrieve a Captured Transaction
Get the digest from qtap logs:
Download the object from MinIO:
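Substitute an object key from the listing (or from the qtap logs) for <digest>:

```bash
docker run --rm --network host \
  -e AWS_ACCESS_KEY_ID=minioadmin -e AWS_SECRET_ACCESS_KEY=minioadmin \
  -v "$(pwd):/out" \
  amazon/aws-cli --endpoint-url http://localhost:9000 s3 cp s3://qtap-captures/<digest> /out/transaction.txt

cat transaction.txt
```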
You'll see the full HTTP transaction - request headers, response headers, and body!
Understanding S3 Configuration
S3 Parameters:
endpoint - S3 server address (e.g., s3.amazonaws.com or localhost:9000)
bucket - Where to store objects (e.g., qtap-captures)
region - S3 region (e.g., us-east-1)
insecure - Allow HTTP (MinIO only!): true for local, false for production
access_key - S3 credentials (from env or text)
secret_key - S3 credentials (from env or text)
Production S3 Config (AWS):
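The same object store pointed at AWS instead of MinIO; the ${VAR} interpolation shown for credentials is an assumption about how your Qtap version reads environment variables, so confirm it in the Storage Configuration reference:

```yaml
services:
  event_stores:
    - type: stdout
  object_stores:
    - type: s3
      endpoint: s3.amazonaws.com
      bucket: qtap-captures
      region: us-east-1
      insecure: false                       # always TLS in production
      access_key: ${AWS_ACCESS_KEY_ID}      # read from the environment, never hardcoded
      secret_key: ${AWS_SECRET_ACCESS_KEY}
```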
Then run qtap with:
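Passing -e VAR with no value forwards the variable from your shell into the container; the config filename here (qtap-prod.yaml) is just an example:

```bash
docker run -d --name qtap \
  --privileged --pid=host \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
  -v "$(pwd)/qtap-prod.yaml:/config/qtap.yaml" \
  <qtap-image> --config=/config/qtap.yaml
```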
Production Best Practices
Storage Strategy:
✅ Use S3 for object storage (HTTP transactions)
✅ Use stdout or logging service for events (metadata)
✅ Only capture what you need (errors, slow requests)
✅ Set S3 lifecycle rules to auto-delete old data
Security:
✅ Always use insecure: false in production
✅ Use environment variables for credentials (never hardcode)
✅ Enable S3 bucket encryption
✅ Restrict S3 bucket access with IAM policies
Cost Optimization:
✅ Use level: none by default, capture via rules only
✅ Capture errors at full level (need details to debug)
✅ Capture success at summary or details level
✅ Use S3 lifecycle policies to transition to cheaper storage tiers
S3 Lifecycle Example
For AWS S3, set up lifecycle rules:
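One way to express the schedule described below is a lifecycle configuration applied with the AWS CLI (bucket name reused from the examples above):

```bash
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "qtap-capture-retention",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket qtap-captures \
  --lifecycle-configuration file://lifecycle.json
```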
This:
Keeps data in standard storage for 30 days
Moves to infrequent access after 30 days (cheaper)
Archives to Glacier after 90 days (very cheap)
Deletes after 1 year (compliance)
Clean Up
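Stop both containers from this level:

```bash
docker stop qtap-level4 minio && docker rm qtap-level4 minio
```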
Congratulations! 🎉
You've completed the Ultimate Qtap Setup Guide. You now know how to:
Level 1 - Basics:
✅ Configure qtap to capture HTTP/HTTPS traffic
✅ Verify qtap can see inside TLS connections
✅ Understand qtap output format
Level 2 - Filtering:
✅ Filter processes to reduce noise
✅ Apply different capture levels by domain
✅ Use different capture levels for different endpoints
Level 3 - Conditional Capture:
✅ Use rulekit expressions for intelligent capture
✅ Create reusable macros
✅ Capture only errors and specific request types
✅ Filter by container and pod labels
Level 4 - Production Storage:
✅ Configure S3-compatible object storage
✅ Combine stdout and S3 for debugging + persistence
✅ Implement cost-effective storage strategies
✅ Follow security best practices
Next Steps
Production Readiness
Ready to deploy qtap in production? Here's your checklist:
Configuration Reference:
Storage Configuration - Complete S3 setup, credentials management, and event stores
Traffic Processing with Plugins - All available plugins and advanced options
Traffic Capture Settings - Filters, endpoints, and direction settings
Rulekit Documentation - Advanced expression syntax and examples
Alternative: Cloud Management
Want centralized management with visual dashboards?
Qplane Overview - Cloud control plane for managing multiple agents
POC Kick Off Guide - Quick start with Qplane (10 minutes)
Get Help:
Troubleshooting - Common issues and solutions
Report issues on GitHub
All examples in this guide have been validated and tested.