Logs: Know WHAT happened, WHICH process, debug application issues
Traces: Network timing (TLS handshake, data transfer, connection lifecycle), configured via environment variables - know WHEN it happened, network performance, infrastructure debugging
Both data types are automatically correlated using the same Trace ID, giving you the complete picture:
Trace ID: 87cc302ce72eb54d25c46a736db020de
Logs show: GET /api/users → 200 OK in 2023ms from /usr/bin/curl
Traces show: TLS handshake 88ms, server response after 2.0s, total 2.4s
Choose based on your needs:
Logs only: Most common - see HTTP transactions, filter by process, understand application behavior
Logs + traces: Add network-level timing (TLS handshake, data transfer) for infrastructure and performance debugging
Most users start here. This sends HTTP transaction metadata to OpenTelemetry.
Step 1: Configure Qtap
Create qtap-config.yaml:
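The real configuration schema comes from the Qtap documentation and isn't reproduced here; as a rough, hypothetical sketch of the idea (every key name below is an assumption), the config points an event store at the local OTLP endpoint that the collector in Step 2 will expose:

```yaml
# Hypothetical sketch - key names are assumptions, not the verified qtap schema.
# Intent: send HTTP transaction events (logs) to a local OTLP endpoint.
services:
  event_stores:
    - type: otlp                # assumed store type for OpenTelemetry export
      endpoint: localhost:4317  # collector gRPC endpoint from Step 2
      protocol: grpc            # 4318 would be the OTLP/HTTP default
```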
Step 2: Deploy OpenTelemetry Collector
Already have an OpenTelemetry Collector? Update the endpoint in Step 1 to point to your collector and skip to Step 3.
Create otel-collector-config.yaml:
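A minimal collector config for this walkthrough: it accepts OTLP over gRPC and HTTP and prints everything it receives via the debug exporter (swap in your real backend's exporter later):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch: {}

exporters:
  debug:
    verbosity: detailed    # print received data to the collector's stdout

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```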
Create docker-compose.yaml:
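A minimal compose file, assuming the config above sits in the same directory; the container name otel-collector is reused by the verification commands later:

```yaml
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    container_name: otel-collector
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml:ro
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
```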
Start the collector:
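From the directory containing otel-collector-config.yaml and docker-compose.yaml:

```bash
docker compose up -d

# Optional: tail the collector to confirm it started cleanly
docker compose logs -f otel-collector
```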
Step 3: Start Qtap (Logs Only)
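The exact run command depends on how you installed Qtap; as a rough sketch (the image name, mount paths, and flags below are assumptions - use the invocation from the Qtap install docs), a privileged container on the host network with the config from Step 1 mounted in might look like this:

```bash
# Hypothetical invocation - image name, mounts, and flags are assumptions.
docker run -d --name qtap \
  --privileged --pid=host --network=host \
  -v "$(pwd)/qtap-config.yaml:/app/config/qtap.yaml" \
  <qtap-image> --config=/app/config/qtap.yaml
```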
Note: No trace environment variables are set. This configuration sends logs only.
Step 4: Generate Traffic and Verify
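Send any outbound HTTP/S request from the monitored host, then check the collector's output (the commands assume the Docker setup above, where the debug exporter prints received data to stdout):

```bash
# Generate some traffic for qtap to observe
curl -s https://www.example.com > /dev/null

# Check what arrived at the collector
docker logs otel-collector 2>&1 | grep -i -A 5 "connection\|artifact_record"
```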
You should see two types of log events:
1. Connection Event:
2. Artifact Record Event:
Success! Qtap is sending HTTP transaction logs to OpenTelemetry. Use event.summary.connection_id to correlate connection metadata with HTTP transactions.
Adding Traces for Network Timing
Want to see TLS handshake duration, data transfer timing, and network-level performance? Add traces.
Step 1: Update OTel Collector Config
Add a traces pipeline to otel-collector-config.yaml:
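Starting from the logs-only collector config, the change is a second pipeline that routes the same OTLP receiver to a trace exporter (still the debug exporter in this walkthrough):

```yaml
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
    traces:                 # new: qtap's Connection spans flow through here
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```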
Restart the collector:
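Restart the collector container so it reloads the updated config:

```bash
docker compose restart otel-collector
```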
Step 2: Restart Qtap with Trace Environment Variables
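Reusing the hypothetical run command from the logs-only setup and assuming Qtap honors the standard OpenTelemetry exporter environment variables (verify the variable names against the Qtap docs), restarting with trace export enabled might look like:

```bash
# Hypothetical invocation - image name, mounts, flags, and the OTEL_* variables
# being honored by qtap are all assumptions carried over from the earlier sketch.
docker rm -f qtap

docker run -d --name qtap \
  --privileged --pid=host --network=host \
  -v "$(pwd)/qtap-config.yaml:/app/config/qtap.yaml" \
  -e OTEL_SERVICE_NAME=qtap \
  -e OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4317 \
  -e OTEL_EXPORTER_OTLP_TRACES_PROTOCOL=grpc \
  <qtap-image> --config=/app/config/qtap.yaml
```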
What changed: Added trace environment variables. Qtap now sends both logs and traces.
Step 3: Verify Traces
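With the debug exporter from the collector config, spans are printed to the collector's stdout; generate another request and look for them there:

```bash
curl -s https://www.example.com > /dev/null
docker logs otel-collector 2>&1 | grep -i -A 5 "connection\|span"
```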
You should see Connection spans:
Complete observability! Logs and traces share the same Trace ID for automatic correlation.
Understanding the Data
What Logs Contain
Connection Events (event.type: connection):
TCP connection metadata
Protocol detection (http1, http2, tcp)
TLS version and inspection status
Bytes sent/received
Process information (exe path, user ID)
Artifact Record Events (event.type: artifact_record):
HTTP method, URL, path, status code
User agent and content type
Duration in milliseconds
Process information
Reference URL to full request/response in object stores
Example log query:
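Query syntax depends on your backend; using the generic attribute-filter style shown in the Querying and Filtering section below, a log query for slow requests to one host might look like:

```
event.type = "artifact_record"
  AND event.summary.request_host = "api.example.com"
  AND event.summary.duration_ms > 1000
```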
What Traces Contain
Connection Spans:
Complete TCP connection lifecycle
Timing events:
OpenEvent: Connection established
TLSClientHelloEvent: TLS handshake initiated
ProtocolEvent: L7 protocol detected (e.g., http2)
DataEvent: Network data transmitted (with byte counts)
CloseEvent: Connection closed
Total span duration = service response time at network level
Example trace query:
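In the same generic style, a trace query for slow qtap Connection spans might be:

```
service.name = "qtap" AND span.name = "Connection" AND span.duration > 1000ms
```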
Correlation Example
Use Trace ID or connection.id to link logs and traces:
In Logs:
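Illustrative values, taken from the example transaction earlier in this guide; the log events carry the trace and connection identifiers:

```
Trace ID:                    87cc302ce72eb54d25c46a736db020de
event.summary.connection_id: d3sggv07p3qrvfkj173g
```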
In Traces:
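The matching Connection span carries the same identifiers:

```
Trace ID:      87cc302ce72eb54d25c46a736db020de
connection.id: d3sggv07p3qrvfkj173g
```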
Complete picture:
What: GET request returned 200 OK
HTTP duration: 2023ms (from log)
Network duration: 2368ms (from trace)
TLS handshake: 88ms (from trace events)
Server response time: ~2.0s (from DataEvent timing in trace)
Kubernetes Deployment
Using OpenTelemetry Operator
Already have OpenTelemetry Operator installed? Skip to step 2.
Install the OpenTelemetry Operator:
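The operator is installed from its published manifest; it depends on cert-manager for its admission webhooks, so install that first (commands assume cluster-admin access):

```bash
# cert-manager is a prerequisite for the operator's webhooks
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml

# Install the OpenTelemetry Operator
kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
```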
Deploy an OpenTelemetry Collector:
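A minimal OpenTelemetryCollector resource mirroring the Docker setup (OTLP in, debug exporter out); this assumes a recent operator release that accepts the structured v1beta1 config, and it creates a Service named otel-collector that qtap can target:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: deployment
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      batch: {}
    exporters:
      debug: {}
    service:
      pipelines:
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]
```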
Deploy qtap using Helm:
First, create your qtap configuration file qtap-config.yaml:
Create a Helm values file values.yaml for trace environment variables:
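The qtap chart's values aren't reproduced here; the sketch below assumes a conventional extraEnv-style list (check the chart's documented values for the real key) and again assumes Qtap reads the standard OTel environment variables. The endpoint targets the otel-collector Service created by the operator above:

```yaml
# Hypothetical values file - "extraEnv" is an assumed key name.
extraEnv:
  - name: OTEL_SERVICE_NAME
    value: "qtap"
  - name: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
    value: "http://otel-collector.default.svc.cluster.local:4317"
  - name: OTEL_EXPORTER_OTLP_TRACES_PROTOCOL
    value: "grpc"
```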
Install qtap with Helm:
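The repository URL and chart name below are placeholders; substitute the ones from the Qtap install docs:

```bash
helm repo add qtap <qtap-helm-repo-url>   # placeholder URL
helm repo update

helm install qtap qtap/qtap \
  --namespace qtap --create-namespace \
  -f values.yaml
```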
Verify the deployment:
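Check that the pods are running and that qtap reports a healthy OTel connection (the namespace and label selector assume the Helm install above):

```bash
kubectl get pods -n qtap
kubectl logs -n qtap -l app.kubernetes.io/name=qtap --tail=100 | grep -i "otel\|error"
```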
Backend-Specific Configurations
SaaS Platforms
Datadog
Datadog supports OTLP ingestion directly. Configure for logs:
For traces, set environment variables:
Honeycomb
Configure for logs:
For traces:
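Assuming Qtap honors the standard OTel environment variables (as in the Docker example above), Honeycomb's public OTLP endpoint takes the team API key as a header:

```bash
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://api.honeycomb.io"
export OTEL_EXPORTER_OTLP_TRACES_HEADERS="x-honeycomb-team=YOUR_API_KEY"
```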
New Relic
Configure for logs:
For traces, use the same endpoint via environment variables.
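Under the same assumption about standard OTel environment variables, New Relic's OTLP endpoint takes the license key as an api-key header:

```bash
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://otlp.nr-data.net:4317"
export OTEL_EXPORTER_OTLP_TRACES_HEADERS="api-key=YOUR_LICENSE_KEY"
```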
Grafana Cloud
Configure for logs:
For traces, set:
Elastic
Configure for logs:
Self-Hosted
For self-hosted backends (Jaeger, Zipkin, Grafana Tempo, etc.), point qtap at your OpenTelemetry Collector as in the Docker setup above, then configure the collector with the appropriate exporter for your backend.
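For example, to forward traces to a Jaeger or Tempo instance that accepts OTLP (the hostname jaeger:4317 is an assumption for your environment), add an otlp exporter to the collector and reference it from the traces pipeline:

```yaml
exporters:
  otlp/jaeger:
    endpoint: jaeger:4317   # your Jaeger/Tempo OTLP gRPC endpoint
    tls:
      insecure: true        # only for plaintext, internal endpoints

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]
```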
Querying and Filtering
Filter by Event Type

```
# Show only HTTP transaction summaries
event.type = "artifact_record"

# Show only TCP connections
event.type = "connection"
```

Find Slow Requests

```
# Slow HTTP requests (logs)
event.type = "artifact_record" AND event.summary.duration_ms > 1000

# Slow network connections (traces)
service.name = "qtap" AND span.name = "Connection" AND span.duration > 1000ms
```

Find Errors

```
# HTTP errors
event.type = "artifact_record" AND event.summary.response_status >= 500
```

Track Specific Endpoints

```
# Specific host
event.type = "artifact_record" AND event.summary.request_host = "api.example.com"

# Specific path pattern
event.type = "artifact_record" AND event.summary.request_path LIKE "/api/users/%"
```

Monitor HTTP/2 vs HTTP/1 Traffic

Correlate Logs and Traces

```
# Find all data for a specific Trace ID
trace.id = "87cc302ce72eb54d25c46a736db020de"

# Or use connection.id to link logs and traces
event.summary.connection_id = "d3sggv07p3qrvfkj173g"   # in logs
connection.id = "d3sggv07p3qrvfkj173g"                  # in traces
```

Troubleshooting

No Logs Appearing

Check qtap logs for OTel connection:

```
docker logs qtap | grep -i "otel\|error"
```

Common issues:

Wrong endpoint - Verify the endpoint address and port
  gRPC default: 4317
  HTTP default: 4318
  With --network=host: use localhost:4317
TLS mismatch - If the backend requires TLS, set tls.enabled: true
Authentication - Verify headers/API keys are correct
Collector not running - Check docker ps | grep otel

No Traces Appearing

Check that the trace environment variables are set on the qtap process:

```
# Docker
docker inspect qtap | grep -A 5 OTEL

# Kubernetes
kubectl describe pod qtap-xxxxx | grep -A 5 OTEL
```

Check that the collector config includes a traces pipeline:

```
service:
  pipelines:
    traces:  # ← Must have this
      receivers: [otlp]
      processors: [batch]
      exporters: [your_exporter]
```

If qtap logs show an error like the following, the collector endpoint cannot be resolved - verify the endpoint address and network settings:

```
rpc error: code = Unavailable desc = name resolver error
```

Too Many Traces

If trace volume is too high, sample traces at the collector:

```
processors:
  probabilistic_sampler:
    sampling_percentage: 10 # Sample 10% of traces
```