OpenTelemetry Integration
This guide shows you how to send qtap observability data to OpenTelemetry-compatible backends using OTLP (OpenTelemetry Protocol).
Understanding Qtap's OpenTelemetry Integration
Qtap sends two types of OpenTelemetry data that work together for complete observability:
Logs are configured via event_stores in qtap.yaml. They carry HTTP transaction metadata (method, URL, status, duration, process) and tell you WHAT happened and WHICH process was involved, which is what you need to debug application issues.
Traces are configured via environment variables. They carry network timing (TLS handshake, data transfer, connection lifecycle) and tell you WHEN it happened, which is what you need for network performance monitoring and infrastructure debugging.
Both data types are automatically correlated using the same Trace ID, giving you the complete picture:
Trace ID: 87cc302ce72eb54d25c46a736db020de
Logs show: GET /api/users → 200 OK in 2023ms from /usr/bin/curl
Traces show: TLS handshake 88ms, server response after 2.0s, total 2.4s
Choose based on your needs:
Logs only: Most common - see HTTP transactions, filter by process, understand application behavior
Traces only: Network performance monitoring, TLS timing analysis
Both together: Complete observability - correlate application behavior with network timing
Prerequisites
Qtap installed and running (see Getting Started)
An OpenTelemetry-compatible backend or collector
Quick Start: Logs Only (5 minutes)
Most users start here. This sends HTTP transaction metadata to OpenTelemetry.
Step 1: Configure Qtap
Create qtap-config.yaml:
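The exact schema depends on your qtap version, so treat the following as a sketch of the relevant fragment rather than a complete file: the piece this guide relies on is an event_stores entry of type otel pointing at the collector's OTLP gRPC port (the tls block mirrors the tls.enabled option referenced in the troubleshooting section). Check the qtap configuration reference for the full file.

```yaml
# Fragment of qtap-config.yaml (illustrative; nesting and field names may
# differ by qtap version -- see the qtap configuration reference)
event_stores:
  - type: otel               # emit HTTP transaction metadata as OTLP logs
    endpoint: localhost:4317 # OTel Collector OTLP gRPC endpoint
    tls:
      enabled: false         # set to true when the backend requires TLS
```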
Step 2: Deploy OpenTelemetry Collector
Create otel-collector-config.yaml:
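A minimal collector configuration that accepts OTLP and prints what it receives is enough to verify the pipeline end to end. The debug exporter (the replacement for the deprecated logging exporter in recent collector releases) writes incoming records to the collector's own stdout; swap in your backend's exporter once the flow works.

```yaml
# otel-collector-config.yaml -- minimal logs-only pipeline
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch: {}

exporters:
  debug:
    verbosity: detailed   # print full log records to the collector's stdout

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```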
Create docker-compose.yaml:
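A Compose file that runs the contrib distribution of the collector and mounts the config created above; the image tag and mount path are choices, not requirements.

```yaml
# docker-compose.yaml
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otelcol/config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol/config.yaml:ro
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
```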
Start the collector:
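With Docker Compose:

```bash
docker compose up -d
docker compose logs otel-collector   # confirm it started without config errors
```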
Step 3: Start Qtap (Logs Only)
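The exact run command depends on how you installed qtap; the sketch below only illustrates the shape of a container-based run (the image name, tag, and --config flag are placeholders, not real values). Copy the command from the Getting Started guide and point it at qtap-config.yaml.

```bash
# Placeholder command -- substitute the image and flags from Getting Started
docker run -d --name qtap \
  --privileged --pid=host --network=host \
  -v "$(pwd)/qtap-config.yaml:/config/qtap.yaml:ro" \
  <qtap-image>:<tag> --config=/config/qtap.yaml
```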
Step 4: Generate Traffic and Verify
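Any outbound HTTPS request from the host will do; the curl target below is arbitrary. Then watch the collector's output for the incoming records.

```bash
curl -s https://api.github.com/ > /dev/null   # generate some HTTPS traffic

docker compose logs -f otel-collector         # watch for incoming OTLP log records
```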
You should see two types of log events:
1. Connection Event:
2. Artifact Record Event:
Success! Qtap is sending HTTP transaction logs to OpenTelemetry. Use event.summary.connection_id to correlate connection metadata with HTTP transactions.
Adding Traces for Network Timing
Want to see TLS handshake duration, data transfer timing, and network-level performance? Add traces.
Step 1: Update OTel Collector Config
Add a traces pipeline to otel-collector-config.yaml:
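With the Quick Start config above, the service block ends up with two pipelines that share the same receiver and exporter:

```yaml
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
    traces:                      # new pipeline for qtap's Connection spans
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```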
Restart the collector:
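With Docker Compose, a restart picks up the edited config file:

```bash
docker compose restart otel-collector
```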
Step 2: Restart Qtap with Trace Environment Variables
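Traces are switched on with the standard OpenTelemetry SDK environment variables; pass them to the qtap process or container (for example with docker run -e). OTEL_TRACES_EXPORTER=otlp is the one this guide depends on; the others below are the usual companions, and qtap's tracing documentation lists the exact set it honors.

```bash
export OTEL_TRACES_EXPORTER=otlp                         # enable OTLP trace export
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 # collector's gRPC endpoint
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_SERVICE_NAME=qtap                            # optional: names the trace source
```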
Step 3: Verify Traces
You should see Connection spans:
Complete observability! Logs and traces share the same Trace ID for automatic correlation.
Understanding the Data
What Logs Contain
Connection Events (event.type: connection):
TCP connection metadata
Protocol detection (http1, http2, tcp)
TLS version and inspection status
Bytes sent/received
Process information (exe path, user ID)
Artifact Record Events (event.type: artifact_record):
HTTP method, URL, path, status code
User agent and content type
Duration in milliseconds
Process information
Reference URL to full request/response in object stores
Example log query:
What Traces Contain
Connection Spans:
Complete TCP connection lifecycle
Timing events:
OpenEvent: Connection established
TLSClientHelloEvent: TLS handshake initiated
ProtocolEvent: L7 protocol detected (e.g., http2)
DataEvent: Network data transmitted (with byte counts)
CloseEvent: Connection closed
Total span duration = service response time at network level
Example trace query:
Correlation Example
Use Trace ID or connection.id to link logs and traces:
In Logs:
In Traces:
Complete picture:
What: GET request returned 200 OK
HTTP duration: 2023ms (from log)
Network duration: 2368ms (from trace)
TLS handshake: 88ms (from trace events)
Server response time: ~2.0s (from DataEvent timing in trace)
Kubernetes Deployment
Using OpenTelemetry Operator
Install the OpenTelemetry Operator:
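The operator's published manifest depends on cert-manager, so install that first:

```bash
# cert-manager is a prerequisite for the operator's webhooks
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml

# OpenTelemetry Operator
kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
```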
Deploy an OpenTelemetry Collector:
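A minimal OpenTelemetryCollector resource that accepts OTLP and logs what it receives; replace the debug exporter with your backend's exporter for real use. The operator creates a Service named <name>-collector, so qtap in the same namespace would point at otel-collector:4317.

```yaml
# otel-collector.yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
    processors:
      batch: {}
    exporters:
      debug: {}
    service:
      pipelines:
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]
```

Apply it with kubectl apply -f otel-collector.yaml.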
Deploy qtap using Helm:
First, create your qtap configuration file qtap-config.yaml:
Create a Helm values file values.yaml for trace environment variables:
Install qtap with Helm:
Verify the deployment:
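The label selector below assumes the chart labels pods with app.kubernetes.io/name=qtap; adjust it to whatever your release actually uses.

```bash
kubectl get pods -l app.kubernetes.io/name=qtap     # label is an assumption; match your chart
kubectl logs -l app.kubernetes.io/name=qtap --tail=100

# The operator names the collector Deployment "<name>-collector"
kubectl logs deploy/otel-collector --tail=100       # look for incoming OTLP data
```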
Backend-Specific Configurations
SaaS Platforms
Datadog
Datadog supports OTLP ingestion directly. Configure for logs:
For traces, set environment variables:
Honeycomb
Configure for logs:
For traces:
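Honeycomb accepts OTLP at its public endpoint and authenticates with an x-honeycomb-team header carrying your API key, so traces only need the standard environment variables:

```bash
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io:443
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=YOUR_API_KEY"
```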
New Relic
Configure for logs:
For traces, use the same endpoint via environment variables.
Grafana Cloud
Configure for logs:
For traces, set:
Elastic
Configure for logs:
Self-Hosted
For self-hosted backends (Jaeger, Zipkin, Grafana Tempo, etc.), point qtap to your OpenTelemetry Collector, then configure the collector to export to your backend:
Then configure your OTel Collector with the appropriate exporter for your backend.
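For example, Jaeger and Grafana Tempo both accept OTLP natively, so the collector side can be a plain OTLP exporter (the hostname below is a placeholder):

```yaml
exporters:
  otlp/backend:
    endpoint: tempo.internal.example:4317   # placeholder backend address
    tls:
      insecure: true                        # only for non-TLS internal endpoints

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/backend]
```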
Querying and Filtering
Filter by Event Type
Find Slow Requests
Find Errors
Track Specific Endpoints
Monitor HTTP/2 vs HTTP/1 Traffic
Correlate Logs and Traces
Troubleshooting
No Logs Appearing
Check qtap logs for OTel connection:
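Assuming qtap runs in a container named qtap (adjust for your setup), grep its logs for OTel exporter messages:

```bash
docker logs qtap 2>&1 | grep -i otel
```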
Common issues:
Wrong endpoint - Verify the endpoint address and port. The gRPC default is 4317 and the HTTP default is 4318; with --network=host, use localhost:4317
TLS mismatch - If the backend requires TLS, set tls.enabled: true
Authentication - Verify headers/API keys are correct
Collector not running - Check docker ps | grep otel
No Traces Appearing
Verify environment variables are set:
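For a containerized qtap (again assuming the container is named qtap):

```bash
docker exec qtap env | grep '^OTEL_'
```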
Check OTel Collector has traces pipeline:
Logs But No Traces (or vice versa)
Logs and traces are independent:
Logs require an event_stores entry with type: otel in qtap.yaml
Traces require the OTEL_TRACES_EXPORTER=otlp environment variable
Ensure both are configured if you want both data types.
Connection Refused Errors
Solutions:
Check if the OTel Collector is running: docker ps | grep otel
Verify network connectivity between qtap and the collector
Check firewall rules for port 4317/4318
High Trace Volume
Qtap creates a Connection span for every TCP connection. To reduce volume:
Use sampling in your OTel Collector:
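The probabilistic sampler (available in the contrib distribution) keeps a fixed percentage of traces; add it ahead of the batch processor in the traces pipeline:

```yaml
processors:
  probabilistic_sampler:
    sampling_percentage: 10   # keep roughly 10% of traces
  batch: {}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler, batch]
      exporters: [debug]
```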
Or filter by instrumentation scope:
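The filter processor (also in the contrib distribution) drops spans whose OTTL condition evaluates to true; the scope name below is a placeholder, so inspect your exported spans for the actual instrumentation scope qtap uses.

```yaml
processors:
  filter/scope:
    error_mode: ignore
    traces:
      span:
        - 'instrumentation_scope.name == "scope.to.drop"'   # placeholder scope name
```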
Debugging with stdout Protocol
For local debugging, use the stdout protocol to see OTLP data printed to qtap logs:
Then check qtap logs:
Best Practices
Use TLS in production - Always enable tls.enabled: true in production
Store sensitive data securely - Use S3 object stores for full HTTP payloads
Filter appropriately - Use qtap's filters to avoid capturing unnecessary traffic
Set resource attributes - Use service_name and environment for filtering
Monitor qtap itself - Set up alerts on qtap's Prometheus metrics
Use batch processing - OTel Collector's batch processor reduces API calls
Start with logs - Most users need HTTP transaction metadata (logs) first
Add traces for troubleshooting - Enable traces when debugging network performance
Use correlation - Link logs and traces via Trace ID or connection.id for complete observability
Configuration Quick Reference
Logs Only
qtap.yaml:
Environment variables: None needed
Traces Only
qtap.yaml: any event store type (stdout or otel); the event store choice does not affect traces
Environment variables:
Both Logs and Traces
qtap.yaml:
Environment variables:
Next Steps
Configure S3 Object Storage for sensitive payload data
Set up Prometheus Metrics for qtap health monitoring
Configure Traffic Filters to reduce noise
Learn about Tracing for detailed network-level observability
Additional Resources