OpenTelemetry Integration
This guide shows you how to send qtap observability data to OpenTelemetry-compatible backends using OTLP (OpenTelemetry Protocol).
Understanding Qtap's OpenTelemetry Integration
Qtap sends two types of OpenTelemetry data that work together for complete observability:
Logs (configured via qtap.yaml event_stores)
Contains: HTTP transaction metadata (method, URL, status, duration, process)
Use for: knowing WHAT happened and WHICH process; debugging application issues
Traces (configured via environment variables)
Contains: network timing (TLS handshake, data transfer, connection lifecycle)
Use for: knowing WHEN it happened; network performance and infrastructure debugging
Both data types are automatically correlated using the same Trace ID, giving you the complete picture:
Trace ID: 87cc302ce72eb54d25c46a736db020de
Logs show: GET /api/users → 200 OK in 2023ms from /usr/bin/curl
Traces show: TLS handshake 88ms, server response after 2.0s, total 2.4s
Choose based on your needs:
Logs only: Most common - see HTTP transactions, filter by process, understand application behavior
Traces only: Network performance monitoring, TLS timing analysis
Both together: Complete observability - correlate application behavior with network timing
Prerequisites
Qtap installed and running (see Getting Started)
An OpenTelemetry-compatible backend or collector
Quick Start: Logs Only (5 minutes)
Most users start here. This sends HTTP transaction metadata to OpenTelemetry.
Step 1: Configure Qtap
Create qtap-config.yaml:
version: 2
services:
event_stores:
- type: otel # Send logs to OpenTelemetry
endpoint: "localhost:4317" # OTel Collector gRPC endpoint
protocol: grpc
service_name: "qtap"
environment: "production"
tls:
enabled: false # Set to true for production
object_stores:
- type: stdout # Or configure S3 for sensitive data
stacks:
default:
plugins:
- type: http_capture
config:
level: summary # (none|summary|details|full)
format: json # (json|text)
tap:
direction: egress # (egress|egress-external|egress-internal|ingress|all)
ignore_loopback: false # (true|false)
audit_include_dns: false # (true|false)
http:
stack: default
Step 2: Deploy OpenTelemetry Collector
Create otel-collector-config.yaml:
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 10s
exporters:
# Debug exporter - prints logs to console for testing
debug:
verbosity: detailed
# Add your backend exporter here
# otlphttp:
# endpoint: "https://your-backend.com/v1/logs"
# headers:
# api-key: "${API_KEY}"
service:
pipelines:
logs: # Logs pipeline for qtap event_stores
receivers: [otlp]
processors: [batch]
exporters: [debug] # Add your backend exporter
Create docker-compose.yaml:
services:
otel-collector:
image: otel/opentelemetry-collector:latest
container_name: otel-collector
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTPStart the collector:
docker compose up -d
Step 3: Start Qtap (Logs Only)
docker run -d --name qtap \
--user 0:0 --privileged \
--cap-add CAP_BPF --cap-add CAP_SYS_ADMIN \
--pid=host --network=host \
-v /sys:/sys \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$(pwd)/qtap-config.yaml:/app/config/qtap.yaml" \
-e TINI_SUBREAPER=1 \
--ulimit=memlock=-1 \
us-docker.pkg.dev/qpoint-edge/public/qtap:v0 \
--log-level=info \
--log-encoding=console \
--config="/app/config/qtap.yaml"Step 4: Generate Traffic and Verify
# Generate test traffic
curl https://httpbin.org/get
# Check OTel Collector logs
docker logs otel-collector
You should see two types of log events:
1. Connection Event:
LogRecord
Body: Connection: [egress-external via tcp] hostname → httpbin.org
Attributes:
event.type: connection
event.l7Protocol: http2
event.tlsVersion: 772
event.meta.connectionId: d3sggv07p3qrvfkj173g
event.source.exe: /usr/bin/curl
2. Artifact Record Event:
LogRecord
Body: Artifact stored: http_transaction
Attributes:
event.type: artifact_record
event.summary.request_method: GET
event.summary.request_host: httpbin.org
event.summary.response_status: 200
event.summary.duration_ms: 2023
event.summary.connection_id: d3sggv07p3qrvfkj173g
Success! Qtap is sending HTTP transaction logs to OpenTelemetry. Use event.summary.connection_id to correlate connection metadata with HTTP transactions.
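For example, a query along these lines (attribute names taken from the events above; exact syntax depends on your backend) returns the connection event behind a given HTTP transaction:
event.type = "connection" AND event.meta.connectionId = "d3sggv07p3qrvfkj173g"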
Adding Traces for Network Timing
Want to see TLS handshake duration, data transfer timing, and network-level performance? Add traces.
Step 1: Update OTel Collector Config
Add a traces pipeline to otel-collector-config.yaml:
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
processors:
batch:
timeout: 10s
exporters:
debug/logs:
verbosity: detailed
debug/traces:
verbosity: detailed
service:
pipelines:
logs:
receivers: [otlp]
processors: [batch]
exporters: [debug/logs]
traces: # Add traces pipeline
receivers: [otlp]
processors: [batch]
exporters: [debug/traces]
Restart the collector:
docker compose restart otel-collector
Step 2: Restart Qtap with Trace Environment Variables
docker rm -f qtap
docker run -d --name qtap \
--user 0:0 --privileged \
--cap-add CAP_BPF --cap-add CAP_SYS_ADMIN \
--pid=host --network=host \
-v /sys:/sys \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$(pwd)/qtap-config.yaml:/app/config/qtap.yaml" \
-e TINI_SUBREAPER=1 \
-e OTEL_TRACES_EXPORTER=otlp \
-e OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
-e OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 \
--ulimit=memlock=-1 \
us-docker.pkg.dev/qpoint-edge/public/qtap:v0 \
--log-level=info \
--log-encoding=console \
--config="/app/config/qtap.yaml"Step 3: Verify Traces
# Generate traffic
curl https://httpbin.org/get
# Check for traces
docker logs otel-collector | grep "Span #"
You should see Connection spans:
Span
Name: Connection
Trace ID: 87cc302ce72eb54d25c46a736db020de
Attributes:
connection.id: d3sggv07p3qrvfkj173g
Events:
- OpenEvent (t=0ms)
- TLSClientHelloEvent (t=88ms)
- ProtocolEvent[http2] (t=343ms)
- DataEvent (request sent)
- DataEvent (response received, +2.0s)
- CloseEvent
Complete observability! Logs and traces share the same Trace ID for automatic correlation.
Understanding the Data
What Logs Contain
Connection Events (event.type: connection):
TCP connection metadata
Protocol detection (http1, http2, tcp)
TLS version and inspection status
Bytes sent/received
Process information (exe path, user ID)
Artifact Record Events (event.type: artifact_record):
HTTP method, URL, path, status code
User agent and content type
Duration in milliseconds
Process information
Reference URL to full request/response in object stores
Example log query:
event.type = "artifact_record" AND event.summary.response_status >= 500What Traces Contain
Connection Spans:
Complete TCP connection lifecycle
Timing events:
OpenEvent: Connection established
TLSClientHelloEvent: TLS handshake initiated
ProtocolEvent: L7 protocol detected (e.g., http2)
DataEvent: Network data transmitted (with byte counts)
CloseEvent: Connection closed
Total span duration = service response time at network level
Example trace query:
service.name="qtap" AND span.name="Connection" AND span.duration > 1000msCorrelation Example
Use Trace ID or connection.id to link logs and traces:
In Logs:
{
"Trace ID": "87cc302ce72eb54d25c46a736db020de",
"event.type": "artifact_record",
"event.summary.connection_id": "d3sggv07p3qrvfkj173g",
"event.summary.request_method": "GET",
"event.summary.response_status": 200,
"event.summary.duration_ms": 2023
}
In Traces:
{
"Trace ID": "87cc302ce72eb54d25c46a736db020de",
"Span Name": "Connection",
"connection.id": "d3sggv07p3qrvfkj173g",
"Duration": 2368ms,
"Events": [
{"Name": "TLSClientHelloEvent", "Timestamp": "+88ms"},
{"Name": "DataEvent", "Timestamp": "+2023ms"}
]
}
Complete picture:
What: GET request returned 200 OK
HTTP duration: 2023ms (from log)
Network duration: 2368ms (from trace)
TLS handshake: 88ms (from trace events)
Server response time: ~2.0s (from DataEvent timing in trace)
Kubernetes Deployment
Using OpenTelemetry Operator
Install the OpenTelemetry Operator:
kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
Deploy an OpenTelemetry Collector:
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: qtap-collector
namespace: monitoring
spec:
mode: daemonset
config: |
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
processors:
batch:
timeout: 10s
exporters:
otlphttp:
endpoint: "https://your-backend.com/v1/logs"
headers:
api-key: "${API_KEY}"
service:
pipelines:
logs:
receivers: [otlp]
processors: [batch]
exporters: [otlphttp]
traces:
receivers: [otlp]
processors: [batch]
exporters: [otlphttp]
Deploy qtap using Helm:
First, create your qtap configuration file qtap-config.yaml:
version: 2
services:
event_stores:
- type: otel
endpoint: "qtap-collector-collector.monitoring.svc.cluster.local:4317"
protocol: grpc
service_name: "qtap"
environment: "production"
object_stores:
- type: s3
endpoint: "minio.storage.svc.cluster.local:9000"
bucket: "qpoint-objects"
region: "us-east-1"
access_url: "http://minio.storage.svc.cluster.local:9000/{{BUCKET}}/{{DIGEST}}"
insecure: true
access_key:
type: env
value: S3_ACCESS_KEY
secret_key:
type: env
value: S3_SECRET_KEY
stacks:
default_stack:
plugins:
- type: http_capture
config:
level: summary
format: json
tap:
direction: egress
ignore_loopback: true
audit_include_dns: false
http:
stack: default_stack
Create a Helm values file values.yaml for the trace environment variables:
extraEnv:
- name: OTEL_TRACES_EXPORTER
value: "otlp"
- name: OTEL_EXPORTER_OTLP_PROTOCOL
value: "grpc"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: "http://qtap-collector-collector.monitoring.svc.cluster.local:4317"Install qtap with Helm:
# Add Qpoint Helm repository
helm repo add qpoint https://helm.qpoint.io/
helm repo update
# Install qtap with config and trace environment variables
helm install qtap qpoint/qtap \
-n qpoint \
--create-namespace \
--set-file config=./qtap-config.yaml \
--set logLevel=warn \
-f values.yamlVerify the deployment:
kubectl get pods -n qpoint
kubectl logs -n qpoint -l app.kubernetes.io/name=qtap --tail=50
Backend-Specific Configurations
SaaS Platforms
Datadog
Datadog supports OTLP ingestion directly. Configure for logs:
event_stores:
- type: otel
endpoint: "https://http-intake.logs.datadoghq.com/v2/logs"
protocol: http
service_name: "qtap"
environment: "production"
headers:
DD-API-KEY:
type: env
value: DATADOG_API_KEY
tls:
enabled: true
For traces, set environment variables:
OTEL_TRACES_EXPORTER=otlp
OTEL_EXPORTER_OTLP_ENDPOINT=https://trace.agent.datadoghq.com
OTEL_EXPORTER_OTLP_HEADERS=DD-API-KEY=${DATADOG_API_KEY}
Honeycomb
Configure for logs:
event_stores:
- type: otel
endpoint: "api.honeycomb.io:443"
protocol: grpc
service_name: "qtap"
environment: "production"
headers:
x-honeycomb-team:
type: env
value: HONEYCOMB_API_KEY
x-honeycomb-dataset:
type: text
value: "qtap-logs"
tls:
enabled: true
For traces:
OTEL_TRACES_EXPORTER=otlp
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io
OTEL_EXPORTER_OTLP_HEADERS=x-honeycomb-team=${HONEYCOMB_API_KEY},x-honeycomb-dataset=qtap-traces
New Relic
Configure for logs:
event_stores:
- type: otel
endpoint: "otlp.nr-data.net:4317"
protocol: grpc
service_name: "qtap"
environment: "production"
headers:
api-key:
type: env
value: NEW_RELIC_LICENSE_KEY
tls:
enabled: true
For traces, use the same endpoint via environment variables.
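As a sketch, the trace environment variables would mirror the log configuration above, reusing the otlp.nr-data.net endpoint and api-key header:
OTEL_TRACES_EXPORTER=otlp
OTEL_EXPORTER_OTLP_PROTOCOL=grpc
OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4317
OTEL_EXPORTER_OTLP_HEADERS=api-key=${NEW_RELIC_LICENSE_KEY}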
Grafana Cloud
Configure for logs:
event_stores:
- type: otel
endpoint: "otlp-gateway-prod-us-central-0.grafana.net:443"
protocol: grpc
service_name: "qtap"
environment: "production"
headers:
authorization:
type: env
value: GRAFANA_CLOUD_API_KEY # Format: "Basic base64(instanceID:apiKey)"
tls:
enabled: true
For traces, set:
OTEL_TRACES_EXPORTER=otlp
OTEL_EXPORTER_OTLP_ENDPOINT=https://tempo-prod-us-central-0.grafana.net:443
OTEL_EXPORTER_OTLP_HEADERS=authorization=${GRAFANA_CLOUD_API_KEY}
Elastic
Configure for logs:
event_stores:
- type: otel
endpoint: "https://your-deployment.es.us-central1.gcp.cloud.es.io:443"
protocol: http
service_name: "qtap"
environment: "production"
headers:
Authorization:
type: env
value: ELASTIC_APM_SECRET_TOKEN
tls:
enabled: true
Self-Hosted
For self-hosted backends (Jaeger, Zipkin, Grafana Tempo, etc.), point qtap to your OpenTelemetry Collector, then configure the collector to export to your backend:
# Qtap config - same for all self-hosted backends
event_stores:
- type: otel
endpoint: "otel-collector.monitoring.svc.cluster.local:4317"
protocol: grpc
# Trace environment variables
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector.monitoring.svc.cluster.local:4317
Then configure your OTel Collector with the appropriate exporter for your backend.
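As an illustration, here is a minimal collector exporter sketch for a self-hosted backend that accepts OTLP natively (Grafana Tempo shown; the service address is a placeholder for your own deployment, and Jaeger follows the same pattern):
exporters:
  otlp/tempo:
    endpoint: "tempo.monitoring.svc.cluster.local:4317"  # placeholder address
    tls:
      insecure: true  # enable TLS if your backend requires it
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/tempo]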
Querying and Filtering
Filter by Event Type
# Show only HTTP transaction summaries
event.type = "artifact_record"
# Show only TCP connections
event.type = "connection"Find Slow Requests
# Slow HTTP requests (logs)
event.type = "artifact_record" AND event.summary.duration_ms > 1000
# Slow network connections (traces)
service.name = "qtap" AND span.name = "Connection" AND span.duration > 1000msFind Errors
# HTTP errors
event.type = "artifact_record" AND event.summary.response_status >= 500Track Specific Endpoints
# Specific host
event.type = "artifact_record" AND event.summary.request_host = "api.example.com"
# Specific path pattern
event.type = "artifact_record" AND event.summary.request_path LIKE "/api/users/%"Monitor HTTP/2 vs HTTP/1 Traffic
# HTTP/2 connections (logs)
event.type = "connection" AND event.l7Protocol = "http2"
# HTTP/1 connections (logs)
event.type = "connection" AND event.l7Protocol = "http1"Correlate Logs and Traces
# Find all data for a specific Trace ID
trace.id = "87cc302ce72eb54d25c46a736db020de"
# Or use connection.id to link logs and traces
event.summary.connection_id = "d3sggv07p3qrvfkj173g" # in logs
connection.id = "d3sggv07p3qrvfkj173g" # in tracesTroubleshooting
No Logs Appearing
Check qtap logs for OTel connection:
docker logs qtap | grep -i "otel\|error"
Common issues:
Wrong endpoint - Verify the endpoint address and port (gRPC default: 4317, HTTP default: 4318). With --network=host, use localhost:4317
TLS mismatch - If the backend requires TLS, set tls.enabled: true (see the sketch after this list)
Authentication - Verify headers/API keys are correct
Collector not running - Check docker ps | grep otel
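For the TLS case, a minimal sketch of the event store with TLS enabled (the endpoint is a placeholder; substitute your backend or collector address):
event_stores:
  - type: otel
    endpoint: "your-backend.example.com:4317"  # placeholder endpoint
    protocol: grpc
    tls:
      enabled: true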
No Traces Appearing
Verify environment variables are set:
# Docker
docker inspect qtap | grep -A 5 OTEL
# Kubernetes
kubectl describe pod qtap-xxxxx | grep -A 5 OTEL
Check OTel Collector has traces pipeline:
service:
pipelines:
traces: # ← Must have this
receivers: [otlp]
processors: [batch]
exporters: [your_exporter]
Logs But No Traces (or vice versa)
Logs and traces are independent:
Logs require an event_stores entry with type: otel in qtap.yaml
Traces require the OTEL_TRACES_EXPORTER=otlp environment variable
Ensure both are configured if you want both data types.
Connection Refused Errors
rpc error: code = Unavailable desc = name resolver error
Solutions:
Check if OTel Collector is running:
docker ps | grep otel
Verify network connectivity between qtap and collector (see the check below)
Check firewall rules for port 4317/4318
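A quick way to test reachability from the qtap host (assumes netcat is installed; substitute your collector's address):
nc -zv localhost 4317  # OTLP gRPC
nc -zv localhost 4318  # OTLP HTTP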
High Trace Volume
Qtap creates a Connection span for every TCP connection. To reduce volume:
Use sampling in your OTel Collector:
processors:
probabilistic_sampler:
sampling_percentage: 10 # Sample 10% of traces
Or filter by instrumentation scope:
processors:
filter/traces:
traces:
exclude:
match_type: strict
instrumentation_libraries:
- name: qtap.eventstore # Exclude event store internal traces
Debugging with stdout Protocol
For local debugging, use the stdout protocol to see OTLP data printed to qtap logs:
event_stores:
- type: otel
protocol: stdout # Prints OTLP data to console
service_name: "qtap"
environment: "development"Then check qtap logs:
docker logs qtap
Best Practices
Use TLS in production - Always enable tls.enabled: true for production
Store sensitive data securely - Use S3 object stores for full HTTP payloads
Filter appropriately - Use qtap's filters to avoid capturing unnecessary traffic
Set resource attributes - Use service_name and environment for filtering
Monitor qtap itself - Set up alerts on qtap's Prometheus metrics
Use batch processing - OTel Collector's batch processor reduces API calls
Start with logs - Most users need HTTP transaction metadata (logs) first
Add traces for troubleshooting - Enable traces when debugging network performance
Use correlation - Link logs and traces via Trace ID or connection.id for complete observability
Configuration Quick Reference
Logs Only
qtap.yaml:
services:
event_stores:
- type: otel
endpoint: "localhost:4317"
protocol: grpc
Environment variables: None needed
Traces Only
qtap.yaml: any event store works (stdout or otel; event stores do not affect traces)
Environment variables:
OTEL_TRACES_EXPORTER=otlp
OTEL_EXPORTER_OTLP_PROTOCOL=grpc
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
Both Logs and Traces
qtap.yaml:
services:
event_stores:
- type: otel
endpoint: "localhost:4317"
protocol: grpc
Environment variables:
OTEL_TRACES_EXPORTER=otlp
OTEL_EXPORTER_OTLP_PROTOCOL=grpc
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
Next Steps
Configure S3 Object Storage for sensitive payload data
Set up Prometheus Metrics for qtap health monitoring
Configure Traffic Filters to reduce noise
Learn about Tracing for detailed network-level observability