OpenTelemetry Integration

This guide shows you how to send qtap observability data to OpenTelemetry-compatible backends using OTLP (OpenTelemetry Protocol).

Understanding Qtap's OpenTelemetry Integration

Qtap sends two types of OpenTelemetry data that work together for complete observability:

| Data Type | Configuration | What It Contains | Use Case |
| --- | --- | --- | --- |
| Logs | qtap.yaml event_stores | HTTP transaction metadata (method, URL, status, duration, process) | Know WHAT happened and WHICH process; debug application issues |
| Traces | Environment variables | Network timing (TLS handshake, data transfer, connection lifecycle) | Know WHEN it happened; network performance, infrastructure debugging |

Both data types are automatically correlated using the same Trace ID, giving you the complete picture:

Trace ID: 87cc302ce72eb54d25c46a736db020de

Logs show:     GET /api/users → 200 OK in 2023ms from /usr/bin/curl
Traces show:   TLS handshake 88ms, server response after 2.0s, total 2.4s

Choose based on your needs:

  • Logs only: Most common - see HTTP transactions, filter by process, understand application behavior

  • Traces only: Network performance monitoring, TLS timing analysis

  • Both together: Complete observability - correlate application behavior with network timing

Prerequisites

  • Qtap installed and running (see Getting Started)

  • An OpenTelemetry-compatible backend or collector

Quick Start: Logs Only (5 minutes)

Most users start here. This sends HTTP transaction metadata to OpenTelemetry.

Step 1: Configure Qtap

Create qtap-config.yaml:
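
A minimal sketch using the event_stores pattern this guide references; exact field names can vary by qtap version, so check the configuration reference for the precise schema:

```yaml
services:
  event_stores:
    - type: otel               # send HTTP transaction metadata as OTLP logs
      endpoint: localhost:4317 # OTel Collector gRPC endpoint (HTTP uses 4318)
      tls:
        enabled: false         # enable for production backends
```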

Step 2: Deploy OpenTelemetry Collector

Already have an OpenTelemetry Collector? Update the endpoint in Step 1 to point to your collector and skip to Step 3.

Create otel-collector-config.yaml:
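
A minimal collector config that accepts OTLP on the default ports and prints everything to the collector's stdout via the debug exporter (swap debug for your backend's exporter later):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch: {}

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```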

Create docker-compose.yaml:
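
This runs the contrib collector image with the config above:

```yaml
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
```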

Start the collector:
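
```bash
docker compose up -d
```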

Step 3: Start Qtap (Logs Only)
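
The exact run command comes from your install method in Getting Started; this sketch assumes the Docker deployment with host networking so qtap can reach the collector on localhost. Image name and mount path are illustrative:

```bash
# Illustrative: image name, mount path, and flags come from your
# Getting Started install; the key point is no OTEL_* variables.
docker run -d --name qtap \
  --privileged --pid=host --network=host \
  -v "$(pwd)/qtap-config.yaml:/config/qtap.yaml" \
  qpoint/qtap:latest
```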

Note: No trace environment variables are set. This configuration sends logs only.

Step 4: Generate Traffic and Verify
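
Generate a few HTTPS requests from the host, then watch the collector's output:

```bash
curl -s https://api.github.com/users/octocat > /dev/null

docker compose logs -f otel-collector
```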

You should see two types of log events:

1. Connection Event:
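
Attribute names below are illustrative, assembled from the field list in "What Logs Contain" later in this guide; your qtap version's exact keys may differ:

```json
{
  "event.type": "connection",
  "protocol": "http2",
  "tls.version": "TLSv1.3",
  "bytes.sent": 517,
  "bytes.received": 3892,
  "process.exe": "/usr/bin/curl"
}
```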

2. Artifact Record Event:
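
Again illustrative, reusing the example transaction from earlier in this guide (the host in the URL is a placeholder):

```json
{
  "event.type": "artifact_record",
  "method": "GET",
  "url": "https://api.example.com/api/users",
  "status": 200,
  "duration_ms": 2023,
  "process.exe": "/usr/bin/curl"
}
```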

Adding Traces for Network Timing

Want to see TLS handshake duration, data transfer timing, and network-level performance? Add traces.

Step 1: Update OTel Collector Config

Add a traces pipeline to otel-collector-config.yaml:
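
The receivers, processors, and exporters from the logs setup are reused; only the service section changes:

```yaml
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
    traces:              # new pipeline
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```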

Restart the collector:
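
```bash
docker compose restart otel-collector
```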

Step 2: Restart Qtap with Trace Environment Variables
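
Same sketch as before, plus the standard OTel SDK environment variables. OTEL_TRACES_EXPORTER=otlp is the one qtap requires for traces; the others are the usual OTLP exporter settings:

```bash
docker rm -f qtap

docker run -d --name qtap \
  --privileged --pid=host --network=host \
  -v "$(pwd)/qtap-config.yaml:/config/qtap.yaml" \
  -e OTEL_TRACES_EXPORTER=otlp \
  -e OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 \
  -e OTEL_SERVICE_NAME=qtap \
  qpoint/qtap:latest
```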

What changed: Added trace environment variables. Qtap now sends both logs and traces.

Step 3: Verify Traces

You should see Connection spans:
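
An illustrative rendering using the timing events described under "What Traces Contain" below; the debug exporter's actual output is more verbose:

```
Span: Connection (trace_id: 87cc302ce72eb54d25c46a736db020de, duration: 2.368s)
  OpenEvent              +0ms      connection established
  TLSClientHelloEvent    +2ms      TLS handshake initiated
  ProtocolEvent          +90ms     http2 detected
  DataEvent              +2012ms   response bytes received
  CloseEvent             +2368ms   connection closed
```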

Understanding the Data

What Logs Contain

Connection Events (event.type: connection):

  • TCP connection metadata

  • Protocol detection (http1, http2, tcp)

  • TLS version and inspection status

  • Bytes sent/received

  • Process information (exe path, user ID)

Artifact Record Events (event.type: artifact_record):

  • HTTP method, URL, path, status code

  • User agent and content type

  • Duration in milliseconds

  • Process information

  • Reference URL to full request/response in object stores

Example log query:
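
Query syntax depends on your backend; as a generic attribute filter, finding slow transactions looks like:

```
event.type = "artifact_record" AND duration_ms > 1000
```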

What Traces Contain

Connection Spans:

  • Complete TCP connection lifecycle

  • Timing events:

    • OpenEvent: Connection established

    • TLSClientHelloEvent: TLS handshake initiated

    • ProtocolEvent: L7 protocol detected (e.g., http2)

    • DataEvent: Network data transmitted (with byte counts)

    • CloseEvent: Connection closed

  • Total span duration = service response time at network level

Example trace query:
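
Again backend-dependent; in generic form, long-lived qtap connection spans:

```
name = "Connection" AND duration > 2s
```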

Correlation Example

Use Trace ID or connection.id to link logs and traces:

In Logs:
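
An illustrative log record for the transaction above:

```
trace_id: 87cc302ce72eb54d25c46a736db020de
GET /api/users → 200 OK, duration_ms: 2023, process.exe: /usr/bin/curl
```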

In Traces:
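
The matching illustrative span:

```
trace_id: 87cc302ce72eb54d25c46a736db020de
Span: Connection, duration: 2368ms
  TLSClientHelloEvent +88ms, DataEvent +2012ms, CloseEvent +2368ms
```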

Complete picture:

  • What: GET request returned 200 OK

  • HTTP duration: 2023ms (from log)

  • Network duration: 2368ms (from trace)

  • TLS handshake: 88ms (from trace events)

  • Server response time: ~2.0s (from DataEvent timing in trace)

Kubernetes Deployment

Using OpenTelemetry Operator

Already have OpenTelemetry Operator installed? Skip to step 2.

  1. Install the OpenTelemetry Operator:
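
The operator requires cert-manager; both install with kubectl:

```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml

kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
```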

  2. Deploy an OpenTelemetry Collector:
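
A minimal OpenTelemetryCollector resource with logs and traces pipelines (apply it with kubectl apply -f):

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
    processors:
      batch: {}
    exporters:
      debug: {}
    service:
      pipelines:
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]
```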

  3. Deploy qtap using Helm:

First, create your qtap configuration file qtap-config.yaml:
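
Same assumed event_stores schema as the quick start, pointing at the Service the operator creates (named <name>-collector by default):

```yaml
services:
  event_stores:
    - type: otel
      endpoint: otel-collector.default.svc.cluster.local:4317
      tls:
        enabled: false
```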

Create a Helm values file values.yaml for trace environment variables:
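
The key used to inject environment variables depends on the qtap chart; extraEnv here is a hypothetical placeholder, so check the chart's values for the real one:

```yaml
extraEnv:
  - name: OTEL_TRACES_EXPORTER
    value: otlp
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: http://otel-collector.default.svc.cluster.local:4317
  - name: OTEL_SERVICE_NAME
    value: qtap
```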

Install qtap with Helm:
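
The repository URL and chart name below are illustrative; use the ones from the qtap installation docs, and pass the config file the way the chart expects:

```bash
helm repo add qpoint https://helm.qpoint.io   # illustrative repo URL
helm repo update
helm install qtap qpoint/qtap -f values.yaml
```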

Verify the deployment:
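
Check the pods and look for a successful OTel connection in the logs (resource names depend on your release):

```bash
kubectl get pods                               # qtap should be Running on each node
kubectl logs daemonset/qtap | grep -i otel     # resource name is illustrative
```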

Backend-Specific Configurations

SaaS Platforms

Datadog

Datadog supports OTLP ingestion directly. Configure for logs:
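
One common setup routes through a Datadog Agent with OTLP ingestion enabled; point the event store at the Agent's OTLP gRPC port (same schema assumptions as the quick start):

```yaml
services:
  event_stores:
    - type: otel
      endpoint: datadog-agent:4317   # hostname is illustrative
      tls:
        enabled: false               # Agent is typically local/trusted
```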

For traces, set environment variables:
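
```bash
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://datadog-agent:4317   # illustrative host
```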

Honeycomb

Configure for logs:
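
Honeycomb ingests OTLP directly at api.honeycomb.io; authentication is the x-honeycomb-team header (the headers key is an assumption about qtap's schema):

```yaml
services:
  event_stores:
    - type: otel
      endpoint: api.honeycomb.io:443
      tls:
        enabled: true
      headers:
        x-honeycomb-team: <your-api-key>
```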

For traces:
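
```bash
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=${HONEYCOMB_API_KEY}"
```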

New Relic

Configure for logs:
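
New Relic's OTLP endpoint is otlp.nr-data.net, with your license key in the api-key header (same schema assumptions):

```yaml
services:
  event_stores:
    - type: otel
      endpoint: otlp.nr-data.net:4317
      tls:
        enabled: true
      headers:
        api-key: <your-license-key>
```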

For traces, use the same endpoint via environment variables.
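
```bash
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4317
export OTEL_EXPORTER_OTLP_HEADERS="api-key=${NEW_RELIC_LICENSE_KEY}"
```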

Grafana Cloud

Configure for logs:
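
Grafana Cloud's OTLP gateway uses basic auth (a base64 encoding of your instance ID and API token); replace <zone> with your stack's region:

```yaml
services:
  event_stores:
    - type: otel
      endpoint: otlp-gateway-<zone>.grafana.net:443
      tls:
        enabled: true
      headers:
        Authorization: Basic <base64 of instanceID:token>
```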

For traces, set:
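
```bash
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp-gateway-<zone>.grafana.net/otlp
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic ${GRAFANA_CLOUD_OTLP_AUTH}"
```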

Elastic

Configure for logs:
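
Elastic's APM server accepts OTLP with a Bearer secret token; the endpoint placeholder is your deployment's APM URL:

```yaml
services:
  event_stores:
    - type: otel
      endpoint: <deployment>.apm.<region>.cloud.es.io:443
      tls:
        enabled: true
      headers:
        Authorization: Bearer <your-secret-token>
```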

Self-Hosted

For self-hosted backends (Jaeger, Zipkin, Grafana Tempo, etc.), point qtap to your OpenTelemetry Collector, then configure the collector to export to your backend:
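
Point the event store at your collector; plaintext is typical on a trusted internal network:

```yaml
services:
  event_stores:
    - type: otel
      endpoint: otel-collector:4317
      tls:
        enabled: false
```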

Then configure your OTel Collector with the appropriate exporter for your backend.

Querying and Filtering

Filter by Event Type
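
Query syntax varies by backend; the snippets in this section use a generic attribute-filter notation over the field names described above.

```
event.type = "connection"
event.type = "artifact_record"
```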

Find Slow Requests
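
Transactions over one second:

```
event.type = "artifact_record" AND duration_ms > 1000
```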

Find Errors
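
Server errors by status code:

```
event.type = "artifact_record" AND status >= 500
```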

Track Specific Endpoints
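
Match on URL or path:

```
event.type = "artifact_record" AND url CONTAINS "/api/users"
```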

Monitor HTTP/2 vs HTTP/1 Traffic
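
Split connection events by detected protocol:

```
event.type = "connection" AND protocol = "http2"
event.type = "connection" AND protocol = "http1"
```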

Correlate Logs and Traces
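
Either identifier works:

```
trace_id = "87cc302ce72eb54d25c46a736db020de"
connection.id = "<id from either signal>"
```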

Troubleshooting

No Logs Appearing

Check qtap logs for OTel connection:
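
```bash
docker logs qtap 2>&1 | grep -i otel   # container name from the quick start
```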

Common issues:

  1. Wrong endpoint - Verify the endpoint address and port

    • gRPC default: 4317

    • HTTP default: 4318

    • With --network=host: use localhost:4317

  2. TLS mismatch - If backend requires TLS, set tls.enabled: true

  3. Authentication - Verify headers/API keys are correct

  4. Collector not running - Check docker ps | grep otel

No Traces Appearing

Verify environment variables are set:
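
```bash
docker exec qtap env | grep OTEL
# Expect at least OTEL_TRACES_EXPORTER=otlp and an OTEL_EXPORTER_OTLP_ENDPOINT
```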

Check OTel Collector has traces pipeline:
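
The service section must include a traces pipeline; without it the collector drops incoming spans:

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```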

Logs But No Traces (or vice versa)

Logs and traces are independent:

  • Logs require event_stores type: otel in qtap.yaml

  • Traces require OTEL_TRACES_EXPORTER=otlp environment variable

Ensure both are configured if you want both data types.

Connection Refused Errors

Solutions:

  • Check if OTel Collector is running: docker ps | grep otel

  • Verify network connectivity between qtap and collector

  • Check firewall rules for port 4317/4318

High Trace Volume

Qtap creates a Connection span for every TCP connection. To reduce volume:

Use sampling in your OTel Collector:
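
The collector's probabilistic_sampler processor keeps a fixed percentage of spans:

```yaml
processors:
  probabilistic_sampler:
    sampling_percentage: 10   # keep ~10% of spans

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler, batch]
      exporters: [debug]
```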

Or filter by instrumentation scope:
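
The filter processor drops spans matching an OTTL condition; the scope name is a placeholder, so inspect your spans' instrumentation scope first:

```yaml
processors:
  filter/scope:
    error_mode: ignore
    traces:
      span:
        - instrumentation_scope.name == "<scope to drop>"
```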

Debugging with stdout Protocol

For local debugging, use the stdout protocol to see OTLP data printed to qtap logs:
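
A sketch assuming the stdout variant swaps in at the event store level (the exact key may differ by qtap version):

```yaml
services:
  event_stores:
    - type: otel
      protocol: stdout   # print OTLP payloads to qtap's own logs
```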

Then check qtap logs:
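
```bash
docker logs -f qtap
```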

Best Practices

  1. Use TLS in production - Always enable tls.enabled: true for production

  2. Store sensitive data securely - Use S3 object stores for full HTTP payloads

  3. Filter appropriately - Use qtap's filters to avoid capturing unnecessary traffic

  4. Set resource attributes - Use service_name and environment for filtering

  5. Monitor qtap itself - Set up alerts on qtap's Prometheus metrics

  6. Use batch processing - OTel Collector's batch processor reduces API calls

  7. Start with logs - Most users need HTTP transaction metadata (logs) first

  8. Add traces for troubleshooting - Enable traces when debugging network performance

  9. Use correlation - Link logs and traces via Trace ID or connection.id for complete observability

Configuration Quick Reference

Logs Only

qtap.yaml:
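
```yaml
# Sketch; same assumed schema as the quick start.
services:
  event_stores:
    - type: otel
      endpoint: otel-collector:4317
```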

Environment variables: None needed

Traces Only

qtap.yaml: Any event store works (stdout, otel); traces are controlled entirely by environment variables

Environment variables:
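
```bash
OTEL_TRACES_EXPORTER=otlp
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
OTEL_SERVICE_NAME=qtap
```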

Both Logs and Traces

qtap.yaml:
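
```yaml
# Sketch; same assumed schema as the quick start.
services:
  event_stores:
    - type: otel
      endpoint: otel-collector:4317
```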

Environment variables:
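
```bash
OTEL_TRACES_EXPORTER=otlp
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
OTEL_SERVICE_NAME=qtap
```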
