Storage Configuration
Understanding Qtap Storage Components
Qtap captures two distinct types of data, each with different storage requirements:
Events (Connection Metadata): Anonymized information about connections, including timestamps, endpoints, and performance metrics
Objects (Payload Content): Actual request and response data including headers and bodies, which may contain sensitive information
Each type has its own dedicated storage configuration in the services section of your qpoint.yaml file.
services:
  event_stores:
    - id: console_stdout
      type: stdout
  object_stores:
    - id: console_stdout
      type: stdout
Event Stores
Event stores handle anonymized metadata about network connections. This data is generally not sensitive but is useful for analytics, troubleshooting, and monitoring.
Console Output (stdout)
The simplest option for development and debugging:
event_stores:
  - id: console_stdout
    type: stdout
This configuration sends all event data to the console where Qtap is running, making it immediately visible but not persistent.
Axiom
For sending events to Axiom for analytics and monitoring:
event_stores:
  - type: axiom
    dataset:
      type: text
      value: qpoint-events
    token:
      type: env
      value: AXIOM_TOKEN
This configuration sends event data to an Axiom dataset for advanced analytics and visualization.
Axiom Configuration Parameters
dataset: The name of the Axiom dataset to send events to
token: Axiom API token
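Before starting Qtap with this configuration, the referenced environment variable must be set in the environment Qtap runs in. A minimal sketch (the token value is a placeholder):
export AXIOM_TOKEN=your_axiom_api_token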
OpenTelemetry (OTLP)
Send events to any OpenTelemetry-compatible backend using the OTLP protocol. Qtap exports events as OpenTelemetry Logs with rich structured attributes.
event_stores:
  - type: otel
    endpoint: "localhost:4317"   # OTLP endpoint (required for grpc/http, not needed for stdout)
    protocol: grpc               # grpc, http, or stdout (required)
    service_name: "qtap"         # Service name for resource attributes (optional)
    environment: "production"    # Environment tag (optional)
    headers:                     # Custom headers (optional)
      api-key:
        type: env
        value: OTEL_API_KEY
    tls:
      enabled: false             # Enable TLS (optional, default: false)
This configuration works with any OTLP-compatible backend including:
OpenTelemetry Collector
Datadog (OTLP ingestion)
Honeycomb (OTLP API)
New Relic (OTLP endpoint)
Grafana Cloud (OTLP)
Elastic (OTLP)
OpenTelemetry Configuration Parameters
endpoint: OTLP endpoint address (e.g., "localhost:4317" for gRPC, "localhost:4318" for HTTP). Required for the grpc and http protocols; not needed for stdout
protocol: Protocol to use: "grpc" (default port 4317), "http" (default port 4318), or "stdout" for debugging
service_name: Service name added to resource attributes (default: "qtap")
environment: Environment name for filtering/grouping (e.g., "production", "staging")
headers: Custom headers for authentication (e.g., API keys). Each header can use type: env to load from an environment variable or type: text for a direct value
tls.enabled: Enable TLS for secure connections (default: false)
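As a variation built only from the parameters above, here is a sketch of a TLS-enabled exporter with an authentication header. The Honeycomb endpoint and x-honeycomb-team header are shown as one typical example; check your backend's documentation for the exact values, and treat the environment variable name as a placeholder:
event_stores:
  - type: otel
    protocol: grpc
    endpoint: "api.honeycomb.io:443"
    tls:
      enabled: true                # Required by most hosted OTLP endpoints
    headers:
      x-honeycomb-team:            # Backend-specific auth header (example)
        type: env
        value: HONEYCOMB_API_KEY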
Event Types Sent
Qtap sends two types of OpenTelemetry log events:
Connection Events (event.type: connection): TCP connection lifecycle and metadata, including TLS version, protocol detection, and bytes transferred
Artifact Records (event.type: artifact_record): HTTP transaction summaries with method, URL, status, duration, bytes, and links to the full request/response data in object stores
See the OpenTelemetry Integration Guide for detailed setup instructions.
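If you do not yet have an OTLP backend, a local OpenTelemetry Collector can receive the events. The following is a minimal Collector configuration sketch, not Qtap configuration, and assumes a recent Collector release that includes the debug exporter:
# Collector config: accept OTLP over gRPC and print received logs for inspection
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [debug]
With this running, point the Qtap endpoint at localhost:4317 and use protocol: grpc.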
Qpoint Pulse Service (Coming Soon)
For self-hosted environments with a Pulse instance:
event_stores:
  - id: pulse
    type: pulse
    endpoint: http://pulse-service:8000
    token:
      type: env
      value: PULSE_TOKEN
This connects to a Pulse service for advanced analytics and visualization.
Object Stores
Object stores contain the actual content of requests and responses, which often includes sensitive information. This data requires more careful handling and secure storage.
Console Output (stdout)
For development and debugging:
object_stores:
  - id: console_stdout
    type: stdout
Sends all object data to the console.
S3-Compatible Storage
For secure, persistent storage:
object_stores:
  - id: s3_store
    type: s3
    endpoint: storage.example.com:9000
    bucket: qpoint-objects
    region: us-east-1
    access_url: https://storage.example.com/{{BUCKET}}/{{DIGEST}}
    insecure: false
    access_key:
      type: env
      value: S3_ACCESS_KEY
    secret_key:
      type: env
      value: S3_SECRET_KEY
This configuration:
Stores objects in an S3-compatible storage service
Uses HTTPS for secure transmission (insecure: false)
Retrieves credentials from environment variables
Provides a template URL for accessing stored objects
S3 Configuration Parameters
endpoint: S3 server hostname and port (e.g., minio.example.com:9000)
bucket: S3 bucket name (e.g., qpoint-objects)
region: S3 region name (e.g., us-east-1)
access_url: URL template for object access (e.g., https://storage.example.com/{{BUCKET}}/{{DIGEST}})
insecure: Allow HTTP instead of HTTPS (recommended: false)
access_key: S3 access key configuration (see Credential Management below)
secret_key: S3 secret key configuration (see Credential Management below)
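For a local, HTTP-only MinIO used purely during development, insecure can be set to true. A sketch; the id, bucket, and environment variable names are placeholders, and insecure should stay false in production:
object_stores:
  - id: local_minio
    type: s3
    endpoint: localhost:9000
    bucket: qpoint-dev
    region: us-east-1
    access_url: http://localhost:9000/{{BUCKET}}/{{DIGEST}}
    insecure: true   # development only
    access_key:
      type: env
      value: MINIO_ACCESS_KEY
    secret_key:
      type: env
      value: MINIO_SECRET_KEY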
URL Template Variables
The access_url parameter supports these template variables:
{{ENDPOINT}}: The S3 endpoint
{{BUCKET}}: The bucket name
{{DIGEST}}: The unique file identifier
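For example, with the access_url template used above and a bucket named qpoint-objects, a stored object's link expands to roughly the following shape (the digest placeholder stands in for the object's unique identifier):
access_url: https://storage.example.com/{{BUCKET}}/{{DIGEST}}
# expands to:
# https://storage.example.com/qpoint-objects/<digest>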
Credential Management
For security, Qtap supports retrieving credentials from environment variables or using direct text values.
Environment Variable Configuration
token:
  type: env
  value: AXIOM_TOKEN  # Name of the environment variable
Direct Text Configuration
token:
  type: text
  value: your_actual_token  # Direct token value (not recommended for production)
For S3 credentials, ensure these environment variables are set wherever Qtap runs:
export S3_ACCESS_KEY=your_access_key
export S3_SECRET_KEY=your_secret_key
For Docker:
docker run \
  # Other parameters...
  -e S3_ACCESS_KEY=your_access_key \
  -e S3_SECRET_KEY=your_secret_key \
  # Rest of command...
For Kubernetes, use secrets:
kubectl create secret generic s3-credentials \
  --from-literal=access-key='YOUR_ACCESS_KEY' \
  --from-literal=secret-key='YOUR_SECRET_KEY' \
  -n qpoint
And reference them in your Helm values:
extraEnv:
  - name: S3_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: s3-credentials
        key: access-key
  - name: S3_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: s3-credentials
        key: secret-key
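To confirm the secret exists in the expected namespace before deploying, a quick check:
kubectl get secret s3-credentials -n qpoint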
Object Storage Configuration Examples
MinIO Configuration
MinIO is a popular self-hosted, S3-compatible object store:
object_stores:
  - id: minio
    type: s3
    endpoint: minio.example.com:9000
    bucket: qpoint
    region: us-east-1
    access_url: https://minio.example.com/{{BUCKET}}/{{DIGEST}}
    insecure: false
    access_key:
      type: env
      value: MINIO_ACCESS_KEY
    secret_key:
      type: env
      value: MINIO_SECRET_KEY
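The bucket referenced above must exist before Qtap can write to it. A sketch using the MinIO client (mc), assuming it is installed and the credentials match the environment variables above; the alias name is arbitrary:
# Register the MinIO server under an alias, then create the bucket
mc alias set qpoint-minio https://minio.example.com:9000 "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY"
mc mb qpoint-minio/qpoint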
AWS S3 Configuration
For AWS S3:
object_stores:
  - id: aws_s3
    type: s3
    endpoint: s3.amazonaws.com
    bucket: my-company-qpoint
    region: us-west-2
    access_url: https://s3.amazonaws.com/{{BUCKET}}/{{DIGEST}}
    insecure: false
    access_key:
      type: env
      value: AWS_ACCESS_KEY_ID
    secret_key:
      type: env
      value: AWS_SECRET_ACCESS_KEY
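If the bucket does not exist yet, it can be created with the AWS CLI; a sketch using the bucket name and region from the example above:
aws s3 mb s3://my-company-qpoint --region us-west-2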
Google Cloud Storage Configuration
For Google Cloud Storage:
object_stores:
  - id: gcs
    type: s3
    endpoint: storage.googleapis.com
    bucket: my-company-qpoint
    region: us-central1
    access_url: https://storage.cloud.google.com/{{BUCKET}}/{{DIGEST}}
    insecure: false
    access_key:
      type: env
      value: GCS_ACCESS_KEY
    secret_key:
      type: env
      value: GCS_SECRET_KEY
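Google Cloud Storage's S3-compatible (XML) API authenticates with HMAC keys rather than regular service account key files. A sketch for generating a key pair whose access ID and secret would then be exported as GCS_ACCESS_KEY and GCS_SECRET_KEY (the service account email is a placeholder):
# Create an HMAC key for a service account; note the access ID and secret it prints
gsutil hmac create qtap-storage@my-project.iam.gserviceaccount.com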
Security Best Practices
When configuring storage, especially for production environments:
Use HTTPS: Always set insecure: false to enforce encrypted connections
Environment Variables: Never store credentials in the configuration file
Bucket Policies: Restrict access to your storage bucket with appropriate IAM policies
Encryption: Enable server-side encryption for stored objects
Lifecycle Rules: Configure automatic deletion of old data to comply with retention policies
Audit Logging: Enable access logging for your storage service
Complete Storage Configuration Example
version: 2
services:
  event_stores:
    - id: console_stdout
      type: stdout
  object_stores:
    - id: minio
      type: s3
      endpoint: minio.internal:9000
      bucket: qpoint-objects
      region: us-east-1
      access_url: https://minio.internal:9000/{{BUCKET}}/{{DIGEST}}
      insecure: false
      access_key:
        type: env
        value: S3_ACCESS_KEY
      secret_key:
        type: env
        value: S3_SECRET_KEY
This configuration sends connection metadata to the console for easy monitoring while securely storing the actual request and response content in MinIO.
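As a variation on the example above, events can go to an OTLP backend while objects stay in S3-compatible storage. A sketch built only from the options documented on this page; the endpoints and environment variable names are placeholders:
version: 2
services:
  event_stores:
    - type: otel
      protocol: grpc
      endpoint: "otel-collector.internal:4317"
      service_name: "qtap"
      environment: "production"
  object_stores:
    - id: s3_store
      type: s3
      endpoint: storage.example.com:9000
      bucket: qpoint-objects
      region: us-east-1
      access_url: https://storage.example.com/{{BUCKET}}/{{DIGEST}}
      insecure: false
      access_key:
        type: env
        value: S3_ACCESS_KEY
      secret_key:
        type: env
        value: S3_SECRET_KEY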