Object Storage Configuration
Object storage is where Qpoint stores sensitive request and response payloads captured by your Qtap agents.
⚠️ Before Production Deployment: By default, new Qplane accounts are configured to use Qpoint Cloud object storage for testing and development. You must configure your own object storage (S3, GCS, Azure Blob, or MinIO) before deploying Qtap agents in any production or sensitive environment to ensure sensitive data stays within your infrastructure.
This guide explains when data is stored, how to configure your own object storage, and security best practices for production deployments.
Understanding Qpoint's Data Architecture
Qpoint uses a "separation of data" architecture to preserve data sovereignty and give you complete control over sensitive information:
Anonymized Metadata → Pulse (Qpoint Cloud)
Connection metadata: IPs, ports, timings, status codes
Sent to Qpoint's cloud control plane (Pulse) for dashboards, metrics, and alerts
Contains no sensitive payload data
Required for Qplane dashboards to function
Sensitive Payloads → Object Storage
Actual request/response headers and bodies
By default, sent to Qpoint Cloud object storage (testing only)
For production: Configure your own object storage so payloads are sent directly to YOUR infrastructure (S3, GCS, MinIO, etc.)
Qplane UI fetches payloads on-demand directly to your browser
Configuring your own storage preserves data sovereignty and regulatory compliance
When Data is Stored in Object Storage
⚠️ Critical: Not all traffic is stored in object storage. Payloads are captured and stored only when specific plugins are configured to do so.
Plugins That Write to Object Storage
1. Detect Errors Plugin
Captures full request/response details when HTTP errors occur.
When it writes:
HTTP status codes: 404, 500, 502, 503, 504, and other configurable error codes
Automatically enabled in Qplane with pre-configured error detection rules
What it stores:
Complete request headers and body
Complete response headers and body
Timing information
Learn more: Detect Errors Plugin Documentation
2. HTTP Capture Plugin
More flexible, rule-based capture for any traffic matching your criteria.
When it writes:
Only when capture level is set to details or full
Based on configurable rulekit expressions (domain, path, method, headers, etc.)
Capture levels:
none - No object storage writes
summary - No object storage writes
details - Writes headers to object storage
full - Writes headers AND bodies to object storage
Example use case: Capture all calls to api.openai.com to audit what prompts are being sent.
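A configuration sketch of this use case is shown below. The plugin type and field names (`http_capture`, `level`, `rules`, `expr`) are assumptions modeled on the object-store examples later in this guide; consult the HTTP Capture Plugin documentation for the exact schema.

```yaml
# Hypothetical sketch: capture full payloads for calls to api.openai.com.
# Field names are illustrative, not authoritative.
plugins:
  - type: http_capture
    config:
      level: none                # default for unmatched traffic
      rules:
        - name: audit-openai-prompts
          expr: domain == "api.openai.com"   # rulekit expression (assumed syntax)
          level: full            # headers AND bodies to object storage
```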
Learn more: HTTP Capture Plugin Documentation
3. Scan Payloads & Data (QScan) Plugin
Classifies sensitive data in traffic (PII, credentials, API keys, etc.).
When it writes:
When "Record matching metadata in object store" is enabled
Stores metadata about detections: path in payload, detection score, data type found
When "Store value with metadata" is checked for specific data types
Also stores the actual matched sensitive values (e.g., the email address, phone number, or credential found)
What it detects:
PII (names, emails, phone numbers, addresses, SSNs, dates of birth)
Credentials (passwords, API keys, tokens, usernames)
Financial data (credit cards, bank accounts, IBAN codes)
Location data (IP addresses, street addresses, cities, zip codes)
Medical identifiers (medical license numbers)
Custom patterns you define
What gets stored in object storage:
Metadata (when enabled): Detection type, location in payload, confidence score
Values (per data type): The actual sensitive data matched (if "Store value with metadata" is checked for that type)
Learn more: QScan Plugin Documentation
Plugins That Do NOT Write to Object Storage
Report Usage Plugin
Only sends anonymized metadata to Pulse
No payloads stored anywhere
Access Logs Plugin
Writes formatted logs to stdout only
No object storage interaction
HTTP Metrics Plugin
Exposes Prometheus metrics only
No object storage interaction
Before You Deploy: Configuration Requirements
⚠️ WARNING: Configure Your Object Storage BEFORE Deploying in Sensitive Environments
Default Behavior:
Qpoint Cloud object storage is configured by default
Designed for testing, preview, and proof-of-concept only
NOT suitable for production or sensitive data
Why This Matters:
Default error detection is active immediately - The pre-configured "Basic Reporting and Error Detection" stack starts capturing error payloads (400, 401, 403, 404, 429, 500, 502, 503, etc.) as soon as an agent deploys
Once an agent starts capturing traffic, payloads are immediately sent to the configured object storage
If you haven't configured your own storage, sensitive data in error responses will be sent to Qpoint Cloud
Reconfiguring later doesn't retroactively move already-captured data
See What's Captured by Default above for the complete list of error codes that trigger captures
Before deploying Qtap in any environment with sensitive data:
✅ Configure your own object storage (S3, GCS, Azure Blob, MinIO)
✅ Verify the configuration in Snapshot YAML
✅ Test with a non-sensitive agent deployment first
✅ Only then deploy to production/sensitive environments
What's Captured by Default in New Accounts
⚠️ Important: Every new Qplane account automatically includes a "Basic Reporting and Error Detection" stack with pre-configured error detection rules. This means object storage writes begin immediately when your agents detect errors.
Default Error Detection Rules:
Every new account comes with 6 pre-configured rules that automatically capture errors:
By default, full request and response data (headers AND bodies) is captured to object storage for these status codes:
| Rule | Status Codes | Captured Data |
| --- | --- | --- |
| App Error | 500 | Full request + response |
| Infrastructure Outage | 502, 503, 520-523, 525-526, 530 | Full request + response |
| Client Error | 400 | Full request + response |
| Authentication Error | 401, 403, 407 | Full request + response |
| Rate Limited | 429 | Full request + response |
| Not Found | 404 | Full request + response |
Why This Matters:
✅ Benefit: Immediate error visibility and debugging capability from day one
⚠️ Risk: If you haven't configured your own object storage, these payloads go to Qpoint Cloud (testing-only storage)
🔒 Action Required: Configure your own object storage BEFORE deploying agents in any environment with sensitive data
Where Object Storage is NOT Used:
The report_usage plugin (also in the default stack) only sends anonymized metadata to Pulse - it does not write to object storage.
Supported Object Storage Providers
Qpoint supports any S3-compatible object storage:
| Provider | Deployment | Managed By | Best For |
| --- | --- | --- | --- |
| AWS S3 | Cloud | AWS-managed | AWS customers, production at scale |
| Google Cloud Storage | Cloud | GCP-managed | GCP customers, multi-cloud |
| Azure Blob Storage | Cloud | Azure-managed | Azure customers |
| MinIO | Self-hosted | Your infrastructure | Air-gapped environments, full control |
| Any S3-compatible API | Varies | Varies | Custom requirements |
Configuration Guide
Step 1: Navigate to Object Stores
In Qplane, go to Settings → Deploy → Services
Find the Object Stores section
Click "+ Add Object Store"
Step 2: Configure Your Provider
AWS S3 Configuration
Recommended approach: Use IAM roles (no static credentials)
Prerequisites:
S3 bucket created (e.g., my-company-qpoint-payloads)
IAM role with S3 write permissions for your Qtap agents
Configuration parameters:
```yaml
object_stores:
  - id: production_s3
    type: s3
    endpoint: s3.amazonaws.com
    bucket: my-company-qpoint-payloads
    region: us-west-2
    access_url: https://s3.amazonaws.com/{{BUCKET}}/{{DIGEST}}
```

If you must use access keys (not recommended):
```yaml
object_stores:
  - id: production_s3
    type: s3
    endpoint: s3.amazonaws.com
    bucket: my-company-qpoint-payloads
    region: us-west-2
    access_url: https://s3.amazonaws.com/{{BUCKET}}/{{DIGEST}}
    access_key:
      type: env
      value: AWS_ACCESS_KEY_ID
    secret_key:
      type: env
      value: AWS_SECRET_ACCESS_KEY
```

Parameter explanations:
endpoint - S3 service endpoint (use s3.amazonaws.com for AWS)
bucket - Your bucket name (must exist before deploying agents)
region - AWS region where the bucket is located
access_url - Template for retrieving stored objects (Qplane uses this to fetch payloads for display)
access_key / secret_key - Optional; only needed if not using IAM roles
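As a minimal sketch of how the access_url template resolves, Qplane substitutes the bucket name and the object's digest into the placeholders. The values below are illustrative placeholders, not real objects:

```shell
# Expand the access_url template into a concrete retrieval URL.
# BUCKET and DIGEST are example values only.
TEMPLATE='https://s3.amazonaws.com/{{BUCKET}}/{{DIGEST}}'
BUCKET='my-company-qpoint-payloads'
DIGEST='abc123def456'
URL=$(printf '%s' "$TEMPLATE" | sed -e "s/{{BUCKET}}/$BUCKET/" -e "s/{{DIGEST}}/$DIGEST/")
echo "$URL"
# → https://s3.amazonaws.com/my-company-qpoint-payloads/abc123def456
```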
IAM Policy Example:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::my-company-qpoint-payloads/*"
    }
  ]
}
```

See also: AWS S3 Configuration Guide
Google Cloud Storage Configuration
Configuration parameters:
```yaml
object_stores:
  - id: production_gcs
    type: gcs
    endpoint: storage.googleapis.com
    bucket: my-company-qpoint-payloads
    region: us-west1
    access_url: https://storage.googleapis.com/{{BUCKET}}/{{DIGEST}}
    credentials:
      type: env
      value: GOOGLE_APPLICATION_CREDENTIALS
```

See also: Google Cloud Storage Guide
MinIO Configuration (Self-Hosted)
Configuration parameters:
```yaml
object_stores:
  - id: internal_minio
    type: s3
    endpoint: minio.internal.company.com:9000
    bucket: qpoint-payloads
    region: us-east-1
    access_url: https://minio.internal.company.com:9000/{{BUCKET}}/{{DIGEST}}
    access_key:
      type: env
      value: MINIO_ACCESS_KEY
    secret_key:
      type: env
      value: MINIO_SECRET_KEY
```

Notes:
MinIO uses the S3-compatible API (type: s3)
Endpoint should include your MinIO host and port
Region can be any value (MinIO doesn't enforce AWS regions)
See also: MinIO Self-Hosted Guide
Step 3: Set Environment Variables (if using credentials)
If your configuration references environment variables for credentials, you must set these on the hosts running Qtap agents.
Example for systemd:
```ini
# /etc/systemd/system/qtap.service.d/override.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=AKIA..."
Environment="AWS_SECRET_ACCESS_KEY=..."
```

Example for Kubernetes:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: qpoint-object-storage
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <base64-encoded>
  AWS_SECRET_ACCESS_KEY: <base64-encoded>
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: qtap
spec:
  template:
    spec:
      containers:
        - name: qtap
          envFrom:
            - secretRef:
                name: qpoint-object-storage
```

See also: Qtap Storage Configuration for local YAML configuration examples
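To produce the <base64-encoded> values for the Secret's data fields, one common approach is shown below (the key value is a placeholder, not a real credential):

```shell
# Encode a credential for a Kubernetes Secret's data field.
# EXAMPLE_KEY_ID is a placeholder; substitute your real key.
# printf (not echo) avoids encoding a trailing newline into the value.
ENCODED=$(printf '%s' 'EXAMPLE_KEY_ID' | base64)
echo "$ENCODED"
# → RVhBTVBMRV9LRVlfSUQ=
```

Alternatively, `kubectl create secret generic qpoint-object-storage --from-literal=AWS_ACCESS_KEY_ID=...` performs the base64 encoding for you.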
Verifying Your Configuration
1. Check Snapshot YAML
After adding your object storage:
Go to Settings → Deploy → Snapshot
Look for your object store in the services.object_stores section
Verify all parameters are correct (endpoint, bucket, region)
Example of what you should see:
```yaml
services:
  object_stores:
    - id: production_s3
      type: s3
      endpoint: s3.amazonaws.com
      bucket: my-company-qpoint-payloads
      region: us-west-2
      access_url: https://s3.amazonaws.com/{{BUCKET}}/{{DIGEST}}
```

2. Deploy a Test Agent
Before deploying to production:
Deploy Qtap in a non-sensitive test environment
Configure a plugin to capture traffic (e.g., HTTP Capture with level: full)
Generate test traffic that matches your capture rules
Check your object storage bucket for uploaded objects
Expected behavior:
Objects appear in your bucket within seconds of captured requests
Object keys are SHA256 digests (e.g., abc123def456...)
Objects contain JSON-encoded request/response data
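A SHA256 digest is always 64 hex characters, so that is the key shape to expect in your bucket. The sketch below only illustrates the format with a sample JSON payload; exactly what content Qtap hashes to derive each key is not specified here:

```shell
# Compute a SHA256 digest of a sample JSON payload to see the key format.
# The payload content is illustrative only.
PAYLOAD='{"request":{"method":"GET","path":"/v1/models"}}'
DIGEST=$(printf '%s' "$PAYLOAD" | sha256sum | cut -d' ' -f1)
echo "$DIGEST"          # 64 lowercase hex characters
```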
3. Monitor Agent Logs
Qtap logs object storage upload attempts:
```shell
# View Qtap logs (systemd)
journalctl -u qtap -f

# View Qtap logs (Kubernetes)
kubectl logs -n qpoint daemonset/qtap -f
```

What to look for:
✅ Success:

```
[INFO] Uploaded payload to s3://my-bucket/abc123...
```

❌ Failure:

```
[ERROR] Failed to upload to object storage: Access Denied
[ERROR] Failed to upload to object storage: No such bucket
```

Common issues:
Incorrect credentials or IAM permissions
Bucket doesn't exist
Network connectivity to object storage endpoint
Incorrect region configuration
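These failure modes can usually be told apart from the log line alone. A small triage helper, sketched against the illustrative log format shown above (the exact log text may differ between Qtap versions):

```shell
# Map an upload-error log line to a likely remediation.
# The log format matched here is illustrative, not authoritative.
classify_upload_error() {
  case "$1" in
    *"Access Denied"*)  echo "check credentials / IAM policy" ;;
    *"No such bucket"*) echo "create the bucket before deploying" ;;
    *)                  echo "check network path and region" ;;
  esac
}

classify_upload_error '[ERROR] Failed to upload to object storage: Access Denied'
# → check credentials / IAM policy
```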
Related Documentation
Settings Overview - Complete settings reference
Stacks & Plugins - Learn about plugins that capture data
Traffic Processing with Plugins - HTTP Capture plugin details
Qtap Storage Configuration - Local YAML configuration examples
AWS S3 Configuration Guide - Provider-specific setup
Google Cloud Storage Guide - GCS-specific setup
MinIO Self-Hosted Guide - Self-hosted MinIO setup