Sending All Traffic to S3

Capture all HTTP/HTTPS traffic and store it in S3-compatible storage for persistent, secure access.

Who This Is For

Use this guide if you want to:

  • Store all captured traffic persistently in S3

  • Keep sensitive HTTP data within your own infrastructure

  • Set up a simple, no-filtering capture pipeline

  • Get started with S3 storage before adding rules later

Prerequisites:

  • An S3-compatible storage service (AWS S3, MinIO, or Google Cloud Storage with S3 interoperability)

  • S3 credentials (access key and secret key)

  • S3 bucket created

Time to complete: 10 minutes


Configuration

Save this as qtap-s3.yaml:

version: 2

# Storage Configuration
services:
  # Connection metadata to console (for visibility)
  event_stores:
    - type: stdout

  # HTTP payloads to S3 (where sensitive data lives)
  object_stores:
    - type: s3
      endpoint: s3.amazonaws.com          # Your S3 endpoint
      bucket: my-qtap-data                # Your bucket name
      region: us-east-1                   # Your region
      access_url: https://s3.amazonaws.com/{{BUCKET}}/{{DIGEST}}
      insecure: false                     # Use HTTPS
      access_key:
        type: env
        value: S3_ACCESS_KEY
      secret_key:
        type: env
        value: S3_SECRET_KEY

# Processing Stack
stacks:
  capture_all:
    plugins:
      - type: http_capture
        config:
          level: full      # (none|summary|headers|full) - Capture everything
          format: json     # (json|text) - Structured for storage

# Traffic Capture Settings
tap:
  direction: egress        # (egress|egress-external|egress-internal|ingress|all)
  ignore_loopback: true    # (true|false) - Skip localhost
  audit_include_dns: false # (true|false) - Skip DNS queries
  http:
    stack: capture_all

Running Qtap

Set Your S3 Credentials
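
The configuration reads credentials from the environment (the access_key and secret_key entries use type: env), so export the two variables it names before starting Qtap. The values below are placeholders:

export S3_ACCESS_KEY="your-access-key"
export S3_SECRET_KEY="your-secret-key"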

Start Qtap with Docker
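
A sketch of the Docker invocation. The image name, config mount path, and qtap flags are assumptions here, so substitute the values from your Qtap installation instructions; eBPF capture does require a privileged container with host PID access. The -e flags forward the credentials exported above into the container:

docker run -d --name qtap \
  --privileged \
  --pid=host \
  -v "$(pwd)/qtap-s3.yaml:/app/config/qtap.yaml" \
  -e S3_ACCESS_KEY \
  -e S3_SECRET_KEY \
  us-docker.pkg.dev/qpoint-edge/public/qtap:v0 \
  --config=/app/config/qtap.yaml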

Or with Linux Binary
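
An equivalent sketch for the standalone binary; the --config flag name is an assumption (check qtap --help). Run with sudo -E so the eBPF probes can load and the exported credential variables survive the privilege switch:

sudo -E ./qtap --config=qtap-s3.yaml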

Testing

Wait for Qtap to initialize, then generate some traffic:
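
Any outbound HTTP/HTTPS request from the same host will do; httpbin.org is used here only as a convenient public echo service:

curl -s https://httpbin.org/get
curl -s -X POST https://httpbin.org/post \
  -H "Content-Type: application/json" \
  -d '{"hello": "qtap"}'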

Check the logs for confirmation:
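
If you started Qtap with Docker as sketched above, tail the container logs; exact messages vary by version, so look for lines referencing the S3 endpoint or object uploads. With the stdout event store configured earlier, connection metadata also prints straight to the console:

docker logs -f qtap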

Verify objects are in your S3 bucket:
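
Using the AWS CLI here (any S3-capable client works; for MinIO, the mc client's mc ls does the same job). The bucket name matches the config above:

aws s3 ls s3://my-qtap-data/ --recursive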

S3 Provider Examples

AWS S3
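
This matches the main configuration above; only the object_stores fields shown here typically change between accounts, and the credential entries stay the same:

object_stores:
  - type: s3
    endpoint: s3.amazonaws.com
    bucket: my-qtap-data
    region: us-east-1
    insecure: false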

MinIO (Self-Hosted)
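
A sketch for a self-hosted MinIO instance; the host and port are illustrative, and insecure: true is only appropriate while MinIO serves plain HTTP:

object_stores:
  - type: s3
    endpoint: minio.internal:9000   # illustrative host:port
    bucket: my-qtap-data
    region: us-east-1               # MinIO generally accepts any region value
    insecure: true                  # set false once MinIO is behind TLS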

Google Cloud Storage
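
Google Cloud Storage exposes an S3-compatible XML API at storage.googleapis.com; authenticate with HMAC keys (created under the bucket's Interoperability settings) supplied through the same access_key/secret_key entries. The region value here is an assumption; use whatever your setup expects:

object_stores:
  - type: s3
    endpoint: storage.googleapis.com
    bucket: my-qtap-data
    region: auto                    # assumption; some setups require a real GCS region
    insecure: false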

Understanding the Output

With this configuration:

  • Console output shows connection metadata (timestamps, endpoints, duration)

  • S3 bucket stores full HTTP payloads (headers, bodies, sensitive data)

Each captured transaction creates an object in S3 with a unique digest identifier. The access_url template lets you construct URLs to retrieve stored objects.
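
For example, with the template above, an object stored with the illustrative digest 3f2a9c would resolve to https://s3.amazonaws.com/my-qtap-data/3f2a9c.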

Capture Levels

Adjust the level setting based on your needs:

Level      What's Captured                      Use Case
none       Nothing                              Use with rules for selective capture
summary    Method, URL, status, duration        Lightweight monitoring
headers    Summary + all headers                Header inspection
full       Headers + request/response bodies    Complete visibility
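
To switch levels, edit the plugin config inside the stack. For example, a summary-only variant of the stack defined earlier:

stacks:
  capture_all:
    plugins:
      - type: http_capture
        config:
          level: summary   # method, URL, status, duration only
          format: json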

What's Next?
