# Complete Guide: Hello World to Production

Progressive 4-level tutorial that takes you from "hello world" to production-ready Qtap deployment. Perfect for teams wanting full control with YAML configuration.

## Who This Is For

**Use this guide if you want to:**

* Learn Qtap systematically from basics to advanced features
* Deploy Qtap for long-term production monitoring
* Understand how all Qtap features work together
* Build self-managed infrastructure with version-controlled configs

**Choose this guide for:**

* Production deployments with S3 storage
* Rule-based selective capture (capture errors only, specific endpoints, etc.)
* Air-gapped or isolated environments
* GitOps workflows with infrastructure-as-code

**Choose something else if you:**

* Need visibility RIGHT NOW for debugging → [Production Debugging](https://docs.qpoint.io/guides/qtap-guides/debugging/production-debugging-with-https-visibility) (30 seconds)
* Want a quick preview → [5-Minute Quickstart](https://docs.qpoint.io/guides/qtap-guides/getting-started/qtap-starter-configuration-stdout-only) (5 minutes)
* Prefer managed dashboards → [Qplane POC Guide](https://docs.qpoint.io/guides/qplane-guides/poc-kick-off-guide) (10 minutes)

**Time to complete:** 50 minutes (can pause between levels)

***

This guide takes you from a simple "hello world" setup to a production-ready configuration, with every example validated and tested. Follow along at your own pace - each level builds on the previous one.

**What you'll learn:**

* Level 1: Dead simple setup - verify qtap is working (10 min)
* Level 2: Basic filtering and selective capture (10 min)
* Level 3: Conditional capture with rulekit expressions (15 min)
* Level 4: Production storage with S3 (15 min)

**Prerequisites:**

* Docker installed and running
* Basic familiarity with YAML
* curl or similar HTTP client

***

## Level 1: Dead Simple - Verify It's Working

**Goal**: Get qtap running and see it capture HTTPS traffic in under 5 minutes.

**What you'll learn:**

* Basic qtap configuration
* How to verify qtap sees inside HTTPS
* Understanding qtap output

### Step 1: Create Your First Config

Create a file called `qtap-level1.yaml`:

```yaml
version: 2

# Where to send captured data (stdout = your terminal)
services:
  event_stores:
    - type: stdout
  object_stores:
    - type: stdout

# What to do with captured traffic
stacks:
  basic_stack:
    plugins:
      - type: http_capture
        config:
          level: full      # (none|summary|headers|full) - Capture everything
          format: text     # (json|text) - Human-readable output

# What traffic to capture
tap:
  direction: egress        # (egress|egress-external|egress-internal|ingress|all) - Outbound traffic only
  ignore_loopback: true    # (true|false) - Skip localhost traffic
  audit_include_dns: false # (true|false) - Don't capture DNS queries
  http:
    stack: basic_stack     # Use the stack we defined above
```

**What this does:**

* Captures all outbound HTTP/HTTPS traffic
* Shows full request/response details in your terminal
* No filters - captures everything (except localhost)

### Step 2: Start Qtap

```bash
docker run -d \
  --name qtap-level1 \
  --user 0:0 \
  --privileged \
  --cap-add CAP_BPF \
  --cap-add CAP_SYS_ADMIN \
  --pid=host \
  --network=host \
  -v /sys:/sys \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)/qtap-level1.yaml:/app/config/qtap.yaml" \
  -e TINI_SUBREAPER=1 \
  --ulimit=memlock=-1 \
  us-docker.pkg.dev/qpoint-edge/public/qtap:v0 \
  --log-level=info \
  --log-encoding=console \
  --config="/app/config/qtap.yaml"
```

**Wait for qtap to initialize** (important!):

```bash
sleep 6
```

### Step 3: Generate Test Traffic

```bash
docker run --rm curlimages/curl -s https://httpbin.org/get
```

### Step 4: See What Qtap Captured

```bash
docker logs qtap-level1 2>&1 | grep -A 50 "httpbin.org"
```

**What you should see:**

```
HTTP Transaction
================

Metadata:
  Transaction Time: 2025-10-14T17:10:57Z
  Duration: 3356ms
  Direction: egress-external
  Process ID: 230253
  Process: /usr/bin/curl
  Bytes Sent: 41
  Bytes Received: 395

Request:
  Method: GET
  URL: https://httpbin.org/get
  Headers:
    :path: /get
    :scheme: https
    User-Agent: curl/8.12.1
    Accept: */*
    :authority: httpbin.org
    :method: GET

Response:
  Status: 200
  Content-Type: application/json
  Headers:
    Content-Length: 255
    Server: gunicorn/19.9.0
    Access-Control-Allow-Origin: *
    Access-Control-Allow-Credentials: true
    :status: 200
    Date: Tue, 14 Oct 2025 17:10:57 GMT
    Content-Type: application/json
  Body:
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/8.12.1",
    "X-Amzn-Trace-Id": "Root=1-68ee841e-1663d2ae3f156ba80e3a3a36"
  },
  "origin": "73.71.138.108",
  "url": "https://httpbin.org/get"
}
```

**🎉 Success indicators:**

* ✅ Clean formatted output (no internal logging clutter)
* ✅ You see the full URL despite HTTPS encryption
* ✅ All request and response headers are visible (including HTTP/2 pseudo-headers like `:path`, `:authority`)
* ✅ The response body (JSON) is captured in full
* ✅ Rich metadata: process info, duration, bytes sent/received

### Understanding the Output

Let's break down what qtap showed you:

1. **Metadata Section**:
   * **Transaction Time** - When the request occurred
   * **Duration** - How long the request took (3356ms)
   * **Direction** - `egress-external` (outbound to external IP)
   * **Process** - `/usr/bin/curl` - Qtap knows which process made the request
   * **Bytes Sent/Received** - Network traffic volume
2. **Request Section**:
   * **Method & URL** - GET request to <https://httpbin.org/get>
   * **Headers** - Including HTTP/2 pseudo-headers (`:path`, `:scheme`, `:authority`, `:method`)
   * Standard headers like `User-Agent` and `Accept`
3. **Response Section**:
   * **Status** - 200 OK
   * **Headers** - All response headers including server, content-type, etc.
   * **Body** - Complete JSON response payload

**How does qtap see inside HTTPS?**

Qtap uses eBPF to hook into system calls *before* TLS encryption happens. It doesn't break TLS or act as a proxy - it just observes what your application sends before OpenSSL encrypts it.

### Troubleshooting

**Don't see any output?**

1. Make sure qtap was running BEFORE you generated traffic
2. Wait 6 seconds after starting qtap before testing
3. Check qtap is running: `docker ps | grep qtap-level1`

**See connection info but no HTTP details?**

* Look for `"l7Protocol": "other"` - means HTTP parsing failed
* Check if you're looking at the right traffic (search for your test domain)

**Clean up when done**:

```bash
docker rm -f qtap-level1
```

***

## What's Next?

✅ **Level 1 Complete!** You now know:

* How to configure qtap basics
* How to verify qtap is capturing traffic
* That qtap can see inside HTTPS

**Next up**: Level 2 - Basic Filtering and Selective Capture

* Filter out noisy processes
* Apply different capture levels by domain
* Use multiple stacks for different traffic types

***

## Level 2: Basic Filtering and Selective Capture

**Goal**: Control what traffic gets captured and apply different capture levels to different traffic.

**What you'll learn:**

* Filter out noisy processes (curl, wget)
* Apply different capture levels by domain
* Use different capture levels for different endpoints

### Why Filtering Matters

In real environments, you'll see a LOT of traffic. Level 1 captures everything, which means:

* Kubernetes health checks flood your logs
* Package managers create noise
* Your own debugging with curl shows up
* Monitoring agents add clutter

Level 2 teaches you to be selective.

### Step 1: Create a Filtered Config

Create `qtap-level2.yaml`:

```yaml
version: 2

services:
  event_stores:
    - type: stdout
  object_stores:
    - type: stdout

stacks:
  # Detailed stack for important APIs
  detailed_stack:
    plugins:
      - type: http_capture
        config:
          level: full      # (none|summary|headers|full) - Full capture
          format: text     # (json|text)

  # Lightweight stack for everything else
  lightweight_stack:
    plugins:
      - type: http_capture
        config:
          level: summary   # (none|summary|headers|full) - Just basic info
          format: text     # (json|text)

tap:
  direction: egress        # (egress|egress-external|egress-internal|ingress|all)
  ignore_loopback: true    # (true|false)
  audit_include_dns: false # (true|false)

  # Default: use lightweight stack
  http:
    stack: lightweight_stack

  # Filter out noisy processes
  filters:
    groups:
      - qpoint           # Don't capture qtap's own traffic
    custom:
      - exe: /usr/bin/curl
        strategy: exact  # Filter out manual curl commands

  # Apply specific stacks to specific domains
  endpoints:
    - domain: 'httpbin.org'
      http:
        stack: detailed_stack  # Full capture for httpbin
    - domain: 'api.github.com'
      http:
        stack: detailed_stack  # Full capture for GitHub API
```

**What's new here:**

* **Two stacks**: Detailed (full) vs lightweight (summary only)
* **Process filters**: Ignore curl (filter out manual testing)
* **Domain-specific stacks**: httpbin.org uses detailed\_stack for full capture
* **Default lightweight**: Everything else is summary only
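The endpoint routing above can be sketched in Python (a simplified illustration; exact-domain matching is an assumption here - consult the traffic capture settings docs for qtap's actual matching rules):

```python
def stack_for(domain, endpoints, default="lightweight_stack"):
    """Pick the stack for a domain, mirroring the endpoints list above.
    Exact-domain matching is an assumption for illustration."""
    for ep in endpoints:
        if ep["domain"] == domain:
            return ep["stack"]
    return default

endpoints = [
    {"domain": "httpbin.org", "stack": "detailed_stack"},
    {"domain": "api.github.com", "stack": "detailed_stack"},
]

print(stack_for("httpbin.org", endpoints))   # → detailed_stack
print(stack_for("example.com", endpoints))   # → lightweight_stack
```

The key design point: endpoint rules override the default `tap.http.stack`, so full capture applies only where you opt in.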

### Step 2: Start Level 2

```bash
docker run -d \
  --name qtap-level2 \
  --user 0:0 \
  --privileged \
  --cap-add CAP_BPF \
  --cap-add CAP_SYS_ADMIN \
  --pid=host \
  --network=host \
  -v /sys:/sys \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)/qtap-level2.yaml:/app/config/qtap.yaml" \
  -e TINI_SUBREAPER=1 \
  --ulimit=memlock=-1 \
  us-docker.pkg.dev/qpoint-edge/public/qtap:v0 \
  --log-level=info \
  --log-encoding=console \
  --config="/app/config/qtap.yaml"

sleep 6
```

### Step 3: Test the Filters

**Test 1: Filtered traffic (should NOT appear)**

```bash
docker run --rm curlimages/curl -s https://example.com/

```

Check logs:

```bash
docker logs qtap-level2 2>&1 | grep "example.com"
```

**Expected**: Nothing! curl is filtered.

**Test 2: Domain-specific capture (should appear with FULL details)**

```bash
docker run --rm alpine sh -c "wget -O- https://httpbin.org/get 2>/dev/null"
```

Check logs:

```bash
docker logs qtap-level2 2>&1 | grep -A 20 "httpbin.org"
```

**Expected**: Full HTTP details including body because:

1. httpbin.org matches our endpoint rule
2. Endpoint rule applies `detailed_stack` for full capture
3. wget (/bin/busybox) is NOT filtered, so traffic is captured

### Understanding the Results

**What you should see:**

1. **Curl traffic**: Filtered out

   ```bash
   # This returns nothing
   docker logs qtap-level2 | grep -c "curl"
   # Output: 0
   ```
2. **httpbin.org via wget**: CAPTURED with full details
   * Method, URL, headers, body all visible
   * Because it matches the endpoint rule
3. **Other domains**: Would show summary only (if we tested them)
   * Basic info: method, URL, status, duration
   * No headers, no bodies

### Why This Matters

This configuration gives you:

* **Less noise**: Filters out development tools
* **Selective detail**: Full capture only where you need it
* **Better performance**: Summary-only for bulk traffic
* **Cost savings**: Less data stored/transmitted

### Advanced: Using Prefix Filters

Want to filter an entire directory?

```yaml
filters:
  custom:
    - exe: /usr/bin/
      strategy: prefix  # Filters ALL /usr/bin/* processes
```

This would filter out `/usr/bin/curl`, `/usr/bin/wget`, `/usr/bin/python3`, etc. (the traffic still flows - qtap just stops capturing it).
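Later configs in this guide annotate `strategy` as `(exact|prefix|regex)`, so a regex strategy should also be available. A hedged sketch that filters every Python interpreter regardless of install location (the exact pattern syntax qtap accepts is an assumption - verify against the traffic capture settings docs):

```yaml
filters:
  custom:
    - exe: .*/python[0-9.]*$
      strategy: regex  # Would filter /usr/bin/python3, /usr/local/bin/python3.11, etc.
```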

### Clean Up

```bash
docker rm -f qtap-level2
```

***

## What's Next?

✅ **Level 2 Complete!** You now know:

* How to filter noisy processes
* How to apply different capture levels by domain
* How to use multiple stacks for different traffic types

**Next up**: Level 3 - Conditional Capture with Rulekit

* Use rulekit expressions for intelligent capture
* Create reusable macros
* Capture only errors and specific request types
* Filter by container and pod labels

***

## Level 3: Conditional Capture with Rulekit

**Goal**: Use rulekit expressions for intelligent, conditional traffic capture.

**What you'll learn:**

* Capture only errors and POST requests
* Use different capture levels based on conditions
* Create reusable macros for complex logic
* Filter by container and pod labels

### Why Conditional Capture Matters

In high-traffic environments, capturing everything is expensive and noisy. You want:

* **Selective detail**: Full capture only when needed
* **Error focus**: Always capture failures
* **Method-based filtering**: Capture mutating operations (POST, PUT, DELETE)
* **Smart sampling**: Reduce volume without losing critical data

### Step 1: Create a Rulekit-Based Config

Create `qtap-level3.yaml`:

```yaml
version: 2

# Define reusable macros
rulekit:
  macros:
    - name: is_error
      expr: http.res.status >= 400 && http.res.status < 600
    - name: is_post
      expr: http.req.method == "POST"
    - name: is_api_call
      expr: http.req.path matches /^\/api\//

services:
  event_stores:
    - type: stdout
  object_stores:
    - type: stdout

stacks:
  # Smart stack: Conditional capture levels
  smart_stack:
    plugins:
      - type: http_capture
        config:
          level: summary        # (none|summary|headers|full) - Default: basic info only
          format: text          # (json|text)
          rules:
            # Full capture for errors
            - name: "Capture all errors with full details"
              expr: is_error()
              level: full

            # Full capture for POST requests
            - name: "Capture POST requests"
              expr: is_post()
              level: full

            # Details for API calls (headers, no body)
            - name: "Capture API calls with headers"
              expr: is_api_call()
              level: headers

  # Error-only stack: Only capture failures
  error_only_stack:
    plugins:
      - type: http_capture
        config:
          level: none           # (none|summary|headers|full) - Default: capture nothing
          format: text          # (json|text)
          rules:
            # Only capture 4xx and 5xx
            - name: "Client errors"
              expr: http.res.status >= 400 && http.res.status < 500
              level: full

            - name: "Server errors"
              expr: http.res.status >= 500
              level: full

tap:
  direction: egress        # (egress|egress-external|egress-internal|ingress|all)
  ignore_loopback: true    # (true|false)
  audit_include_dns: false # (true|false)

  http:
    stack: smart_stack     # Default: smart conditional capture

  filters:
    groups:
      - qpoint
    custom:
      - exe: /usr/bin/curl
        strategy: exact    # (exact|prefix|regex)

  # Apply different stacks to different domains
  endpoints:
    # GitHub API: error-only capture
    - domain: 'api.github.com'
      http:
        stack: error_only_stack

    # httpbin: smart capture with all rules
    - domain: 'httpbin.org'
      http:
        stack: smart_stack
```

### Step 2: Understanding Rulekit

**Macros:**

* Reusable expressions defined once, used many times
* Called like functions: `is_error()`, `is_post()`, `is_api_call()`
* Make configs cleaner and easier to maintain

**Capture Levels in Rules:**

* `none`: Skip capture entirely
* `summary`: Basic info (method, URL, status, duration, process)
* `headers`: Include headers (no bodies)
* `full`: Everything (headers + bodies)

**Available Fields:**

* Request: `http.req.method`, `http.req.path`, `http.req.host`, `http.req.url`
* Response: `http.res.status`
* Headers: `http.req.headers.<name>`, `http.res.headers.<name>`
* Source: `src.container.name`, `src.container.labels.<key>`, `src.pod.name`, `src.pod.labels.<key>`
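As one sketch combining these fields, a rule that fully captures JSON request payloads (the header-name syntax in the expression is an assumption based on the `http.req.headers.<name>` form above - check the rulekit docs for exact casing and quoting):

```yaml
rules:
  - name: "Capture JSON request payloads"
    expr: http.req.headers.content-type == "application/json"
    level: full
```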

### Step 3: Start Level 3

```bash
docker run -d \
  --name qtap-level3 \
  --user 0:0 \
  --privileged \
  --cap-add CAP_BPF \
  --cap-add CAP_SYS_ADMIN \
  --pid=host \
  --network=host \
  -v /sys:/sys \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)/qtap-level3.yaml:/app/config/qtap.yaml" \
  -e TINI_SUBREAPER=1 \
  --ulimit=memlock=-1 \
  us-docker.pkg.dev/qpoint-edge/public/qtap:v0 \
  --log-level=info \
  --log-encoding=console \
  --config="/app/config/qtap.yaml"

sleep 6
```

### Step 4: Test Conditional Capture

**Test 1: Success (should show SUMMARY - basic info only):**

```bash
docker run --rm alpine sh -c "wget -O- http://httpbin.org/get 2>/dev/null"
```

Check logs - should see basic metadata (method, URL, status) but no headers or body:

```bash
docker logs qtap-level3 2>&1 | grep -A 20 "HTTP Transaction"
```

**Test 2: Error (should show FULL details):**

```bash
docker run --rm alpine sh -c "wget -O- http://httpbin.org/status/404 2>/dev/null" || true
```

Check logs - should see full capture with all details:

```bash
docker logs qtap-level3 2>&1 | grep -A 20 "404"
```

**Test 3: API path (should show HEADERS level - headers, no body):**

```bash
docker run --rm alpine sh -c "wget -O- http://httpbin.org/api/test 2>/dev/null" || true
```

### Understanding the Results

**What you should see:**

1. **Success requests**: Summary level (default)
   * Method, URL, status code
   * Duration, process info
   * No headers, no bodies
2. **Error requests (404, 500)**: Full capture
   * Complete request and response headers
   * Full request/response bodies
   * Because they match `is_error()` macro
3. **API paths**: Headers level
   * Headers included
   * No bodies (saves space)
   * Because the path matches `/^\/api\//` (note: httpbin has no `/api/test` route, so Test 3 returns a 404 and the error rule matches as well)
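The `smart_stack` decision logic can be sketched in Python (the rule-evaluation order here - first match wins, checked top to bottom - is an assumption for illustration; see the rulekit docs for qtap's actual precedence when several rules match):

```python
import re

def capture_level(method, path, status, default="summary"):
    """Mirror the smart_stack rules: errors and POSTs -> full,
    /api/ paths -> headers, everything else -> the default level."""
    if 400 <= status < 600:            # is_error()
        return "full"
    if method == "POST":               # is_post()
        return "full"
    if re.match(r"^/api/", path):      # is_api_call()
        return "headers"
    return default

print(capture_level("GET", "/get", 200))         # → summary
print(capture_level("POST", "/anything", 200))   # → full
print(capture_level("GET", "/api/users", 200))   # → headers
print(capture_level("GET", "/status/404", 404))  # → full
```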

### Advanced Rulekit Patterns

**Container-based filtering:**

```yaml
rules:
  - name: "Debug specific container"
    expr: src.container.name == "my-app"
    level: full
```

**Regex matching:**

```yaml
rules:
  - name: "External APIs"
    expr: http.req.host matches /\.(googleapis\.com|amazonaws\.com)$/
    level: headers
```

**Complex conditions:**

```yaml
rules:
  - name: "Production API errors"
    expr: is_api_call() && is_error() && src.pod.labels.env == "production"
    level: full
```

### Clean Up

```bash
docker rm -f qtap-level3
```

***

## Level 4: Production Storage with S3

**Goal**: Store captured traffic in S3 for long-term retention and compliance.

**What you'll learn:**

* Configure S3-compatible object storage
* Use MinIO for local testing
* Combine stdout (debugging) with S3 (persistence)
* Production storage best practices

### Why S3 Storage Matters

**The primary benefit of S3 storage is keeping sensitive data within your network boundary.**

When you use S3-compatible storage (MinIO, AWS S3, GCS), captured HTTP traffic containing sensitive information (API keys, tokens, PII, etc.) never leaves your infrastructure. This is critical for:

* **Data sovereignty**: Keep sensitive data in your own network/region
* **Security**: Prevent data exfiltration - traffic never goes to external services
* **Compliance**: Meet regulatory requirements (GDPR, HIPAA, SOC2)
* **Control**: You own and control access to all captured data

Additionally, S3 provides:

* **Forensics**: Investigate security incidents weeks/months later
* **Analytics**: Analyze traffic patterns over time
* **Debugging**: Review full request/response data when issues occur
* **Durability**: Long-term retention with lifecycle policies

stdout is great for development, but production requires secure, persistent storage within your control.

### Step 1: Set Up MinIO (Local S3)

For this guide, we'll use MinIO - an S3-compatible storage you can run locally:

```bash
# Start MinIO
docker run -d \
  --name minio \
  -p 9000:9000 \
  -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  quay.io/minio/minio server /data --console-address ":9001"

sleep 3

# Create bucket
docker run --rm \
  --network=host \
  --entrypoint /bin/sh \
  quay.io/minio/minio -c "
    mc alias set local http://localhost:9000 minioadmin minioadmin &&
    mc mb local/qtap-captures &&
    echo 'Bucket created successfully!'
  "
```

### Step 2: Create S3-Enabled Config

Create `qtap-level4-s3.yaml`:

```yaml
version: 2

# Reusable macros
rulekit:
  macros:
    - name: is_error
      expr: http.res.status >= 400 && http.res.status < 600

services:
  # Events: Still use stdout for real-time monitoring
  event_stores:
    - type: stdout

  # Objects: Store in S3 for persistence
  object_stores:
    - type: s3
      endpoint: localhost:9000
      bucket: qtap-captures
      region: us-east-1
      insecure: true              # Only for local MinIO testing
      access_key:
        type: text
        value: minioadmin
      secret_key:
        type: text
        value: minioadmin

stacks:
  production_stack:
    plugins:
      - type: http_capture
        config:
          level: none             # (none|summary|headers|full) - Default: don't capture
          format: json            # (json|text)
          rules:
            # Only capture errors (saves S3 costs)
            - name: "Capture errors"
              expr: is_error()
              level: full         # (none|summary|headers|full)

tap:
  direction: egress        # (egress|egress-external|egress-internal|ingress|all)
  ignore_loopback: true    # (true|false)
  audit_include_dns: false # (true|false)

  http:
    stack: production_stack

  filters:
    groups:
      - qpoint
    custom:
      - exe: /usr/bin/curl
        strategy: exact    # (exact|prefix|regex)
```

**What's different:**

* **S3 object\_store**: Captured HTTP transactions go to MinIO
* **stdout event\_store**: Connection metadata still goes to console
* **Error-only capture**: Only store failures (reduces costs)

### Step 3: Start Qtap with S3

```bash
docker run -d \
  --name qtap-level4-s3 \
  --user 0:0 \
  --privileged \
  --cap-add CAP_BPF \
  --cap-add CAP_SYS_ADMIN \
  --pid=host \
  --network=host \
  -v /sys:/sys \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)/qtap-level4-s3.yaml:/app/config/qtap.yaml" \
  -e TINI_SUBREAPER=1 \
  --ulimit=memlock=-1 \
  us-docker.pkg.dev/qpoint-edge/public/qtap:v0 \
  --log-level=info \
  --log-encoding=console \
  --config="/app/config/qtap.yaml"

sleep 6
```

### Step 4: Test S3 Storage

**Generate an error (will be stored in S3):**

```bash
docker run --rm alpine sh -c "wget -O- http://httpbin.org/status/500 2>/dev/null" || true
sleep 2
```

**Check qtap logs for S3 upload confirmation:**

```bash
docker logs qtap-level4-s3 2>&1 | grep -i "s3\|upload\|object"
```

**List objects in MinIO:**

```bash
docker run --rm \
  --network=host \
  --entrypoint /bin/sh \
  quay.io/minio/minio -c "
    mc alias set local http://localhost:9000 minioadmin minioadmin &&
    mc ls local/qtap-captures/
  "
```

You should see files with SHA digest names - these are your captured HTTP transactions!

### Step 5: Retrieve a Captured Transaction

**Get the digest from qtap logs:**

```bash
DIGEST=$(docker logs qtap-level4-s3 2>&1 | grep '"digest":' | head -1 | grep -o '"digest":"[^"]*"' | cut -d'"' -f4)
echo "Digest: $DIGEST"
```
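The grep/cut pipeline above is fragile if the log format shifts. If Python is handy, a sketch that pulls every digest out of the log output instead (this assumes the event lines are single-line JSON objects with a top-level `digest` field, as the grep implies):

```python
import json

def extract_digests(log_text):
    """Collect the 'digest' field from any JSON event lines in
    qtap's log output (field name taken from the grep pipeline above)."""
    digests = []
    for line in log_text.splitlines():
        line = line.strip()
        # Event lines are JSON objects; skip console/log noise.
        if not line.startswith("{"):
            continue
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue
        if "digest" in event:
            digests.append(event["digest"])
    return digests

# Example with a fabricated log line in the shape the grep above expects:
sample = 'startup banner\n{"digest":"abc123","url":"http://httpbin.org/status/500"}'
print(extract_digests(sample))  # → ['abc123']
```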

**Download the object from MinIO:**

```bash
docker run --rm \
  --network=host \
  --entrypoint /bin/sh \
  quay.io/minio/minio -c "
    mc alias set local http://localhost:9000 minioadmin minioadmin &&
    mc cat local/qtap-captures/$DIGEST
  "
```

You'll see the full HTTP transaction - request headers, response headers, and body!

### Understanding S3 Configuration

**S3 Parameters:**

| Parameter    | Purpose                  | Example                                  |
| ------------ | ------------------------ | ---------------------------------------- |
| `endpoint`   | S3 server address        | `s3.amazonaws.com` or `localhost:9000`   |
| `bucket`     | Where to store objects   | `qtap-captures`                          |
| `region`     | S3 region                | `us-east-1`                              |
| `insecure`   | Allow HTTP (MinIO only!) | `true` for local, `false` for production |
| `access_key` | S3 credentials           | From env or text                         |
| `secret_key` | S3 credentials           | From env or text                         |

**Production S3 Config (AWS):**

```yaml
object_stores:
  - type: s3
    endpoint: s3.amazonaws.com
    bucket: my-company-qtap
    region: us-west-2
    insecure: false              # Always false in production!
    access_key:
      type: env                  # Use environment variables
      value: AWS_ACCESS_KEY_ID
    secret_key:
      type: env
      value: AWS_SECRET_ACCESS_KEY
```

Then run qtap with:

```bash
docker run ... \
  -e AWS_ACCESS_KEY_ID=your_key \
  -e AWS_SECRET_ACCESS_KEY=your_secret \
  ...
```

### Production Best Practices

**Storage Strategy:**

* ✅ Use S3 for object storage (HTTP transactions)
* ✅ Use stdout or logging service for events (metadata)
* ✅ Only capture what you need (errors, slow requests)
* ✅ Set S3 lifecycle rules to auto-delete old data

**Security:**

* ✅ Always use `insecure: false` in production
* ✅ Use environment variables for credentials (never hardcode)
* ✅ Enable S3 bucket encryption
* ✅ Restrict S3 bucket access with IAM policies

**Cost Optimization:**

* ✅ Use `level: none` by default, capture via rules only
* ✅ Capture errors at `full` level (need details to debug)
* ✅ Capture success at `summary` or `headers` level
* ✅ Use S3 lifecycle policies to transition to cheaper storage tiers

### S3 Lifecycle Example

For AWS S3, set up lifecycle rules:

```json
{
  "Rules": [
    {
      "Id": "TransitionOldCaptures",
      "Status": "Enabled",
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        },
        {
          "Days": 90,
          "StorageClass": "GLACIER"
        }
      ],
      "Expiration": {
        "Days": 365
      }
    }
  ]
}
```

This:

* Keeps data in standard storage for 30 days
* Moves to infrequent access after 30 days (cheaper)
* Archives to Glacier after 90 days (very cheap)
* Deletes after 1 year (compliance)
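To apply a policy like this with the AWS CLI, save it to a file and use `s3api put-bucket-lifecycle-configuration` (the bucket name is a placeholder; the apply command is left commented so you can review the file first):

```shell
# Write the lifecycle policy above to a file for review.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "Id": "TransitionOldCaptures",
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF
echo "Wrote $(wc -c < lifecycle.json) bytes to lifecycle.json"

# Then apply it (requires configured AWS credentials):
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket my-company-qtap \
#   --lifecycle-configuration file://lifecycle.json
```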

### Clean Up

```bash
# Stop qtap
docker rm -f qtap-level4-s3

# Stop MinIO (optional - keeps your test data)
docker rm -f minio
```

***

## Congratulations! 🎉

You've completed the Ultimate Qtap Setup Guide. You now know how to:

**Level 1 - Basics:**

* ✅ Configure qtap to capture HTTP/HTTPS traffic
* ✅ Verify qtap can see inside TLS connections
* ✅ Understand qtap output format

**Level 2 - Filtering:**

* ✅ Filter processes to reduce noise
* ✅ Apply different capture levels by domain
* ✅ Use different capture levels for different endpoints

**Level 3 - Conditional Capture:**

* ✅ Use rulekit expressions for intelligent capture
* ✅ Create reusable macros
* ✅ Capture only errors and specific request types
* ✅ Filter by container and pod labels

**Level 4 - Production Storage:**

* ✅ Configure S3-compatible object storage
* ✅ Combine stdout and S3 for debugging + persistence
* ✅ Implement cost-effective storage strategies
* ✅ Follow security best practices

***

## Next Steps

### Production Readiness

Ready to deploy qtap in production? Here's your checklist:

**Configuration Reference:**

* [Storage Configuration](https://docs.qpoint.io/getting-started/qtap/configuration/storage-configuration) - Complete S3 setup, credentials management, and event stores
* [Traffic Processing with Plugins](https://docs.qpoint.io/getting-started/qtap/configuration/traffic-processing-with-plugins) - All available plugins and advanced options
* [Traffic Capture Settings](https://docs.qpoint.io/getting-started/qtap/configuration/traffic-capture-settings) - Filters, endpoints, and direction settings
* [Rulekit Documentation](https://github.com/qpoint-io/rulekit) - Advanced expression syntax and examples

**Alternative: Cloud Management**

Want centralized management with visual dashboards?

* [Qplane Overview](https://docs.qpoint.io/getting-started/qplane) - Cloud control plane for managing multiple agents
* [POC Kick Off Guide](https://docs.qpoint.io/guides/qplane-guides/poc-kick-off-guide) - Quick start with Qplane (10 minutes)

**Get Help:**

* [Troubleshooting](https://docs.qpoint.io/troubleshooting) - Common issues and solutions
* Report issues on [GitHub](https://github.com/qpoint-io/qtap/issues)

***

*All examples in this guide were validated and tested as written against the qtap v0 image.*
