Architecture & Data Flow

Guiding Principles

Qpoint is designed with two core principles in mind:

  1. Deep Visibility: To provide a complete, unencrypted view of your application traffic, including payloads, with full context (process, container, host, user, etc.).

  2. Data Sovereignty: To ensure that your sensitive data (the actual content of your traffic) never leaves your environment. You retain full control and ownership of your most critical data at all times.

Core Components

Qpoint's architecture consists of a few key components that work in concert to observe, process, and visualize your data. In a typical deployment, the components are distributed between your environment (e.g., your AWS account, your on-prem infrastructure, etc.) and the Qpoint Cloud.

Qtap (The eBPF Agent / Data Plane)

Qtap is a lightweight, high-performance eBPF agent that you install on your infrastructure. It is the source of all visibility, capturing traffic directly from the Linux kernel.

  • How it Works: Using eBPF, Qtap attaches to low-level TLS/SSL functions and intercepts data before it's encrypted and after it's decrypted. This provides access to the original, plain-text data without needing to manage certificates, modify applications, or install invasive proxies.

  • Out-of-Band Operation: Qtap operates out-of-band, meaning it observes traffic without being in the critical path. This ensures it adds no latency and cannot disrupt application performance.

  • Rich Context: It enriches the captured data with a wealth of context, including the associated process, container, host, user, and protocol information.

  • Local Processing: Based on its configuration, the agent decides how to process the captured data, what to filter, and where to send it.

Qplane (The Control Plane)

Qplane is the centralized management and visualization interface for your Qpoint deployment, hosted by Qpoint at app.qpoint.io.

  • Function: This is where you configure your Qtap agents, define rules, view dashboards, and analyze the metadata collected from your services.

  • Security: Qplane receives and processes only anonymized Event metadata; when configured this way, it never has access to your sensitive Object payloads.

Qscan (Data Classification)

Qscan is a service, typically run as a Docker container, that is responsible for deep inspection and classification of your data payloads.

  • Function: Qtap can be configured to send data to Qscan, which then identifies specific types of sensitive information (PII, credentials, secrets, etc.) based on predefined or custom rules.

  • Deployment: You typically host Qscan within your own environment, ensuring that the scanning and classification of your sensitive data happen securely under your control.
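As a sketch, a self-hosted Qscan deployment might look like the following Compose service. The image reference, port, and token variable here are illustrative assumptions, not official values; consult your Qpoint onboarding materials for the actual image.

```yaml
# docker-compose.yml (illustrative sketch; image name and port are assumptions)
services:
  qscan:
    image: qpoint/qscan:latest      # hypothetical image reference
    restart: unless-stopped
    ports:
      - "8443:8443"                 # internal endpoint the Qtap agent will call
    environment:
      - QSCAN_TOKEN=${QSCAN_TOKEN}  # shared token, matched by the agent's qscan config
```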

Object Storage (Your Data Warehouse)

This is a critical component of the security model. You provide your own S3-compatible object store (e.g., AWS S3, MinIO, Google Cloud Storage) where all sensitive payload data is sent and stored.

  • Function: The Qtap agent sends Objects (request/response headers and bodies) directly to this storage bucket.

  • Security: This data flow is direct from the agent in your environment to the object store in your environment. This sensitive data never traverses any Qpoint-managed systems.

Data Flow: The Golden Rule

Qpoint distinguishes between two types of data to ensure security and privacy:

  • Events (Anonymized Metadata): This is high-level, non-sensitive information about a connection (e.g., source/destination IP, process name, status codes, timing). Events are sent from the Qtap agent to the Qplane to populate dashboards and provide analytics.

  • Objects (Sensitive Payloads): This is the actual content of your traffic (e.g., API request/response bodies, headers). Objects are sent directly from the Qtap agent to your self-hosted Object Store and/or Qscan instance.

Configuration in Practice: AWS Example

The flow of data is defined in the qtap.yaml configuration file. Here is an example reflecting a common deployment using AWS S3 for object storage and a self-hosted Qscan instance.

version: 2
services:
  # 1. Anonymized EVENT metadata is sent to the Qpoint Cloud
  event_stores:
    - id: qpoint_cloud
      type: pulse
      url: https://api-pulse.qpoint.io

  # 2. Sensitive OBJECT payloads are sent directly to YOUR AWS S3 bucket
  object_stores:
    - id: aws_s3
      type: s3
      endpoint: s3.amazonaws.com
      bucket: my-company-qpoint-data # Your S3 bucket name
      region: us-west-2
      access_key:
        type: env
        value: AWS_ACCESS_KEY_ID
      secret_key:
        type: env
        value: AWS_SECRET_ACCESS_KEY

  # 3. Payloads are also sent to YOUR internal Qscan for classification
  qscan:
    type: client
    url: https://qscan.internal.example.com # Your internal Qscan endpoint
    token:
      type: env
      value: QSCAN_TOKEN

stacks:
  # ... plugin configurations ...
  sensitive_data_stack:
    plugins:
      # This plugin enables Qscan for this stack
      - type: qscan
        config:
          # ... qscan configuration ...
      # ... other plugins

tap:
  # ... tap configurations ...
  filters:
    # ...
    endpoints:
      - domain: api.thirdparty.com
        http:
          # This stack sends data to Qscan for deep analysis
          stack: sensitive_data_stack

As shown, the configuration explicitly directs the agent on how to handle different data types, giving you granular control over your data.
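The same schema supports any S3-compatible store. For a fully on-prem deployment, the object_stores block might instead point at a self-hosted MinIO instance; the endpoint, bucket, and environment variable names below are assumptions:

```yaml
# Sketch of an object_stores entry for self-hosted MinIO (names are assumptions)
object_stores:
  - id: minio_local
    type: s3
    endpoint: minio.internal.example.com:9000  # your MinIO endpoint
    bucket: qpoint-objects
    region: us-east-1                          # MinIO accepts a nominal region
    access_key:
      type: env
      value: MINIO_ACCESS_KEY
    secret_key:
      type: env
      value: MINIO_SECRET_KEY
```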

Typical AWS Installation & Usage Workflow

  1. Host Your Services in AWS:

    • Create an S3 bucket in your AWS account to serve as your Object Store.

    • Deploy the Qscan Docker container within your VPC.

  2. Install the Agent: Deploy the Qtap agent onto your EC2 instances or EKS cluster. Ensure the environment variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and QSCAN_TOKEN are available to the agent process.

  3. Configure Qtap: Edit your configuration on app.qpoint.io OR create the qtap.yaml file as shown in the example above, pointing the agent to your S3 bucket and internal Qscan endpoint.

  4. Visualize in Qplane: Log in to the Qplane at app.qpoint.io to see the anonymized event data, manage your agent configurations, and create alerting rules.

  5. Access Payloads Securely: When you need to inspect the full payload of a request from the Qplane UI, your browser will be given a URL to retrieve it directly from your S3 bucket. This maintains the security boundary, as Qpoint's servers never access the data.
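On EKS, one way to satisfy step 2 is to expose the credentials to the agent pods from a Kubernetes Secret. The Secret name and key names in this pod-spec fragment are assumptions:

```yaml
# Fragment of a Qtap DaemonSet container spec (illustrative)
env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: qtap-credentials    # hypothetical Secret name
        key: aws-access-key-id
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: qtap-credentials
        key: aws-secret-access-key
  - name: QSCAN_TOKEN
    valueFrom:
      secretKeyRef:
        name: qtap-credentials
        key: qscan-token
```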

Typical GCP Installation & Usage Workflow

  1. Host Your Services in GCP:

    • Create a Google Cloud Storage (GCS) bucket in your GCP project to serve as your Object Store.

    • Deploy the Qscan Docker container within your VPC.

  2. Install the Agent: Deploy the Qtap agent onto your Compute Engine VMs or GKE cluster. Ensure the environment variables for GCS_ACCESS_KEY, GCS_SECRET_KEY, and QSCAN_TOKEN are available to the agent process (e.g., via metadata, secrets, or environment configuration).

  3. Configure Qtap: Edit your configuration on app.qpoint.io OR create the qtap.yaml file using the Google Cloud Storage configuration, pointing the agent to your GCS bucket and internal Qscan endpoint.

  4. Visualize in Qplane: Log in to the Qplane at app.qpoint.io to see the anonymized event data, manage your agent configurations, and create alerting rules.

  5. Access Payloads Securely: When you need to inspect the full payload of a request from the Qplane UI, your browser will be given a URL to retrieve it directly from your GCS bucket. This maintains the security boundary, as Qpoint's servers never access the data.
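For step 3, the GCS variant of the object store configuration can use the bucket's S3-compatible interoperability endpoint with HMAC keys. A sketch mirroring the AWS example above (the bucket name and region are assumptions):

```yaml
# Sketch of an object_stores entry for Google Cloud Storage
object_stores:
  - id: gcs
    type: s3                          # GCS via its S3-compatible XML API
    endpoint: storage.googleapis.com
    bucket: my-company-qpoint-data    # your GCS bucket name
    region: us-central1
    access_key:
      type: env
      value: GCS_ACCESS_KEY           # HMAC access key, as referenced in step 2
    secret_key:
      type: env
      value: GCS_SECRET_KEY
```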
