Architecture & Data Flow
Qpoint provides exceptional flexibility in how and where your data flows. This architecture gives you full control over your data sovereignty while delivering powerful visibility into your service connections.
Qpoint handles two distinct types of data, each with different sensitivity levels and deployment options:
Objects contain the actual content of your service interactions, including request/response headers and bodies. This data often contains sensitive information such as:
Authentication tokens and credentials
Personally identifiable information (PII)
Business logic and proprietary data
Configuration details and secrets
Key principle: Object data always remains under your complete control, regardless of deployment model.
Events consist of anonymized information about service connections, capturing essential metadata while carefully excluding sensitive content:
Connection details (IP addresses, domains)
Timing information (timestamps, durations)
Performance metrics (bandwidth usage, latency)
Basic request/response metadata (status codes, paths)
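The object/event split above can be sketched in code. This is an illustrative example only, not Qpoint's actual data model; the field names are assumptions chosen to mirror the categories listed in this section:

```python
from dataclasses import dataclass

# Hypothetical capture record; the fields mirror the metadata categories
# above but are NOT Qpoint's real schema.
@dataclass
class Capture:
    src_ip: str
    domain: str
    path: str
    status: int
    duration_ms: float
    headers: dict      # may contain tokens, cookies, API keys
    body: bytes        # may contain PII or proprietary data

def split(capture: Capture) -> tuple[dict, dict]:
    """Separate a capture into an anonymized event and a sensitive object."""
    event = {  # metadata only: safe to send to event storage
        "src_ip": capture.src_ip,
        "domain": capture.domain,
        "path": capture.path,
        "status": capture.status,
        "duration_ms": capture.duration_ms,
    }
    obj = {  # full content: stays in your own object storage
        "headers": capture.headers,
        "body": capture.body,
    }
    return event, obj
```

The key property is that the event dictionary never carries headers or bodies; only the object does, and objects never leave your environment.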
Qpoint's architecture consists of four primary components that work together to provide complete visibility:
The Qtap agent is the foundation of Qpoint's visibility capabilities, deployed directly on your servers to observe traffic at its source.
Purpose: Provides deep visibility into connection data before encryption occurs, with complete process context.
Key Technologies:
eBPF (Extended Berkeley Packet Filter) provides:
Kernel-level visibility into network events
Efficient execution with minimal overhead
Transparent operation without code modifications
Safe execution through kernel verification
Deployment Options:
Linux binary (x86_64, arm64)
Docker container
Kubernetes DaemonSet
How Qtap Works:
Monitors network sockets and process creation
Taps into TLS/SSL libraries to see pre-encryption data
Collects connection metadata and payload content
Associates traffic with specific processes and containers
Analyzes traffic locally based on configured rules
Sends metadata to event storage and payloads to object storage
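The "analyzes traffic locally based on configured rules" step can be illustrated with a minimal sketch. The rule shape and capture levels here are assumptions for illustration; Qtap's real rule engine and configuration schema differ:

```python
from dataclasses import dataclass

# Illustrative only: not Qtap's actual rule format.
@dataclass
class Rule:
    process: str   # process name the traffic is attributed to, e.g. "curl"
    capture: str   # "full" (objects + events) or "summary" (events only)

def capture_level(process_name: str, rules: list, default: str = "summary") -> str:
    """Return the capture level for traffic attributed to a process.

    Because Qtap associates each connection with its originating process,
    rules can be evaluated locally, per process, before anything is stored.
    """
    for rule in rules:
        if rule.process == process_name:
            return rule.capture
    return default
```

The point of the sketch is that the decision happens on the node where the traffic originates, so payloads that a rule excludes are never transmitted anywhere.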
The optional control plane provides centralized management for your Qpoint deployment.
Purpose: Enables configuration, monitoring, and management of your Qtap agents.
Key Components:
Web-based UI for visualization and configuration
REST API for programmatic control
Configuration management system
Team and access management
Deployment orchestration
Security Characteristics:
Handles only configuration data, not sensitive payload content
Supports SSO (Single Sign-On)
Role-based access control
Audit logging for configuration changes (coming soon)
Important: The control plane serves as a management interface only and does not store or process sensitive data.
The event storage system handles anonymized connection metadata.
Purpose: Stores and processes connection metadata for analytics and visualization.
Components:
Pulse Service: Gateway for metrics data that provides:
Authentication and authorization
Data sanitization and processing
Query API for data retrieval
Data flow management into time-series storage
ClickHouse Database: Analytical database optimized for:
High-performance queries on time-series data
Efficient storage of connection metadata
Real-time dashboard updates
Complex analytical queries
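To make "complex analytical queries on time-series data" concrete, here is a hedged sketch that builds a ClickHouse query for p95 latency per destination domain. The table and column names are assumptions, not the actual Pulse/ClickHouse schema:

```python
# Illustrative only: "events", "domain", "duration_ms", and "timestamp"
# are assumed names, not Qpoint's real schema.
def latency_by_domain(table: str = "events", hours: int = 24) -> str:
    """Build a ClickHouse query for p95 latency per destination domain."""
    return (
        f"SELECT domain, quantile(0.95)(duration_ms) AS p95_ms "
        f"FROM {table} "
        f"WHERE timestamp > now() - INTERVAL {hours} HOUR "
        f"GROUP BY domain ORDER BY p95_ms DESC"
    )
```

Queries like this are why an analytical column store is paired with Pulse: percentile aggregations over large volumes of connection metadata are cheap in ClickHouse.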
Deployment Options:
Managed: Use Qpoint's hosted Pulse service (default)
Self-hosted: Run your own Pulse service and Clickhouse database (coming soon)
Object storage maintains the detailed content of connections, including headers and payload data.
Purpose: Securely stores actual request and response content within your environment.
Key Characteristics:
S3-compatible storage interface
Operates entirely within your environment
Direct browser-to-storage access pattern
Configurable retention policies
Support for various storage providers
Supported Providers:
AWS S3
Google Cloud Storage
MinIO
Any S3-compatible storage service
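Because every supported provider speaks the S3 API, a single client with a configurable endpoint covers all of them. The key layout below is an illustrative assumption, not Qpoint's actual storage layout; `boto3` is a standard AWS SDK whose `endpoint_url` parameter is how MinIO and other S3-compatible services are targeted:

```python
from datetime import datetime, timezone

def object_key(conn_id: str, ts: datetime) -> str:
    """Derive a time-partitioned key for a captured payload (layout is illustrative)."""
    return f"captures/{ts:%Y/%m/%d}/{conn_id}.json"

def upload(bucket: str, conn_id: str, payload: bytes, endpoint: str) -> None:
    """Upload a captured payload to any S3-compatible store (AWS S3, GCS interop, MinIO)."""
    import boto3  # imported lazily; assumes credentials are configured in the environment
    s3 = boto3.client("s3", endpoint_url=endpoint)  # endpoint_url targets MinIO etc.
    key = object_key(conn_id, datetime.now(timezone.utc))
    s3.put_object(Bucket=bucket, Key=key, Body=payload)
```

Pointing `endpoint` at your own MinIO deployment versus AWS S3 is the only change needed to move between providers, which is what keeps object storage entirely within your environment.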
Qpoint supports multiple deployment architectures to match your specific requirements for data sovereignty, operational needs, and security posture.
This popular configuration balances operational simplicity with data sovereignty:
Qtap Agent: One deployed per server/node
Control Plane: Qpoint's managed control plane for configuration
Event Storage: Qpoint's managed event storage (Pulse service)
Object Storage: Your self-hosted or cloud-provider object storage
Data Flow:
Qtap captures connection data on your servers
All processing happens locally where connections originate
Objects (sensitive data) stay in your environment in your object storage
Only anonymized event metadata is sent to Qpoint's Pulse service
Configuration management happens through Qpoint's control plane
Benefits:
Sensitive payload data never leaves your environment
Simplified management through centralized control plane
Rich dashboards and analytics without managing the analytics infrastructure
Clear separation between management plane and data plane
In this configuration, all components are deployed within your environment:
Qtap Agent: One deployed per server/node
Pulse Service: Self-hosted event ingestion service that sits in front of ClickHouse
Event Storage: Self-hosted time-series database (e.g., ClickHouse)
Object Storage: Self-hosted S3-compatible storage (e.g., MinIO)
Data Flow:
Qtap captures connection data on your servers
All processing happens locally where connections originate
Objects are stored in your self-hosted object storage
Events are sent to your self-hosted Pulse service
Pulse processes and forwards events to your ClickHouse database
Dashboards and analytics run against your local data stores
Benefits:
Complete data sovereignty with no data leaving your environment
Suitable for air-gapped or high-security environments
Full control over retention policies and data lifecycle
All sensitive data remains behind your firewall
For users who prefer complete independence from external services:
Qtap Agent: One deployed per server/node
Event Output: Local stdout
Object Output: Local stdout, local object storage
Data Flow:
Qtap captures connection data on your servers
All processing happens locally where connections originate
Events and objects are output locally according to configuration
No data is transmitted to external services
Benefits:
Complete independence from external services
Suitable for testing, development, or highly restricted environments
All data remains entirely local
No external dependencies or connections required
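When events are written to local stdout, they can be consumed by any log pipeline. Assuming a newline-delimited JSON format (an assumption for illustration; check Qtap's documentation for the actual output format), parsing is straightforward:

```python
import json

def parse_event_lines(stream: str) -> list:
    """Parse newline-delimited JSON events from local stdout output.

    The NDJSON format is an assumed example, not a documented contract.
    """
    events = []
    for line in stream.splitlines():
        line = line.strip()
        if not line:  # skip blank lines between records
            continue
        events.append(json.loads(line))
    return events
```

A local collector like this keeps the standalone model fully self-contained: events never cross a network boundary unless you forward them yourself.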
This configuration is ideal for proof-of-concept deployments and non-sensitive environments:
Qtap Agent: One deployed per server/node
Control Plane: Qpoint's managed control plane for configuration
Event Storage: Qpoint's managed Pulse service and ClickHouse
Object Storage: Qpoint-managed S3-compatible storage
Data Flow:
Qtap captures connection data on your servers
All rules processing happens locally where connections originate
Objects are sent to Qpoint's managed object storage
Events are sent to Qpoint's managed Pulse service and ClickHouse
All management happens through Qpoint's control plane
Benefits:
Fastest path to getting up and running
Zero infrastructure to manage
Full functionality with minimal setup
Ideal for testing, evaluation, and non-sensitive environments
Simple transition to more secure models when ready for production
You have flexibility in where your object data is stored, with all options requiring S3-compatible interfaces:
Simple output to console for debugging and development
Useful for local testing and initial setup
MinIO: Self-hosted S3-compatible object storage
AWS S3: Amazon Simple Storage Service
Google Cloud Storage: GCS buckets
Any S3-compatible storage service: Must support the S3 API
For event data (anonymized metadata), you have two primary options:
Simple output to console for debugging and development
Useful for local testing and initial setup
The Pulse service works with ClickHouse in a paired configuration:
Qpoint-Managed
Pulse Service: Qpoint's managed event ingestion service
ClickHouse Database: Qpoint-managed ClickHouse for analytics
Self-Hosted
Self-hosted Pulse + ClickHouse: Deploy both components in your environment
Both components must be deployed together for proper functionality