Kubernetes Manifest

This guide covers how to deploy Qtap, an eBPF-based traffic monitoring agent, in Kubernetes environments.

Prerequisites

  • Kubernetes cluster on Linux hosts with supported kernel (5.10+)

  • kubectl

  • Helm

Generating Base Kubernetes Manifest

If you'd like to use the Helm chart as a base for building a Kubernetes manifest, run the following commands:

helm repo add qpoint https://helm.qpoint.io/
helm template qtap qpoint/qtap > qtap-base.yaml

This command generates a base manifest file named qtap-base.yaml.

Customizing Qtap's Configuration

If you wish to supply your own Qtap configuration file, first write it to a file named qtap-config.yaml. Refer to the Configuration page for details on the configuration file structure.

Then, instruct helm to use it like so:

helm template qtap qpoint/qtap \
  --set-file config=qtap-config.yaml \
  > qtap-base.yaml

To see additional helm chart configuration options:
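One way to list them is to print the chart's default values with helm show values:

```shell
# Print every configurable value the qtap chart accepts
helm show values qpoint/qtap
```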

Deploying Qtap

Create a namespace and apply the modified manifest to your cluster:
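For example (the namespace name qpoint is an assumption; substitute your own):

```shell
# Create a dedicated namespace for Qtap ("qpoint" is an assumed name)
kubectl create namespace qpoint

# Apply the generated base manifest into that namespace
kubectl apply -n qpoint -f qtap-base.yaml
```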

Verifying the Deployment

To verify that Qtap is running:
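Assuming the namespace used above (qpoint here, an assumption):

```shell
# List the Qtap DaemonSet pods; expect one pod per node
kubectl get pods -n qpoint
```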

You should see pods named qtap-xxxx in the Running state.

To check the deployment logs:
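A sketch using a label selector (the app.kubernetes.io/name label is an assumption based on common Helm chart conventions; check your rendered manifest for the actual labels):

```shell
# Tail recent logs from all Qtap pods matching the assumed chart label
kubectl logs -n qpoint -l app.kubernetes.io/name=qtap --tail=100
```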

Uninstalling Qtap

To uninstall Qtap:
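Assuming the same manifest file and namespace as in the deployment step:

```shell
# Delete the resources created from the manifest, then the namespace itself
kubectl delete -n qpoint -f qtap-base.yaml
kubectl delete namespace qpoint
```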

Understanding the Base Manifest

The base manifest creates several Kubernetes resources:

  1. ServiceAccount: Provides an identity for the Qtap pods

  2. ConfigMap: Stores the Qtap configuration

  3. DaemonSet: Ensures Qtap runs on every node in the cluster

Key Components in the DaemonSet

The DaemonSet specification includes several important settings:

  • Host Access: Uses hostPID: true and hostNetwork: true to access the host's process namespace and network

  • Security Context: Requires privileged access and specific capabilities (CAP_BPF, CAP_SYS_ADMIN) for eBPF operations

  • Volume Mounts:

    • /sys: Access to the host's system directories (required for eBPF)

    • /run/containerd/containerd.sock: Access to the container runtime socket (for rich container attribution). Optional.

    • Configuration file

  • Probes: Health checks to ensure the Qtap pod is running correctly

  • Resource Limits: Controls how much CPU and memory Qtap can use
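As an illustration only (not the exact generated manifest; names and mount details are assumptions based on the list above), the security-relevant portion of the DaemonSet pod spec looks roughly like this:

```yaml
# Illustrative excerpt of the DaemonSet pod spec (field values assumed)
spec:
  hostPID: true          # access the host's process namespace
  hostNetwork: true      # access the host's network
  containers:
    - name: qtap
      securityContext:
        privileged: true
        capabilities:
          add: ["BPF", "SYS_ADMIN"]   # required for eBPF operations
      volumeMounts:
        - name: sys
          mountPath: /sys             # required for eBPF
          readOnly: true
        - name: containerd-sock       # optional: container attribution
          mountPath: /run/containerd/containerd.sock
  volumes:
    - name: sys
      hostPath:
        path: /sys
    - name: containerd-sock
      hostPath:
        path: /run/containerd/containerd.sock
```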

Example

Here's a complete deployment example with an external object store:

Important Notes

  • Privileged Access: The Qtap pod requires privileged access for eBPF operations. Ensure your cluster's security policies allow this.

  • Token Security: For cloud-connected mode, keep your registration token secure and do not share it in public repositories.

  • Configuration Format: For local mode, ensure your configuration is correctly formatted and contains all necessary settings.

  • Host Access: The deployment mounts the host's /sys directory and container runtime socket. Ensure this is allowed in your cluster.

  • Resource Management: You may need to adjust resource requests and limits based on your cluster's capacity and Qtap's workload.

  • DaemonSet Deployment: This deployment uses a DaemonSet to ensure Qtap runs on every node in your cluster. Adjust if this is not your intended behavior.

Common Customizations

The generated manifest provides a starting point. You may need to customize various aspects such as:

  • Resource limits and requests: Adjust based on your workload and node capacity

  • Node selectors or tolerations: Target specific nodes or allow scheduling despite taints

  • Environment variables: Add additional configuration or credentials

  • Volume mounts: Access additional host resources if needed

  • Security contexts: Adjust permissions based on your security requirements

Always review and test your modifications in a non-production environment before deploying to production.

Troubleshooting

If you encounter issues with your Qtap deployment, check the following:

  1. Pod Status: Check if the pods are running

  2. Pod Logs: Examine the logs for error messages

  3. Configuration: Verify your configuration is correctly formatted

  4. Permissions: Ensure the pod has the necessary permissions

  5. Kernel Support: Verify your nodes are running a supported kernel version (5.10+)
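The pod status, log, and kernel checks can be run with kubectl (the namespace and label selector are assumptions matching the examples above):

```shell
kubectl get pods -n qpoint                               # 1. pod status
kubectl logs -n qpoint -l app.kubernetes.io/name=qtap    # 2. pod logs
kubectl get nodes -o wide                                # 5. see the KERNEL-VERSION column
```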
