# Kubernetes Manifest

This guide covers how to deploy Qtap, an eBPF-based traffic monitoring agent, in Kubernetes environments.

### Prerequisites

* Kubernetes cluster on Linux hosts with supported kernel (5.10+)
* `kubectl`
* `Helm`

### Generating Base Kubernetes Manifest

To use the Helm chart as the base for a plain Kubernetes manifest, render it with `helm template`:

```bash
helm repo add qpoint https://helm.qpoint.io/
helm template qtap qpoint/qtap > qtap-base.yaml
```

This command generates a base manifest file named `qtap-base.yaml`.

### Customizing Qtap's Configuration

If you wish to supply your own Qtap configuration file, first write it to a file named `qtap-config.yaml`. Refer to the [configuration](https://docs.qpoint.io/getting-started/qtap/configuration "mention") page for further information on the configuration file structure.
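As an illustrative sketch (not a complete reference), a minimal `qtap-config.yaml` using the version-2 schema from the full example later in this guide might look like:

```yaml
version: 2
stacks:
  default_stack:
    plugins:
      # print a summary of each captured transaction to stdout
      - type: debug
        config:
          mode: summary
tap:
  direction: egress       # capture outbound traffic only
  ignore_loopback: true
  http:
    stack: default_stack
```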

Then instruct Helm to use it when rendering the manifest:

```bash
helm template qtap qpoint/qtap \
  --set-file config=qtap-config.yaml \
  > qtap-base.yaml
```

To see additional Helm chart configuration options:

```bash
helm show values qpoint/qtap
```

### Deploying Qtap

Create a namespace and apply the modified manifest to your cluster:

```bash
kubectl create ns qpoint
kubectl apply -f qtap-base.yaml -n qpoint
```

### Verifying the Deployment

To verify that Qtap is running:

```bash
kubectl get pods -n qpoint
```

You should see pods named `qtap-xxxx` in the `Running` state.

To check the deployment logs:

```bash
kubectl logs -n qpoint qtap-xxxx
```
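The example manifest later in this guide sets `LOG_ENCODING=json`, so each log line is a JSON object. As a hedged sketch, a small filter for pulling error-level entries out of captured logs might look like the following (the `level` and `msg` field names are assumptions about the log schema, not a documented contract):

```python
import json

def filter_errors(log_lines):
    """Return parsed log entries whose level indicates a failure.

    Assumes each line is a JSON object with a "level" field, as produced
    when LOG_ENCODING=json; non-JSON lines (e.g. startup banners) are skipped.
    """
    errors = []
    for line in log_lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip anything that is not a JSON object
        if entry.get("level") in ("error", "fatal"):
            errors.append(entry)
    return errors

# Example input, as might come from `kubectl logs -n qpoint qtap-xxxx`
sample = [
    '{"level": "info", "msg": "started"}',
    '{"level": "error", "msg": "failed to open socket"}',
    'not json',
]
print(filter_errors(sample))
```

Piping real logs through a filter like this makes it easier to spot failures among high-volume info-level output.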

### Uninstalling Qtap

To uninstall Qtap:

```bash
kubectl delete -f qtap-base.yaml -n qpoint
```

### Understanding the Base Manifest

The base manifest creates several Kubernetes resources:

1. **ServiceAccount**: Provides an identity for the Qtap pods
2. **ConfigMap**: Stores the Qtap configuration
3. **DaemonSet**: Ensures Qtap runs on every node in the cluster

#### Key Components in the DaemonSet

The DaemonSet specification includes several important settings:

* **Host Access**: Uses `hostPID: true` and `hostNetwork: true` to access the host's process namespace and network
* **Security Context**: Requires privileged access and specific capabilities (`CAP_BPF`, `CAP_SYS_ADMIN`) for eBPF operations
* **Volume Mounts**:
  * `/sys`: Access to the host's system directories (required for eBPF)
  * `/run/containerd/containerd.sock`: Access to the container runtime socket, used for rich container attribution (optional)
  * Configuration file
* **Probes**: Health checks to ensure the Qtap pod is running correctly
* **Resource Limits**: Controls how much CPU and memory Qtap can use
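The probes above hit the status endpoint exposed via `STATUS_LISTEN` (`0.0.0.0:10001` in the example manifest below). As an illustrative sketch for an external script, the same readiness check could be polled like this; the URL is an assumption derived from that setting, not a published API:

```python
import time
import urllib.error
import urllib.request

def wait_for_ready(url, timeout=30.0, interval=1.0):
    """Poll a readiness URL until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # endpoint not up yet; keep retrying
        time.sleep(interval)
    return False

# e.g. wait_for_ready("http://<node-ip>:10001/readyz")
```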

### Example

Here's a complete deployment example with an external object store:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: qtap
  labels:
    app.kubernetes.io/name: qtap
    app.kubernetes.io/instance: qtap
    app.kubernetes.io/version: "v0.17.1"
automountServiceAccountToken: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: qtap-config
  labels:
    app.kubernetes.io/name: qtap
    app.kubernetes.io/instance: qtap
    app.kubernetes.io/version: "v0.17.1"
data:
  tap-config.yaml: |
    version: 2
    services:
      event_stores:
        - id: console_stdout
          type: stdout
      object_stores:
        - id: minio
          type: s3
          endpoint: minio.storage.svc.cluster.local:9000
          bucket: qpoint
          region: us-east-1
          access_url: http://minio.storage.svc.cluster.local:9000/{{BUCKET}}/{{DIGEST}}
          insecure: true
          access_key:
            type: env
            value: S3_ACCESS_KEY
          secret_key:
            type: env
            value: S3_SECRET_KEY
    stacks:
      default_stack:
        plugins:
          - type: debug
            config:
              mode: summary
          - type: detect_errors
            config:
              rules:
                - name: "All Errors"
                  trigger_status_codes:
                    - '4xx'
                    - '5xx'
                  only_categories:
                    - app
                  report_as_issue: true
                  record_req_headers: true
                  record_req_body: true
                  record_res_headers: true
                  record_res_body: true
    tap:
      direction: egress
      ignore_loopback: true
      audit_include_dns: true
      http:
        stack: default_stack
      filters:
        groups:
          - eks
          - kubernetes
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: qtap
  labels:
    app.kubernetes.io/name: qtap
    app.kubernetes.io/instance: qtap
    app.kubernetes.io/version: "v0.17.1"
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: qtap
      app.kubernetes.io/instance: qtap
  template:
    metadata:
      labels:
        app.kubernetes.io/name: qtap
        app.kubernetes.io/instance: qtap
        app.kubernetes.io/version: "v0.17.1"
    spec:
      hostPID: true
      hostNetwork: true
      serviceAccountName: qtap
      securityContext: {}
      containers:
        - name: qtap
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              add:
              - CAP_BPF
              - CAP_SYS_ADMIN
            privileged: true
            readOnlyRootFilesystem: false
            runAsGroup: 0
            runAsNonRoot: false
            runAsUser: 0
          image: "us-docker.pkg.dev/qpoint-edge/public/qtap:v0"
          imagePullPolicy: IfNotPresent
          args: []
          env:
            - name: QPOINT_CONFIG
              value: "/app/tap-config.yaml"
            - name: S3_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: minio-credentials
                  key: access-key
            - name: S3_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: minio-credentials
                  key: secret-key
            - name: STATUS_LISTEN
              value: "0.0.0.0:10001"
            - name: LOG_LEVEL
              value: "info"
            - name: LOG_ENCODING
              value: "json"
            - name: TINI_SUBREAPER
              value: "1"
          ports:
            - name: status
              containerPort: 10001
              protocol: TCP
          startupProbe:
            httpGet:
              path: /readyz
              port: status
            initialDelaySeconds: 3
            periodSeconds: 5
            timeoutSeconds: 2
            successThreshold: 1
            failureThreshold: 20
          readinessProbe:
            httpGet:
              path: /readyz
              port: status
            initialDelaySeconds: 3
            periodSeconds: 5
            timeoutSeconds: 2
            successThreshold: 1
            failureThreshold: 1
          livenessProbe:
            httpGet:
              path: /healthz
              port: status
            initialDelaySeconds: 3
            periodSeconds: 10
            timeoutSeconds: 2
            successThreshold: 1
            failureThreshold: 3
          resources:
            limits:
              cpu: 1000m
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 128Mi
          volumeMounts:
            - name: config-volume
              mountPath: /app/tap-config.yaml
              subPath: tap-config.yaml
            - mountPath: /sys
              name: sys
              readOnly: true
            - name: containerd-socket
              mountPath: /run/containerd/containerd.sock
      volumes:
        - name: config-volume
          configMap:
            name: qtap-config
            items:
              - key: tap-config.yaml
                path: tap-config.yaml
        - hostPath:
            path: /sys
            type: Directory
          name: sys
        - name: containerd-socket
          hostPath:
            path: /run/containerd/containerd.sock
            type: Socket
```
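The example above reads S3 credentials from a Secret named `minio-credentials` (see the `secretKeyRef` entries). A matching Secret could be sketched as follows, with placeholder values you would replace with your own:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: minio-credentials
type: Opaque
stringData:
  access-key: REPLACE_WITH_ACCESS_KEY
  secret-key: REPLACE_WITH_SECRET_KEY
```

Apply it to the same namespace as the DaemonSet, e.g. `kubectl apply -f secret.yaml -n qpoint`.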

### Important Notes

* **Privileged Access**: The Qtap pod requires privileged access for eBPF operations. Ensure your cluster's security policies allow this.
* **Token Security**: For cloud-connected mode, keep your registration token secure and do not share it in public repositories.
* **Configuration Format**: For local mode, ensure your configuration is correctly formatted and contains all necessary settings.
* **Host Access**: The deployment mounts the host's `/sys` directory and container runtime socket. Ensure this is allowed in your cluster.
* **Resource Management**: You may need to adjust resource requests and limits based on your cluster's capacity and Qtap's workload.
* **DaemonSet Deployment**: This deployment uses a DaemonSet to ensure Qtap runs on every node in your cluster. Adjust if this is not your intended behavior.

### Common Customizations

The generated manifest provides a starting point. You may need to customize various aspects such as:

* **Resource limits and requests**: Adjust based on your workload and node capacity
* **Node selectors or tolerations**: Target specific nodes or allow scheduling despite taints
* **Environment variables**: Add additional configuration or credentials
* **Volume mounts**: Access additional host resources if needed
* **Security contexts**: Adjust permissions based on your security requirements

Always review and test your modifications in a non-production environment before deploying to production.
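For example, restricting the DaemonSet to Linux nodes and allowing it onto tainted nodes could look like the following fragment, merged under `spec.template.spec` (the taint key here is a placeholder for whatever taints your cluster uses):

```yaml
nodeSelector:
  kubernetes.io/os: linux
tolerations:
  - key: example.com/dedicated    # placeholder taint key
    operator: Exists
    effect: NoSchedule
```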

### Troubleshooting

If you encounter issues with your Qtap deployment, check the following:

1. **Pod Status**: Check if the pods are running

   ```bash
   kubectl get pods -n qpoint
   ```
2. **Pod Logs**: Examine the logs for error messages

   ```bash
   kubectl logs -n qpoint ds/qtap
   # or, for a specific pod
   kubectl logs -n qpoint qtap-xxxx
   ```
3. **Configuration:** Verify your configuration is correctly formatted

   ```bash
   kubectl describe configmap -n qpoint qtap-config
   ```
4. **Permissions**: Ensure the pod has the necessary permissions

   ```bash
   kubectl describe pod -n qpoint qtap-xxxx
   ```
5. **Kernel Support**: Verify your nodes are running a supported kernel version (5.10+)

   ```bash
   kubectl debug node/node-name -it --image=ubuntu
   # then, inside the debug shell:
   uname -r
   ```
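To script this check across many nodes, `kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.kernelVersion}'` lists kernel versions in bulk. A small helper for comparing those strings against the 5.10 minimum (an illustrative sketch, not part of Qtap):

```python
def kernel_supported(release, minimum=(5, 10)):
    """Return True if a `uname -r` style string meets the minimum version.

    Handles suffixes like "5.10.0-21-amd64" or "6.1.0-aws" by reading
    only the leading numeric components.
    """
    base = release.split("-", 1)[0]
    parts = []
    for piece in base.split("."):
        if not piece.isdigit():
            break
        parts.append(int(piece))
    while len(parts) < len(minimum):
        parts.append(0)
    return tuple(parts[: len(minimum)]) >= minimum

print(kernel_supported("5.10.0-21-amd64"))  # True
print(kernel_supported("4.19.0"))           # False
```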
