Installing Sophos Linux Sensor on Kubernetes
Overview
The Sophos Linux Sensor (SLS) is a lightweight agent installed on Linux hosts. It collects events from those hosts to trigger alerts or automated responses, and it integrates with your existing logging and alerting infrastructure. You can deploy SLS wherever you run Linux: in a public or private cloud, in containers or VMs, on on-premises bare metal, and across different kernel versions and Linux distributions.
Requirements
- We recommend having a good understanding of Kubernetes, Docker, and command-line tools such as kubectl before following this guide.
- kubectl v1.18 or higher. See Kubernetes Install Tools.
- You must have a Sophos package repository API key. See How to generate the Sophos Linux Sensor package repository API token.
Note
You must run the sensor in the root PID namespace. In most environments, kubelet and the node's container runtime run in the root PID namespace, so specifying hostPID: true is sufficient. In some non-production environments, for example minikube and kind, kubelet may run in a container and not in the root PID namespace.
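If you're not sure whether kubelet runs in the root PID namespace on a node, a quick check like the following can help. This is a generic sketch run directly on the node (not in a pod); it assumes pgrep is available and isn't a Sophos-provided tool.
# Compare the PID namespace of PID 1 (the root PID namespace) with kubelet's.
# If the two inode values match, kubelet runs in the root PID namespace.
sudo readlink /proc/1/ns/pid
sudo readlink /proc/$(pgrep -o kubelet)/ns/pid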
Installation
These instructions are an example meant to provide an outline for deploying the sensor container image via Kubernetes. You can use the guidance here to customize your deployment using your tools of choice.
To install SLS on Kubernetes, do as follows:
- Verify that kubectl is configured to point to your target installation cluster by running the following command:
kubectl config current-context
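If the current context doesn't point to the cluster you want to install to, you can list your contexts and switch. These are standard kubectl commands; the context name below is a placeholder.
kubectl config get-contexts
kubectl config use-context <YOUR_CLUSTER_CONTEXT>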
- Set an environment variable in the terminal you plan to use. Run the following command, replacing <TOKEN> with your Sophos package repository API key:
export SLS_TOKEN=$'<TOKEN>'
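Optionally, confirm that the variable is set without printing the token itself. This is only a convenience check.
[ -n "$SLS_TOKEN" ] && echo "SLS_TOKEN is set" || echo "SLS_TOKEN is empty"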
- Create a new Kubernetes Secret. SLS uses this secret to authenticate your kubelet so that it can pull from our private container registry.
kubectl create secret docker-registry sophos-registry-secret --docker-server=registry.sophosupd.com --docker-username=$SLS_TOKEN --docker-password=$SLS_TOKEN
Run the following command to see your new secret:
kubectl get secrets
Note
Access is granted specifically for our manifest, which references the Kubernetes docker-registry secret sophos-registry-secret. An optional check of the secret type is shown after this step.
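As an optional check, you can print the type of the new secret. For a secret created with kubectl create secret docker-registry, the type should be kubernetes.io/dockerconfigjson.
kubectl get secret sophos-registry-secret -o jsonpath='{.type}'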
- Pull the available tags for the SLS and default content images, replacing {LINUX_REPO_API_KEY} with your Sophos package repository API key:
curl -u {LINUX_REPO_API_KEY}:{LINUX_REPO_API_KEY} -X GET https://registry.sophosupd.com/v2/release/sophos-linux-sensor/tags/list | jq '.tags'
curl -u {LINUX_REPO_API_KEY}:{LINUX_REPO_API_KEY} -X GET https://registry.sophosupd.com/v2/release/sophos-linux-content/tags/list | jq '.tags'
Tip
Note the most recent version tags for both sophos-linux-sensor and sophos-linux-content. You will need these tags for most of the following steps. If you're not sure, you can find the latest SLS version in the Release notes. See Release notes. An optional way to keep the tags handy for later steps is shown after this tip.
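Optionally, export the tags you noted as environment variables so you can substitute them into the manifest later. The values below are placeholders; use the tags returned by the curl commands above.
export SENSOR_TAG=<SENSOR_TAG>
export CONTENT_TAG=<CONTENT_TAG>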
- Save the contents of the following manifest provided by Sophos as kubernetes-manifest.yaml. The location doesn't matter. Make sure you replace <CONTENT_TAG> and <SENSOR_TAG> in the file with the latest Content and Sensor version tags from the previous step.
# This sensor does not communicate with a Console.
# Its alerts are only configured to go to stdout, and its configuration comes from Sophos content mounted into /var/lib/sophos/content
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sophos-linux-sensor
  labels:
    app: sophos-linux-sensor
spec:
  selector:
    matchLabels:
      app: sophos-linux-sensor
  template:
    metadata:
      labels:
        app: sophos-linux-sensor
    spec:
      hostPID: true
      securityContext:
        runAsUser: 0
      imagePullSecrets:
        - name: sophos-registry-secret
      initContainers:
        - name: sophos-linux-content
          image: "registry.sophosupd.com/release/sophos-linux-content:<CONTENT_TAG>"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /var/lib/sophos/content
              name: sophos-linux-content
      containers:
        - name: sophos-linux-sensor
          image: "registry.sophosupd.com/release/sophos-linux-sensor:<SENSOR_TAG>"
          imagePullPolicy: IfNotPresent
          env:
            - name: SENSOR_MONITOR_PORT
              value: "9010"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 9010
            initialDelaySeconds: 5
          readinessProbe:
            httpGet:
              path: /healthz
              port: 9010
            initialDelaySeconds: 5
          resources:
            limits:
              memory: 2Gi
            requests:
              memory: 1Gi
          securityContext:
            privileged: true
            capabilities:
              add:
                - SYS_ADMIN
                - SETUID
                - SETGID
                - SETPCAP
                - SYS_PTRACE
                - KILL
                - DAC_OVERRIDE
                - IPC_LOCK
                - FOWNER
                - CHOWN
                - SYSLOG
                - NET_RAW
                - SYS_RESOURCE
          volumeMounts:
            - name: sensor-linux-sensor-config
              mountPath: /etc/sophos
            - name: sophos-linux-content
              mountPath: /var/lib/sophos/content
            - name: cgroup
              mountPath: /var/run/sophos/mnt/sys/fs/cgroup
              readOnly: true
            - name: debugfs
              mountPath: /var/run/sophos/mnt/sys/kernel/debug
            - name: hostname
              mountPath: /var/run/sophos/mnt/hostname
              readOnly: true
            - name: proc
              mountPath: /var/run/sophos/mnt/proc
              readOnly: true
            - name: var-lib-docker
              mountPath: /var/lib/docker
              readOnly: true
            - name: var-run-docker
              mountPath: /var/run/docker
              readOnly: true
      volumes:
        - name: sensor-linux-sensor-config
          configMap:
            name: sensor-linux-sensor-config
        - name: sophos-linux-content
          emptyDir: {}
        - name: cgroup
          hostPath:
            path: /sys/fs/cgroup
        - name: debugfs
          hostPath:
            path: /sys/kernel/debug
        - name: hostname
          hostPath:
            path: /etc/hostname
        - name: proc
          hostPath:
            path: /proc
        - name: var-lib-docker
          hostPath:
            path: /var/lib/docker
        - name: var-run-docker
          hostPath:
            path: /var/run/docker
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: sensor-linux-sensor-config
data:
  runtimedetections-rules.yaml: |
    # Blank, no custom rules. sophos-linux-content will still be used.
    # This file must be present when policy_input is unavailable
  runtimedetections.yaml: |
    cloud_meta: auto
    alert_output:
      outputs:
        - type: stdout
          enabled: true
          template: 'Alert Triggered: {{ .StrategyName}}'
The kubernetes-manifest.yaml file contains a ConfigMap and a DaemonSet for SLS. The ConfigMap is mounted into /etc/sophos and contains the runtimedetections.yaml and runtimedetections-rules.yaml files. The DaemonSet creates one Sensor pod per node. An optional way to fill in the version tag placeholders is shown after this step.
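This sketch assumes you exported SENSOR_TAG and CONTENT_TAG as in the earlier sketch; it uses GNU sed, and editing the file by hand works just as well.
sed -i "s|<SENSOR_TAG>|${SENSOR_TAG}|; s|<CONTENT_TAG>|${CONTENT_TAG}|" kubernetes-manifest.yaml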
- Apply the manifest file, replacing <filepath> with the path to the directory that contains the manifest file.
kubectl apply -f <filepath>/kubernetes-manifest.yaml
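Optionally, you can watch the DaemonSet roll out before checking individual pods. These are standard kubectl commands and aren't specific to SLS.
kubectl get daemonset sophos-linux-sensor
kubectl rollout status daemonset/sophos-linux-sensor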
- Wait for the pods to come online, then run the following command:
kubectl get pods -l app=sophos-linux-sensor
This command returns all pods that have the label app=sophos-linux-sensor. You will need the pod names to confirm Sensor functionality.
Tip
If you don't see SLS pods starting up, check your cluster's pod security policy and, if necessary, grant exceptions for the capabilities required by SLS. For a full list of these capabilities, please see the DaemonSet or contact Sophos support.
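If the pods stay in a Pending or error state, describing them usually shows the reason in the Events section, for example an image pull or admission failure. This is general Kubernetes troubleshooting and isn't specific to SLS.
kubectl describe pods -l app=sophos-linux-sensor
kubectl get events --sort-by=.metadata.creationTimestamp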
Confirm Sensor functionality
You must replace <SENSOR_POD_NAME> in the following commands with the name of a Sensor pod from the kubectl get pods -l app=sophos-linux-sensor command.
The logs for the Sensor pods list all the configured policies:
kubectl logs <SENSOR_POD_NAME>
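To watch the startup output as the policies load, you can follow the logs instead. The -f and --tail flags are standard kubectl options.
kubectl logs -f --tail=100 <SENSOR_POD_NAME>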
To generate a quick test alert, exec into one of the Sensor pods:
kubectl exec -it <SENSOR_POD_NAME> -- /bin/sh
Next, create an interactive shell:
/bin/sh -i
Starting an interactive shell in the container (except with sshd or screen) violates the interactiveShell policy, which triggers an alert. By default, SLS prints alerts to standard output. You can see the alert by viewing the logs of the pod that generated it. Run the following command on the host:
kubectl logs <SENSOR_POD_NAME>
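Because the example ConfigMap sets the stdout template to 'Alert Triggered: {{ .StrategyName}}', you can filter the logs for that prefix. Adjust the pattern if you changed the template.
kubectl logs <SENSOR_POD_NAME> | grep "Alert Triggered"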
Alerts can also be sent to webhooks, written directly to cloud blob storage buckets, written to local files with log rotation, or sent to syslog. See Exporting Alerts.