Auditing Kubernetes with Wazuh

Kubernetes is an open source platform that automates the deployment, scaling, and management of containerized applications. By distributing workloads across multiple nodes in a cluster, it provides high availability and operational efficiency. Because Kubernetes offers a high degree of control over the applications and services running in its clusters, Kubernetes environments are attractive targets for cyberattacks.

In this blog post, we demonstrate how to audit Kubernetes events with Wazuh by forwarding Kubernetes audit logs to the Wazuh server. This is accomplished by monitoring activity on the Kubernetes API server, which handles all operations on cluster resources. Forwarding the audit logs to the Wazuh server for analysis gives security teams visibility into cluster activity and enables monitoring of actions performed within the environment. 

Infrastructure

We use the following infrastructure to show how to audit Kubernetes with Wazuh:

  • A pre-built, ready-to-use Wazuh OVA 4.14.3, which includes the Wazuh central components (Wazuh server, Wazuh indexer, and Wazuh dashboard). Follow this guide to download and set up the Wazuh virtual machine.
  • An AlmaLinux 9 endpoint to run a local Kubernetes cluster using Minikube (minimum 2 CPU cores, 4 GB RAM, and 20 GB disk space).

Configuration

We perform the following steps to audit Kubernetes using Wazuh:

  • Install Minikube and all necessary dependencies on the AlmaLinux 9 endpoint. 
  • Create a webhook listener on the Wazuh server to receive logs from the Kubernetes cluster.
  • Enable auditing on the Kubernetes cluster and configure it to forward audit logs to the Wazuh webhook listener.
  • Create rules on the Wazuh server to alert about audit events received from Kubernetes.

Deploying Minikube

Perform the steps below to install Minikube on the AlmaLinux endpoint.

  1. Create a bash script minikubesetup.sh with the following content. The script installs Minikube with the none driver and configures all required dependencies, including Docker, cri-dockerd, and the CNI plugins. The none driver runs the Kubernetes components directly on the host machine.
#!/bin/bash
set -e

# Disable SELinux
setenforce 0 || true
sed -i --follow-symlinks 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux || true
sed -i --follow-symlinks 's/^SELINUX=permissive/SELINUX=disabled/g' /etc/sysconfig/selinux || true

# Disable swap (required for kubelet)
swapoff -a || true
sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab || true

# Base prerequisites (none driver needs these)
yum install -y yum-utils conntrack socat iptables ebtables ethtool curl wget tar containernetworking-plugins

# Install Docker
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin --allowerasing
systemctl enable --now docker

# Install Kubectl
curl -fsSLo /usr/bin/kubectl https://dl.k8s.io/release/v1.26.0/bin/linux/amd64/kubectl
chmod +x /usr/bin/kubectl

# Install Minikube
curl -fsSLo /usr/bin/minikube https://github.com/kubernetes/minikube/releases/download/v1.28.0/minikube-linux-amd64
chmod +x /usr/bin/minikube

# Install crictl
VERSION="v1.25.0"
curl -fsSLo /tmp/crictl.tgz https://github.com/kubernetes-sigs/cri-tools/releases/download/${VERSION}/crictl-${VERSION}-linux-amd64.tar.gz
tar -xzf /tmp/crictl.tgz -C /usr/bin/
rm -f /tmp/crictl.tgz

# Install cri-dockerd (RPM)
curl -fsSLo /tmp/cri-dockerd.rpm https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6-3.el8.x86_64.rpm
rpm -Uvh /tmp/cri-dockerd.rpm
rm -f /tmp/cri-dockerd.rpm

# Ensure cri-dockerd is running (required for docker runtime on k8s v1.24+)
systemctl daemon-reload || true
systemctl enable --now cri-docker.socket || true
systemctl enable --now cri-docker.service || true

# Ensure directories Minikube expects for CNI
mkdir -p /etc/cni/net.d
mkdir -p /opt/cni/bin

# Copy CNI plugin binaries to /opt/cni/bin (CentOS installs them under /usr/libexec/cni)
if [ -d /usr/libexec/cni ]; then
  cp -a /usr/libexec/cni/* /opt/cni/bin/
elif [ -d /usr/lib/cni ]; then
  cp -a /usr/lib/cni/* /opt/cni/bin/
fi

# Kernel networking settings 
modprobe br_netfilter || true
cat >/etc/sysctl.d/99-k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF
sysctl --system >/dev/null

# Start clean to avoid loading an existing profile.
minikube delete --all --purge || true

# Start Minikube 
minikube start --driver=none --cni=bridge
  2. Execute the script with root privileges to set up Minikube:
# bash minikubesetup.sh
  3. Run the command below to verify that the Kubernetes node is ready:
# kubectl get nodes
NAME                  STATUS   ROLES           AGE     VERSION
10.0.2.15   Ready    control-plane   5m57s   v1.25.3

Configuring a webhook listener

We perform the following steps to configure a webhook listener on the Wazuh server to receive the Kubernetes audit logs.

  • Generate TLS certificates: We create server and client certificates to secure communication between the Kubernetes cluster and the Wazuh webhook listener.
  • Configure and manage the webhook listener: We set up a webhook listener that listens on port 8080 and forwards incoming audit logs to the Wazuh analysis engine. We also create a systemd service to manage the listener and enable it to start automatically on system reboot.

Generate TLS certificates

Follow the steps below on the Wazuh server to generate the TLS certificates.

  1. Create a directory kubernetes-webhook in the /var/ossec/integrations/ directory to contain the certificates:
# mkdir /var/ossec/integrations/kubernetes-webhook/
  2. Create a certificate configuration file csr.conf in the /var/ossec/integrations/kubernetes-webhook/ directory. Add the following content, replacing <WAZUH_SERVER_IP> with your Wazuh server IP address:
[ req ]
prompt = no
default_bits = 2048
default_md = sha256
distinguished_name = req_distinguished_name
x509_extensions = v3_req
[req_distinguished_name]
C = US
ST = California
L = San Jose
O = Wazuh
OU = Research and development
emailAddress = info@wazuh.com
CN = <WAZUH_SERVER_IP>
[ v3_req ]
authorityKeyIdentifier=keyid,issuer
basicConstraints = CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = <WAZUH_SERVER_IP>
  3. Create the root CA certificate and private key:
# openssl req -x509 -new -nodes -newkey rsa:2048 -keyout /var/ossec/integrations/kubernetes-webhook/rootCA.key -out /var/ossec/integrations/kubernetes-webhook/rootCA.pem -batch -subj "/C=US/ST=California/L=San Jose/O=Wazuh"
  4. Create the certificate signing request (CSR) and the server private key:
# openssl req -new -nodes -newkey rsa:2048 -keyout /var/ossec/integrations/kubernetes-webhook/server.key -out /var/ossec/integrations/kubernetes-webhook/server.csr -config /var/ossec/integrations/kubernetes-webhook/csr.conf
  5. Generate the server certificate:
# openssl x509 -req -in /var/ossec/integrations/kubernetes-webhook/server.csr -CA /var/ossec/integrations/kubernetes-webhook/rootCA.pem -CAkey /var/ossec/integrations/kubernetes-webhook/rootCA.key -CAcreateserial -out /var/ossec/integrations/kubernetes-webhook/server.crt -extfile /var/ossec/integrations/kubernetes-webhook/csr.conf -extensions v3_req

Create the webhook listener

Perform the steps below on the Wazuh server to create the webhook listener.

  1. Install the Python Flask module with pip. This module is used to create the webhook listener and to receive JSON POST requests:
# /var/ossec/framework/python/bin/pip3 install flask
  2. Create the Python webhook listener /var/ossec/integrations/kubernetes-webhook.py. Replace <WAZUH_SERVER_IP> with your Wazuh server IP address:
#!/var/ossec/framework/python/bin/python3

import json
from socket import socket, AF_UNIX, SOCK_DGRAM
from flask import Flask, request

# CONFIG
PORT     = 8080
CERT     = '/var/ossec/integrations/kubernetes-webhook/server.crt'
CERT_KEY = '/var/ossec/integrations/kubernetes-webhook/server.key'

# Analysisd socket address
socket_addr = '/var/ossec/queue/sockets/queue'

def send_event(msg):
    try:
        string = '1:k8s:{0}'.format(json.dumps(msg))
        sock = socket(AF_UNIX, SOCK_DGRAM)
        sock.connect(socket_addr)
        sock.send(string.encode())
        sock.close()
        return True
    except OSError:
        return False

app = Flask(__name__)
context = (CERT, CERT_KEY)

@app.route('/', methods=['POST'])
def webhook():
    if request.method == 'POST':
        if send_event(request.json):
            print("Request sent to Wazuh")
        else:
            print("Failed to send request to Wazuh")
    return "Webhook received!"

if __name__ == '__main__':
    app.run(host='<WAZUH_SERVER_IP>', port=PORT, ssl_context=context)
  3. Create a systemd service wazuh-webhook.service in the /lib/systemd/system/ directory and add the following content:
[Unit]
Description=Wazuh webhook
Wants=network-online.target
After=network.target network-online.target

[Service]
ExecStart=/var/ossec/framework/python/bin/python3 /var/ossec/integrations/kubernetes-webhook.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
  4. Reload the systemd daemon, then enable and start the webhook service:
# systemctl daemon-reload
# systemctl enable wazuh-webhook.service
# systemctl start wazuh-webhook.service
  5. Check the status of the webhook service to verify that it is running:
# systemctl status wazuh-webhook.service
● wazuh-webhook.service - Wazuh webhook
   Loaded: loaded (/usr/lib/systemd/system/wazuh-webhook.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2026-02-20 15:14:28 UTC; 5min ago
 Main PID: 2956 (python3)
   CGroup: /system.slice/wazuh-webhook.service
           └─2956 /var/ossec/framework/python/bin/python3 /var/ossec/integrations/kubernetes-webhook.py
  6. (Optional) If the firewall on the Wazuh server is running, allow access to port 8080:
# firewall-cmd --permanent --add-port=8080/tcp
# firewall-cmd --reload
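
The listener frames each event for the Wazuh analysis engine as 1:k8s:<json> before writing it to the analysisd socket: queue ID 1, the k8s location tag that the custom rules match on, then the JSON payload. The sketch below isolates that framing in a standalone helper so the format can be inspected without a running Wazuh server; frame_event is an illustrative name, not part of the listener above.

```python
import json

def frame_event(msg, location="k8s"):
    # '1' selects the syslog-like analysisd queue; the location tag
    # becomes the <location> value that rule 110002 matches on.
    return "1:{0}:{1}".format(location, json.dumps(msg))

event = {"kind": "EventList", "apiVersion": "audit.k8s.io/v1", "items": []}
framed = frame_event(event)
print(framed)  # 1:k8s:{"kind": "EventList", "apiVersion": "audit.k8s.io/v1", "items": []}
```

In the real listener this string is sent as a single datagram over the Unix socket /var/ossec/queue/sockets/queue, which is why no length prefix or delimiter is needed.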

Configuring Kubernetes audit logging

To enable Kubernetes audit logging, we create an audit policy file that defines which events the cluster records and the level of detail captured for each event type. We also create a webhook configuration file that specifies the webhook address where the audit events will be sent. 

We apply the newly created audit policy and the webhook configuration to the cluster by modifying the Kubernetes API server configuration file. The Kubernetes API server exposes the Kubernetes API and processes all cluster requests. We log all user requests to the Kubernetes API by adding the audit policy and webhook configuration to the API server. 

Follow the steps below to configure Kubernetes audit logging on the AlmaLinux endpoint.

  1. Create a policy file /etc/kubernetes/audit-policy.yaml to log the events:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
    # Don’t log requests to the following API endpoints
    - level: None
      nonResourceURLs:
          - '/healthz*'
          - '/logs'
          - '/metrics'
          - '/swagger*'
          - '/version'

    # Limit requests containing tokens to Metadata level so the token is not included in the log
    - level: Metadata
      omitStages:
          - RequestReceived
      resources:
          - group: authentication.k8s.io
            resources:
                - tokenreviews

    # Extended audit of auth delegation
    - level: RequestResponse
      omitStages:
          - RequestReceived
      resources:
          - group: authorization.k8s.io
            resources:
                - subjectaccessreviews

    # Log changes to pods at RequestResponse level
    - level: RequestResponse
      omitStages:
          - RequestReceived
      resources:
          # core API group; add third-party API services and your API services if needed
          - group: ''
            resources: ['pods']
            verbs: ['create', 'patch', 'update', 'delete']

    # Log everything else at Metadata level
    - level: Metadata
      omitStages:
          - RequestReceived
  2. Create a webhook configuration file /etc/kubernetes/audit-webhook.yaml. Replace <WAZUH_SERVER_IP> with the IP address of your Wazuh server:
apiVersion: v1
kind: Config
preferences: {}
clusters:
  - name: wazuh-webhook
    cluster:
      insecure-skip-tls-verify: true
      server: https://<WAZUH_SERVER_IP>:8080 

# kubeconfig files require a context. Provide one for the API server.
current-context: webhook
contexts:
- context:
    cluster: wazuh-webhook
    user: kube-apiserver # Replace with name of API server if it’s different
  name: webhook
  3. Edit the Kubernetes API server configuration file /etc/kubernetes/manifests/kube-apiserver.yaml and add the following lines under the relevant sections:
...
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-webhook-config-file=/etc/kubernetes/audit-webhook.yaml
    - --audit-webhook-batch-max-size=1

...

    volumeMounts:
    - mountPath: /etc/kubernetes/audit-policy.yaml
      name: audit
      readOnly: true
    - mountPath: /etc/kubernetes/audit-webhook.yaml
      name: audit-webhook
      readOnly: true

...

  volumes:
  - hostPath:
      path: /etc/kubernetes/audit-policy.yaml
      type: File
    name: audit
  - hostPath:
      path: /etc/kubernetes/audit-webhook.yaml
      type: File
    name: audit-webhook
  4. Restart the kubelet to apply the changes:
# systemctl restart kubelet
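
With --audit-webhook-batch-max-size=1, the API server POSTs one EventList object per audit event to the webhook. The snippet below sketches the minimal shape of such a batch, based on the sample alert shown later in this post; the field values are illustrative placeholders, not output captured from a cluster.

```python
import json

# Minimal shape of one webhook batch as the API server sends it
# (--audit-webhook-batch-max-size=1 means one event per EventList).
# Field values here are illustrative placeholders.
batch = {
    "kind": "EventList",
    "apiVersion": "audit.k8s.io/v1",
    "metadata": {},
    "items": [
        {
            "level": "RequestResponse",  # from the pods rule in audit-policy.yaml
            "stage": "ResponseComplete",
            "verb": "create",
            "requestURI": "/api/v1/namespaces/default/pods",
            "user": {"username": "minikube-user"},
        }
    ],
}

# The request body the webhook listener receives as request.json
body = json.dumps(batch)
print(json.loads(body)["items"][0]["verb"])  # create
```

Because each batch carries exactly one item, every POST to the listener maps to exactly one event passed to the Wazuh analysis engine, which keeps rule matching straightforward.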

Creating detection rules on the Wazuh server

We create custom rules to detect Kubernetes audit events received via the webhook listener. Perform the following steps on the Wazuh dashboard to create custom rules on the Wazuh server.

  1. Navigate to Server management > Rules.
  2. Click + Add new rules file.
  3. Copy and paste the rules below and name the file k8s_audit_rules.xml, then click Save.
<group name="k8s_audit,">
  <rule id="110002" level="0">
    <location>k8s</location>
    <field name="apiVersion">audit</field>
    <description>Kubernetes audit log.</description>
  </rule>

  <rule id="110003" level="5">
    <if_sid>110002</if_sid>
    <regex type="pcre2">requestURI\":.+", \"verb\": \"create</regex>
    <description>Kubernetes request to create resource</description>
  </rule>

  <rule id="110004" level="5">
    <if_sid>110002</if_sid>
    <regex type="pcre2">requestURI\":.+", \"verb\": \"delete</regex>
    <description>Kubernetes request to delete resource</description>
  </rule>

  <rule id="110005" level="5">
    <if_sid>110002</if_sid>
    <regex type="pcre2">requestURI\":.+", \"verb\": \"patch</regex>
    <description>Kubernetes request to patch resource</description>
  </rule>

  <rule id="110006" level="5">
    <if_sid>110002</if_sid>
    <regex type="pcre2">requestURI\":.+", \"verb\": \"update</regex>
    <description>Kubernetes request to update resource</description>
  </rule>
</group>

Where:

  • Rule ID 110002 is a base rule that matches all Kubernetes audit events.
  • Rule ID 110003 triggers on Kubernetes “create” events.
  • Rule ID 110004 triggers on Kubernetes “delete” events.
  • Rule ID 110005 triggers on Kubernetes “patch” events.
  • Rule ID 110006 triggers on Kubernetes “update” events.

Note

Alerting on Kubernetes “update” and “patch” events can generate a large volume of alerts.

  4. Click Reload to apply the changes.
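
The PCRE2 patterns in rules 110003 through 110006 match the verb field in the raw JSON event. Python's re module happens to accept these particular patterns unchanged, so they can be sanity-checked offline before reloading the ruleset; the sample event below is shortened and illustrative.

```python
import re

# PCRE2 bodies of rules 110003-110006, as they appear in
# k8s_audit_rules.xml (Python's re accepts the same syntax here).
patterns = {
    "110003": r'requestURI\":.+", \"verb\": \"create',
    "110004": r'requestURI\":.+", \"verb\": \"delete',
    "110005": r'requestURI\":.+", \"verb\": \"patch',
    "110006": r'requestURI\":.+", \"verb\": \"update',
}

# A shortened audit event in the raw JSON form analysisd receives.
sample = '{"requestURI": "/apis/apps/v1/namespaces/default/deployments/hello-minikube", "verb": "delete"}'

matched = [rid for rid, pat in patterns.items() if re.search(pat, sample)]
print(matched)  # ['110004']
```

Only the delete rule fires on this sample, which mirrors how analysisd selects exactly one of the four child rules per event.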

Test the configuration

We test the rules by creating, patching, and deleting a deployment on the Kubernetes cluster. We also create and update a ConfigMap to generate an “update” audit event.

  1. Run the following command on the Kubernetes master node to create a new deployment:
# kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
  2. Patch the deployment to add a label:
# kubectl patch deployment hello-minikube -p '{"spec":{"template":{"metadata":{"labels":{"patched":"true"}}}}}'
  3. Run the following commands to create and update a ConfigMap:
# kubectl create configmap audit-test --from-literal=key=value
# kubectl create configmap audit-test --from-literal=key=value2 -o yaml --dry-run=client | kubectl apply -f -
  4. Run the following command to delete the deployment:
# kubectl delete deployment hello-minikube

Alerts like the one below are generated on the Wazuh dashboard when resources are created, patched, updated, or deleted in the monitored Kubernetes cluster.

A sample JSON alert for a delete audit event (Rule ID 110004) is shown below:

{
  "_source": {
    "data": {
      "apiVersion": "audit.k8s.io/v1",
      "kind": "EventList",
      "items": [
        {
          "auditID": "5c8ffe41-760f-4d66-b37e-9693e93e8235",
          "requestReceivedTimestamp": "2026-02-23T10:16:53.926428Z",
          "objectRef": {
            "apiGroup": "apps",
            "apiVersion": "v1",
            "resource": "deployments",
            "namespace": "default",
            "name": "hello-minikube"
          },
          "level": "Metadata",
          "verb": "delete",
          "annotations": {
            "authorization.k8s.io/decision": "allow",
            "authorization.k8s.io/reason": ""
          },
          "userAgent": "kubectl/v1.26.0 (linux/amd64) kubernetes/b46a3f8",
          "requestURI": "/apis/apps/v1/namespaces/default/deployments/hello-minikube",
          "responseStatus": {
            "metadata": {},
            "code": 200,
            "details": {
              "uid": "0084770d-b20b-4024-b76a-dcfeda4a221d",
              "kind": "deployments",
              "name": "hello-minikube",
              "group": "apps"
            },
            "status": "Success"
          },
          "stageTimestamp": "2026-02-23T10:16:53.932392Z",
          "sourceIPs": [
            "10.0.2.15"
          ],
          "stage": "ResponseComplete",
          "user": {
            "groups": [
              "system:masters",
              "system:authenticated"
            ],
            "username": "minikube-user"
          }
        }
      ]
    },
    "rule": {
      "firedtimes": 2,
      "mail": false,
      "level": 5,
      "description": "Kubernetes request to delete resource",
      "groups": [
        "k8s_audit"
      ],
      "id": "110004"
    },
    "decoder": {
      "name": "json"
    },
    "full_log": "{\"kind\": \"EventList\", \"apiVersion\": \"audit.k8s.io/v1\", \"metadata\": {}, \"items\": [{\"level\": \"Metadata\", \"auditID\": \"5c8ffe41-760f-4d66-b37e-9693e93e8235\", \"stage\": \"ResponseComplete\", \"requestURI\": \"/apis/apps/v1/namespaces/default/deployments/hello-minikube\", \"verb\": \"delete\", \"user\": {\"username\": \"minikube-user\", \"groups\": [\"system:masters\", \"system:authenticated\"]}, \"sourceIPs\": [\"10.0.2.15\"], \"userAgent\": \"kubectl/v1.26.0 (linux/amd64) kubernetes/b46a3f8\", \"objectRef\": {\"resource\": \"deployments\", \"namespace\": \"default\", \"name\": \"hello-minikube\", \"apiGroup\": \"apps\", \"apiVersion\": \"v1\"}, \"responseStatus\": {\"metadata\": {}, \"status\": \"Success\", \"details\": {\"name\": \"hello-minikube\", \"group\": \"apps\", \"kind\": \"deployments\", \"uid\": \"0084770d-b20b-4024-b76a-dcfeda4a221d\"}, \"code\": 200}, \"requestReceivedTimestamp\": \"2026-02-23T10:16:53.926428Z\", \"stageTimestamp\": \"2026-02-23T10:16:53.932392Z\", \"annotations\": {\"authorization.k8s.io/decision\": \"allow\", \"authorization.k8s.io/reason\": \"\"}}]}",
    "input": {
      "type": "log"
    },
    "@timestamp": "2026-02-23T10:16:53.905Z",
    "location": "k8s",
    "id": "1771841813.3304232",
    "timestamp": "2026-02-23T10:16:53.905+0000"
  },
  "fields": {
    "timestamp": [
      "2026-02-23T10:16:53.905Z"
    ],
    "@timestamp": [
      "2026-02-23T10:16:53.905Z"
    ]
  }
}
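
The full_log field of an alert carries the original EventList serialized as a string, so downstream tooling can recover every audit field even when the decoded data object is abbreviated. Below is a sketch of that round trip; the alert dictionary is an abbreviated, illustrative document rather than real dashboard output.

```python
import json

# An abbreviated alert document (same shape as the sample above).
alert = {
    "rule": {"id": "110004",
             "description": "Kubernetes request to delete resource"},
    "full_log": json.dumps({
        "kind": "EventList",
        "apiVersion": "audit.k8s.io/v1",
        "items": [{"verb": "delete",
                   "objectRef": {"resource": "deployments",
                                 "name": "hello-minikube"},
                   "user": {"username": "minikube-user"}}],
    }),
}

# full_log is the original event serialized as a string: parse it back.
event = json.loads(alert["full_log"])["items"][0]
print(event["verb"], event["objectRef"]["name"])  # delete hello-minikube
```

This is useful when forwarding alerts to another system, since the parsed event preserves fields that a dashboard view may truncate.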

Conclusion

Kubernetes audit logging provides visibility into control plane activity by recording all interactions with the Kubernetes API server. Monitoring these events helps to detect unauthorized access, privilege escalation, and configuration changes that may impact cluster security. By forwarding Kubernetes audit logs to Wazuh, organizations gain centralized visibility and enhanced detection capabilities across their containerized environments.

Discover more about Wazuh by exploring our other blog posts and becoming part of our growing community.
