Deploying the Wazuh agent in a Kubernetes cluster
October 14, 2025
The Wazuh agent is a component of the Wazuh SIEM and XDR solution that protects monitored endpoints such as servers, laptops, and virtual machines. Deploying Wazuh agents on containerized endpoints orchestrated by Kubernetes requires a resilient deployment strategy. In containerized environments where workloads are ephemeral and dynamic, maintaining a persistent identity and configuration for the Wazuh agent is key to effective monitoring. A persistent identity allows the Wazuh manager to match new Wazuh agent container instances to their previous records, ensuring accurate event tracking and continuous visibility.
This blog explores two effective strategies for deploying the Wazuh agent within a Kubernetes cluster to enable reliable and continuous security monitoring in dynamic containerized environments. The Sidecar and Included deployment strategies leverage Kubernetes StatefulSets and persistent volumes to ensure that Wazuh agents retain their registration keys and maintain consistent hostnames even when the pods are recreated or rescheduled.
When integrating the Wazuh agent into Kubernetes, one of the key architectural decisions is how to combine the agent with the application you want to monitor. This choice impacts visibility, modularity, and compliance with containerization best practices.
We run a lightweight K3s Kubernetes cluster and Longhorn on the Ubuntu endpoint. Docker is used to build a container image that bundles the Wazuh agent and OWASP Juice Shop application. We use Longhorn to provision persistent volumes in Kubernetes, allowing each Wazuh agent to retain its configuration and identity across pod restarts and rescheduling. Even when a pod is deleted or moved to a different node, Kubernetes preserves its name and volume, enabling the Wazuh manager to recognize it as the same agent.
Follow the commands below to set up a K3s Kubernetes cluster, Docker, and Longhorn.
Install curl, wget, jq, open-iscsi, nfs-common, lvm2, and cryptsetup-bin, then enable iscsid:
$ sudo apt update
$ sudo apt install -y curl wget apt-transport-https ca-certificates gnupg lsb-release conntrack gpg open-iscsi nfs-common jq util-linux lvm2 cryptsetup-bin
$ sudo systemctl enable --now iscsid
Install K3s:
$ curl -sfL https://get.k3s.io | sh -
Install Docker:
$ sudo apt-get install -y docker.io
Configure kubectl and docker for your non-root user:
$ mkdir -p ~/.kube
$ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
$ sudo chown $(id -u):$(id -g) ~/.kube/config
$ export KUBECONFIG="$HOME/.kube/config"
$ echo 'export KUBECONFIG="$HOME/.kube/config"' >> ~/.bashrc
$ sudo usermod -aG docker $USER
$ source ~/.bashrc
Verify that the K3s node is in the Ready state:
$ kubectl get nodes
Install Longhorn:
$ kubectl apply -n longhorn-system -f https://raw.githubusercontent.com/longhorn/longhorn/v1.9.1/deploy/longhorn.yaml
Verify that the Longhorn pods are running:
$ kubectl get pods -n longhorn-system
NAME READY STATUS RESTARTS AGE
csi-attacher-6cc66dfc7-22j2z 1/1 Running 0 21m
csi-attacher-6cc66dfc7-6rqlv 1/1 Running 0 21m
csi-attacher-6cc66dfc7-jc45n 1/1 Running 0 21m
csi-provisioner-bf9f5dcf-99tjv 1/1 Running 0 21m
csi-provisioner-bf9f5dcf-hc8rs 1/1 Running 0 21m
csi-provisioner-bf9f5dcf-zqpbp 1/1 Running 0 21m
csi-resizer-79f94cf664-f2dkf 1/1 Running 0 21m
csi-resizer-79f94cf664-ltzf8 1/1 Running 0 21m
csi-resizer-79f94cf664-xtzm9 1/1 Running 0 21m
csi-snapshotter-55f6bf5866-7jmcb 1/1 Running 0 21m
csi-snapshotter-55f6bf5866-d4rw6 1/1 Running 0 21m
csi-snapshotter-55f6bf5866-z8l9x 1/1 Running 0 21m
engine-image-ei-b4bcf0a5-jmb2s 1/1 Running 0 22m
instance-manager-e214ef0a8ae4b6552b4209748014d38b 1/1 Running 0 22m
longhorn-csi-plugin-hpt5h 3/3 Running 1 (16m ago) 21m
longhorn-driver-deployer-6d5c74866f-tkt7x 1/1 Running 0 46m
longhorn-manager-dpktk 2/2 Running 8 (27m ago) 46m
longhorn-ui-6cb46c8ff9-6m78w 1/1 Running 0 61m
longhorn-ui-6cb46c8ff9-mzj96 1/1 Running 0 61m
Note
Depending on system resources, it may take a few minutes for all the pods to reach the Running state.
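Optionally, confirm that Longhorn registered its StorageClass, since the StatefulSet manifests later in this post request storageClassName: longhorn. On K3s, you will typically see it listed alongside the built-in local-path class:
$ kubectl get storageclass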
Expose the Longhorn UI by port-forwarding the longhorn-frontend service:
$ nohup kubectl -n longhorn-system port-forward --address 0.0.0.0 svc/longhorn-frontend 8080:80 > /tmp/longhorn-portforward.log 2>&1 &
Access the Longhorn UI at http://<UBUNTU_ENDPOINT_IP>:8080. Replace <UBUNTU_ENDPOINT_IP> with the IP address of your Ubuntu endpoint.
In this deployment, the Wazuh agent and application run in separate containers within the same pod. We deploy the Wazuh agent alongside the OWASP Juice Shop application inside Kubernetes. This setup enables the agent to monitor the application’s activity through a shared volume, while keeping both workloads isolated for easier maintenance.
Follow the steps below to deploy the Wazuh agent and the OWASP Juice Shop application.
Create the wazuh-agent-nodejs-website namespace:
$ kubectl create namespace wazuh-agent-nodejs-website
Create an empty file named juice-shop-statefulset-side-container.yaml:
$ touch juice-shop-statefulset-side-container.yaml
Add the following configuration to juice-shop-statefulset-side-container.yaml
:apiVersion: v1 kind: Namespace metadata: name: wazuh-agent-nodejs-website --- apiVersion: apps/v1 kind: StatefulSet metadata: name: nodejs-website-wazuh-agent namespace: wazuh-agent-nodejs-website spec: serviceName: juice-shop replicas: 1 selector: matchLabels: app: nodejs-website-wazuh-agent template: metadata: labels: app: nodejs-website-wazuh-agent spec: terminationGracePeriodSeconds: 20 securityContext: fsGroup: 999 fsGroupChangePolicy: OnRootMismatch initContainers: # 1) Wipe stale PID/lock files each boot - name: cleanup-ossec-stale image: busybox:1.36 imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 command: ["/bin/sh", "-lc"] args: - | set -e mkdir -p /agent/var/run /agent/queue/ossec rm -f /agent/var/run/*.pid /agent/queue/ossec/*.lock || true volumeMounts: - name: wazuh-agent-data mountPath: /agent # Seed full /var/ossec to PVC if empty - name: seed-ossec-tree image: wazuh/wazuh-agent:4.13.0 imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 command: ["/bin/sh", "-lc"] args: - | set -euo pipefail mkdir -p /agent if [ ! -d /agent/bin ] && [ ! -f /agent/etc/ossec.conf ]; then echo "[init] Seeding /var/ossec into PVC..." tar -C /var/ossec -cf - . | tar -C /agent -xpf - else echo "[init] PVC already has ossec runtime; skipping seed." fi volumeMounts: - name: wazuh-agent-data mountPath: /agent # 2) Enforce ownership/perms ONLY on data dirs (keep bin/lib root-owned) - name: fix-ossec-perms image: busybox:1.36 imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 command: ["/bin/sh","-lc"] args: - | set -e for d in etc logs queue var rids tmp "active-response"; do [ -d "/agent/$d" ] && chown -R 999:999 "/agent/$d" done [ -d /agent/bin ] && chown -R 0:0 /agent/bin || true [ -d /agent/lib ] && chown -R 0:0 /agent/lib || true [ -d /agent/bin ] && find /agent/bin -type f -exec chmod 0755 {} \; || true chmod 0755 /agent || true volumeMounts: - name: wazuh-agent-data mountPath: /agent # 5) Write ossec.conf (passwordless enrollment) with safe perms - name: write-ossec-config image: busybox:1.36 imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 env: - name: WAZUH_MANAGER value: "<WAZUH_MANAGER_IP_ADDRESS_OR_HOSTNAME>" - name: WAZUH_PORT value: "1514" - name: WAZUH_PROTOCOL value: "tcp" - name: WAZUH_REGISTRATION_SERVER value: "<WAZUH_MANAGER_IP_ADDRESS_OR_HOSTNAME>" - name: WAZUH_REGISTRATION_PORT value: "1515" - name: WAZUH_AGENT_NAME valueFrom: fieldRef: fieldPath: metadata.name command: ["/bin/sh", "-lc"] args: - | set -euo pipefail umask 007 mkdir -p /agent/etc /agent/var/run /agent/var /agent/logs /agent/queue cat > /agent/etc/ossec.conf <<'EOF' <ossec_config> <client> <server> <address>${WAZUH_MANAGER}</address> <port>${WAZUH_PORT}</port> <protocol>${WAZUH_PROTOCOL}</protocol> </server> <enrollment> <enabled>yes</enabled> <agent_name>${WAZUH_AGENT_NAME}</agent_name> <manager_address>${WAZUH_REGISTRATION_SERVER}</manager_address> <port>${WAZUH_REGISTRATION_PORT}</port> </enrollment> </client> </ossec_config> EOF sed -i \ -e "s|\${WAZUH_MANAGER}|${WAZUH_MANAGER}|g" \ -e "s|\${WAZUH_PORT}|${WAZUH_PORT}|g" \ -e "s|\${WAZUH_PROTOCOL}|${WAZUH_PROTOCOL}|g" \ -e "s|\${WAZUH_REGISTRATION_SERVER}|${WAZUH_REGISTRATION_SERVER}|g" \ -e "s|\${WAZUH_REGISTRATION_PORT}|${WAZUH_REGISTRATION_PORT}|g" \ -e "s|\${WAZUH_AGENT_NAME}|${WAZUH_AGENT_NAME}|g" \ /agent/etc/ossec.conf chown 999:999 /agent/etc/ossec.conf chmod 0640 /agent/etc/ossec.conf volumeMounts: - name: wazuh-agent-data mountPath: /agent containers: - name: wazuh-agent image: wazuh/wazuh-agent:4.13.0 
imagePullPolicy: IfNotPresent lifecycle: preStop: exec: command: ["/bin/sh", "-lc", "/var/ossec/bin/ossec-control stop || true; sleep 2"] command: ["/bin/sh", "-lc"] args: - | set -e ln -sf /var/ossec/etc/ossec.conf /etc/ossec.conf || true test -r /var/ossec/etc/ossec.conf exec /init env: - name: WAZUH_MANAGER value: "<WAZUH_MANAGER_IP_ADDRESS_OR_HOSTNAME>" - name: WAZUH_PORT value: "1514" - name: WAZUH_PROTOCOL value: "tcp" - name: WAZUH_REGISTRATION_SERVER value: "<WAZUH_MANAGER_IP_ADDRESS_OR_HOSTNAME>" - name: WAZUH_REGISTRATION_PORT value: "1515" - name: WAZUH_AGENT_NAME valueFrom: fieldRef: fieldPath: metadata.name securityContext: runAsUser: 0 runAsGroup: 0 allowPrivilegeEscalation: true capabilities: add: ["SETGID","SETUID"] volumeMounts: - name: wazuh-agent-data mountPath: /var/ossec - name: application-data mountPath: /application/ - name: nodejs-website image: matteodalgrande/nodejs-website:1.0.6 imagePullPolicy: IfNotPresent ports: - containerPort: 3000 volumeMounts: - name: application-data mountPath: /application/ volumeClaimTemplates: - metadata: name: wazuh-agent-data spec: accessModes: ["ReadWriteOnce"] storageClassName: longhorn resources: requests: storage: 3Gi - metadata: name: application-data spec: accessModes: ["ReadWriteOnce"] storageClassName: longhorn resources: requests: storage: 5Gi --- apiVersion: v1 kind: Service metadata: name: juice-shop namespace: wazuh-agent-nodejs-website spec: selector: app: nodejs-website-wazuh-agent type: NodePort ports: - protocol: TCP port: 80 targetPort: 3000 nodePort: 30012
Replace <WAZUH_MANAGER_IP_ADDRESS_OR_HOSTNAME> with the Wazuh manager IP address or hostname.
Optional: Run the commands below to pre-pull the container images matteodalgrande/nodejs-website:1.0.6 and wazuh/wazuh-agent:4.13.0 from Docker Hub into the node’s containerd cache. This proactive fetch speeds up subsequent pod startups and reduces the risk of deployment delays caused by on-demand image downloads or transient network issues.
$ sudo k3s crictl pull docker.io/matteodalgrande/nodejs-website:1.0.6
$ sudo k3s crictl pull docker.io/wazuh/wazuh-agent:4.13.0
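You can then confirm that both images are present in the containerd cache; the grep filter below is only a convenience:
$ sudo k3s crictl images | grep -E 'nodejs-website|wazuh-agent'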
Apply the configuration in juice-shop-statefulset-side-container.yaml:
$ kubectl apply -f juice-shop-statefulset-side-container.yaml
Verify that the pod in the wazuh-agent-nodejs-website namespace is running:
$ kubectl -n wazuh-agent-nodejs-website get pods
NAME READY STATUS RESTARTS AGE
nodejs-website-wazuh-agent-0 0/2 Init:0/4 0 8s
nodejs-website-wazuh-agent-0 0/2 Init:1/4 0 26s
nodejs-website-wazuh-agent-0 0/2 Init:1/4 0 27s
nodejs-website-wazuh-agent-0 0/2 Init:2/4 0 30s
nodejs-website-wazuh-agent-0 0/2 Init:3/4 0 32s
nodejs-website-wazuh-agent-0 0/2 PodInitializing 0 32s
nodejs-website-wazuh-agent-0 2/2 Running 0 35s
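As an additional check, you can query the agent daemons inside the sidecar container. The container name wazuh-agent comes from the manifest above, and wazuh-control is the standard control script shipped in /var/ossec/bin:
$ kubectl -n wazuh-agent-nodejs-website exec -it nodejs-website-wazuh-agent-0 -c wazuh-agent -- /var/ossec/bin/wazuh-control status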
Navigate to the Wazuh dashboard to confirm that the Wazuh agent nodejs-website-wazuh-agent-0 is reporting.
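If you prefer the command line, you can also list connected agents directly on the Wazuh manager and check that Juice Shop is reachable through the NodePort service (30012 in this manifest). Replace <UBUNTU_ENDPOINT_IP> with the IP address of your Ubuntu endpoint:
$ sudo /var/ossec/bin/agent_control -lc
$ curl -I http://<UBUNTU_ENDPOINT_IP>:30012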
We test the resilience of the sidecar deployment by deleting the pod nodejs-website-wazuh-agent-0 and then waiting for it to come back with the same agent identity but a new pod IP address.
Delete the nodejs-website-wazuh-agent-0 pod (Kubernetes will recreate it automatically):
$ kubectl delete pod -n wazuh-agent-nodejs-website nodejs-website-wazuh-agent-0
Wait until the nodejs-website-wazuh-agent-0 pod status is Running:
$ kubectl -n wazuh-agent-nodejs-website get pods -w
NAME READY STATUS RESTARTS AGE
nodejs-website-wazuh-agent-0 0/2 Init:0/4 0 7s
nodejs-website-wazuh-agent-0 0/2 Init:1/4 0 8s
nodejs-website-wazuh-agent-0 0/2 Init:1/4 0 9s
nodejs-website-wazuh-agent-0 0/2 Init:2/4 0 10s
nodejs-website-wazuh-agent-0 0/2 Init:3/4 0 11s
nodejs-website-wazuh-agent-0 0/2 PodInitializing 0 12s
nodejs-website-wazuh-agent-0 2/2 Running 0 13s
Navigate to your Wazuh dashboard to confirm that the Wazuh agent nodejs-website-wazuh-agent-0 reports back with a new IP address while the Wazuh manager retains the same agent identity.
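You can observe the same behavior from the cluster side: the pod keeps its name while receiving a new IP address, and its persistent volume claims remain bound to the same Longhorn volumes:
$ kubectl -n wazuh-agent-nodejs-website get pod nodejs-website-wazuh-agent-0 -o wide
$ kubectl -n wazuh-agent-nodejs-website get pvc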
In this deployment, the Wazuh agent and application run within the same container. The Wazuh agent is bundled with the Juice Shop application and deployed as a single workload inside Kubernetes. In the following sections, we create a Docker image for the bundled application and deploy the image in the Kubernetes cluster.
Follow the steps below to build a Docker image that packages both the Wazuh agent and the OWASP Juice Shop application.
Create a working directory for the image build:
$ mkdir -p ~/wazuh-juice-shop
$ cd ~/wazuh-juice-shop
Create an entrypoint.sh script that installs the Wazuh agent, starts the agent, and then launches OWASP Juice Shop as the container’s main process:
$ touch entrypoint.sh
Add the following content to the entrypoint.sh
file:#!/bin/bash # wazuh-agent + juice-shop set -euo pipefail echo "Starting Wazuh Agent setup..." # ENV (override via manifest) : "${WAZUH_MANAGER:=wazuh.wazuh.svc.cluster.local}" # Manager IP/DNS (events+enrollment) : "${WAZUH_EVENT_PORT:=1514}" : "${WAZUH_EVENT_PROTO:=tcp}" : "${WAZUH_REG_PORT:=1515}" # Ensure wazuh user/group if ! getent group wazuh >/dev/null; then echo "Creating group 'wazuh'..." groupadd -r wazuh fi if ! id -u wazuh >/dev/null 2>&1; then echo "Creating user 'wazuh'..." useradd -r -g wazuh -d /var/ossec -s /bin/false wazuh fi # Ensure directories/ownership mkdir -p /var/ossec/etc chown -R wazuh:wazuh /var/ossec || true [ -f /var/ossec/bin/wazuh-control ] && chmod +x /var/ossec/bin/wazuh-control || true # Add Wazuh APT repo (modern keyring) if missing if [ ! -f /usr/share/keyrings/wazuh.gpg ]; then echo "Adding Wazuh APT repository..." apt-get update apt-get install -y curl gnupg ca-certificates curl -fsSL https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --dearmor >/usr/share/keyrings/wazuh.gpg echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" \ > /etc/apt/sources.list.d/wazuh.list fi # Install Wazuh Agent if missing if [ ! -x /var/ossec/bin/wazuh-control ]; then echo "Wazuh agent not found. Installing 4.13.0..." apt-get update apt-get install -y wazuh-agent=4.13.0-1 # Wazuh agent installation fi # =========== HOSTNAME_SHORT="$(hostname -s || hostname || echo agent)" # Write atomically to avoid partial/empty file TMP_CONF="$(mktemp)" cat > "${TMP_CONF}" <<EOF <ossec_config> <client> <server> <address>${WAZUH_MANAGER}</address> <port>${WAZUH_EVENT_PORT}</port> <protocol>${WAZUH_EVENT_PROTO}</protocol> </server> <notify_time>10</notify_time> <time-reconnect>60</time-reconnect> <auto_restart>yes</auto_restart> <crypto_method>aes</crypto_method> <enrollment> <enabled>yes</enabled> <manager_address>${WAZUH_MANAGER}</manager_address> <port>${WAZUH_REG_PORT}</port> <agent_name>${HOSTNAME_SHORT}</agent_name> <!-- No authorization_pass_path (passwordless) --> </enrollment> </client> <client_buffer> <disabled>no</disabled> <queue_size>5000</queue_size> <events_per_second>500</events_per_second> </client_buffer> <!-- Policy monitoring --> <rootcheck> <disabled>no</disabled> <check_files>yes</check_files> <check_trojans>yes</check_trojans> <check_dev>yes</check_dev> <check_sys>yes</check_sys> <check_pids>yes</check_pids> <check_ports>yes</check_ports> <check_if>yes</check_if> <frequency>900</frequency> <rootkit_files>etc/shared/rootkit_files.txt</rootkit_files> <rootkit_trojans>etc/shared/rootkit_trojans.txt</rootkit_trojans> <skip_nfs>no</skip_nfs> <ignore>/var/lib/containerd</ignore> <ignore>/var/lib/docker/overlay2</ignore> </rootcheck> <!-- CIS-CAT --> <wodle name="cis-cat"> <disabled>no</disabled> <timeout>1800</timeout> <interval>10m</interval> <scan-on-start>yes</scan-on-start> <java_path>wodles/java</java_path> <ciscat_path>wodles/ciscat</ciscat_path> </wodle> <!-- Osquery (disabled by default) --> <wodle name="osquery"> <disabled>yes</disabled> <run_daemon>yes</run_daemon> <log_path>/var/log/osquery/osqueryd.results.log</log_path> <config_path>/etc/osquery/osquery.conf</config_path> <add_labels>yes</add_labels> </wodle> <!-- System inventory --> <wodle name="syscollector"> <disabled>no</disabled> <interval>10m</interval> <scan_on_start>yes</scan_on_start> <hardware>yes</hardware> <os>yes</os> <network>yes</network> <packages>yes</packages> <ports all="yes">yes</ports> <processes>yes</processes> <synchronization> 
<max_eps>10</max_eps> </synchronization> </wodle> <sca> <enabled>yes</enabled> <scan_on_start>yes</scan_on_start> <interval>10m</interval> <skip_nfs>no</skip_nfs> </sca> <!-- File integrity monitoring --> <syscheck> <disabled>no</disabled> <frequency>900</frequency> <scan_on_start>yes</scan_on_start> <directories>/etc,/usr/bin,/usr/sbin</directories> <directories>/bin,/sbin,/boot</directories> <ignore>/etc/mtab</ignore> <ignore>/etc/hosts.deny</ignore> <ignore>/etc/mail/statistics</ignore> <ignore>/etc/random-seed</ignore> <ignore>/etc/random.seed</ignore> <ignore>/etc/adjtime</ignore> <ignore>/etc/httpd/logs</ignore> <ignore>/etc/utmpx</ignore> <ignore>/etc/wtmpx</ignore> <ignore>/etc/cups/certs</ignore> <ignore>/etc/dumpdates</ignore> <ignore>/etc/svc/volatile</ignore> <ignore type="sregex">.log$|.swp$</ignore> <nodiff>/etc/ssl/private.key</nodiff> <skip_nfs>no</skip_nfs> <skip_dev>no</skip_dev> <skip_proc>no</skip_proc> <skip_sys>no</skip_sys> <process_priority>10</process_priority> <max_eps>50</max_eps> <synchronization> <enabled>yes</enabled> <interval>5m</interval> <max_eps>10</max_eps> </synchronization> <directories check_all="yes" whodata="yes" realtime="yes">/Users</directories> <directories check_all="yes" whodata="yes" realtime="yes">/home</directories> </syscheck> <!-- Simple local commands --> <localfile> <log_format>command</log_format> <command>df -P</command> <frequency>360</frequency> </localfile> <localfile> <log_format>full_command</log_format> <command>netstat -tulpn | sed 's/\([[:alnum:]]\+\)\ \+[[:digit:]]\+\ \+[[:digit:]]\+\ \+\(.*\):\([[:digit:]]*\)\ \+\([0-9\.\:\*]\+\).\+\ \([[:digit:]]*\/[[:alnum:]\-]*\).*/\1 \2 == \3 == \4 \5/' | sort -k 4 -g | sed 's/ == \(.*\) ==/:\1/' | sed 1,2d</command> <alias>netstat listening ports</alias> <frequency>360</frequency> </localfile> <!-- Active response --> <active-response> <disabled>no</disabled> <ca_store>etc/wpk_root.pem</ca_store> <ca_verification>yes</ca_verification> </active-response> <!-- Internal logging format --> <logging> <log_format>plain</log_format> </logging> </ossec_config> EOF # Move into place only if non-empty if [ ! -s "${TMP_CONF}" ]; then echo "ERROR: ossec.conf template was empty" >&2 exit 1 fi mv -f "${TMP_CONF}" /var/ossec/etc/ossec.conf chown -R wazuh:wazuh /var/ossec/etc # =========== # Start Wazuh agent # =========== echo "Starting Wazuh Agent..." /var/ossec/bin/wazuh-control start # Show agent logs (background) tail -F /var/ossec/logs/ossec.log & # Ensure npm is available for Juice Shop if ! command -v npm >/dev/null 2>&1; then echo "Installing Node.js and npm for Juice Shop..." apt-get update && apt-get install -y nodejs npm fi # juice-shop application if [ -f "/application/package.json" ]; then cd /application npm start else mkdir -p /application cd / BUNDLE="juice-shop-17.1.1_node20_linux_x64.tgz" URL="https://github.com/juice-shop/juice-shop/releases/download/v17.1.1/${BUNDLE}" if [ ! -f "${BUNDLE}" ]; then echo "Downloading Juice Shop bundle" curl -fL -o "${BUNDLE}" "${URL}" fi tar -xzvf "${BUNDLE}" -C /application/ --strip-components=1 cd /application npm install --omit=dev --ignore-scripts npm start fi
Note
You can modify the apt-get install -y wazuh-agent=4.13.0-1 line in the entrypoint.sh file to specify the Wazuh agent version you want to install.
Make entrypoint.sh executable:
$ chmod +x entrypoint.sh
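Optionally, run a quick syntax check before building the image; bash -n parses the script without executing it:
$ bash -n entrypoint.sh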
Create a Dockerfile:
$ touch Dockerfile
Add the following content to the Dockerfile:
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND=noninteractive

# Base tools + Wazuh repo key (modern keyring) + NodeSource for Node 20.x
RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates curl gnupg xz-utils tar \
    && curl -fsSL https://packages.wazuh.com/key/GPG-KEY-WAZUH \
    | gpg --dearmor >/usr/share/keyrings/wazuh.gpg \
    && echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" \
    > /etc/apt/sources.list.d/wazuh.list \
    # NodeSource repo (Node 20.x)
    && install -d -m 0755 /etc/apt/keyrings \
    && curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key \
    | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg \
    && echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_20.x nodistro main" \
    > /etc/apt/sources.list.d/nodesource.list \
    && apt-get update \
    && apt-get install -y --no-install-recommends \
    nodejs \
    net-tools \
    && rm -rf /var/lib/apt/lists/*

# Workdir; entrypoint handles Wazuh + Juice Shop
WORKDIR /

# Copy entrypoint
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# Default command
ENTRYPOINT ["/entrypoint.sh"]
Build the Docker image:
$ sudo docker build -t wazuh-juice-shop-agent:1.0.0 .
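Optionally, you can smoke test the image locally before importing it into K3s. This example run is an assumption, not part of the original workflow: it enrolls a test agent (named after the container hostname) with your Wazuh manager and exposes Juice Shop on port 3000, so you may want to remove the test agent from the manager afterwards:
$ sudo docker run --rm -e WAZUH_MANAGER=<WAZUH_MANAGER_IP_ADDRESS_OR_HOSTNAME> -p 3000:3000 wazuh-juice-shop-agent:1.0.0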
Export the image and import it into the K3s containerd image store:
$ sudo docker save wazuh-juice-shop-agent:1.0.0 -o wazuh-juice-shop-agent.tar
$ sudo k3s ctr images import wazuh-juice-shop-agent.tar
Confirm that the image is available to K3s:
$ sudo k3s ctr images ls | grep wazuh-juice-shop-agent
docker.io/library/wazuh-juice-shop-agent:1.0.0 application/vnd.oci.image.manifest.v1+json sha256:581d30e75e9a219f29c86ace17cad4b8259a82a06eb2f57037537597e09df9af 906.9 MiB linux/amd64 io.cri-containerd.image=managed
Follow the steps below to deploy the Docker image you created in the previous section to your Kubernetes cluster.
Create the juice-shop-wazuh-included namespace:
$ kubectl create namespace juice-shop-wazuh-included
Create a file named juice-shop-included.yaml:
$ nano juice-shop-included.yaml
Add the following configuration to juice-shop-included.yaml
:apiVersion: v1 kind: Namespace metadata: name: juice-shop-wazuh-included --- apiVersion: apps/v1 kind: StatefulSet metadata: name: juice-shop-wazuh-agent namespace: juice-shop-wazuh-included spec: serviceName: juice-shop replicas: 1 selector: matchLabels: app: juice-shop-wazuh-agent template: metadata: labels: app: juice-shop-wazuh-agent spec: securityContext: fsGroup: 0 fsGroupChangePolicy: OnRootMismatch containers: - name: wazuh-agent image: wazuh-juice-shop-agent:1.0.0 # <- your image/tag imagePullPolicy: IfNotPresent env: - name: WAZUH_MANAGER value: "<WAZUH_MANAGER_IP_ADDRESS_OR_HOSTNAME>" # <- set your manager IP or DNS here - name: WAZUH_EVENT_PORT value: "1514" - name: WAZUH_EVENT_PROTO value: "tcp" - name: WAZUH_REG_PORT value: "1515" securityContext: runAsUser: 0 allowPrivilegeEscalation: true capabilities: { add: ["SETGID","SETUID"] } volumeMounts: - name: application-data mountPath: /application - name: wazuh-agent-data mountPath: /var/ossec ports: - name: http containerPort: 3000 volumeClaimTemplates: - metadata: name: wazuh-agent-data spec: accessModes: ["ReadWriteOnce"] storageClassName: "longhorn" resources: requests: { storage: 2Gi } - metadata: name: application-data spec: accessModes: ["ReadWriteOnce"] storageClassName: "longhorn" resources: requests: { storage: 5Gi } --- apiVersion: v1 kind: Service metadata: name: juice-shop namespace: juice-shop-wazuh-included spec: selector: app: juice-shop-wazuh-agent type: NodePort ports: - protocol: TCP port: 80 targetPort: 3000 nodePort: 30011
Replace <WAZUH_MANAGER_IP_ADDRESS_OR_HOSTNAME> with the Wazuh manager IP address or hostname.
Apply the configuration in juice-shop-included.yaml:
$ kubectl apply -f juice-shop-included.yaml
Verify that the pod in the juice-shop-wazuh-included namespace is running:
$ kubectl -n juice-shop-wazuh-included get pods
NAME READY STATUS RESTARTS AGE
juice-shop-wazuh-agent-0 1/1 Running 0 7s
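As with the sidecar deployment, you can confirm the agent daemons are running inside the container; because this pod runs a single container, no -c flag is needed:
$ kubectl -n juice-shop-wazuh-included exec -it juice-shop-wazuh-agent-0 -- /var/ossec/bin/wazuh-control status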
Navigate to your Wazuh dashboard to confirm that the Wazuh agent juice-shop-wazuh-agent-0 is reporting.
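You can also verify that the bundled Juice Shop application is served through the NodePort service defined in the manifest (30011 in this example). Replace <UBUNTU_ENDPOINT_IP> with the IP address of your Ubuntu endpoint:
$ curl -I http://<UBUNTU_ENDPOINT_IP>:30011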
We test the resilience of the included deployment by deleting the pod juice-shop-wazuh-agent-0 and then waiting for it to come back with the same Wazuh agent identity but a new pod IP address.
Delete the juice-shop-wazuh-agent-0 pod (Kubernetes will recreate it automatically):
$ kubectl delete pod -n juice-shop-wazuh-included juice-shop-wazuh-agent-0
Wait until the juice-shop-wazuh-agent-0 pod status is Running:
$ kubectl -n juice-shop-wazuh-included get pods -w
NAME READY STATUS RESTARTS AGE
juice-shop-wazuh-agent-0 0/1 ContainerCreating 0 4s
juice-shop-wazuh-agent-0 1/1 Running 0 8s
Navigate to the Wazuh dashboard to confirm that the Wazuh agent juice-shop-wazuh-agent-0 reports back with a new IP address while the Wazuh manager retains the same agent identity.
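As before, the pod keeps its name and its persistent volume claims while receiving a new IP address:
$ kubectl -n juice-shop-wazuh-included get pod juice-shop-wazuh-agent-0 -o wide
$ kubectl -n juice-shop-wazuh-included get pvc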
Deploying the Wazuh agent in Kubernetes using sidecar or included deployment models ensures resilient, identity-preserving monitoring for containerized workloads. With StatefulSets and persistent volumes, organizations achieve seamless recovery, secure log collection, and consistent visibility across dynamic clusters, strengthening their overall security posture.
To learn more about Wazuh, explore our blog posts, and join the growing community.