Deploying Wazuh on Kubernetes using AWS EKS
![Post icon](https://wazuh.com/uploads/2020/04/aws_eks-post-icon.png)
Kubernetes (K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications, and it has become the de facto industry standard for container orchestration. In this post, we describe how to deploy Wazuh on Kubernetes with AWS EKS.
Before you start, you will need a Kubernetes cluster where the containers will be deployed. We will use Amazon EKS, a hosted Kubernetes service that helps you run your container workloads in AWS without having to manage the Kubernetes control plane for your cluster. Like other similar hosted Kubernetes services (e.g. Azure Kubernetes service or Google Kubernetes Engine), the advantage of AWS EKS is taking away all the operational tasks related to the control plane. Just deploy your Kubernetes worker nodes, and EKS will do the rest for you, ensuring high availability, security, and scalability.
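If you still need to create the EKS cluster, one convenient way is `eksctl` with a cluster definition file. The sketch below is illustrative only: the cluster name, region, instance type, and node count are assumptions you should adapt, not values prescribed by this post.

```yaml
# cluster.yaml -- create with: eksctl create cluster -f cluster.yaml
# Name, region, and node sizing below are illustrative assumptions.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: wazuh-cluster        # hypothetical cluster name
  region: us-east-2

nodeGroups:
  - name: wazuh-workers
    instanceType: t3.large   # adjust to your workload
    desiredCapacity: 3       # one node per Wazuh manager is a reasonable start
```

Once `eksctl` finishes, it updates your kubeconfig so the `kubectl` commands in the following sections run against the new cluster.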
This diagram shows a quick overview of the AWS EKS architecture:
This is how our Wazuh containers will look once they are deployed:
Let’s review each component of the deployment.
Our Wazuh cluster consists of one master and two worker nodes. Here are the objects that we will use:
The services give us access to the Kubernetes pods. There are three services:
The Elastic Stack deployment is composed of one Elasticsearch node and one Kibana node.
There are 3 services:
We deploy an Nginx proxy to access the Kibana web interface using credentials.
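As an illustration of how such a proxy works, here is a minimal Nginx server block that terminates TLS and requests basic-auth credentials before forwarding to Kibana. The certificate paths and the htpasswd file location are assumptions; the actual configuration ships inside the wazuh-kubernetes templates.

```nginx
# Sketch of the Kibana reverse proxy (paths are hypothetical).
server {
    listen 443 ssl;

    ssl_certificate     /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;

    location / {
        # "kibana" resolves to the Kibana service inside the namespace
        proxy_pass http://kibana:5601;
    }
}
```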
In order to access our Wazuh cluster, we will use a Load Balancer. AWS offers three types: the Classic Load Balancer, the Application Load Balancer, and the Network Load Balancer. In this post we will use a Network Load Balancer (NLB).
In this section, we will learn how to deploy Wazuh on Kubernetes using the kubectl tool.
First, we need to download the Wazuh Kubernetes repository and change the Load Balancer services for Node Port services:
```
git clone https://github.com/wazuh/wazuh-kubernetes.git
curl https://wazuh.com/resources/blog/wazuh-cluster-on-eks/nginx-svc.yaml -o wazuh-kubernetes/elastic_stack/kibana/nginx-svc.yaml
curl https://wazuh.com/resources/blog/wazuh-cluster-on-eks/wazuh-master-svc.yaml -o wazuh-kubernetes/wazuh_managers/wazuh-master-svc.yaml
curl https://wazuh.com/resources/blog/wazuh-cluster-on-eks/wazuh-workers-svc.yaml -o wazuh-kubernetes/wazuh_managers/wazuh-workers-svc.yaml
```
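The essence of the change in those downloaded files is the service type. As a sketch, a NodePort service for the Wazuh workers looks roughly like this (the actual downloaded manifest may differ in labels and additional ports, and the selector shown here is an assumption; check it against the StatefulSet templates):

```yaml
# Sketch: exposing the Wazuh workers through a NodePort service.
apiVersion: v1
kind: Service
metadata:
  name: wazuh-workers
  namespace: wazuh
spec:
  type: NodePort            # instead of type: LoadBalancer
  selector:
    app: wazuh-manager      # assumed labels; verify against the templates
    node-type: worker
  ports:
    - name: agents-events
      port: 1514
      targetPort: 1514      # Kubernetes assigns the external node port
```

With NodePort, each service is reachable on a high port of every worker node, which is what the Network Load Balancer target groups will point at later.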
Once we have the Kubernetes templates ready, we will apply them:
```
cd wazuh-kubernetes

# Wazuh Namespace and StorageClass
kubectl apply -f base/wazuh-ns.yaml
kubectl apply -f base/aws-gp2-storage-class.yaml

# Elasticsearch deployment
kubectl apply -f elastic_stack/elasticsearch/elasticsearch-svc.yaml
kubectl apply -f elastic_stack/elasticsearch/single-node/elasticsearch-api-svc.yaml
kubectl apply -f elastic_stack/elasticsearch/single-node/elasticsearch-sts.yaml

# Kibana and Nginx deployment
kubectl apply -f elastic_stack/kibana/kibana-svc.yaml
kubectl apply -f elastic_stack/kibana/nginx-svc.yaml
kubectl apply -f elastic_stack/kibana/kibana-deploy.yaml
kubectl apply -f elastic_stack/kibana/nginx-deploy.yaml

# Wazuh cluster deployment
kubectl apply -f wazuh_managers/wazuh-master-svc.yaml
kubectl apply -f wazuh_managers/wazuh-cluster-svc.yaml
kubectl apply -f wazuh_managers/wazuh-workers-svc.yaml
kubectl apply -f wazuh_managers/wazuh-master-conf.yaml
kubectl apply -f wazuh_managers/wazuh-worker-0-conf.yaml
kubectl apply -f wazuh_managers/wazuh-worker-1-conf.yaml
kubectl apply -f wazuh_managers/wazuh-master-sts.yaml
kubectl apply -f wazuh_managers/wazuh-worker-0-sts.yaml
kubectl apply -f wazuh_managers/wazuh-worker-1-sts.yaml
```
Now, we will verify that everything is working as expected.
The following command allows us to check the pod status. Wait until all the pods are in Running status.
```
kubectl -n wazuh get pods
NAME                            READY   STATUS    RESTARTS   AGE
wazuh-elasticsearch-0           1/1     Running   0          2m8s
wazuh-kibana-7c7c5b87dd-qgvvt   1/1     Running   0          116s
wazuh-manager-master-0          1/1     Running   0          72s
wazuh-manager-worker-0-0        1/1     Running   0          68s
wazuh-manager-worker-1-0        1/1     Running   0          63s
wazuh-nginx-869f588f5d-qqt28    1/1     Running   0          111s
```
Take a look at the services to review how they are exposed.
```
kubectl -n wazuh get services
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE
elasticsearch         ClusterIP   172.20.9.85      <none>        9200/TCP                         104s
kibana                ClusterIP   172.20.190.118   <none>        5601/TCP                         87s
wazuh                 NodePort    172.20.155.222   <none>        1515:30405/TCP,55000:32250/TCP   67s
wazuh-cluster         ClusterIP   None             <none>        1516/TCP                         63s
wazuh-elasticsearch   ClusterIP   None             <none>        9300/TCP                         108s
wazuh-nginx           NodePort    172.20.84.91     <none>        443:32748/TCP                    83s
wazuh-workers         NodePort    172.20.138.150   <none>        1514:30544/TCP                   58s
```
Finally, we can access the manager container to check the Wazuh cluster:
```
kubectl -n wazuh exec -it wazuh-manager-master-0 bash
/var/ossec/bin/cluster_control -l
NAME                    TYPE    VERSION  ADDRESS
wazuh-manager-master    master  3.11.2   wazuh-manager-master-0.wazuh-cluster.wazuh.svc.cluster.local
wazuh-manager-worker-0  worker  3.11.2   10.2.93.45
wazuh-manager-worker-1  worker  3.11.2   10.2.122.53
```
These are some other useful commands to check the resources created:
```
kubectl get namespaces
kubectl -n wazuh get pvc
kubectl -n wazuh get configmap
kubectl -n wazuh get deployments
kubectl -n wazuh get sts
```
At this point, our environment is ready for us to continue with the access configuration. We are going to create a Network Load Balancer in AWS with the following configuration:
| Listener port | Target group port (NodePort) |
|---------------|------------------------------|
| 443           | 32748                        |
| 1514          | 30544                        |
| 1515          | 30405                        |
| 55000         | 32250                        |

Go to your AWS Console > EC2 > Load balancers. Then click on Create Load Balancer and follow the next steps.
Select the VPC where our EKS cluster is deployed. Then, configure the Listeners with the proper port and protocol:
After creating our Listeners, we will set the security settings, choosing the certificate we want for each TLS Listener.
The next step is to configure the communication routes through target groups. We will assign an identifying name to each target group, select instances as target type, set the appropriate protocol according to the Listener and finally add the port of the NodePort service.
Here is an example for the Wazuh workers service (listener port 1514, target group port 30544):
The last step is to register the instances of our Kubernetes cluster for the target group.
Repeat the previous steps for the rest of the listeners and target groups. The NLB that was created should look like this:
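If you prefer to codify these console steps, the same NLB pieces can be expressed in a CloudFormation template. Below is a sketch for the 1514 listener and its target group only; the `EksVpc` and `WazuhNlb` references are hypothetical resources assumed to be defined elsewhere in your template.

```yaml
# CloudFormation sketch: NLB listener 1514 forwarding to the
# wazuh-workers NodePort (30544). Repeat per listener/target group.
WazuhWorkersTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Name: wazuh-workers-tg
    Protocol: TCP
    Port: 30544                    # NodePort of the wazuh-workers service
    TargetType: instance           # register the EKS worker instances
    VpcId: !Ref EksVpc             # hypothetical reference

WazuhWorkersListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref WazuhNlb # hypothetical reference
    Protocol: TCP
    Port: 1514
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref WazuhWorkersTargetGroup
```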
In order to test our deployment, we will register a Wazuh agent and look for it in the Wazuh WUI.
For the purposes of this post, we will install a Wazuh agent on an Ubuntu server (you may use any other operating system). The NLB DNS name will be used as the address of the Wazuh manager, and we will change the protocol to TCP. The Load Balancer will send the registration to the Wazuh manager master and the events to the Wazuh manager workers.
```
WAZUH_MANAGER="eks-wazuh-cluster-103d5c73552188f0.elb.us-east-2.amazonaws.com" WAZUH_PROTOCOL="tcp" apt-get install wazuh-agent
```
As we have made the Wazuh API accessible, we can check if our agent has been added using the following API call:
```
curl -u foo:bar "https://eks-wazuh-cluster-103d5c73552188f0.elb.us-east-2.amazonaws.com:55000/agents/summary?pretty"
{
   "error": 0,
   "data": {
      "Total": 2,
      "Active": 2,
      "Disconnected": 0,
      "Never connected": 0,
      "Pending": 0
   }
}
```
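If you want to script this verification, the summary payload is easy to check programmatically. A small Python sketch follows, using the sample response above; the helper function and its name are ours, not part of the Wazuh API.

```python
import json

def agents_all_active(summary: dict) -> bool:
    """Return True when every registered agent is reported as Active."""
    data = summary["data"]
    return data["Total"] > 0 and data["Active"] == data["Total"]

# Sample body as returned by the /agents/summary call above
response_body = '''
{ "error": 0, "data": { "Total": 2, "Active": 2,
  "Disconnected": 0, "Never connected": 0, "Pending": 0 } }
'''

summary = json.loads(response_body)
print(agents_all_active(summary))  # -> True: both agents are active
```

A check like this can be dropped into a monitoring job to alert when agents disconnect from the cluster.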
To access the Kibana web interface, just type the NLB DNS name in your browser using HTTPS and enter the credentials established in Nginx.
We have seen a simple design to run Wazuh on Kubernetes, the orchestration tool designed by Google to run containerized applications at scale in public and private clouds. We also used AWS EKS to simplify our work. This is just the beginning, and we will explain different designs to run Wazuh on Kubernetes painlessly and smoothly in future posts.
If you have any questions about how to deploy Wazuh on Kubernetes with AWS EKS, join our community. Our team and contributors will help you.