The Wazuh multi-site implementation helps organizations unify their security monitoring across multiple geographically dispersed locations or sites. This implementation focuses on having Wazuh cluster components that collect, process, and store logs from the Wazuh agents within each site. A single Wazuh dashboard displays security alerts generated from events occurring on monitored endpoints across all sites.
In the Wazuh multi-site implementation, the Wazuh components are decentralized to enhance scalability, distribute workload, and improve fault tolerance. Additional Wazuh indexer and server nodes can be added to a site without necessarily disrupting the entire Wazuh multi-site implementation.
Scenario
In this blog post, we present a problem statement and a proposed solution that describe a use case for the Wazuh multi-site implementation.
Problem statement
An XYZ company with operations in two geographically dispersed sites wants to make use of the Wazuh unified XDR and SIEM platform. Their requirements are stipulated below:
- Ingest logs from endpoints in the respective sites.
- Ensure that log data collection for each site is independent of the other.
- Ensure that Wazuh agents only connect to the Wazuh manager closest to them to minimize network congestion.
- Ensure that each site’s logs are backed up.
- Ensure that security events from both sites can be viewed by the security team using a single Wazuh dashboard.
Proposed solution
In our solution, we explore the implementation of different Wazuh clusters per site. The sites are categorized into site A and site B, with the following Wazuh components assigned:
Note: The proposed solution is only for demonstration purposes.
- Each site can be scaled horizontally to add more nodes.
- The entire multi-site implementation can be scaled to include another site in case there is an organizational expansion.
- Joining an already existing Wazuh cluster to a multi-site implementation will not work.
- Site A comprises two Wazuh server nodes and two Wazuh indexer nodes in a cluster state.
- Site B comprises a Wazuh server node and a Wazuh indexer node.
- A Wazuh dashboard node is installed in an independent site with connectivity to site A and site B cluster components. The Wazuh dashboard displays alerts from events generated on monitored endpoints in sites A and B. For high availability, multiple Wazuh dashboards can be deployed on the same or different sites and accessed through a load balancer.
The implementation has the following characteristics:
- Wazuh agents installed on endpoints are connected to the Wazuh server geographically closest to them. For example, a Wazuh agent installed on a Linux server in site A only reports to the Wazuh server nodes in site A (see the enrollment sketch after this list).
- Alerts generated on the Wazuh server are indexed on the Wazuh indexer connected to the same site where they are deployed. For example, the Wazuh indexer in site B only connects to the Wazuh server in site B.
- Unique index patterns are created per site to distinguish the alerts from the different sites: `site-a-alerts-*` for site A alerts and `site-b-alerts-*` for site B alerts.
- Data indexed on the Wazuh indexer within a site is replicated to Wazuh indexers in the other sites, thereby providing backups for the stored log data. For example, data is replicated from the indexer in site B to the indexers in site A.
- The Wazuh dashboard is connected to Wazuh indexers and servers in all the sites and queries data using their unique APIs and index patterns.
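As a minimal sketch of the first characteristic above, a Linux agent in site A can be enrolled against the site A server nodes only by using the `WAZUH_MANAGER` deployment variable supported by the Wazuh agent packages. The IP addresses below are the site A servers from the infrastructure table later in this post; adjust them to your environment:

# WAZUH_MANAGER="192.168.186.151,192.168.186.152" yum -y install wazuh-agent

Listing both site A servers gives the agent a local failover option without ever reporting across to site B.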
Benefits of Wazuh multi-site implementation
We describe some benefits of implementing the Wazuh multi-site solution below.
- Unified dashboard: Security alerts from all sites can be viewed on a single, centralized Wazuh dashboard, providing a comprehensive overview.
- Data replication for backup: Shards are replicated across Wazuh indexer nodes in different sites, serving as data backup. This allows alert indices to remain available on the surviving Wazuh indexer nodes in case of a node failure.
- Reduced mean time to recovery (MTTR): The multi-site setup minimizes the mean time to recovery in case of a disaster. Alerts are replicated to Wazuh indexer nodes in other sites, making it possible to view the logs ingested up to the moment of the disaster. This contributes to the overall resilience of the system.
- Elimination of network bottlenecks: By connecting Wazuh agents to local servers instead of a centralized one, network bottlenecks are significantly reduced.
- Site-specific security alerts: Security alerts are easily distinguishable by site, facilitating targeted and efficient incident response.
- Enhanced performance and scalability: The implementation enhances fault tolerance and scalability. New nodes and sites can be onboarded to accommodate growing security needs.
Infrastructure
To demonstrate how to implement the Wazuh multi-site use case described above, we make use of the following resources:
- Two CentOS 7 endpoints, each hosting a Wazuh server 4.7.5 and a Wazuh indexer 4.7.5 for site A.
- A CentOS 7 endpoint hosting a Wazuh server 4.7.5 and a Wazuh indexer 4.7.5 for site B.
- A CentOS 7 Linux node hosting a Wazuh dashboard 4.7.5 and connected to site A and B Wazuh components.
Note: The dashboard node can be deployed anywhere, irrespective of site, including on-premises or in the cloud. The Wazuh dashboard, site A, and site B all belong to different networks but need to have network connectivity between them.
The table below shows a further breakdown of the infrastructure.
| Sites | Component | IP addresses | Node names | Index patterns |
|-------|-----------|--------------|------------|----------------|
| Anywhere | Wazuh dashboard | 192.168.100.100 | wazuh-dashboard-1 | |
| Site A | Wazuh indexer 1 | 192.168.186.151 | sa-wazuh-indexer-1 | site-a-alerts-* |
| Site A | Wazuh server 1 | 192.168.186.151 | sa-wazuh-server-1 | site-a-alerts-* |
| Site A | Wazuh indexer 2 | 192.168.186.152 | sa-wazuh-indexer-2 | site-a-alerts-* |
| Site A | Wazuh server 2 | 192.168.186.152 | sa-wazuh-server-2 | site-a-alerts-* |
| Site B | Wazuh indexer 1 | 192.168.10.11 | sb-wazuh-indexer-1 | site-b-alerts-* |
| Site B | Wazuh server 1 | 192.168.10.11 | sb-wazuh-server-1 | site-b-alerts-* |
Configuration
In this section, we provide steps on how to configure the Wazuh central components (the Wazuh server, the Wazuh indexer, and the Wazuh dashboard) in a multi-site environment.
Generating certificates
Perform the steps in this section to generate the certificates used to encrypt the communication between the Wazuh central components.
Wazuh dashboard node
Perform the following steps on the Wazuh dashboard node to generate the dashboard certificate and root CA:
1. Download the `wazuh-certs-tool.sh` script and the `config.yml` configuration file. The `wazuh-certs-tool.sh` script is used to generate certificates for the cluster. The `config.yml` file is used to define the IP addresses and node names of the Wazuh central components to be deployed.
# curl -sO https://packages.wazuh.com/4.7/wazuh-certs-tool.sh
# curl -sO https://packages.wazuh.com/4.7/config.yml
2. Edit the `config.yml` file and replace the node names and IP values with the corresponding names and IP addresses:
nodes:
  # Wazuh dashboard nodes
  dashboard:
    - name: wazuh-dashboard-1
      ip: "192.168.100.100"
    # - name: wazuh-dashboard-2
    #   ip: "<DASHBOARD_NODE_IP>"
3. Run the `wazuh-certs-tool.sh` script with option `-A` to create the `root-ca`, `admin`, and `wazuh-dashboard` certificates:
# bash ./wazuh-certs-tool.sh -A
4. Copy the root CA certificate to the working directory of any Wazuh component node in sites A and B using the `scp` utility. The root CA is used later in each site to generate certificates for the other nodes.
# scp -r wazuh-certificates/root-ca.* <USER_NAME>@<IP_ADDRESS>:<DESTINATION_DIRECTORY>
Replace:
- `<USER_NAME>` with the destination server’s username.
- `<IP_ADDRESS>` with the destination server’s IP address.
- `<DESTINATION_DIRECTORY>` with the destination server’s working directory.
5. Compress all the certificate files and remove the uncompressed version to allow for easier transfer to other component nodes if need be:
# tar -cvf ./wazuh-certificates.tar -C ./wazuh-certificates/ .
# rm -rf ./wazuh-certificates
Note: If you are using multiple dashboards, copy the `wazuh-certificates.tar` file to the other Wazuh dashboard nodes using the `scp` utility.
Site A
Perform the following steps on the node with the root CA certificate within site A to generate certificates for the Wazuh components:
Note: Make sure that a copy of the `root-ca.key` and `root-ca.pem` files created during the configuration of the Wazuh dashboard node are in your working directory.
1. Download the `wazuh-certs-tool.sh` script and the `config.yml` configuration file:
# curl -sO https://packages.wazuh.com/4.7/wazuh-certs-tool.sh
# curl -sO https://packages.wazuh.com/4.7/config.yml
2. Edit the `config.yml` file and replace the node names and IP values with the corresponding names and IP addresses for the Wazuh server and indexer nodes in site A:
nodes:
  # Wazuh indexer nodes
  indexer:
    - name: sa-wazuh-indexer-1
      ip: "192.168.186.151"
    - name: sa-wazuh-indexer-2
      ip: "192.168.186.152"

  # Wazuh server nodes
  # If there is more than one Wazuh server node, each one must have a node_type
  server:
    - name: sa-wazuh-server-1
      ip: "192.168.186.151"
      node_type: master
    - name: sa-wazuh-server-2
      ip: "192.168.186.152"
      node_type: worker
3. Run the `wazuh-certs-tool.sh` script with option `-A`, indicating the `root-ca` certificate and key created earlier, to create the `admin` and node certificates:
# bash ./wazuh-certs-tool.sh -A ./root-ca.pem ./root-ca.key
22/05/2024 13:44:22 INFO: Admin certificates created.
22/05/2024 13:44:23 INFO: Wazuh indexer certificates created.
22/05/2024 13:44:24 INFO: Wazuh server certificates created.
4. Compress all the certificate files and remove the uncompressed version to allow for easier transfer to other component nodes if need be:
# tar -cvf ./wazuh-certificates.tar -C ./wazuh-certificates/ .
# rm -rf ./wazuh-certificates
5. Copy the `wazuh-certificates.tar` file to all the Wazuh indexer and server nodes within the site A cluster using the `scp` utility:
# scp wazuh-certificates.tar <USER_NAME>@<IP_ADDRESS>:<DESTINATION_DIRECTORY>
Site B
Perform the following steps on the node with the root CA certificate within site B to generate certificates for the Wazuh components:
Note: Make sure that a copy of the `root-ca.key` and `root-ca.pem` files created during the configuration of the Wazuh dashboard node are in your working directory.
1. Download the `wazuh-certs-tool.sh` script and the `config.yml` configuration file:
# curl -sO https://packages.wazuh.com/4.7/wazuh-certs-tool.sh
# curl -sO https://packages.wazuh.com/4.7/config.yml
2. Edit the `config.yml` file and replace the node names and IP values with the corresponding names and IP addresses for the Wazuh server and indexer nodes in site B:
nodes:
  # Wazuh indexer nodes
  indexer:
    - name: sb-wazuh-indexer-1
      ip: "192.168.10.11"
    # - name: sb-wazuh-indexer-2
    #   ip: "<INDEXER_NODE_IP>"

  # Wazuh server nodes
  # If there is more than one Wazuh server node, each one must have a node_type
  server:
    - name: sb-wazuh-server-1
      ip: "192.168.10.11"
      # node_type: master
    # - name: sb-wazuh-server-2
    #   ip: "<WAZUH_MANAGER_IP>"
    #   node_type: worker
3. Run the `wazuh-certs-tool.sh` script with option `-A`, indicating the `root-ca` certificate and key created earlier, to create the `admin` and node certificates:

# bash ./wazuh-certs-tool.sh -A ./root-ca.pem ./root-ca.key
4. Compress all the certificate files and remove the uncompressed version to allow for easier transfer to other component nodes if need be:
# tar -cvf ./wazuh-certificates.tar -C ./wazuh-certificates/ .
# rm -rf ./wazuh-certificates
Note: If you have multiple nodes in site B, copy the `wazuh-certificates.tar` file to the nodes using the `scp` utility.
Setting up the Wazuh indexer
Perform the following configuration steps on each Wazuh indexer node for each site.
1. Install the necessary dependencies:
# yum install -y coreutils
2. Import the Wazuh GPG key and add the Wazuh repository:
# rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
# echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo
3. Install the Wazuh indexer package:
# yum -y install wazuh-indexer
4. Edit the `/etc/wazuh-indexer/opensearch.yml` configuration file and replace the following values:
- `network.host` with the IP address of the indexer node.
- `node.name` with the name of the Wazuh indexer node set in the `config.yml` file.
- `cluster.initial_master_nodes` with the names of all the Wazuh indexer nodes in the multi-site cluster.
- `discovery.seed_hosts` with the IP addresses of all the Wazuh indexer nodes in the multi-site cluster.
- `node.max_local_storage_nodes` with a number indicating the maximum number of indexer storage nodes in the cluster.
- `plugins.security.nodes_dn` with the list of distinguished names of the certificates of all the Wazuh indexer cluster nodes. The common names (CN) must match the node names provided in the `config.yml` file.
network.host: "192.168.186.151"
node.name: "sa-wazuh-indexer-1"
cluster.initial_master_nodes:
  - "sa-wazuh-indexer-1"
  - "sa-wazuh-indexer-2"
  - "sb-wazuh-indexer-1"
cluster.name: "wazuh-cluster"
discovery.seed_hosts:
  - "192.168.186.151"
  - "192.168.186.152"
  - "192.168.10.11"
node.max_local_storage_nodes: "10"
path.data: /var/lib/wazuh-indexer
path.logs: /var/log/wazuh-indexer
...
plugins.security.nodes_dn:
  - "CN=sa-wazuh-indexer-1,OU=Wazuh,O=Wazuh,L=California,C=US"
  - "CN=sa-wazuh-indexer-2,OU=Wazuh,O=Wazuh,L=California,C=US"
  - "CN=sb-wazuh-indexer-1,OU=Wazuh,O=Wazuh,L=California,C=US"
...
Note: Make sure that a copy of the `wazuh-certificates.tar` file created during the generating certificates step is placed in your current working directory.
5. Run the following commands, replacing `<INDEXER_NODE_NAME>` with the value of `node.name` configured in the `/etc/wazuh-indexer/opensearch.yml` file:
# NODE_NAME=<INDEXER_NODE_NAME>
# mkdir /etc/wazuh-indexer/certs
# tar -xf ./wazuh-certificates.tar -C /etc/wazuh-indexer/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./admin.pem ./admin-key.pem ./root-ca.pem
# mv -n /etc/wazuh-indexer/certs/$NODE_NAME.pem /etc/wazuh-indexer/certs/indexer.pem
# mv -n /etc/wazuh-indexer/certs/$NODE_NAME-key.pem /etc/wazuh-indexer/certs/indexer-key.pem
# chmod 500 /etc/wazuh-indexer/certs
# chmod 400 /etc/wazuh-indexer/certs/*
# chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs
6. Enable and start the Wazuh indexer service:
# systemctl daemon-reload
# systemctl enable wazuh-indexer
# systemctl start wazuh-indexer
Note: Repeat steps 1 to 6 on every Wazuh indexer node across all sites before proceeding to initialize the Wazuh indexer cluster.
Initialize the Wazuh indexer cluster
Perform the steps below on any Wazuh indexer node after setting up the indexer nodes on all the sites.
1. Run the `indexer-security-init.sh` script on any of the Wazuh indexer nodes to load the new certificate information and initialize the Wazuh indexer cluster:
# /usr/share/wazuh-indexer/bin/indexer-security-init.sh
2. Check information about the cluster by running the following commands:
# curl -k -u admin:admin https://<WAZUH_INDEXER_IP>:9200
[root@Site-A ~]# curl -k -u admin:admin https://192.168.186.151:9200
{
  "name" : "sa-wazuh-indexer-1",
  "cluster_name" : "wazuh-cluster",
  "cluster_uuid" : "CpdKjP7GT1K********",
  "version" : {
    "number" : "7.10.2",
    "build_type" : "rpm",
    "build_hash" : "db90a415ff2fd428b4f7b3f800a51dc229287cb4",
    "build_date" : "2023-06-03T06:24:25.112415503Z",
    "build_snapshot" : false,
    "lucene_version" : "9.6.0",
    "minimum_wire_compatibility_version" : "7.10.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}
# curl -k -u admin:admin https://<WAZUH_INDEXER_IP>:9200/_cat/nodes?v
[root@Site-A ~]# curl -k -u admin:admin https://192.168.186.151:9200/_cat/nodes?v
ip              heap.percent ram.percent cpu load_1m load_5m load_15m node.role node.roles                                        cluster_manager name
192.168.186.151 23           65          12  0.47    1.25    0.67     dimr      cluster_manager,data,ingest,remote_cluster_client -               sa-wazuh-indexer-1
192.168.186.152 28           64          13  0.59    1.57    0.86     dimr      cluster_manager,data,ingest,remote_cluster_client *               sa-wazuh-indexer-2
192.168.10.11   44           65          2   0.07    0.44    0.57     dimr      cluster_manager,data,ingest,remote_cluster_client -               sb-wazuh-indexer-1
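You can also query the cluster health endpoint, a standard OpenSearch API exposed by the Wazuh indexer, to confirm that primary and replica shards are allocated across the sites. A green status means all shards are assigned; yellow means some replicas are still unassigned:

# curl -k -u admin:admin https://192.168.186.151:9200/_cluster/health?pretty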
Setting up the Wazuh server
Perform the following Wazuh server configuration steps for each Wazuh server node in each site.
1. Import the Wazuh GPG key and add the Wazuh repository:
# rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
# echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo
2. Install the Wazuh manager package:
# yum -y install wazuh-manager
3. Enable and start the Wazuh manager service:
# systemctl daemon-reload # systemctl enable wazuh-manager # systemctl start wazuh-manager
4. Install the Filebeat package:
# yum -y install filebeat
5. Download the preconfigured Filebeat configuration file:
# curl -so /etc/filebeat/filebeat.yml https://packages.wazuh.com/4.7/tpl/wazuh/filebeat/filebeat.yml
6. Edit the `/etc/filebeat/filebeat.yml` configuration file and enter the IP addresses of the Wazuh indexer nodes in your site’s cluster in the `hosts` section. For example, in site A:
# Wazuh - Filebeat configuration file
output.elasticsearch:
  hosts: ["192.168.186.151:9200", "192.168.186.152:9200"]
  protocol: https
  username: ${username}
  password: ${password}
...
Note: Only the IP addresses of the Wazuh indexer nodes in the site being configured should be entered in the `hosts` field above.
7. Create a Filebeat keystore to securely store authentication credentials, and add the default username and password `admin:admin` to the keystore:
# filebeat keystore create
# echo admin | filebeat keystore add username --stdin --force
# echo admin | filebeat keystore add password --stdin --force
8. Download the alerts template for the Wazuh indexer and grant appropriate read permissions:
# curl -so /etc/filebeat/wazuh-template.json https://raw.githubusercontent.com/wazuh/wazuh/v4.7.5/extensions/elasticsearch/7.x/wazuh-template.json
# chmod go+r /etc/filebeat/wazuh-template.json
9. Edit the alerts template file `/etc/filebeat/wazuh-template.json` and add the index pattern that matches the site name. Replace the default `wazuh-alerts-4.x-*` index pattern with the custom index pattern. For example, use the `site-a-alerts-*` index pattern for site A:
{ "order": 0, "index_patterns": [ "site-a-alerts-*", "wazuh-archives-4.x-*" ],
10. Install the Wazuh module for Filebeat:
# curl -s https://packages.wazuh.com/4.x/filebeat/wazuh-filebeat-0.3.tar.gz | tar -xvz -C /usr/share/filebeat/module
11. Edit the Wazuh alerts configuration file `/usr/share/filebeat/module/wazuh/alerts/manifest.yml` and replace the index name with the custom index pattern created in the previous step:
module_version: 0.1

var:
  - name: paths
    default:
      - /var/ossec/logs/alerts/alerts.json
  - name: index_prefix
    default: site-a-alerts-

input: config/alerts.yml

ingest_pipeline: ingest/pipeline.json
Note: Make sure that a copy of the `wazuh-certificates.tar` file created during the generating certificates step is placed in your working directory.
12. Replace `<SERVER_NODE_NAME>` with your Wazuh server node name, the same one set in `config.yml` when creating the certificates, and move the certificates to their corresponding directories:
# NODE_NAME=<SERVER_NODE_NAME>
# mkdir /etc/filebeat/certs
# tar -xf ./wazuh-certificates.tar -C /etc/filebeat/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
# mv -n /etc/filebeat/certs/$NODE_NAME.pem /etc/filebeat/certs/filebeat.pem
# mv -n /etc/filebeat/certs/$NODE_NAME-key.pem /etc/filebeat/certs/filebeat-key.pem
# chmod 500 /etc/filebeat/certs
# chmod 400 /etc/filebeat/certs/*
# chown -R root:root /etc/filebeat/certs
13. Enable and start the Filebeat service:
# systemctl daemon-reload
# systemctl enable filebeat
# systemctl start filebeat
14. Run the following command to verify that Filebeat is successfully installed:
# filebeat test output
[root@Site-A ~]# filebeat test output
elasticsearch: https://192.168.186.151:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 192.168.186.151
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.3
    dial up... OK
  talk to server... OK
  version: 7.10.2
elasticsearch: https://192.168.186.152:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 192.168.186.152
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.3
    dial up... OK
  talk to server... OK
  version: 7.10.2
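Once Filebeat ships the first alerts, you can optionally confirm that they are being indexed under the custom site pattern using the standard OpenSearch `_cat/indices` endpoint against any indexer in the site (a quick sanity check, not a required step):

# curl -k -u admin:admin "https://192.168.186.151:9200/_cat/indices/site-a-alerts-*?v"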
Multi-node configuration
Perform the following steps to cluster the Wazuh server nodes in your sites.
For this blog post, we configured `sa-wazuh-server-1` as the master node and `sa-wazuh-server-2` as the worker node in site A.
Master node
1. Generate a random hexadecimal key for the master and worker node communication:
# openssl rand -hex 16
2. Edit the Wazuh configuration file `/var/ossec/etc/ossec.conf` on the Wazuh server master node and replace the highlighted values:
<cluster>
  <name>wazuh</name>
  <node_name>sa-wazuh-server-1</node_name>
  <node_type>master</node_type>
  <key>42977f78f55b2c0***************</key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>192.168.186.151</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>
Where:
- `<name>`: Indicates the name of the cluster.
- `<node_name>`: Indicates the name of the current node.
- `<node_type>`: Specifies the role of the node. It has to be set to `master` for the master node and `worker` for the worker node.
- `<key>`: The key used to encrypt communication between the cluster nodes. The key must be 32 characters long and the same for all of the nodes in the cluster. The following command can be used to generate a random key: `openssl rand -hex 16`.
- `<port>`: Indicates the destination port for cluster communication. The default is `1516`.
- `<bind_addr>`: Indicates the network IP to which the node is bound to listen for incoming requests (`0.0.0.0` for any IP).
- `<nodes>`: Indicates the address of the master node, which can be either an IP or a DNS name. This parameter must be specified in all nodes, including the master itself.
- `<hidden>`: Indicates whether to show or hide the cluster information in the generated alerts. It can be set to `yes` or `no`. The default is `no`.
- `<disabled>`: Indicates whether the node is enabled or disabled in the cluster. This option must be set to `no`.
Use these same options for the Wazuh worker node.
3. Restart the Wazuh manager for the changes to take effect:
# systemctl restart wazuh-manager
Worker node
1. Edit the `/var/ossec/etc/ossec.conf` file on the Wazuh server worker node and replace the highlighted values:
<cluster>
  <name>wazuh</name>
  <node_name>sa-wazuh-server-2</node_name>
  <node_type>worker</node_type>
  <key>42977f78f55b2c0***************</key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>192.168.186.151</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>
2. Restart the Wazuh manager for the changes to take effect:
# systemctl restart wazuh-manager
Testing the Wazuh server cluster
Run the following command on any of the Wazuh server nodes to test that the Wazuh server cluster is enabled and all nodes are connected:
# /var/ossec/bin/cluster_control -l
[root@Site-A ~]# /var/ossec/bin/cluster_control -l
NAME               TYPE    VERSION  ADDRESS
sa-wazuh-server-1  master  4.7.5    192.168.186.151
sa-wazuh-server-2  worker  4.7.5    192.168.186.152
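Once Wazuh agents are enrolled, you can also confirm on a site’s master node that they report to the local cluster by listing the connected agents with the `agent_control` utility that ships with the Wazuh manager:

# /var/ossec/bin/agent_control -l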
Setting up the Wazuh dashboard
Perform the following steps on the Wazuh dashboard node.
1. Install the necessary dependencies:
# yum install libcap
2. Import the Wazuh GPG key, and add the Wazuh repository:
# rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
# echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo
3. Install the Wazuh dashboard package:
# yum -y install wazuh-dashboard
4. Edit the `/etc/wazuh-dashboard/opensearch_dashboards.yml` file to add the URL values of the Wazuh indexers for all sites in the `opensearch.hosts` section. Also, add the IP address of the dashboard node to the `server.host` setting:
server.host: 192.168.100.100
server.port: 443
opensearch.hosts: ["https://192.168.186.151:9200", "https://192.168.186.152:9200", "https://192.168.10.11:9200"]
opensearch.ssl.verificationMode: certificate
Note: Make sure that a copy of the `wazuh-certificates.tar` file created during the generating certificates step is placed in your working directory.
5. Replace `<DASHBOARD_NODE_NAME>` with your Wazuh dashboard node name, the same one set in `config.yml` when creating the certificates, and move the certificates to their corresponding directories:
# NODE_NAME=<DASHBOARD_NODE_NAME>
# mkdir /etc/wazuh-dashboard/certs
# tar -xf ./wazuh-certificates.tar -C /etc/wazuh-dashboard/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
# mv -n /etc/wazuh-dashboard/certs/$NODE_NAME.pem /etc/wazuh-dashboard/certs/dashboard.pem
# mv -n /etc/wazuh-dashboard/certs/$NODE_NAME-key.pem /etc/wazuh-dashboard/certs/dashboard-key.pem
# chmod 500 /etc/wazuh-dashboard/certs
# chmod 400 /etc/wazuh-dashboard/certs/*
# chown -R wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs
6. Enable and start the Wazuh dashboard service:
# systemctl daemon-reload
# systemctl enable wazuh-dashboard
# systemctl start wazuh-dashboard
7. Edit the `/usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml` file and add the following values:
- `ip.ignore`: Set the value to the default `wazuh-alerts-*` index pattern. This removes it from the list of index patterns that can be selected for the sites.
- `ip.selector`: This enables the selection of index patterns for each site.
- `url`: The IP address of the Wazuh server master node for the respective site. Specify this for each site under the `hosts` section.
- `port`: The Wazuh API communication port. By default, the value is set to `55000`. Specify this for each site under the `hosts` section.
- `username`: The username used to authenticate the connection to the Wazuh API. By default, the value is set to `wazuh-wui`. Specify this for each site under the `hosts` section.
- `password`: The password used to authenticate the connection to the Wazuh API. By default, the value is set to `wazuh-wui`. Specify this for each site under the `hosts` section.
- `run_as`: By default, this is set to `false`. Set the value to `true` for Wazuh server role mapping to take effect. Specify this for each site under the `hosts` section.
ip.ignore: wazuh-alerts-*
ip.selector: true
hosts:
  - SITE A:
      url: https://192.168.186.151
      port: 55000
      username: wazuh-wui
      password: wazuh-wui
      run_as: true
  - SITE B:
      url: https://192.168.10.11
      port: 55000
      username: wazuh-wui
      password: wazuh-wui
      run_as: true
8. Restart the Wazuh dashboard service:
# systemctl restart wazuh-dashboard
9. You can now access the Wazuh dashboard with your credentials:
URL: https://<WAZUH_DASHBOARD_IP>
Username: admin
Password: admin
Using the Wazuh multi-site dashboard
In this section, we show how to configure and use the Wazuh dashboard after successfully configuring the other Wazuh components for multi-site use.
Add index pattern to Wazuh dashboard
Follow the steps below to create the index patterns for each site on the Wazuh dashboard.
1. Navigate to Stack Management > Index Patterns and select Create index pattern.
2. Enter the custom index pattern created earlier in the Setting up the Wazuh server section, `site-a-alerts-*`, and select Next step.
3. Select timestamp as the primary time field.
4. Select Create index pattern to create the index pattern.
Repeat steps 2 to 4 to create the index pattern for `site-b-alerts-*`.
Navigating the Wazuh dashboard
To view alerts for each site, the index pattern and the API selection must match on the Wazuh dashboard.
1. To view site A alerts, select the following:
- Index pattern: `site-a-alerts-*`
- API: SITE A
2. To view site B alerts, select the following:
- Index pattern: `site-b-alerts-*`
- API: SITE B
Role-based access control
Role-based access control (RBAC) is necessary to configure proper identity and access management (IAM). It simplifies management and enhances security by assigning permissions to roles rather than user accounts while mapping user accounts to roles. This ensures that users only have access to alerts they are authorized to view. You can refer to our role-based access control documentation for more information.
In a multi-site implementation, not every user is expected to view alerts from all the sites. This makes it necessary to implement role-based access control. With RBAC, we can have administrators with access to all sites, and users with access to a single site, multiple sites, or all sites.
In this section, we explore the creation and mapping of user accounts and roles in a multi-site infrastructure.
Note: Reserved roles are restricted for any permission customizations. You can create custom roles with the same permissions or duplicate a reserved role for further customization.
For the role mapping to take effect, make sure that `run_as` is set to `true` in the `/usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml` configuration file. Restart the Wazuh dashboard service and reload the dashboard page.
Creating a user account
Follow the steps below to create an internal user. This is the first step in setting up RBAC, as the user will later be mapped to a Wazuh indexer role and a Wazuh server role. These roles contain a set of permissions that determine what the user can do.
1. Log in to the Wazuh dashboard as an administrator.
2. Click the upper-left menu icon ☰ to open the options, select Security, and then Internal users to open the internal users page.
3. Click Create internal user, provide a username and password, and click Create to complete the action.
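As an alternative to the UI, the same internal user can be created through the security REST API of the Wazuh indexer (a sketch; the `jdoe` username and the password placeholder are illustrative, and the request can target any indexer node):

# curl -k -u admin:admin -X PUT "https://192.168.186.151:9200/_plugins/_security/api/internalusers/jdoe" -H 'Content-Type: application/json' -d '{"password": "<STRONG_PASSWORD>", "backend_roles": []}'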
Creating a multi-site admin user
This user account is suited for performing administrative functions in the Wazuh cluster across all sites.
Wazuh indexer role mapping
Map a user created in the Creating a user account section to the `admin` role by following the steps below. This gives access to view and perform administrator actions on all indices across the multi-site deployment.
1. Click the upper-left menu icon ☰ to open the options, select Security, and then Roles to open the roles page.
2. Search for the `all_access` role in the roles list and select it to open the details window.
3. Click Duplicate role, assign a name to the new role, for example, `custom_all_access`, then click Create to confirm the action.
4. Select the Mapped users tab and click Manage mapping.
5. Select the user you created earlier and click Map to confirm the action.
6. Elevate the selected user by adding a backend role to enable it to perform security functions:
- Go to Security > Internal users, and select the user account.
- Add `admin` to the backend roles and save the changes.
Wazuh server role mapping
Map the user as an `admin` for the Wazuh servers by following the steps below. This gives access to perform administrator actions on all Wazuh servers across the multi-site deployment.
1. Click the upper-left menu icon ☰ to open the available options, and click Wazuh > Wazuh.
2. Click Wazuh to open the Wazuh dashboard menu, select Security, and then Roles mapping.
3. Click Create Role mapping and complete the empty fields with the following parameters:
- Role mapping name: Assign a name to the role mapping, for example, `custom_admin`.
- Roles: Select administrator.
- Internal users: Select the internal user created previously.
4. Click Save role mapping to save and map the user as an administrator.
5. Toggle the API on the top-right corner of the dashboard to select another Wazuh server, and repeat steps 1 to 4 to perform similar role mapping.
Creating a multi-site read_only user
This user account is suited for only viewing alerts on the Wazuh dashboard across all sites.
Wazuh indexer role mapping
Map a user created in the Creating a user account section to the `read_only` role by following the steps below. This gives access to perform read-only user actions on all indices across the multi-site deployment.
1. Click the upper-left menu icon ☰ to open the options, select Security, and then Roles to open the roles page.
2. Click Create role, complete the empty fields with the following parameters, and then click Create to complete the task.
- Name: Assign a name to the role, for example, `custom_readall`.
- Cluster permissions: Select `cluster_composite_ops_ro` from the dropdown list.
- Index: `*`
- Index permissions: Select `read` from the dropdown list.
- Tenant permissions: Select `global_tenant` and select the Read only option.
3. Select the Mapped users tab and click Manage mapping.
4. Select the user created earlier and click Map to confirm the action.
Wazuh server role mapping
Map the user as a `read_only` user for the Wazuh servers by following the steps below. This gives access to perform read-only user actions on all Wazuh servers across the multi-site deployment.
1. Click the upper-left menu icon ☰ to open the available options, and click Wazuh.
2. Click Wazuh to open the Wazuh dashboard menu, select Security, and then Roles mapping.
3. Click Create Role mapping and complete the empty fields with the following parameters:
- Role mapping name: Assign a name to the role mapping, for example, `custom_user`.
- Roles: Select readonly.
- Internal users: Select the internal user created previously.
- Internal users: Select the internal user created previously.
4. Click Save role mapping to save and map the user as a `read_only` user.
5. Toggle the API on the top-right corner of the dashboard to select another Wazuh server, and repeat steps 1 to 4 to perform similar role mapping.
Creating a single-site read_only user
This user account is suited for only reading alerts on the Wazuh dashboard for a single site.
Wazuh indexer role mapping
Map a user created in the Creating a user account section to the `read_only` role for a single site by following the steps below. This gives access to perform read-only user actions for the selected site.
1. Click the upper-left menu icon ☰ to open the options, select Security, and then Roles to open the roles page.
2. Click Create role, complete the empty fields with the following parameters, and then click Create to complete the task.
- Name: Assign a name to the role, for example, `custom_read_site_a`.
- Cluster permissions: `cluster_composite_ops_ro`
- Index: `site-a-alerts-*` (the index pattern of the site the user is allowed to read)
- Index permissions: `read`
- Tenant permissions: Select `global_tenant` and select the Read only option.
3. Select the Mapped users tab and click Manage mapping.
4. Select the user created earlier and click Map to confirm the action.
Wazuh server role mapping
Map the user as a `read_only` user for the Wazuh servers in a site by following the steps below. This gives the user access to perform read-only actions on Wazuh servers in the selected site.
1. Click the upper-left menu icon ☰ to open the available options, and click Wazuh > Wazuh.
2. Click Wazuh to open the Wazuh dashboard menu, select Security, and then Roles mapping.
3. Click Create Role mapping and complete the empty fields with the following parameters:
- Role mapping name: Assign a name to the role mapping, for example, `site_a_user`.
- Roles: Select readonly.
- Internal users: Select the internal user created previously.
Note: Make sure that step 3 is performed under the API matching the alert index pattern you want to assign to the user.
4. Click Save role mapping to save and map the user as a `read_only` user.
Recommendations
Below are some security and storage optimization suggestions.
- Store the `root-ca.key` and `root-ca.pem` files in a secure location for future use, and remove them from all nodes after generating the certificates.
- Remove the `wazuh-certificates.tar` file by running `rm -f ./wazuh-certificates.tar` if no other Wazuh component is to be installed on the node.
- Change the default `admin` password by following our password management documentation.
- Configure index retention policies to remove old indices from the Wazuh indexer and free up storage space. This is necessary to manage storage due to the replication of indices across all Wazuh indexer nodes. A sample policy is sketched after this list.
Conclusion
In this post, we explored the implementation of Wazuh in geographically distributed sites to ensure comprehensive security coverage. This approach enables organizations to monitor and respond to security events across distributed IT infrastructures effectively from a centralized point. By centralizing log data and security alerts from multiple sites, Wazuh provides a unified view of the organization’s security posture, facilitating streamlined incident response and compliance efforts.
Additionally, Wazuh multi-site implementation enhances scalability and resilience. This allows organizations to adapt to evolving security challenges while maintaining operational continuity across their entire infrastructure.
If you have any questions or require assistance regarding this setup, refer to our community channels.