Amazon Security Lake is a fully managed service that helps organizations aggregate, store, and analyze security data from various sources, such as AWS services, on-premises logs, and third-party SaaS applications. Security administrators can use AWS services like Athena to query the security data, which gives them insight into potential threats and vulnerabilities across an organization’s infrastructure.
Amazon Security Lake has adopted the Open Cybersecurity Schema Framework (OCSF), an open source standard that offers an extensible framework for developing schemas. This vendor-agnostic core security schema ensures seamless integration and interoperability across various log sources and security data in AWS, providing a unified view of your security data across your organization.
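As an illustration, a Wazuh alert mapped to the OCSF Security Finding class takes a shape similar to the sketch below. The exact fields and values are produced by the integration’s transformation logic; the values shown here are hypothetical and only meant to convey the general structure of an OCSF event:
{
  "class_uid": 2001,
  "class_name": "Security Finding",
  "severity_id": 3,
  "time": 1713795646976,
  "message": "Audit: Command: /usr/sbin/crond",
  "metadata": {
    "product": {
      "name": "Wazuh",
      "vendor_name": "Wazuh, Inc."
    },
    "version": "1.1.0"
  }
}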
A properly configured Logstash instance forwards Wazuh security events to an AWS S3 bucket. This upload triggers the AWS Lambda function, which transforms the security events and writes them to the Amazon Security Lake S3 bucket.
The diagram below illustrates the process of converting Wazuh security events to OCSF events and Parquet format for Amazon Security Lake:
Benefits of integrating Wazuh with Amazon Security Lake
Integrating Wazuh with Amazon Security Lake offers several benefits that can significantly enhance security operations. These benefits include the following:
- Centralized security data management: Unifying security data from Wazuh with logs from other sources within Amazon Security Lake simplifies data management and improves visibility across the infrastructure.
- Enhanced threat detection: Analyze enriched security data sets across your IT infrastructure to gain deeper insights into potential security threats. The rich security context provided by Wazuh and the advanced analytics capabilities of Amazon Security Lake empower you to identify and prioritize threats effectively.
- Improved security operations efficiency: Reduce the time and effort spent on data collection and management. Integrating Wazuh with Amazon Security Lake simplifies data access and analysis, allowing security professionals to focus on proactive threat hunting and incident response.
- Simplified compliance management: Integrating Wazuh with Amazon Security Lake helps organizations maintain a secure and auditable trail of all security events and logs. This integration enhances the ability to perform thorough and efficient compliance audits and investigations by providing centralized, immutable, and easily searchable log data.
Requirements
We use the following infrastructure to demonstrate the integration of Wazuh with Amazon Security Lake as a custom source:
- A pre-built, ready-to-use Wazuh OVA 4.8.0 that hosts the Wazuh central components (Wazuh server, Wazuh indexer, and Wazuh dashboard). Follow this guide to download the virtual machine.
- AWS account to configure and enable Amazon Security Lake. The account must have administrator access to perform all the necessary actions.
- A Logstash instance installed on a dedicated endpoint or the Wazuh server.
- An AWS S3 bucket to store events.
- An AWS Lambda function using the Python 3.12 runtime.
- Amazon Athena to query Amazon Security Lake data.
Configuration
Amazon Web Services
Configure Amazon Security Lake
Perform the following steps to enable Amazon Security Lake.
1. Navigate to the AWS portal, search for Security Lake in the search bar, and select it.
2. Select Get started if Security Lake was not previously configured; otherwise, skip to the next step.
3. Select Ingest the AWS default sources (recommended), or Ingest specific AWS sources if you want to enable Security Lake for specific sources only.
4. Select All supported Regions (recommended) or Specific Regions. Security Lake ingests data from selected regions.
5. Select Create and use a new service role or Use an existing service role if you had a role previously defined. This role grants Security Lake permission to process data from your sources.
6. Click Next > Next > Create.
Create a custom source for Wazuh
Perform the following steps to register Wazuh as a custom source.
1. Click Custom sources on the left side menu of the Amazon Security Lake console.
2. Click Create custom source.
3. Enter the Data source name. You can use any descriptive name, but in this blog post, we use Wazuh_Source.
4. Select Security Finding as the OCSF Event class.
5. Enter the AWS account ID and External ID of the custom source that will write logs and events to the data lake. The External ID can be any random value consisting of 12 digits.
6. Select Create and use a new service role or Use an existing service role. This role permits Security Lake to invoke AWS Glue under the Service Access section.
7. Click Create. During creation, Amazon Security Lake automatically creates an AWS Service Role with permission to push files into the Security Lake bucket under the proper prefix named after the custom source name. An AWS Glue Crawler is also created to automatically populate the AWS Glue Data Catalog.
Note: Copy and save the details of the Amazon Security Lake S3 bucket shown in the image above. This S3 bucket stores the transformed events. These details will be needed in a subsequent step. Make sure you have the following information:
- The Amazon Security Lake S3 bucket region.
- The AWS S3 bucket name. The name is under the Location field in the image above, excluding the s3:// prefix.
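For reference, the Location value of a custom source generally follows the pattern below, where everything between s3:// and the first / is the bucket name. The bucket suffix shown here is illustrative; yours will differ:
s3://aws-security-data-lake-us-east-1-<RANDOM_SUFFIX>/ext/Wazuh_Source/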
Create an AWS S3 bucket to store events
Perform the following steps to create an AWS S3 bucket to store Wazuh security data sent from Logstash.
1. Navigate to the AWS portal and search for S3. Click on the S3 service.
2. Choose Create bucket.
3. Confirm the AWS Region where your S3 bucket will be created under General configuration.
4. Select General purpose under Bucket type.
5. Enter a name for the bucket under Bucket name. You can use a descriptive name like wazuh-aws-security-lake-events.
6. Leave the other configuration options at their default and click Create bucket.
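Alternatively, if you prefer the command line, you can create an equivalent bucket with the AWS CLI. This is a minimal sketch using the example bucket name and region from this blog post; for regions other than us-east-1, also pass --create-bucket-configuration LocationConstraint=<REGION>:
# aws s3api create-bucket --bucket wazuh-aws-security-lake-events --region us-east-1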
Create and configure an AWS Lambda function
Perform the following steps to create and configure the AWS Lambda function.
1. Navigate to the AWS portal and search for Lambda. Click the Lambda service.
2. Select Create Function.
3. Select Author from scratch.
4. Enter the Function name. You can use any descriptive name of your choice, but in this blog post, we use wazuh-aws-security-lake.
5. Choose Python 3.12 as the Runtime.
6. Leave architecture set to x86_64.
7. Expand Change default execution role and select Create a new role with basic Lambda permissions under the Execution role section.
8. Click Create function.
9. Select Configuration > General configuration from the wazuh-aws-security-lake function page and click Edit.
10. Change the value of Memory under Basic settings to 512 MB. Also, change the Timeout value to 30 seconds. Then click Save.
11. Navigate to Triggers > Add trigger, then search for and select S3 in the select a source search field.
12. Choose the S3 bucket you created earlier under the Bucket field. For this blog post, the name of the bucket created earlier is wazuh-aws-security-lake-events.
13. Enter .txt under Suffix. Select the acknowledge checkbox and click Add.
14. Navigate to Permissions and click the Role name link. The wazuh-aws-security-lake-role-hl5vrcdo role page will open in a new browser tab. This role was created along with the Lambda function.
Note: The name of your role will differ from the one in this blog post.
15. Select Add permissions > Create inline policy.
16. Select JSON as the Policy editor on the Specify permissions page.
17. Remove the default policy from the Policy editor workspace.
18. Copy and paste the below policy into the Policy editor workspace.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "GetObjectPermissions", "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3:::<S3_BUCKET_FOR_LOGSTASH>/*" ] }, { "Sid": "PutObjectPermissions", "Effect": "Allow", "Action": [ "s3:PutObject" ], "Resource": [ "arn:aws:s3:::<S3_BUCKET_FOR_SECURITY_LAKE>*" ] } ] }
Replace:
- <S3_BUCKET_FOR_LOGSTASH> with the name of the AWS S3 bucket that stores Wazuh security events sent from Logstash.
- <S3_BUCKET_FOR_SECURITY_LAKE> with the name of the AWS S3 bucket that stores the transformed Wazuh security events in Amazon Security Lake.
19. Click Next, and type a name in the Policy name field. You can use any descriptive name of your choice, but in this blog post, we use wazuh-aws-lamdafunction-permission.
20. Click Create policy.
21. Navigate to Environment variables from the wazuh-aws-security-lake function page and click Edit.
22. Click Add environment variable. Enter each key and its corresponding value. At a minimum, configure the Lambda function with the required environment variables below:
| Environment variable | Required | Value |
|---|---|---|
| AWS_BUCKET | True | The name of the Amazon S3 bucket in which Security Lake stores your custom source data |
| SOURCE_LOCATION | True | The Data source name of the custom source |
| ACCOUNT_ID | True | The AWS account ID that you specified when creating your Amazon Security Lake custom source |
| REGION | True | The AWS Region to which the data is written |
| S3_BUCKET_OCSF | False | The S3 bucket to which the mapped events are written |
| OCSF_CLASS | False | The OCSF class to map the events into. It can be SECURITY_FINDING (the default) or DETECTION_FINDING |
23. Click Save after entering the environment variables.
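For reference, a hypothetical set of values based on the examples used in this blog post could look as follows. The Security Lake bucket suffix and the account ID below are illustrative; use the values from your own environment:
AWS_BUCKET      = aws-security-data-lake-us-east-1-dtjenixd0c25ajvuo7ic21u2mst8kk
SOURCE_LOCATION = Wazuh_Source
ACCOUNT_ID      = 123456789012
REGION          = us-east-1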
Wazuh server
Perform the following steps on your Wazuh server.
Installing and configuring Logstash
For this blog post, we installed Logstash 8.10 on the Wazuh server, but you can also install it separately on a dedicated server.
Install Logstash and the required plugin on the Wazuh server to forward security data from the Wazuh indexer to the AWS S3 bucket.
1. Perform the following steps to install Logstash.
a. Run the following command to download and install the public signing key:
# sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
b. Create the logstash.repo file in the /etc/yum.repos.d/ directory:
# sudo touch /etc/yum.repos.d/logstash.repo
c. Add the following content to the /etc/yum.repos.d/logstash.repo file:
[logstash-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
d. Install Logstash with the command below:
# sudo yum install logstash
2. Run the following commands to install the logstash-input-opensearch and logstash-output-s3 plugins. The logstash-output-s3 plugin may already be bundled with your Logstash installation, in which case the second command simply reports it as installed.
# sudo /usr/share/logstash/bin/logstash-plugin install logstash-input-opensearch
# sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-s3
3. Copy the Wazuh indexer root certificate to the Logstash instance. In this blog post, Logstash runs on the Wazuh server, so we copy the certificate to the /usr/share/logstash/ directory. On a pre-built, ready-to-use Wazuh OVA, the Wazuh indexer root certificate is located in the /etc/wazuh-indexer/certs/ directory.
# cp /etc/wazuh-indexer/certs/root-ca.pem /usr/share/logstash/
4. Give the logstash user permission to read the Wazuh indexer root certificate. Replace /usr/share/logstash/root-ca.pem with the local path of your Wazuh indexer root certificate on the Wazuh server.
# sudo chmod -R 755 /usr/share/logstash/root-ca.pem
Configure the Logstash pipeline
Configure a Logstash pipeline that uses plugins to read data from the Wazuh indexer and send it to an Amazon S3 bucket. The Logstash pipeline requires access to the following credentials:
- Wazuh indexer credentials: INDEXER_USERNAME and INDEXER_PASSWORD.
- AWS IAM credentials of the administrator account to write to the S3 bucket: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
- AWS S3 bucket details: AWS_REGION and S3_BUCKET (bucket name).
We use the Logstash keystore to store these values securely.
1. Run the following commands on your Logstash server to set a keystore password. You need to create the /etc/sysconfig directory as root if it does not exist on your server.
# set +o history
# echo 'LOGSTASH_KEYSTORE_PASS="<MY_KEYSTORE_PASSWORD>"'| sudo tee /etc/sysconfig/logstash
LOGSTASH_KEYSTORE_PASS="<MY_KEYSTORE_PASSWORD>"
# export LOGSTASH_KEYSTORE_PASS=<MY_KEYSTORE_PASSWORD>
# set -o history
# sudo chown root /etc/sysconfig/logstash
# sudo chmod 600 /etc/sysconfig/logstash
# sudo systemctl start logstash
Replace <MY_KEYSTORE_PASSWORD> with your keystore password.
2. Run the following commands to securely store the Wazuh indexer, AWS IAM, and AWS bucket credentials.
a. Create a new Logstash keystore:
# sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
b. Store your Wazuh indexer username and password:
# sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add INDEXER_USERNAME
# sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add INDEXER_PASSWORD
c. Store your AWS IAM credentials:
# sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add AWS_ACCESS_KEY_ID
# sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add AWS_SECRET_ACCESS_KEY
d. Store your AWS S3 bucket details:
# sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add AWS_REGION
# sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add S3_BUCKET
Note: INDEXER_USERNAME, INDEXER_PASSWORD, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, and S3_BUCKET in the commands above are not placeholders, but keys representing the secret values you are adding to the Logstash keystore. These keys will be used in the Logstash pipeline.
When you run each of the commands, you will be prompted to enter your credentials, and the credentials will not be visible as you enter them.
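Optionally, you can confirm that all six keys were stored by listing the contents of the keystore. This prints only the key names, not the secret values:
# sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash list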
3. Perform the following steps to configure the Logstash pipeline.
a. Create the configuration file indexer-to-s3.conf in the /etc/logstash/conf.d/ directory:
# sudo touch /etc/logstash/conf.d/indexer-to-s3.conf
b. Add the following configuration to the indexer-to-s3.conf file. This sets the parameters required to run Logstash.
input {
  opensearch {
    hosts => ["<WAZUH_INDEXER_ADDRESS>:9200"]
    user => "${INDEXER_USERNAME}"
    password => "${INDEXER_PASSWORD}"
    ssl => true
    ca_file => "/usr/share/logstash/root-ca.pem"
    index => "wazuh-alerts-4.x-*"
    query => '{
      "query": {
        "range": {
          "@timestamp": {
            "gt": "now-5m"
          }
        }
      }
    }'
    schedule => "*/5 * * * *"
  }
}
output {
  stdout {
    id => "output.stdout"
    codec => json_lines
  }
  s3 {
    id => "output.s3"
    access_key_id => "${AWS_ACCESS_KEY_ID}"
    secret_access_key => "${AWS_SECRET_ACCESS_KEY}"
    region => "${AWS_REGION}"
    bucket => "${S3_BUCKET}"
    codec => "json_lines"
    retry_count => 0
    validate_credentials_on_root_bucket => false
    prefix => "%{+YYYY}%{+MM}%{+dd}"
    server_side_encryption => true
    server_side_encryption_algorithm => "AES256"
    additional_settings => {
      "force_path_style" => true
    }
    time_file => 5
  }
}
Replace <WAZUH_INDEXER_ADDRESS> with the Wazuh indexer IP address, as set in the network.host field of the /etc/wazuh-indexer/opensearch.yml file.
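Before starting the pipeline, you can optionally confirm that the Wazuh indexer is reachable and that alert indices exist. The command below is a sketch; replace the placeholders with the same indexer address and credentials used in the pipeline configuration:
# curl --cacert /usr/share/logstash/root-ca.pem -u <INDEXER_USERNAME>:<INDEXER_PASSWORD> "https://<WAZUH_INDEXER_ADDRESS>:9200/_cat/indices/wazuh-alerts-4.x-*?v"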
Verify the Logstash configuration
Perform the following steps to confirm that the configurations load correctly.
1. Run Logstash from the CLI with your configuration:
# sudo systemctl stop logstash
# sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/indexer-to-s3.conf --path.settings /etc/logstash --config.test_and_exit
2. After confirming that the configuration loads correctly without errors, enable and run Logstash as a service:
# sudo systemctl enable logstash
# sudo systemctl start logstash
Note: The /var/log/logstash/logstash-plain.log file in the Logstash instance stores events generated when Logstash runs. View this file to troubleshoot any issues you may encounter.
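For example, you can follow the log in real time while Logstash runs:
# sudo tail -f /var/log/logstash/logstash-plain.log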
Generating zip deployment package for AWS Lambda function
Perform the following steps to generate a zip deployment package containing the source code and the required dependencies for the AWS Lambda function. The Makefile in the integrations/amazon-security-lake/ directory of the wazuh-indexer repository automates the generation of the zip deployment package.
Docker is required to generate the zip deployment package using the make command.
1. Run the following command to clone the wazuh-indexer repository:
# cd /tmp
# git clone https://github.com/wazuh/wazuh-indexer.git
2. Perform the following steps to install Docker.
a. Run the following command to update all the packages on the Wazuh server.
# sudo yum update -y
b. Run the command below to install Docker.
# sudo amazon-linux-extras install docker
c. Start the Docker service.
# sudo service docker start
3. Run the following commands to generate the wazuh_to_amazon_security_lake.zip deployment package.
# cd /tmp/wazuh-indexer/integrations/amazon-security-lake
# sudo make
4. Run the below command to view the deployment package.
# ls | grep "wazuh_to_amazon_security_lake.zip"
wazuh_to_amazon_security_lake.zip
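Optionally, you can list the contents of the package to confirm that the Lambda handler and its dependencies were bundled. This assumes the unzip utility is installed on the endpoint:
# unzip -l wazuh_to_amazon_security_lake.zip | head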
Note: You can generate the deployment package on a different endpoint and upload it to the S3 bucket.
AWS CLI
AWS CLI is installed by default on Wazuh OVA version 4.4.5 and later. Perform the following steps to configure AWS CLI and upload the wazuh_to_amazon_security_lake.zip file to the S3 bucket created earlier. We also create and upload a sample test file containing sample events to validate that the Lambda function works as expected.
1. Verify that AWS CLI is installed on the Wazuh server.
# aws --version
2. Run the following command to configure AWS CLI to access your AWS instance. You will be prompted to enter the AWS Access Key ID, AWS Secret Access Key, Default region name, and Default output format. When prompted for the Default output format, press the Enter key on your keyboard to accept the default JSON format. An optional command to verify the configured credentials follows the list below.
# sudo aws configure
Where:
- AWS Access Key ID: represents the access key of your AWS IAM account.
- AWS Secret Access Key: represents the secret access key of your AWS IAM account.
- Default region name: represents the region where your AWS account is located.
- Default output format: represents the output format of the AWS CLI.
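As an optional check, the following command confirms that the configured credentials are valid and shows the AWS account ID they belong to:
# sudo aws sts get-caller-identity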
3. Run the following command to upload the wazuh_to_amazon_security_lake.zip file to the wazuh-aws-security-lake-events S3 bucket:
# sudo aws s3 cp /tmp/wazuh_to_amazon_security_lake.zip s3://wazuh-aws-security-lake-events/wazuh_to_amazon_security_lake.zip
4. Run the following commands to create a sample test file and upload it to the wazuh-aws-security-lake-events S3 bucket.
a. Create a sample file named 20240422_ls.s3.2f062956-5a30-4c2a-b693-a0f5d878294c.2024-04-22T14.20.part39.txt in the /tmp directory.
b. Add the following sample events to the file:
{"cluster":{"name":"wazuh-cluster","node":"wazuh-manager"},"timestamp":"2024-04-22T14:20:46.976+0000","rule":{"mail":false,"gdpr":["IV_30.1.g"],"groups":["audit","audit_command"],"level":3,"firedtimes":1,"id":"80791","description":"Audit: Command: /usr/sbin/crond"},"location":"","agent":{"id":"004","ip":"47.204.15.21","name":"Ubuntu"},"data":{"audit":{"type":"NORMAL","file":{"name":"/etc/sample/file"},"success":"yes","command":"cron","exe":"/usr/sbin/crond","cwd":"/home/wazuh"}},"predecoder":{},"manager":{"name":"wazuh-manager"},"id":"1580123327.49031","decoder":{},"@version":"1","@timestamp":"2024-04-22T14:20:46.976Z"}
{"cluster":{"name":"wazuh-cluster","node":"wazuh-manager"},"timestamp":"2024-04-22T14:22:03.034+0000","rule":{"mail":false,"gdpr":["IV_30.1.g"],"groups":["audit","audit_command"],"level":3,"firedtimes":1,"id":"80790","description":"Audit: Command: /usr/sbin/bash"},"location":"","agent":{"id":"007","ip":"24.273.97.14","name":"Debian"},"data":{"audit":{"type":"PATH","file":{"name":"/bin/bash"},"success":"yes","command":"bash","exe":"/usr/sbin/bash","cwd":"/home/wazuh"}},"predecoder":{},"manager":{"name":"wazuh-manager"},"id":"1580123327.49031","decoder":{},"@version":"1","@timestamp":"2024-04-22T14:22:03.034Z"}
{"cluster":{"name":"wazuh-cluster","node":"wazuh-manager"},"timestamp":"2024-04-22T14:22:08.087+0000","rule":{"id":"1740","mail":false,"description":"Sample alert 1","groups":["ciscat"],"level":9},"location":"","agent":{"id":"006","ip":"207.45.34.78","name":"Windows"},"data":{"cis":{"rule_title":"CIS-CAT 5","timestamp":"2024-04-22T14:22:08.087+0000","benchmark":"CIS Ubuntu Linux 16.04 LTS Benchmark","result":"notchecked","pass":52,"fail":0,"group":"Access, Authentication and Authorization","unknown":61,"score":79,"notchecked":1,"@timestamp":"2024-04-22T14:22:08.087+0000"}},"predecoder":{},"manager":{"name":"wazuh-manager"},"id":"1580123327.49031","decoder":{},"@version":"1","@timestamp":"2024-04-22T14:22:08.087Z"}
c. Run the following command to copy the 20240422_ls.s3.2f062956-5a30-4c2a-b693-a0f5d878294c.2024-04-22T14.20.part39.txt file to the wazuh-aws-security-lake-events S3 bucket:
# sudo aws s3 cp /tmp/20240422_ls.s3.2f062956-5a30-4c2a-b693-a0f5d878294c.2024-04-22T14.20.part39.txt s3://wazuh-aws-security-lake-events/20240422_ls.s3.2f062956-5a30-4c2a-b693-a0f5d878294c.2024-04-22T14.20.part39.txt
5. Run the below command to confirm that the wazuh_to_amazon_security_lake.zip and 20240422_ls.s3.2f062956-5a30-4c2a-b693-a0f5d878294c.2024-04-22T14.20.part39.txt files were uploaded successfully to the S3 bucket:
# sudo aws s3 ls s3://wazuh-aws-security-lake-events
20240422_ls.s3.2f062956-5a30-4c2a-b693-a0f5d878294c.2024-04-22T14.20.part39.txt
2024-05-27 21:32:14 67442781 wazuh_to_amazon_security_lake.zip
Validating and querying Amazon Security Lake Data
Verifying the Lambda function is working properly
1. Navigate to the AWS portal and search for Lambda. Click the Lambda service.
2. Click the wazuh-aws-security-lake Lambda function.
3. Click Code from the Function overview page and select Upload from. Select Amazon S3 location and enter the Amazon S3 link URL for the wazuh_to_amazon_security_lake.zip file. The format of the AWS S3 URL for a file is https://<BUCKET_NAME>.s3.<S3_REGION>.amazonaws.com/<PATH_OF_THE_FILE>. Click Save.
Where:
- <BUCKET_NAME> represents the name of the AWS S3 bucket containing the wazuh_to_amazon_security_lake.zip file. In this case, the name is wazuh-aws-security-lake-events.
- <S3_REGION> represents the region where your AWS account is located. In this scenario, the value is us-east-1.
- <PATH_OF_THE_FILE> represents the location of the wazuh_to_amazon_security_lake.zip file in the AWS S3 bucket. In this scenario, the path is /wazuh_to_amazon_security_lake.zip, as the file is located in the root directory of the wazuh-aws-security-lake-events AWS S3 bucket.
4. Click Test from the Function overview page.
5. Select Create new event and enter Wazuhtest as the Event name. Leave the Event sharing settings at the default.
6. Remove the content from the Event JSON field. Copy and paste the test code below into the Event JSON field. Ensure you replace <S3_BUCKET_REGION> with the region of your S3 bucket and <S3_BUCKET_NAME> with the name of your S3 bucket.
{ "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": "<S3_BUCKET_REGION>", "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": "<S3_BUCKET_NAME>", "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3:::<S3_BUCKET_NAME>" }, "object": { "key": "20240422_ls.s3.2f062956-5a30-4c2a-b693-a0f5d878294c.2024-04-22T14.20.part39.txt", "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] }
7. Click on Test to invoke and run the Lambda function. An Execution function: succeeded message with details of the execution logs is displayed, showing the transformation of the Wazuh security events to OCSF events.
[INFO] 2024-06-10T01:02:40.526Z Found credentials in environment variables.
START RequestId: a078a4e9-2404-425e-9f0a-ddf98f33fa3d Version: $LATEST
[INFO] 2024-06-10T01:02:40.695Z a078a4e9-2404-425e-9f0a-ddf98f33fa3d Lambda function invoked due to 20240422_ls.s3.2f062956-5a30-4c2a-b693-a0f5d878294c.2024-04-22T14.20.part39.txt.
[INFO] 2024-06-10T01:02:40.696Z a078a4e9-2404-425e-9f0a-ddf98f33fa3d Source bucket name is wazuh-aws-security-lake-events. Destination bucket is aws-security-data-lake-us-east-1-dtjenixd0c25ajvuo7ic21u2mst8kk.
[INFO] 2024-06-10T01:02:40.696Z a078a4e9-2404-425e-9f0a-ddf98f33fa3d Reading 20240422_ls.s3.2f062956-5a30-4c2a-b693-a0f5d878294c.2024-04-22T14.20.part39.txt.
[INFO] 2024-06-10T01:02:40.854Z a078a4e9-2404-425e-9f0a-ddf98f33fa3d Transforming Wazuh security events to OCSF.
[INFO] 2024-06-10T01:02:41.019Z a078a4e9-2404-425e-9f0a-ddf98f33fa3d Uploading data to aws-security-data-lake-us-east-1-dtjenixd0c25ajvuo7ic21u2mst8kk.
END RequestId: a078a4e9-2404-425e-9f0a-ddf98f33fa3d
Verifying logs are sent to the S3 bucket from Logstash
Perform the following steps to validate that Logstash sends logs to the wazuh-aws-security-lake-events S3 bucket.
1. Navigate to the AWS portal and search for S3. Click the S3 service.
2. Search for the name of the S3 bucket you created earlier and click on it to display its content. In this blog post, wazuh-aws-security-lake-events is the name of the S3 bucket.
3. Click the S3 object with the name format <YEARMONTHDAY>/. This name represents the day the log was sent from Logstash to the S3 bucket. For example, the object’s name in the screenshot below is 20240610/.
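You can also perform the same check from the Wazuh server with the AWS CLI. The sketch below assumes the bucket name and date prefix used in this blog post; adjust both to your environment:
# sudo aws s3 ls s3://wazuh-aws-security-lake-events/20240610/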
Verifying and querying logs with Amazon Athena
Amazon Athena is a powerful, serverless, interactive query service that allows you to analyze data directly in Amazon S3 using standard SQL. It is important to verify that the AWS Glue crawler ran after the logs were sent to the Amazon Security Lake S3 bucket; otherwise, run it manually so it populates the database table with the latest data under the custom source prefix of the Amazon Security Lake S3 bucket. The AWS Glue crawler automates the discovery and cataloging of your data, making it easy to query data stored in S3 with Amazon Athena.
Perform the following steps to view and query your security events with Amazon Athena.
1. Navigate to the AWS portal and search for AWS Glue. Click on the AWS Glue service.
a. Click Crawlers under Data Catalog.
b. Click Wazuh_Source. The name should be the same as that of your Amazon Security Lake custom source.
c. Check the last time the crawler ran under Crawler runs.
d. Click Run crawler to manually initiate the crawler so it discovers and properly catalogs your security events. It is necessary to repeat this process at regular intervals.
2. Navigate to the AWS portal and search for Athena. Click on the Athena service.
a. Navigate to the Settings page and click Manage. Click Browse S3 and select the S3 bucket that will store the results of Athena queries. For this blog post, we selected the Amazon Security Lake S3 bucket that stores the transformed security events.
b. Click Save and navigate to the Editor page.
c. Select AwsDataCatalog as the Data source.
d. Select amazon_security_lake_glue_db_us_east_1 as the Database. The database name might differ for different implementations based on the region where the custom source was created.
e. Click the ellipsis beside the amazon_security_lake_table_us_east_1_ext_wazuh_source table and select Preview Table. The table name is unique for every custom source, as it ends with the name of the custom source. This populates a query in the Query editor, and the result is shown under Query results.
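Preview Table generates a simple SELECT * query with a LIMIT clause. You can adapt it to filter events, as in the sketch below; the database and table names are the ones used in this blog post, and the time, severity_id, and message columns are assumed from the OCSF Security Finding schema, so adjust them to match your deployment:
SELECT "time", severity_id, message
FROM "amazon_security_lake_glue_db_us_east_1"."amazon_security_lake_table_us_east_1_ext_wazuh_source"
WHERE severity_id >= 4
ORDER BY "time" DESC
LIMIT 10;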
Conclusion
Integrating Wazuh with Amazon Security Lake allows organizations to streamline their security data management and gain deeper insights into potential threats. This integration centralizes security logs and leverages AWS’s analytics tools, like QuickSight and Athena, to provide actionable intelligence.
Wazuh is a free and open-source enterprise security solution for threat detection, incident response, and compliance. It integrates with various third-party solutions and technologies. Wazuh provides extensive and continuous support to our community. For more information, explore our documentation and blog posts.