Leveraging Claude Haiku in the Wazuh dashboard for LLM-powered insights

A Large Language Model (LLM) is an Artificial Intelligence (AI) program that recognizes, processes, and generates human-like text. Claude Haiku is an LLM designed by Anthropic that can perform tasks such as code completion, interactive chat, and content moderation.
The Claude Haiku model can be integrated as a chat feature in the Wazuh dashboard. This integration provides an interface within the Wazuh dashboard where users can type security-related questions and query the Claude Haiku model.
This blog post describes how to integrate the Claude 3.5 Haiku model into the Wazuh dashboard.
Requirements
We use the following infrastructure to demonstrate the integration of Claude 3.5 Haiku with Wazuh:
Configuration
Perform the following steps on the AWS console to enable the Claude 3.5 Haiku model.
1. Sign in to the AWS Management Console and search for “Amazon Bedrock”.
2. Choose Model access in the left navigation pane.
3. Click Enable specific models, and enable the Claude 3.5 Haiku model. You may need to contact support to enable this model.
4. After enabling the model, you should have an interface similar to the image below.
Perform the following steps on the AWS console to create an IAM user and obtain the access credentials.
1. Search for “IAM”.
2. Choose Users in the left navigation pane and click Create user.
3. Assign a name to the new user and complete the user creation process.
4. Choose Users on the left navigation pane and click on the newly created user.
5. Go to the Security credentials tab and click Create access key.
Save the credentials securely; they will be used later when creating a connector to the Claude Haiku model from the Wazuh dashboard.
6. Select the Application running outside AWS option and click Next.
7. Add a description tag for the secret keys and click Create access key.
8. Save the Access key and Secret access key.
Note: If you don’t copy the credentials before you click Done, you cannot recover them later. However, you can create a new secret access key.
Perform the following steps on the AWS console to configure the policies required for the integration.
1. Search for “IAM”.
2. Choose Policies in the left navigation pane, click Create policy, and paste the policy below. The policy grants the AWS Marketplace subscription permissions an IAM user needs to use Amazon Bedrock models.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MarketplaceBedrock",
      "Effect": "Allow",
      "Action": [
        "aws-marketplace:ViewSubscriptions",
        "aws-marketplace:Unsubscribe",
        "aws-marketplace:Subscribe"
      ],
      "Resource": "*"
    }
  ]
}
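Before pasting the policy into the console, you can sanity-check it locally. The sketch below is a hypothetical helper, not part of the integration; it parses the policy with Python's standard json module and verifies the basic fields IAM expects:

```python
import json

# The Marketplace policy from this step, as a string.
policy_text = '''
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MarketplaceBedrock",
      "Effect": "Allow",
      "Action": [
        "aws-marketplace:ViewSubscriptions",
        "aws-marketplace:Unsubscribe",
        "aws-marketplace:Subscribe"
      ],
      "Resource": "*"
    }
  ]
}
'''

policy = json.loads(policy_text)  # raises ValueError on malformed JSON

# Basic structural checks the IAM console would also enforce.
assert policy["Version"] == "2012-10-17"
for stmt in policy["Statement"]:
    assert stmt["Effect"] in ("Allow", "Deny")
    assert "Action" in stmt and "Resource" in stmt
```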
3. Assign a policy name and click Create policy.
4. Select the newly created policy, go to the Entities attached tab, and click Attach. Search for the previously created IAM user and attach the policy to it.
5. Confirm the policy was attached successfully.
6. Click on Policies on the left navigation pane and search for the “AmazonBedrockFullAccess” AWS-managed policy. Select the policy, click Actions, and Attach.
7. Search for the previously created AWS IAM user and attach the policy.
8. Confirm the policy was successfully added.
Perform the following steps on the server hosting the Wazuh dashboard to install the necessary OpenSearch plugins.
1. Download the OpenSearch Dashboard plugins file:
$ curl https://artifacts.opensearch.org/releases/bundle/opensearch-dashboards/2.13.0/opensearch-dashboards-2.13.0-linux-x64.tar.gz -o opensearch-dashboards.tar.gz
2. Decompress the plugin file:
$ tar -xvzf opensearch-dashboards.tar.gz
3. Copy the following plugins to the Wazuh dashboard plugins folder. We make use of the observabilityDashboards, mlCommonsDashboards, and assistantDashboards plugins:
# cp -r opensearch-dashboards-2.13.0/plugins/observabilityDashboards/ /usr/share/wazuh-dashboard/plugins/
# cp -r opensearch-dashboards-2.13.0/plugins/mlCommonsDashboards/ /usr/share/wazuh-dashboard/plugins/
# cp -r opensearch-dashboards-2.13.0/plugins/assistantDashboards/ /usr/share/wazuh-dashboard/plugins/
4. Set permissions and ownerships for the plugins so that the Wazuh dashboard can use them:
# chown -R wazuh-dashboard:wazuh-dashboard /usr/share/wazuh-dashboard/plugins/observabilityDashboards/
# chown -R wazuh-dashboard:wazuh-dashboard /usr/share/wazuh-dashboard/plugins/mlCommonsDashboards/
# chown -R wazuh-dashboard:wazuh-dashboard /usr/share/wazuh-dashboard/plugins/assistantDashboards/
# chmod -R 750 /usr/share/wazuh-dashboard/plugins/observabilityDashboards/
# chmod -R 750 /usr/share/wazuh-dashboard/plugins/mlCommonsDashboards/
# chmod -R 750 /usr/share/wazuh-dashboard/plugins/assistantDashboards/
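The copy and permission steps for the three plugins can also be scripted. The following Python sketch is illustrative only; the install_plugin helper is hypothetical, and changing ownership requires root, so it is optional here:

```python
import os
import shutil

def install_plugin(name, src_root, dest_root, owner=None):
    """Copy one extracted OpenSearch Dashboards plugin into the Wazuh
    dashboard plugins folder and apply 750 permissions recursively."""
    src = os.path.join(src_root, "plugins", name)
    dest = os.path.join(dest_root, name)
    shutil.copytree(src, dest)
    # Walk the copied tree, applying mode 750 to every directory and file.
    for root, dirs, files in os.walk(dest):
        for path in [root] + [os.path.join(root, f) for f in files]:
            os.chmod(path, 0o750)
            if owner is not None:  # chown requires root privileges
                shutil.chown(path, user=owner, group=owner)
    return dest

# Example (run as root on the dashboard host):
# for name in ("observabilityDashboards", "mlCommonsDashboards",
#              "assistantDashboards"):
#     install_plugin(name, "opensearch-dashboards-2.13.0",
#                    "/usr/share/wazuh-dashboard/plugins", owner="wazuh-dashboard")
```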
5. Append the following settings to the /etc/wazuh-dashboard/opensearch_dashboards.yml file. This adds and enables the OpenSearch Assistant, an AI-powered user interface, on the Wazuh dashboard.
assistant.chat.enabled: true
observability.query_assist.enabled: true
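If you provision the dashboard host with scripts, these settings can be appended idempotently. The sketch below uses a hypothetical ensure_settings helper and assumes simple top-level key: value lines; it is not part of the official setup:

```python
def ensure_settings(path, settings):
    """Append 'key: value' lines to a YAML-style config file,
    skipping keys that are already present (idempotent)."""
    try:
        with open(path) as f:
            existing = f.read()
    except FileNotFoundError:
        existing = ""
    # Collect keys already defined at the top level of the file.
    present = {line.split(":", 1)[0].strip()
               for line in existing.splitlines() if ":" in line}
    with open(path, "a") as f:
        if existing and not existing.endswith("\n"):
            f.write("\n")
        for key, value in settings.items():
            if key not in present:
                f.write(f"{key}: {value}\n")

# Example (run as root):
# ensure_settings("/etc/wazuh-dashboard/opensearch_dashboards.yml", {
#     "assistant.chat.enabled": "true",
#     "observability.query_assist.enabled": "true",
# })
```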
6. Switch to the /usr/share/wazuh-indexer/ directory:
# cd /usr/share/wazuh-indexer/
7. Install the opensearch-flow-framework and opensearch-skills plugins. The plugins allow the usage of AI applications and machine learning features on the Wazuh dashboard.
# ./bin/opensearch-plugin install org.opensearch.plugin:opensearch-flow-framework:2.13.0.0
# ./bin/opensearch-plugin install org.opensearch.plugin:opensearch-skills:2.13.0.0
8. Restart the Wazuh dashboard service:
# systemctl restart wazuh-dashboard
The OpenSearch Assistant appears on the Wazuh dashboard after you restart the service, as shown in the image below.
Perform the following steps on the Wazuh dashboard. Navigate to the Indexer management > DevTools tab. Click the play button after inputting each query to send the API requests.
1. Set the machine learning jobs to run on any node in the cluster:
PUT /_cluster/settings
{
  "persistent": {
    "plugins.ml_commons.only_run_on_ml_node": "false"
  }
}
{
  "acknowledged": true,
  "persistent": {
    "plugins": {
      "ml_commons": {
        "only_run_on_ml_node": "false"
      }
    }
  },
  "transient": {}
}
2. Create an API connector to allow access to the Claude 3.5 Haiku model hosted on AWS:
POST /_plugins/_ml/connectors/_create
{
  "name": "Amazon Bedrock Claude Haiku",
  "description": "Connector for Amazon Bedrock Claude Haiku",
  "version": 1,
  "protocol": "aws_sigv4",
  "credential": {
    "access_key": "<ACCESS_KEY>",
    "secret_key": "<SECRET_KEY>"
  },
  "parameters": {
    "region": "<REGION>",
    "service_name": "bedrock",
    "auth": "Sig_V4",
    "response_filter": "$.content[0].text",
    "max_tokens_to_sample": "8000",
    "anthropic_version": "<ANTHROPIC_VERSION>",
    "model": "<MODEL>"
  },
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "headers": {
        "content-type": "application/json"
      },
      "url": "https://bedrock-runtime.<REGION>.amazonaws.com/model/${parameters.model}/invoke",
      "request_body": "{\"messages\":[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"${parameters.prompt}\"}]}],\"anthropic_version\":\"${parameters.anthropic_version}\",\"max_tokens\":${parameters.max_tokens_to_sample}}"
    }
  ]
}
The necessary parameters are described below.
- access_key and secret_key: the access key pair used to authenticate as the IAM user created earlier. Replace <ACCESS_KEY> and <SECRET_KEY> with their values.
- region: the AWS region that hosts the model. The Claude Haiku model is available in the us-west-2 region. Replace <REGION> with your region.
- service_name: the name of the AWS service.
- auth: the AWS signing protocol for adding authentication information to AWS API requests.
- anthropic_version and model: replace <ANTHROPIC_VERSION> and <MODEL> with their values. In our case, they are bedrock-2023-05-31 and anthropic.claude-3-5-haiku-20241022-v1:0 respectively.

{
  "connector_id": "5KWlh5MB9UAfvOK_Onx6"
}

Save the connector_id because you need it in another API request.
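At invocation time, ML Commons fills the ${parameters.*} placeholders in the connector's url and request_body with values from the parameters map. The sketch below re-implements that substitution in Python purely to illustrate the mechanism and to confirm the escaped request_body template yields a valid Bedrock payload; it is not the plugin's actual code:

```python
import json
import re

# The connector's request_body template, unescaped into a Python string.
template = ('{"messages":[{"role":"user","content":[{"type":"text",'
            '"text":"${parameters.prompt}"}]}],'
            '"anthropic_version":"${parameters.anthropic_version}",'
            '"max_tokens":${parameters.max_tokens_to_sample}}')

params = {
    "prompt": "hello",
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens_to_sample": "8000",
}

# Replace each ${parameters.name} placeholder with its value.
body = re.sub(r"\$\{parameters\.([A-Za-z0-9_]+)\}",
              lambda m: params[m.group(1)], template)
payload = json.loads(body)  # fails if the template is malformed

assert payload["messages"][0]["content"][0]["text"] == "hello"
```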
3. Register a model group. A model group is needed to create a new model.
POST /_plugins/_ml/model_groups/_register
{
  "name": "AWS Bedrock",
  "description": "This is a public model group"
}
{
  "model_group_id": "3qWeh5MB9UAfvOK_xnyI",
  "status": "CREATED"
}
Save the model_group_id.
4. Register and deploy a model to the model group previously created:
POST /_plugins/_ml/models/_register?deploy=true
{
  "name": "Bedrock Claude model",
  "function_name": "remote",
  "model_group_id": "<MODEL_GROUP_ID>",
  "description": "Test Model",
  "connector_id": "<CONNECTOR_ID>"
}
Replace <MODEL_GROUP_ID> and <CONNECTOR_ID> with the values of model_group_id and connector_id.
{
  "task_id": "5aWmh5MB9UAfvOK_oXxk",
  "status": "CREATED",
  "model_id": "5qWmh5MB9UAfvOK_oXyQ"
}
Save the model_id.
5. Use the following request to test the model:
POST /_plugins/_ml/models/<MODEL_ID>/_predict
{
  "parameters": {
    "prompt": "\n\nHuman:hello\n\nAssistant:"
  }
}
Replace <MODEL_ID> with the value of model_id.
{
  "inference_results": [
    {
      "output": [
        {
          "name": "response",
          "dataAsMap": {
            "response": "Hi there! How are you doing today? I'm happy to help you with any questions or tasks you may have. What would you like to chat about?"
          }
        }
      ],
      "status_code": 200
    }
  ]
}
Note: If you get the response below, your current region does not have access to the Claude 3.5 Haiku model. The model is available in the us-west-2 region; change your region and retry.
{
  "error": {
    "root_cause": [
      {
        "type": "status_exception",
        "reason": "Error from remote service: {\"message\":\"Invocation of model ID anthropic.claude-3-5-haiku-20241022-v1:0 with on-demand throughput isn't supported. Retry your request with the ID or ARN of an inference profile that contains this model.\"}"
      }
    ],
    "type": "status_exception",
    "reason": "Error from remote service: {\"message\":\"Invocation of model ID anthropic.claude-3-5-haiku-20241022-v1:0 with on-demand throughput isn't supported. Retry your request with the ID or ARN of an inference profile that contains this model.\"}"
  },
  "status": 400
}
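The \n\nHuman: ... \n\nAssistant: prompt format used in the test request can also be generated programmatically when scripting model tests. A minimal sketch, where the build_prompt helper is hypothetical:

```python
import json

def build_prompt(question):
    """Wrap a question in the Human/Assistant turn format used by the
    completion-style test prompt in this step."""
    return f"\n\nHuman:{question}\n\nAssistant:"

# Body for the _predict API request.
request_body = json.dumps({"parameters": {"prompt": build_prompt("hello")}})
```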
6. Register an agent to the Claude 3.5 Haiku model:
POST /_plugins/_ml/agents/_register
{
  "name": "Haiku_3.5_Claude_agent",
  "type": "conversational",
  "description": "This is a Haiku 3.5 Claude agent",
  "llm": {
    "model_id": "<MODEL_ID>",
    "parameters": {
      "max_iteration": 5,
      "stop_when_no_tool_found": true
    }
  },
  "memory": {
    "type": "conversation_index"
  },
  "tools": [
    {
      "type": "MLModelTool",
      "name": "bedrock_claude_model",
      "description": "A general tool to answer any question",
      "parameters": {
        "model_id": "<MODEL_ID>",
        "prompt": "Human: You're an Artificial intelligence analyst and you're going to help me with cybersecurity related tasks.\n\n${parameters.chat_history:-}\n\nHuman: ${parameters.question}\n\nAssistant:"
      }
    }
  ]
}
{
  "agent_id": "UZjEh5MBBzF0ul56sFFZ"
}
Save the agent_id.
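The tool's prompt template uses ${parameters.chat_history:-}, a shell-style placeholder that falls back to an empty string on the first turn, before any conversation history exists. The following is an illustrative re-implementation of that default behavior, not the plugin's actual code:

```python
import re

def expand_with_defaults(template, parameters):
    """Expand ${parameters.name:-default} placeholders, using the default
    when the parameter is missing (mirroring shell-style substitution)."""
    def repl(m):
        name, default = m.group(1), m.group(2) or ""
        return str(parameters.get(name, default))
    return re.sub(r"\$\{parameters\.([A-Za-z0-9_]+)(?::-([^}]*))?\}",
                  repl, template)

# First turn: no chat_history parameter yet, so the placeholder vanishes.
template = ("Human: ${parameters.question}\n\n"
            "${parameters.chat_history:-}\n\nAssistant:")
first_turn = expand_with_defaults(template, {"question": "Who are you?"})
```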
7. Test the agent:
POST _plugins/_ml/agents/<AGENT_ID>/_execute
{
  "parameters": {
    "question": "Who are you?",
    "verbose": false
  }
}
Replace <AGENT_ID> with the value of agent_id.
{
  "inference_results": [
    {
      "output": [
        {
          "name": "memory_id",
          "result": "V5jIh5MBBzF0ul56RFEB"
        },
        {
          "name": "parent_interaction_id",
          "result": "WJjIh5MBBzF0ul56RFEQ"
        },
        {
          "name": "response",
          "dataAsMap": {
            "response": "I am an AI assistant designed to help with a wide range of tasks, with deep expertise in areas like OpenSearch, logs, traces, and metrics. I can understand and process large amounts of text, engage in natural conversations, and provide informative and accurate responses to various questions while always prioritizing helpful and ethical interactions.",
            "additional_info": {}
          }
        }
      ]
    }
  ]
}
8. Connect the agent to OpenSearch Assistant:
PUT .plugins-ml-config/_doc/os_chat
{
  "type": "os_chat_root_agent",
  "configuration": {
    "agent_id": "<AGENT_ID>"
  }
}
Replace <AGENT_ID> with the value of agent_id.
{
  "_index": ".plugins-ml-config",
  "_id": "os_chat",
  "_version": 2,
  "result": "updated",
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 2,
  "_primary_term": 2
}
9. Refresh the Wazuh dashboard.
The section below shows how we test the model’s knowledge of Wazuh and the respective responses.
1. How do I install a Wazuh agent on a Windows endpoint? A good response is generated by the LLM in the image below.
2. What do I do when I see a Wazuh vulnerability alert? The LLM generates a good response in the image below.
3. What is the MITRE ID for obfuscation? The LLM generates a good response, as shown in the image below.
4. How can I configure the Wazuh active response module to block an IP address after multiple failed SSH authentication attempts? The LLM generates a partially correct response in the image below.
5. Write a Wazuh decoder for the log sample “Nov 11 15:19:38 TrendDeepSecurity CEF:0|Trend Micro|Deep Security Manager|20.0.605|1556|Anti-Malware scan exclusion setting update|3|src=10.70.20.236 suser=System target=V-PRD-0014.example.com (V-PRD-0014) msg=The Anti-Malware scan exclusion setting contains the following errors:\n\nAn item specified in an Anti-Malware exclusion list cannot be modified by users.\nC:\\windows\\system32\\spoolsv.exe\n\n TrendMicroDsTenant=Primary TrendMicroDsTenantId=0”, extracting the fields product, version, and severity. The LLM generates an incorrect response in the images below.
Conclusion
Wazuh is an open source security monitoring platform that unifies SIEM and XDR capabilities. Integrating LLMs like Claude Haiku introduces a chat-like interface within the Wazuh dashboard where users can type security-related questions and query the model.
Wazuh can be deployed and managed on-premises or on Wazuh cloud. Check out our community for support and updates.