Advanced LLaMA 3.1 8B Instruct models fine-tuned for Wazuh security log analysis with instruction-following capabilities.
| Model | Size | Context | Input | Description |
|---|---|---|---|---|
| mranv/siem-llama-3.1:base | 4.7GB | 128K | Text | Base model for Wazuh security log analysis |
| mranv/siem-llama-3.1:v1 | 4.7GB | 128K | Text | Enhanced version with improved analysis capabilities |
ollama pull mranv/siem-llama-3.1:base
ollama pull mranv/siem-llama-3.1:v1
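After pulling, the standard Ollama CLI can inspect a tag's default parameters and prompt template locally (a generic command, not specific to these models):

# Show model details, default parameters, and template
ollama show mranv/siem-llama-3.1:v1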
# Base model
ollama run mranv/siem-llama-3.1:base
# V1 model
ollama run mranv/siem-llama-3.1:v1
# Analyze SSH login attempt
ollama run mranv/siem-llama-3.1:v1 "Analyze this Wazuh alert: SSH login from 192.168.1.100 to root account"
# Analyze failed login attempts
ollama run mranv/siem-llama-3.1:v1 "Analyze this security event: Multiple failed login attempts detected"
# Analyze a full Wazuh alert in JSON format
ollama run mranv/siem-llama-3.1:v1 '{
"timestamp": "2025-01-15T14:00:00Z",
"agent": {
"id": "000",
"name": "malware_detection-00",
"ip": "10.0.0.100"
},
"rule": {
"id": "600",
"level": 13,
"description": "Rootkit detected in system",
"groups": ["rootkit", "malware", "attack"],
"category": "malware_detection",
"mitre": {
"id": ["T1014"],
"tactic": ["Defense Evasion"]
}
},
"data": {
"severity": "high",
"file": "/tmp/malware_0.exe",
"malware_name": "Trojan.Generic",
"action": "quarantined",
"scanner": "ClamAV"
},
"location": "/var/log/malware_detection/security.log"
}'
# Query the model through the Ollama REST API
curl http://localhost:11434/api/generate -d '{
"model": "mranv/siem-llama-3.1:v1",
"prompt": "Analyze this security event: Multiple failed login attempts detected",
"stream": false
}'
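The generic fields of the Ollama /api/generate endpoint apply here as well. As a sketch (the "format" and "options" fields and the prompt wording below are standard Ollama API usage, not something specific to these models), you can request JSON-only output with a lower temperature:

# Request JSON-formatted output with a lower temperature
curl http://localhost:11434/api/generate -d '{
  "model": "mranv/siem-llama-3.1:v1",
  "prompt": "Analyze this security event and reply as JSON with severity, summary, and recommended_actions: Multiple failed login attempts detected",
  "format": "json",
  "options": {"temperature": 0.3},
  "stream": false
}'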
import requests
import json
def analyze_security_log(log_data):
    """Send a Wazuh log (as a JSON string) to the local Ollama API and return the parsed response."""
    response = requests.post(
        'http://localhost:11434/api/generate',
        json={
            'model': 'mranv/siem-llama-3.1:v1',
            'prompt': f"Analyze this Wazuh log: {log_data}",
            'stream': False
        }
    )
    return response.json()
# Example usage
log = {
"timestamp": "2025-01-15T14:00:00Z",
"rule": {
"id": "5710",
"level": 5,
"description": "sshd: Attempt to login using a non-existent user"
},
"data": {
"srcip": "192.168.1.100",
"dstuser": "admin"
}
}
result = analyze_security_log(json.dumps(log))
print(result['response'])
Both tags (mranv/siem-llama-3.1:base and mranv/siem-llama-3.1:v1) ship with the same default parameters:
- Temperature: 0.7
- Top-p: 0.9
- Top-k: 40
- Context window: 128K tokens
You can adjust parameters for different use cases:
# More deterministic output: lower the temperature inside an interactive
# session (the ollama CLI has no --temperature flag)
ollama run mranv/siem-llama-3.1:v1
>>> /set parameter temperature 0.3
# For more exploratory analysis, raise it (e.g. /set parameter temperature 0.9),
# or pass per-request values through the API "options" field.
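To make such overrides persistent, you can bake them into a derived model with a Modelfile. A minimal sketch (the tag name siem-deterministic and the parameter values are illustrative choices, not part of the published models):

# Modelfile that overrides the defaults
cat > Modelfile <<'EOF'
FROM mranv/siem-llama-3.1:v1
PARAMETER temperature 0.3
PARAMETER top_p 0.9
PARAMETER num_ctx 8192
EOF

# Build and use the customized tag
ollama create siem-deterministic -f Modelfile
ollama run siem-deterministic "Analyze this alert..."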
ollama run mranv/siem-llama-3.1:v1 "Analyze: Failed password for invalid user admin from 10.0.0.50"
ollama run mranv/siem-llama-3.1:v1 '{
"rule": {"level": 10, "description": "Brute force attack detected"},
"data": {"srcip": "203.0.113.5", "attempts": 50}
}'
# Batch analysis: one JSON alert per line (JSON Lines file)
while IFS= read -r log; do
  ollama run mranv/siem-llama-3.1:v1 "$log"
done < security_logs.json
# Ollama schedules inference on an available GPU automatically; check how a
# loaded model is placed (CPU/GPU) with:
ollama ps

# Adjust the context size for memory constraints inside an interactive session
ollama run mranv/siem-llama-3.1:v1
>>> /set parameter num_ctx 4096
# (or pass "options": {"num_ctx": 4096} in an /api/generate request)
#!/usr/bin/env python3
import json
import requests
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
class WazuhLogHandler(FileSystemEventHandler):
    """Re-analyze Wazuh alerts whenever alerts.json changes."""

    def on_modified(self, event):
        if event.src_path.endswith('alerts.json'):
            # Note: this re-reads the whole file on every change; for production
            # use, track the file offset and only process newly appended lines.
            with open(event.src_path) as f:
                for line in f:
                    line = line.strip()
                    if line:
                        analyze_with_llm(json.loads(line))

def analyze_with_llm(log):
    """Send a single Wazuh alert to the local Ollama API and print the analysis."""
    response = requests.post(
        'http://localhost:11434/api/generate',
        json={
            'model': 'mranv/siem-llama-3.1:v1',
            'prompt': f"Analyze: {json.dumps(log)}",
            'stream': False
        }
    )
    print(response.json()['response'])

if __name__ == '__main__':
    observer = Observer()
    observer.schedule(WazuhLogHandler(), '/var/ossec/logs/alerts/', recursive=False)
    observer.start()
    try:
        observer.join()
    except KeyboardInterrupt:
        observer.stop()
        observer.join()
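To run the watcher, install its dependencies first. The filename wazuh_llm_monitor.py is just an example name for the script above, and reading /var/ossec/logs/alerts/ usually requires elevated privileges:

pip install watchdog requests
sudo python3 wazuh_llm_monitor.py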
Based on the OpenNix/wazuh-llama-3.1-8B models:
- Original Base: OpenNix/wazuh-llama-3.1-8B-base
- Original V1: OpenNix/wazuh-llama-3.1-8B-v1
Please refer to the original model licenses and LLaMA 3.1 licensing terms.
For issues and questions:
- Open an issue on the model page
- Refer to the Ollama documentation
- Check the Wazuh community forums
Note: These models are designed for security analysis purposes. Always validate AI-generated analysis with security best practices and expert review.