SIEM LLaMA 3.1 - Wazuh Security Log Analysis Models

Advanced LLaMA 3.1 8B Instruct models fine-tuned for Wazuh security log analysis with instruction-following capabilities.

Models

Model                      Size   Context  Input  Description
mranv/siem-llama-3.1:base  4.7GB  128K     Text   Base model for Wazuh security log analysis
mranv/siem-llama-3.1:v1    4.7GB  128K     Text   Enhanced version with improved analysis capabilities

Installation

Base Model

ollama pull mranv/siem-llama-3.1:base

V1 Model (Recommended)

ollama pull mranv/siem-llama-3.1:v1

Usage

Interactive Session

# Base model
ollama run mranv/siem-llama-3.1:base

# V1 model
ollama run mranv/siem-llama-3.1:v1

Security Log Analysis

# Analyze SSH login attempt
ollama run mranv/siem-llama-3.1:v1 "Analyze this Wazuh alert: SSH login from 192.168.1.100 to root account"

# Analyze failed login attempts
ollama run mranv/siem-llama-3.1:v1 "Analyze this security event: Multiple failed login attempts detected"

Wazuh JSON Log Analysis

ollama run mranv/siem-llama-3.1:v1 '{
  "timestamp": "2025-01-15T14:00:00Z",
  "agent": {
    "id": "000",
    "name": "malware_detection-00",
    "ip": "10.0.0.100"
  },
  "rule": {
    "id": "600",
    "level": 13,
    "description": "Rootkit detected in system",
    "groups": ["rootkit", "malware", "attack"],
    "category": "malware_detection",
    "mitre": {
      "id": ["T1014"],
      "tactic": ["Defense Evasion"]
    }
  },
  "data": {
    "severity": "high",
    "file": "/tmp/malware_0.exe",
    "malware_name": "Trojan.Generic",
    "action": "quarantined",
    "scanner": "ClamAV"
  },
  "location": "/var/log/malware_detection/security.log"
}'

API Usage

curl http://localhost:11434/api/generate -d '{
  "model": "mranv/siem-llama-3.1:v1",
  "prompt": "Analyze this security event: Multiple failed login attempts detected",
  "stream": false
}'

Python Integration

import requests
import json

def analyze_security_log(log_data):
    response = requests.post(
        'http://localhost:11434/api/generate',
        json={
            'model': 'mranv/siem-llama-3.1:v1',
            'prompt': f"Analyze this Wazuh log: {log_data}",
            'stream': False
        }
    )
    return response.json()

# Example usage
log = {
    "timestamp": "2025-01-15T14:00:00Z",
    "rule": {
        "id": "5710",
        "level": 5,
        "description": "sshd: Attempt to login using a non-existent user"
    },
    "data": {
        "srcip": "192.168.1.100",
        "dstuser": "admin"
    }
}

result = analyze_security_log(json.dumps(log))
print(result['response'])

Model Details

Base Model (mranv/siem-llama-3.1:base)

  • Base: LLaMA 3.1 8B Instruct
  • GGUF: wazuh-llama-3.1-8B-base-standalone-q4_0.gguf
  • Quantization: Q4_0
  • Size: 4.7 GB
  • Context Window: 128K tokens

V1 Model (mranv/siem-llama-3.1:v1)

  • Base: LLaMA 3.1 8B Instruct
  • GGUF: wazuh-llama-3.1-8B-v1-standalone-q4_0.gguf
  • Quantization: Q4_0
  • Size: 4.7 GB
  • Context Window: 128K tokens
  • Improvements: Enhanced security analysis and threat detection
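
These details can be checked against your local copy through Ollama's /api/show endpoint. A minimal sketch, assuming a default Ollama install on localhost (response field names such as details and quantization_level may vary slightly between Ollama versions):

import requests

# Ask the local Ollama server for metadata about the pulled model.
resp = requests.post(
    'http://localhost:11434/api/show',
    json={'name': 'mranv/siem-llama-3.1:v1'}
)
info = resp.json()
print(info.get('details', {}))     # family, parameter_size, quantization_level, ...
print(info.get('parameters', ''))  # default parameters baked into the Modelfile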

Default Parameters

Temperature: 0.7
Top-p: 0.9
Top-k: 40
Context Window: 128K tokens

Custom Parameters

You can adjust sampling parameters per session or per request. In an interactive session, use /set:

ollama run mranv/siem-llama-3.1:v1
>>> /set parameter temperature 0.3   # more deterministic output
>>> /set parameter temperature 0.9   # broader, more exploratory analysis

For API calls, pass the same parameters in the request's "options" field, as shown in the sketch below.
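
A minimal Python sketch of per-request overrides through the API (the values shown are illustrative, not tuned recommendations):

import requests

# Per-request parameter overrides go in the "options" field of /api/generate.
response = requests.post(
    'http://localhost:11434/api/generate',
    json={
        'model': 'mranv/siem-llama-3.1:v1',
        'prompt': 'Analyze this Wazuh alert: SSH login from 192.168.1.100 to root account',
        'stream': False,
        'options': {
            'temperature': 0.3,  # lower = more deterministic triage output
            'top_p': 0.9,
            'top_k': 40,
            'num_ctx': 8192      # a smaller context window also reduces memory use
        }
    }
)
print(response.json()['response'])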

Use Cases

1. Real-time Log Analysis

  • SSH authentication attempts
  • Failed login monitoring
  • Brute force detection
  • Privilege escalation attempts

2. Malware Detection

  • Rootkit detection
  • Trojan analysis
  • Ransomware indicators
  • Suspicious file execution

3. MITRE ATT&CK Mapping

  • Automatic tactic identification
  • Technique classification
  • Attack chain analysis

4. Incident Response

  • Alert prioritization
  • Threat severity assessment
  • Remediation recommendations
  • IOC extraction
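
As a sketch of the MITRE ATT&CK mapping and IOC extraction use cases above, you can prompt the model for structured output and parse it downstream. The sample alert, prompt wording, and requested JSON keys below are illustrative assumptions, and the model's answer should still be validated before it drives any response action:

import json
import requests

# Hypothetical alert used only to illustrate the prompt shape.
alert = {
    "rule": {"level": 12, "description": "Possible reverse shell detected"},
    "data": {"srcip": "198.51.100.7", "process": "/usr/bin/nc"}
}

# Request a structured answer; the keys listed here are an illustrative
# schema, not a format the model is guaranteed to follow.
prompt = (
    "Analyze this Wazuh alert and respond with JSON containing the keys "
    "tactic, technique_id, iocs, severity and recommendation: "
    + json.dumps(alert)
)

response = requests.post(
    'http://localhost:11434/api/generate',
    json={'model': 'mranv/siem-llama-3.1:v1', 'prompt': prompt, 'stream': False}
)
print(response.json()['response'])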

Example Queries

Basic Alert Analysis

ollama run mranv/siem-llama-3.1:v1 "Analyze: Failed password for invalid user admin from 10.0.0.50"

Complex JSON Analysis

ollama run mranv/siem-llama-3.1:v1 '{
  "rule": {"level": 10, "description": "Brute force attack detected"},
  "data": {"srcip": "203.0.113.5", "attempts": 50}
}'

Batch Processing

# Process a file of alerts, one JSON object per line (JSON Lines)
while IFS= read -r log; do
  ollama run mranv/siem-llama-3.1:v1 "Analyze this Wazuh log: $log"
done < security_logs.json
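
If the shell loop becomes awkward (multi-line alerts, quoting), a small Python loop over the API does the same job; a sketch assuming security_logs.json holds one JSON alert per line:

import requests

# Batch-analyze a JSON Lines file of Wazuh alerts and collect the responses.
results = []
with open('security_logs.json') as f, requests.Session() as session:
    for line in f:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        resp = session.post(
            'http://localhost:11434/api/generate',
            json={
                'model': 'mranv/siem-llama-3.1:v1',
                'prompt': f"Analyze this Wazuh log: {line}",
                'stream': False
            }
        )
        results.append(resp.json()['response'])

print(f"Analyzed {len(results)} alerts")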

Performance Optimization

GPU Acceleration

# Ollama offloads to a detected GPU automatically; no flag is required.
# To pin the server to a specific NVIDIA GPU, set CUDA_VISIBLE_DEVICES
# before starting it:
CUDA_VISIBLE_DEVICES=0 ollama serve

Memory Management

# Reduce the context window on memory-constrained hosts
# (interactive session; the API equivalent is "options": {"num_ctx": 4096})
ollama run mranv/siem-llama-3.1:v1
>>> /set parameter num_ctx 4096

Integration with Wazuh

Direct Integration

  1. Configure Wazuh to output JSON logs
  2. Use filebeat or custom script to send logs to Ollama
  3. Process responses for alerting/dashboards

Example Integration Script

#!/usr/bin/env python3
import json
import requests
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

ALERTS_DIR = '/var/ossec/logs/alerts/'

class WazuhLogHandler(FileSystemEventHandler):
    """Tail alerts.json and send each newly appended alert to the LLM."""

    def __init__(self):
        self._offset = 0  # read position, so old alerts are not re-analyzed

    def on_modified(self, event):
        if not event.src_path.endswith('alerts.json'):
            return
        with open(event.src_path) as f:
            f.seek(self._offset)
            for line in f:
                line = line.strip()
                if not line:
                    continue
                try:
                    log = json.loads(line)
                except json.JSONDecodeError:
                    continue  # skip partially written lines
                analyze_with_llm(log)
            self._offset = f.tell()

def analyze_with_llm(log):
    response = requests.post(
        'http://localhost:11434/api/generate',
        json={
            'model': 'mranv/siem-llama-3.1:v1',
            'prompt': f"Analyze: {json.dumps(log)}",
            'stream': False
        }
    )
    print(response.json()['response'])

if __name__ == '__main__':
    observer = Observer()
    observer.schedule(WazuhLogHandler(), ALERTS_DIR, recursive=False)
    observer.start()
    try:
        observer.join()
    except KeyboardInterrupt:
        observer.stop()
        observer.join()

Credits

Based on the OpenNix/wazuh-llama-3.1-8B models:

  • Original Base: OpenNix/wazuh-llama-3.1-8B-base
  • Original V1: OpenNix/wazuh-llama-3.1-8B-v1

License

Please refer to the original model licenses and LLaMA 3.1 licensing terms.

Support

For issues and questions:

  • Open an issue on the model page
  • Refer to the Ollama documentation
  • Check the Wazuh community forums


Note: These models are designed for security analysis purposes. Always validate AI-generated analysis with security best practices and expert review.