58 Downloads · Updated 1 year ago
ollama run harishkumar56278/VisionContentModerator:12b
b9bb141674ac · 8.1GB
This repository contains a custom Cybersecurity Content Moderation AI model built using Ollama. The model is designed to analyze and classify images for harmful or inappropriate content.
The model is built on gemma3:12b and classifies images into a set of harmful-content categories (such as violence_gore in the example response below). If no harmful content is detected, the image is classified as Safe.
FROM gemma3:12b
# Set model parameters
PARAMETER temperature 0.2
PARAMETER num_ctx 4096
PARAMETER top_p 0.8
PARAMETER repeat_penalty 1.2
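If you query the model through Ollama's REST API rather than the CLI, the same parameters can be overridden per request via the `options` field. A minimal sketch (the payload follows Ollama's `/api/generate` schema; it is constructed here but not sent):

```python
import json

# Decoding options mirroring the PARAMETER lines in the Modelfile above.
options = {
    "temperature": 0.2,      # low temperature for deterministic classification
    "num_ctx": 4096,         # context window size
    "top_p": 0.8,            # nucleus sampling cutoff
    "repeat_penalty": 1.2,   # discourage repetitive output
}

# Request body for Ollama's /api/generate endpoint (not sent in this sketch).
payload = {
    "model": "harishkumar56278/VisionContentModerator:12b",
    "prompt": "Classify this image.",
    "options": options,
    "stream": False,
}

print(json.dumps(payload, indent=2))
```

Options passed this way apply only to that request; the Modelfile values remain the defaults.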
The model is configured for strict image analysis and classification only: it does not engage in discussions, explanations, or opinions.
If the input is not a valid image, the model returns:
{
"error": "Invalid input. Provide a valid image file."
}
Example response for harmful content:
{
"classification": {
"violence_gore": {
"confidence_score": 0.92,
"justification": "Detected bloodstains and visible injuries."
}
},
"max_confidence_category": "violence_gore",
"final_verdict": "Harmful Content",
"safe_content": false
}
Example response for safe content:
{
"classification": {},
"max_confidence_category": null,
"final_verdict": "Not Harmful Content",
"safe_content": true
}
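The JSON responses above can be consumed programmatically. A minimal parsing sketch (the `summarize` helper is hypothetical, not part of the model):

```python
import json

def summarize(response_text):
    """Reduce a moderation response to (safe, top category, confidence)."""
    data = json.loads(response_text)
    category = data.get("max_confidence_category")
    score = None
    if category:
        score = data["classification"][category]["confidence_score"]
    return data["safe_content"], category, score

harmful = '''{
  "classification": {
    "violence_gore": {"confidence_score": 0.92,
                      "justification": "Detected bloodstains and visible injuries."}
  },
  "max_confidence_category": "violence_gore",
  "final_verdict": "Harmful Content",
  "safe_content": false
}'''

safe = '''{"classification": {}, "max_confidence_category": null,
           "final_verdict": "Not Harmful Content", "safe_content": true}'''

print(summarize(harmful))  # → (False, 'violence_gore', 0.92)
print(summarize(safe))     # → (True, None, None)
```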
To load the model into Ollama, ensure Ollama is installed and run:
ollama create moderation_model -f <path_to_model_file>
To use the model for moderation:
ollama run moderation_model "<image_path>"
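Because gemma3 is a multimodal model, the image is sent as base64 data when calling the REST API directly. A sketch of building such a request (the body is constructed but not sent; `build_moderation_request` is a hypothetical helper):

```python
import base64
import json

def build_moderation_request(image_bytes,
                             model="harishkumar56278/VisionContentModerator:12b"):
    """Build the JSON body for Ollama's /api/generate endpoint with an image."""
    return {
        "model": model,
        "prompt": "Classify this image.",
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

# Dummy bytes stand in for a real file; in practice:
# with open("photo.jpg", "rb") as f: image_bytes = f.read()
body = build_moderation_request(b"\x89PNG")
print(json.dumps(body)[:80])
# To send: POST the body to http://localhost:11434/api/generate
```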
Author: Harish Kumar S · Email: harishkumar56278@gmail.com · Site: harish-nika.github.io