This model is a LoRA fine-tune of CodeLlama-7B (loaded in 4-bit during training) on the CyberNative dataset, which pairs secure and insecure code samples with detailed vulnerability annotations. The LoRA adapters were trained with Unsloth, merged back into the base model, and exported as a single .gguf file (Q8_0) ready for local inference via Ollama; a minimal sketch of this pipeline follows the list below.
- Base Model: CodeLlama-7B
- Dataset: CyberNative Dataset
- Fine-tuning Type: Supervised fine-tuning (SFT)
- Task: Code review with a focus on security vulnerabilities and suggestions for secure alternatives
- Training Framework: Unsloth (LoRA fine-tuning)
- Quantization: Q8_0
- Architecture: LLaMA 2 family
- Format: GGUF (compatible with Ollama, llama.cpp)
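
The sketch below illustrates the training/export pipeline described above: load the base model in 4-bit, attach LoRA adapters with Unsloth, run supervised fine-tuning, then merge and export a Q8_0 GGUF. The checkpoint name, dataset ID, field names, and hyperparameters are illustrative placeholders rather than the exact training configuration, and some trainer arguments may need to move into `SFTConfig` depending on your TRL version.

```python
# Sketch only: names marked "placeholder" are assumptions, not the real config.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# 1. Load CodeLlama-7B in 4-bit for memory-efficient LoRA training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/codellama-7b-bnb-4bit",  # placeholder base checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# 2. Attach LoRA adapters to the attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0.0,
)

# 3. Load the CyberNative data (ID and field names are placeholders) and
#    format each row into instruction/response text for SFT.
dataset = load_dataset("CyberNative/code_vulnerability_dataset", split="train")

def to_text(example):
    return {"text": f"### Code:\n{example['question']}\n\n### Review:\n{example['answer']}"}

dataset = dataset.map(to_text)

# 4. Supervised fine-tuning.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
    ),
)
trainer.train()

# 5. Merge the LoRA weights into the base model and export a Q8_0 GGUF
#    file that Ollama / llama.cpp can load directly.
model.save_pretrained_gguf("codellama-7b-cybernative", tokenizer,
                           quantization_method="q8_0")
```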
🧑‍💻 Intended Use
- Detect security flaws in code snippets (see the usage sketch after this list)
- Suggest secure refactored code alternatives
- Educational tool for secure coding practices
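
Once the GGUF file has been imported into Ollama, the review workflow can be exercised locally. Below is a minimal sketch using the `ollama` Python client; the model name, prompt wording, and example snippet are placeholders, not part of the released artifact.

```python
# Minimal usage sketch, assuming the merged GGUF was registered in Ollama
# under the (placeholder) name "codellama-security".
import ollama

snippet = '''
import sqlite3

def get_user(conn, username):
    cursor = conn.cursor()
    # String formatting inside the query is a classic SQL injection risk
    cursor.execute(f"SELECT * FROM users WHERE name = '{username}'")
    return cursor.fetchone()
'''

response = ollama.chat(
    model="codellama-security",  # placeholder: whatever name you gave the model in Ollama
    messages=[
        {"role": "system",
         "content": "You are a secure-code reviewer. Point out vulnerabilities "
                    "and suggest a safer alternative."},
        {"role": "user",
         "content": f"Review this code for security issues:\n{snippet}"},
    ],
)
print(response["message"]["content"])
```

For the snippet above, the expected kind of output is a note flagging the SQL injection risk and a refactor using parameterized queries (e.g. `cursor.execute("SELECT * FROM users WHERE name = ?", (username,))`).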
🚫 Limitations
- May not detect complex zero-day vulnerabilities
- Might hallucinate when provided with ambiguous inputs
- Coverage is limited to the languages represented in the training dataset (mostly Python, C, and C++)