A CodeLlama-based instruction-following model fine-tuned for generating and completing code from natural language prompts

da0438d3448f · 8.5GB · llama · 8.03B · Q8_0

Readme

This model is a fine-tuned version of CodeLlama-7B trained on the CodeAlpaca dataset, which pairs natural-language instructions with code solutions for instruction-following coding tasks. Fine-tuning used Unsloth's LoRA training pipeline, and the resulting weights were converted to GGUF for deployment with Ollama; a minimal sketch of the training and export steps follows the list below.

  • Base Model: CodeLlama-7B
  • Dataset: CodeAlpaca
  • Fine-tuning Type: Instruction tuning (SFT)
  • Task: Code generation, instruction-following for programming tasks
  • Training Framework: Unsloth (LoRA fine-tuning)
  • Quantization: Q8_0
  • Architecture: LLaMA 2 family
  • Format: GGUF (compatible with Ollama, llama.cpp)
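
The snippet below is a minimal, hypothetical sketch of the fine-tuning and export pipeline described above, using Unsloth's `FastLanguageModel` together with TRL's `SFTTrainer`. The hyperparameters, the Hugging Face dataset ID (`sahil2801/CodeAlpaca-20k`), and the Alpaca-style prompt template are assumptions for illustration, not this model's exact recipe:

```python
# Hypothetical sketch of the Unsloth LoRA fine-tuning step.
# Hyperparameters, dataset ID, and prompt format are assumptions.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 2048

# Load the base model in 4-bit for memory-efficient LoRA training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="codellama/CodeLlama-7b-hf",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# CodeAlpaca rows carry instruction / input / output fields; render
# them into a single Alpaca-style prompt string for SFT.
def format_example(row):
    prompt = f"### Instruction:\n{row['instruction']}\n\n"
    if row.get("input"):
        prompt += f"### Input:\n{row['input']}\n\n"
    prompt += f"### Response:\n{row['output']}"
    return {"text": prompt + tokenizer.eos_token}

dataset = load_dataset("sahil2801/CodeAlpaca-20k", split="train").map(format_example)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Merge the LoRA adapters and export to GGUF at Q8_0 for Ollama / llama.cpp.
model.save_pretrained_gguf("codealpaca-gguf", tokenizer, quantization_method="q8_0")
```

The `save_pretrained_gguf` call merges the adapters into the base weights and quantizes them to Q8_0, matching the format this model ships in.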

🧑‍💻 Intended Use

  • General code generation
  • Instruction following (e.g., “write a function to…”; see the usage sketch below)
  • Educational tasks and programming assistance
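
As a hedged illustration, here is how one might query the model through the official `ollama` Python client once the GGUF has been imported (e.g., with `ollama create`); the model name `codealpaca-7b` is a placeholder, not this model's published name:

```python
# Hypothetical usage sketch with the `ollama` Python client.
# "codealpaca-7b" is a placeholder model name; substitute whatever
# name the GGUF was registered under via `ollama create`.
import ollama

response = ollama.generate(
    model="codealpaca-7b",
    prompt="Write a Python function that returns the n-th Fibonacci number.",
)
print(response["response"])  # the generated completion text
```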

🚫 Limitations

  • May hallucinate code or APIs when instructions are unclear or incomplete
  • Coverage is limited by the scope and quality of the CodeAlpaca data