A CodeLlama variant fine-tuned specifically for Python code generation. 13B params, Q6_K quant (very high quality, minimal loss).

ollama run fauxpaslife/codellama-python-13b-q6

Details

Updated 3 weeks ago · e491380c655b · 11GB · llama · 13B · Q6_K

Template: {{ .Prompt }}
Readme

See the author’s website for more details. Below is a summary of their content.

CodeLlama 13B Python (Q6_K)

Original GGUF by TheBloke | Base model by Meta

What it does

A CodeLlama variant fine-tuned specifically for Python code generation. 13B params, Q6_K quant (very high quality, minimal loss).

Usage

ollama run fauxpaslife/codellama-python-13b-q6

Prompt format:

[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:
{your_request}
[/INST]
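The prompt format above can also be applied programmatically. Below is a minimal sketch that wraps a request in the [INST] template and sends it to a local Ollama server via its /api/generate endpoint (assumptions: the server runs on the default port 11434, and the build_prompt/generate helper names are illustrative, not part of the model card):

```python
# Sketch: build the model's [INST] prompt and call a local Ollama server.
# The build_prompt/generate names are illustrative, not from the model card.
import json
import urllib.request

FENCE = "`" * 3  # literal triple backtick required by the prompt format

PROMPT_TEMPLATE = (
    "[INST] Write code to solve the following coding problem that obeys the "
    "constraints and passes the example test cases. Please wrap your code "
    "answer using " + FENCE + ":\n{request}\n[/INST]"
)

def build_prompt(request: str) -> str:
    """Wrap a plain-English request in the model's expected [INST] format."""
    return PROMPT_TEMPLATE.format(request=request)

def generate(request: str, host: str = "http://localhost:11434") -> str:
    """Send a non-streaming generate call to a running Ollama server."""
    payload = json.dumps({
        "model": "fauxpaslife/codellama-python-13b-q6",
        "prompt": build_prompt(request),
        "stream": False,  # ask for one JSON object instead of a token stream
    }).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Write a Python function to reverse a string"))
```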

Quick example

ollama run fauxpaslife/codellama-python-13b-q6 "[INST] Write a Python function to calculate fibonacci numbers recursively [/INST]"

Stats

  • Quantization: Q6_K (minimal quality loss)
  • Size: ~10.68 GB
  • Context: 16K tokens
  • Best for: Python-specific code tasks, scripts, algorithms
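Because the prompt format asks the model to wrap its answer in triple-backtick fences, a small post-processing helper can pull just the code out of a raw response. This is an illustrative sketch, not part of the model card:

```python
# Sketch: extract the first triple-backtick-fenced block from a response.
# The extract_code helper is illustrative, not from the model card.
import re
from typing import Optional

def extract_code(response: str) -> Optional[str]:
    """Return the contents of the first fenced code block, or None if absent."""
    fence = "`" * 3  # triple backtick, built here to keep the source fence-free
    pattern = fence + r"(?:python)?[ \t]*\n(.*?)" + fence
    match = re.search(pattern, response, re.DOTALL)
    return match.group(1).strip() if match else None
```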