Code generation model based on Code Llama.
34b
77.6K Pulls · Updated 12 months ago
a199982685d4 · 15GB
model
arch llama · parameters 33.7B · quantization Q3_K_S
15GB
system
You are an intelligent programming assistant.
46B
params
{
  "stop": [
    "### System Prompt:",
    "### User Message:",
    "### Assistant:"
  ]
}
69B
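These stop sequences end generation when the model begins emitting the next section header of its prompt format. If different stop behavior is needed for a particular request, the generate API described in the Readme below accepts an options object; the following is a minimal sketch, with an illustrative prompt and an illustrative stop list:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "phind-codellama",
  "prompt": "Implement a linked list in C++",
  "options": {
    "stop": ["### User Message:", "### Assistant:"]
  }
}'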
template
{{- if .System }}
### System Prompt
{{ .System }}
{{- end }}
### User Message
{{ .Prompt }}
### Assistant
108B
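For reference, this is roughly what the template expands to with the default system prompt and a sample user message (the message is only an illustration, and exact whitespace may differ); the trailing ### Assistant header marks where the model's reply begins:

### System Prompt
You are an intelligent programming assistant.
### User Message
Implement a linked list in C++
### Assistant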
license
LLAMA 2 COMMUNITY LICENSE AGREEMENT
Llama 2 Version Release Date: July 18, 2023
"Agreement" means
7.0kB
Readme
Phind CodeLlama is a code generation model based on CodeLlama 34B fine-tuned for instruct use cases. There are two versions of the model: v1 and v2. v1 is based on CodeLlama 34B and CodeLlama-Python 34B. v2 is an iteration on v1, trained on an additional 1.5B tokens of high-quality programming-related data.
Usage
CLI
Open the terminal and run:

ollama run phind-codellama
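A prompt can also be passed directly as an argument for a one-shot answer; the prompt here is only an example:

ollama run phind-codellama "Implement a linked list in C++"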
API
Example
curl -X POST http://localhost:11434/api/generate -d '{
"model": "phind-codellama",
"prompt":"Implement a linked list in C++"
}'
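By default the generate endpoint streams back a sequence of JSON objects as tokens are produced. To receive a single JSON response instead, streaming can be disabled per request; this is a minimal sketch of the same call with streaming turned off:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "phind-codellama",
  "prompt": "Implement a linked list in C++",
  "stream": false
}'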
Memory requirements
- 34b models generally require at least 32GB of RAM