Run Code Llama locally

August 24, 2023


Today, Meta Platforms, Inc. releases Code Llama to the public. Based on Llama 2, Code Llama provides state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction-following ability for programming tasks.

Code Llama is now available on Ollama to try!

If you haven’t already installed Ollama, please download it here.

To get started with Code Llama:

Code Llama is available in 7 billion, 13 billion (requires 16GB+ of memory), and 34 billion (requires 32GB+ of memory) parameter sizes:

ollama run codellama:7b

ollama run codellama:13b

ollama run codellama:34b

Example prompt:

In Bash, how do I list all text files in the current directory (excluding subdirectories) that have been modified in the last month?
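One correct answer to this prompt, which the model's response should resemble (its exact wording and flags may vary), is:

```shell
# List regular *.txt files in the current directory only (no subdirectories)
# modified within the last 30 days; -mtime -30 approximates "the last month".
find . -maxdepth 1 -type f -name "*.txt" -mtime -30
```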

Foundation models and Python specializations are available for code generation and completion tasks.


Foundation models:

ollama pull codellama:7b-code

ollama pull codellama:13b-code

ollama pull codellama:34b-code

Python specializations:

ollama pull codellama:7b-python

ollama pull codellama:13b-python

ollama pull codellama:34b-python