General use chat model based on Llama and Llama 2 with 2K to 16K context sizes.

Vicuna is a chat assistant model, available in three variants, each in three sizes. v1.3 is fine-tuned from Llama and has a context size of 2,048 tokens. v1.5 is fine-tuned from Llama 2 and also has a context size of 2,048 tokens. v1.5-16k is fine-tuned from Llama 2 and has a context size of 16k tokens. All three variants are fine-tuned on conversations collected from ShareGPT.
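
To take advantage of the 16k variant's larger context, the context window has to be requested at inference time. Below is a minimal Python sketch against Ollama's local REST API; the tag name `vicuna:13b-v1.5-16k` and the `num_ctx` value of 16384 are assumptions, so check the model's Tags page for the exact tag you have pulled.

```python
# Minimal sketch: query a Vicuna variant through Ollama's local /api/chat
# endpoint. Assumes the Ollama server is running on its default port and
# that the (assumed) tag vicuna:13b-v1.5-16k has already been pulled.
import requests

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "vicuna:13b-v1.5-16k",  # assumed tag for the Llama 2, 16k-context variant
        "messages": [
            {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}
        ],
        # Raise the context window so the v1.5-16k variant can actually use
        # its 16k-token context; Ollama's default is smaller.
        "options": {"num_ctx": 16384},
        "stream": False,
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["message"]["content"])
```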

Example prompts

What is the meaning of life? Explain it in 5 paragraphs.
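
A sketch of sending this example prompt through Ollama's /api/generate endpoint; the plain `vicuna` model name assumes the default tag has been pulled (for example via `ollama pull vicuna`).

```python
# Minimal sketch: send the example prompt above to a locally served Vicuna
# model via Ollama's /api/generate endpoint and print the full response.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "vicuna",  # assumes the default tag is installed
        "prompt": "What is the meaning of life? Explain it in 5 paragraphs.",
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```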

References

HuggingFace