Saiga Mistral 7B with 128k context window

7B

62 Pulls · Updated 6 months ago

81a904694c55 · 7.8GB

model: llama · 7.24B parameters · Q8_0 · adapter

Readme

Based on https://huggingface.co/evilfreelancer/saiga_mistral_7b_128k_lora

If you use this model with LangChain, make sure to set the ai_prefix and human_prefix of your Memory class like this:

from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(input_key='input', memory_key="history", k=5, ai_prefix='Bot', human_prefix='User')

Don’t forget to change your prompt templates accordingly, so they use the same Bot/User prefixes the memory class emits.
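As a minimal sketch of what "accordingly" means here (no LangChain dependency, names are illustrative): the prompt you send to the model should carry the same `User:`/`Bot:` prefixes that the memory class writes into its history string.

```python
# Hypothetical sketch: assemble a prompt using the same 'User'/'Bot'
# prefixes configured in the memory class above. The function name and
# the sample history are illustrative, not part of the model card.

def build_prompt(history: str, user_input: str) -> str:
    """history already contains prior turns prefixed 'User: ...' /
    'Bot: ...' by the memory class; append the new turn and leave a
    trailing 'Bot:' for the model to complete."""
    return f"{history}\nUser: {user_input}\nBot:"

history = "User: Привет!\nBot: Привет! Чем могу помочь?"
prompt = build_prompt(history, "Расскажи анекдот")
```

If the prefixes in the prompt template and the memory class disagree, the model sees inconsistent speaker labels and the conversation quality degrades.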