Saiga Mistral 7B with 128k context window
7B · 62 pulls · Updated 6 months ago

latest: 81a904694c55 · 7.8GB
model: arch llama · parameters 7.24B · quantization Q8_0 · 7.7GB
adapter: 55MB
Readme
Based on https://huggingface.co/evilfreelancer/saiga_mistral_7b_128k_lora
If you use this model with langchain, make sure to set the ai_prefix and human_prefix of your Memory classes like this:
from langchain.memory import ConversationBufferWindowMemory
memory = ConversationBufferWindowMemory(input_key='input', memory_key="history", k=5, ai_prefix='Bot', human_prefix='User')
Don’t forget to change your prompts accordingly!
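As a sketch of what "change your prompts accordingly" means: the history string that ConversationBufferWindowMemory emits will label turns with the Bot/User prefixes configured above, so the prompt template must use the same names. The template text below is an assumption for illustration, not the model's official prompt format; it is plain Python to show the resulting prompt shape without requiring langchain installed.

```python
# Hypothetical template matching the ai_prefix='Bot' / human_prefix='User'
# settings on the memory. Adjust the wording to your own system prompt.
TEMPLATE = (
    "The following is a conversation between User and Bot.\n"
    "{history}\n"
    "User: {input}\n"
    "Bot:"
)

def build_prompt(history: str, user_input: str) -> str:
    # Fill the template the same way langchain's PromptTemplate would,
    # using the {history} and {input} variables from the memory config.
    return TEMPLATE.format(history=history, input=user_input)

# The memory renders earlier turns with the configured prefixes, e.g.:
history = "User: Hello\nBot: Hi! How can I help?"
print(build_prompt(history, "What model are you?"))
```

Passing this template to a PromptTemplate with input_variables=["history", "input"] keeps the chain, the memory, and the model's expected turn labels consistent.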