Q4_K_M quantization only. Now using an 8K context size.

7B


System prompt (layer c8472cd9daed · 31B): You are a helpful AI assistant.
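A minimal Modelfile sketch of how a build like this could be put together, assuming a local Q4_K_M GGUF file as the base (the actual source weights are not named on this page); only `num_ctx` and `SYSTEM` below are taken from the information shown above:

```
# Hypothetical base: a 7B model quantized to Q4_K_M (file name is a placeholder)
FROM ./model-7b.Q4_K_M.gguf

# Raise the context window to 8K tokens
PARAMETER num_ctx 8192

# System prompt layer shown on this page
SYSTEM "You are a helpful AI assistant."
```

With a Modelfile like this, the model can be created and run locally with `ollama create <name> -f Modelfile` followed by `ollama run <name>`.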