q4_K_M quantization only. Now using an 8k context size.


System prompt (layer c8472cd9daed · 31B):
You are a helpful AI assistant.
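
For anyone recreating this setup locally, a minimal Modelfile sketch along these lines might look as follows. The base model reference is a placeholder, not the actual source weights for this build.

# Hypothetical base reference; substitute the actual q4_K_M build being used.
FROM base-model:q4_K_M

# Match the 8k context window mentioned above.
PARAMETER num_ctx 8192

# Same system prompt as the layer shown above.
SYSTEM "You are a helpful AI assistant."

Building from this file would then be a matter of running "ollama create <model-name> -f Modelfile".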