q4_K_M quantization only. Now using an 8k (8192-token) context size.

7B


params · 624d5605c62b · 190B
{ "num_ctx": 8192, "stop": [ "<|end_of_turn|>", "<|end\\_of\\_turn|>", "<|end_of\\_turn|>", "GPT4User", "User 1:", "GPT4 User", "Reddit User", "GPT4:", "GPT4 Correct User" ] }