CodeLlama with 16k context unlocked in the Modelfile

Available sizes: 13B, 34B

172 Pulls · Updated 9 months ago


The 13B version uses q4_K_M quantization.

The 34B version uses q3_K_M quantization.
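For reference, unlocking a 16k context in an Ollama Modelfile is done with the `num_ctx` parameter. A minimal sketch (the base model tag `codellama:13b` here is illustrative; adjust to the tag you pull from):

```
# Sketch of a Modelfile that raises the context window to 16k tokens
FROM codellama:13b
PARAMETER num_ctx 16384
```

Build it with `ollama create <name> -f Modelfile`, then run it as usual with `ollama run <name>`.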