CodeLlama with 16k context unlocked in the Modelfile
217 Pulls Updated 11 months ago
Readme
The 13b version uses q4_K_M quantization.
The 34b version uses q3_K_M quantization.
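The extended context is set through an Ollama Modelfile. A minimal sketch of what such a Modelfile might look like (the exact base tag here is an assumption; substitute the tag this model was actually built from):

```
# Hypothetical Modelfile: base the model on a CodeLlama tag
FROM codellama:13b

# Raise the context window to 16k tokens
PARAMETER num_ctx 16384
```

A model can then be created from it with `ollama create <name> -f Modelfile` and run with `ollama run <name>`.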