bn_chat
Fine-tuned version of llama2-v0.1-instruct from BanglaLLM on Hugging Face, quantized to 4-bit (q4_k_m) using llama.cpp.
28 Pulls · 1 Tag · Updated 1 year ago
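The quantization step described above follows llama.cpp's standard GGUF workflow. The sketch below uses hypothetical paths and filenames (the listing does not specify them); it converts the fine-tuned Hugging Face checkpoint to GGUF at f16 precision, then quantizes to q4_k_m:

```shell
# Convert the fine-tuned HF checkpoint to GGUF (f16 intermediate).
# ./bn_chat-hf is a placeholder for the local model directory.
python convert_hf_to_gguf.py ./bn_chat-hf \
    --outfile bn_chat-f16.gguf --outtype f16

# Quantize the f16 GGUF down to 4-bit q4_k_m.
./llama-quantize bn_chat-f16.gguf bn_chat-q4_k_m.gguf Q4_K_M
```

q4_k_m is a "medium" 4-bit K-quant: it keeps some attention and feed-forward tensors at higher precision, trading a slightly larger file for noticeably better quality than plain 4-bit rounding.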
bn_chat_2
Fine-tuned version of llama2-v0.1-instruct from BanglaLLM on Hugging Face, quantized to 4-bit (q4_k_m) using llama.cpp. Trained on 2× NVIDIA T4 GPUs.
14 Pulls · 1 Tag · Updated 1 year ago
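Models like these are served from Ollama by wrapping the quantized GGUF in a Modelfile. The fragment below is a minimal sketch, not the actual Modelfile for these models: the GGUF filename is a placeholder, and the prompt template assumes the standard Llama 2 instruct format.

```
FROM ./bn_chat-q4_k_m.gguf
PARAMETER temperature 0.7
TEMPLATE """[INST] {{ .Prompt }} [/INST]"""
```

With that Modelfile in place, `ollama create bn_chat -f Modelfile` registers the model locally and `ollama run bn_chat` starts an interactive session (or `ollama pull` fetches a published tag directly).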