- taiwanllm-7b-v2.1-chat
  🦙 Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
  377 Pulls · 1 Tag · Updated 9 months ago
- llama3-8b-chinese-chat
  🦙🦙🦙 Llama3-8B-Chinese-Chat is an instruction-tuned language model for Chinese and English users, with abilities such as roleplaying and tool use, built upon the Meta-Llama-3-8B-Instruct model.
  316 Pulls · 1 Tag · Updated 7 months ago
- mistral-7b-v0.3-chinese
  Mistral-7B-v0.3-Chinese is an instruction-tuned language model for Chinese and English users, with abilities such as roleplaying and tool use, built upon Mistral-7B-Instruct-v0.3.
  278 Pulls · 1 Tag · Updated 7 months ago
- taiwanllm-13b-v2.0-chat
  🦙 Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
  239 Pulls · 1 Tag · Updated 9 months ago
- llama3-70b-chinese-chat
  🦙🦙🦙 Llama3-70B-Chinese-Chat is an instruction-tuned language model for Chinese and English users, with abilities such as roleplaying and tool use, built upon the Meta-Llama-3-70B-Instruct model.
  190 Pulls · 1 Tag · Updated 7 months ago
- openchat
  The Llama-3-based OpenChat 3.6 (20240522), outperforming the official Llama 3 8B Instruct as well as open-source finetunes and merges.
  164 Pulls · 1 Tag · Updated 7 months ago
- sfr-iterative-dpo-llama-3-8b-r
  SFR-Iterative-DPO-LLaMA-3-8B-R is a LLaMA-3-8B model further fine-tuned with SFT and RLHF, offering good performance. The model is from the Salesforce team.
  157 Pulls · 1 Tag · Updated 7 months ago
- faro-yi-9b-dpo
  The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions.
  130 Pulls · 1 Tag · Updated 6 months ago
- mistral-7b-v0.3-chinese-chat
  🔥 Mistral-7B-v0.3-Chinese-Chat is an instruction-tuned language model for Chinese and English users, with abilities such as roleplaying and tool use, built upon mistralai/Mistral-7B-Instruct-v0.3.
  105 Pulls · 1 Tag · Updated 6 months ago
- aurora
  Aurora is the Chinese version of the MoE model, refined from the Mixtral-8x7B architecture. It unlocks the model's potential for bilingual dialogue in both Chinese and English across a wide range of open-domain topics.
  66 Pulls · 1 Tag · Updated 9 months ago
- minicpm2.6v