emsi
- **mixtral-8x22b**
  The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
  8x22B · 309 Pulls · 3 Tags · Updated 2 months ago
- **wizardlm-2-8x22**
  The original version of WizardLM 2, released April 15th.
  8x22B · 104 Pulls · 2 Tags · Updated 2 months ago
- **zephyr-orpo-141b-a35b-v0.1**
  Zephyr is a series of language models that are trained to act as helpful assistants.
  8x22B · 74 Pulls · 3 Tags · Updated 2 months ago
- **qra-13b**
  Qra is a foundation language model trained with a causal language modeling objective on a large corpus of texts.
  13B · 32 Pulls · 2 Tags · Updated 2 months ago