A set of Mixture of Experts (MoE) models with open weights by Mistral AI, available in 8x7b and 8x22b parameter sizes.
Parameter counts: 47B (8x7b) and 141B (8x22b) · 243.2K Pulls · Updated 3 days ago
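Any tag in the list below can be pulled or run directly with the Ollama CLI by appending it to the model name after a colon. A minimal sketch (the `mixtral` model name and the tags shown are from this page; download sizes match the table and can be large):

```shell
# Pull the default instruct variant (~26GB)
ollama pull mixtral:instruct

# Run a specific size/variant/quantization tag from the table below (~100GB)
ollama run mixtral:8x22b-instruct-v0.1-q5_K_M
```

Higher-bit quantizations (q6_K, q8_0, fp16) trade larger downloads and memory use for output quality; the k-quants (q3_K_S through q5_K_M) are the usual middle ground.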
Tag                            Size
instruct                       26GB
text                           26GB
v0.1                           80GB
8x22b-instruct                 80GB
8x22b-text                     80GB
8x22b-instruct-v0.1-q4_0       80GB
8x7b-instruct-v0.1-q4_0        26GB
8x7b-instruct-v0.1-q4_1        29GB
8x22b-instruct-v0.1-q4_1       88GB
8x7b-instruct-v0.1-q5_0        32GB
8x22b-instruct-v0.1-q5_0       97GB
8x7b-instruct-v0.1-q5_1        35GB
8x22b-instruct-v0.1-q5_1       106GB
8x22b-instruct-v0.1-q8_0       149GB
8x7b-instruct-v0.1-q8_0        50GB
8x7b-instruct-v0.1-q2_K        16GB
8x22b-instruct-v0.1-q2_K       52GB
8x22b-instruct-v0.1-q3_K_S     62GB
8x7b-instruct-v0.1-q3_K_S      20GB
8x7b-instruct-v0.1-q3_K_M      20GB
8x22b-instruct-v0.1-q3_K_M     68GB
8x7b-instruct-v0.1-q3_K_L      20GB
8x22b-instruct-v0.1-q3_K_L     73GB
8x7b-instruct-v0.1-q4_K_S      26GB
8x22b-instruct-v0.1-q4_K_S     80GB
8x7b-instruct-v0.1-q4_K_M      26GB
8x22b-instruct-v0.1-q4_K_M     86GB
8x22b-instruct-v0.1-q5_K_S     97GB
8x7b-instruct-v0.1-q5_K_S      32GB
8x7b-instruct-v0.1-q5_K_M      32GB
8x22b-instruct-v0.1-q5_K_M     100GB
8x7b-instruct-v0.1-q6_K        38GB
8x22b-instruct-v0.1-q6_K       116GB
8x22b-instruct-v0.1-fp16       281GB
8x7b-instruct-v0.1-fp16        93GB
8x22b-text-v0.1-q4_0           80GB
8x7b-text-v0.1-q4_0            26GB
8x7b-text-v0.1-q4_1            29GB
8x22b-text-v0.1-q4_1           88GB
8x22b-text-v0.1-q5_0           97GB
8x7b-text-v0.1-q5_0            32GB
8x22b-text-v0.1-q5_1           106GB
8x7b-text-v0.1-q5_1            35GB
8x22b-text-v0.1-q8_0           149GB
8x7b-text-v0.1-q8_0            50GB
8x7b-text-v0.1-q2_K            16GB
8x22b-text-v0.1-q2_K           52GB
8x22b-text-v0.1-q3_K_S         61GB
8x7b-text-v0.1-q3_K_S          20GB
8x7b-text-v0.1-q3_K_M          20GB
8x22b-text-v0.1-q3_K_M         68GB
8x22b-text-v0.1-q3_K_L         73GB
8x7b-text-v0.1-q3_K_L          20GB
8x7b-text-v0.1-q4_K_S          26GB
8x22b-text-v0.1-q4_K_S         80GB
8x22b-text-v0.1-q4_K_M         86GB
8x7b-text-v0.1-q4_K_M          26GB
8x7b-text-v0.1-q5_K_S          32GB
8x22b-text-v0.1-q5_K_S         97GB
8x7b-text-v0.1-q5_K_M          32GB
8x22b-text-v0.1-q5_K_M         100GB
8x22b-text-v0.1-q6_K           116GB
8x7b-text-v0.1-q6_K            38GB
8x7b-text-v0.1-fp16            93GB
8x22b-text-v0.1-fp16           281GB
v0.1-instruct                  80GB