170 Downloads · Updated 10 months ago

| Name | Size | Context | Input |
|------|------|---------|-------|
| dolphin-mistral-24b:24b-instruct-q5_0 | 16GB | 32K | Text |
| dolphin-mistral-24b:24b-instruct-q8_0 | 25GB | 32K | Text |
| dolphin-mistral-24b:24b-instruct-fp16 | 47GB | 32K | Text |
GGUF conversion and quantization of the Dolphin3.0-Mistral-24B model.
Original model by Eric Hartford, Ben Gitter, BlouseJury, and Cognitive Computations.
Part of the Dolphin 3.0 Collection.