trinsition/minicpmv:latest

479 Downloads · Updated 1 year ago
minicpm-llama3-2.5-8b-16-v: With only 8B parameters, it surpasses widely used proprietary models such as GPT-4V-1106, Gemini Pro, Claude 3, and Qwen-VL-Max, and greatly outperforms other Llama 3-based MLLMs.
4afe8c21f45c · 16GB
model · 16GB
arch llama · parameters 8.03B · quantization F16
params · 103B
{ "num_gpu": 12, "num_keep": 32, "num_predict": -2, "stop": [ "[<|eot_id|>"
template · 248B
{{ if .System }}<|start_header_id|>system<|end_header_id|>{{ .System }}<|eot_id|>{{ end }}{{ if .Pro
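The template is a Go text/template, shown truncated above, that wraps messages in Llama 3 chat markup. As a rough Python sketch of what the rendered prompt plausibly looks like (the system section mirrors the visible fragment; the user and assistant sections are assumptions based on the standard Llama 3 chat format, since the rest of the template is cut off):

```python
# Hedged sketch: mimics the prompt the (truncated) template appears to render,
# assuming standard Llama 3 chat markup. Only the system section is directly
# attested by the template fragment; the rest is assumed.
def render_prompt(system: str, prompt: str) -> str:
    parts = []
    if system:
        parts.append(
            f"<|start_header_id|>system<|end_header_id|>{system}<|eot_id|>"
        )
    if prompt:
        parts.append(
            f"<|start_header_id|>user<|end_header_id|>{prompt}<|eot_id|>"
        )
    # Leave the assistant header open so the model continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>")
    return "".join(parts)

print(render_prompt("You are a helpful assistant.", "What is in this picture?"))
```

The `stop` option above pairs with this format: once the model emits its end-of-turn token, generation halts and the client receives only the assistant's reply.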
Readme
No readme