LLaVA built with LLaMA 3.1 8B as the LLM
vision
46 Pulls · Updated 11 days ago
b0abfd879655 · 17GB
model
arch llama · parameters 8.03B · quantization F16 · 16GB
projector
arch clip · parameters 312M · quantization F16 · 624MB
params · 114B
{
  "stop": [
    "<|start_header_id|>",
    "<|end_header_id|>",
    "<|eot_id|>"
  ]
}
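The stop sequences above are baked into the model's params, so Ollama applies them by default; they can also be passed explicitly per request. A minimal sketch of a request body for Ollama's `/api/generate` endpoint carrying the same stop list — the model name `llava-llama3.1` is a placeholder assumption, not this model's registry name:

```python
import json

# Placeholder model name (assumption); substitute the actual registry name.
payload = {
    "model": "llava-llama3.1",
    "prompt": "Describe this image.",
    # Same stop sequences as the params blob above; generation halts when
    # the model emits any of these tokens.
    "options": {
        "stop": ["<|start_header_id|>", "<|end_header_id|>", "<|eot_id|>"],
    },
}

# Serialized body for POST http://localhost:11434/api/generate
body = json.dumps(payload)
```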
template · 153B
{{- range .Messages }}<|start_header_id|>{{ .Role }}<|end_header_id|>
{{ .Content }}<|eot_id|>
{{-
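To illustrate what the Go template's `range` loop produces, here is a minimal Python sketch (not Ollama's own renderer) that expands a message list into the Llama 3.1 header/eot token format shown above. The example message list is an illustrative assumption:

```python
def render(messages):
    """Expand messages the way the template's range loop does:
    <|start_header_id|>{role}<|end_header_id|> then content and <|eot_id|>."""
    out = ""
    for m in messages:
        out += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n"
            f"{m['content']}<|eot_id|>"
        )
    return out

prompt = render([
    {"role": "system", "content": "You are a helpful language and vision assistant."},
    {"role": "user", "content": "Describe this image."},
])
```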
system · 191B
You are a helpful language and vision assistant. You are able to understand the visual content t