1,234 Downloads Updated 9 months ago
ollama run riven/smolvlm
curl http://localhost:11434/api/chat \
  -d '{
    "model": "riven/smolvlm",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
from ollama import chat

response = chat(
    model='riven/smolvlm',
    messages=[{'role': 'user', 'content': 'Hello!'}],
)
print(response.message.content)
import ollama from 'ollama'

const response = await ollama.chat({
  model: 'riven/smolvlm',
  messages: [{role: 'user', content: 'Hello!'}],
})
console.log(response.message.content)
1 model

Name            Size   Context  Input
smolvlm:latest  546MB  8K       Text, Image
A port of ggml-org/SmolVLM-500M-Instruct-GGUF. You may use it with:
curl http://localhost:11434/api/generate -d '{
  "model": "smolvlm",
  "prompt": "What do you see in this image?",
  "images": ["<base64_encoded_image>"]
}'
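The curl example above expects the image as a base64 string. Here is a minimal Python sketch of the same request, using only the standard library; it assumes a running Ollama server at the default http://localhost:11434 and an illustrative image path (`photo.jpg` is hypothetical):

```python
import base64
import json
import urllib.request


def encode_image(path: str) -> str:
    # Read raw image bytes and return the base64 string the API expects
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def describe_image(path: str, model: str = "smolvlm") -> str:
    # POST to the local generate endpoint; stream disabled so one JSON object comes back
    payload = {
        "model": model,
        "prompt": "What do you see in this image?",
        "images": [encode_image(path)],
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example usage (requires a running Ollama server and a local image file):
# print(describe_image("photo.jpg"))
```

The same base64 string also works in the `images` field of the `/api/chat` endpoint shown earlier.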