
Recommended for transcribing and summarizing text from screenshots.

vision · tools
ollama run mirage335/Qwen-3-VL-8B-Instruct-virtuoso

Details

1 month ago

2d0dfff094b5 · 9.8GB · qwen3vl · 8.77B · Q8_0
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR US
{ "num_ctx": 12288, "num_keep": 12288, "num_predict": 8096, "temperature": 1, "t

Readme

Apache License Version 2.0

https://ollama.com/library/qwen3-vl

https://ollama.com/adelnazmy2002/Qwen3-VL-8B-Instruct

https://huggingface.co/Qwen/Qwen3-VL-8B-Instruct

https://huggingface.co/Qwen/Qwen3-VL-30B-A3B-Thinking

https://huggingface.co/Qwen/Qwen3-VL-32B-Thinking

NOTICE

Design

Recommended for transcribing and summarizing text from screenshots.

Usage

ollama_pull_virtuoso() {
    # Pull from the mirage335 namespace, re-tag under the short local name,
    # then remove the namespaced tag.
    ollama pull mirage335/"$1"
    ollama cp mirage335/"$1" "$1"
    ollama rm mirage335/"$1"
}

ollama_pull_virtuoso Qwen-3-VL-8B-Instruct-virtuoso
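As a quick sanity check after the pull/copy/remove sequence, a helper along these lines can confirm the shortened local tag actually exists. `ollama_verify_virtuoso` is a hypothetical addition, not part of this project; it assumes `ollama list` prints one `name:tag` per row after a header line.

```shell
# Hedged sketch: verify that the short local tag survived the re-tagging.
ollama_verify_virtuoso() {
    ollama list | awk 'NR > 1 {print $1}' | grep -qx "$1:latest" \
        || { echo "missing local tag: $1" >&2; return 1; }
}
```

Run `ollama_verify_virtuoso Qwen-3-VL-8B-Instruct-virtuoso` after the pull; a non-zero exit status means the tag is absent.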

Recommended environment variables. KV cache quantization "q4_0" in particular is RECOMMENDED, unless "q8_0" is needed (e.g. by Qwen-2_5-VL-7B-Instruct-virtuoso, etc). This 'Qwen-3-VL' model does NOT require "q8_0" KV cache quantization, and for that reason is very strongly recommended as a replacement.

export OLLAMA_NUM_THREADS=18
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE="q4_0"
export OLLAMA_NEW_ENGINE=true
export OLLAMA_NOHISTORY=true
export OLLAMA_NUM_PARALLEL=1
export OLLAMA_MAX_LOADED_MODELS=1

Adjust OLLAMA_NUM_THREADS to match the physical core count (and/or disable HyperThreading in firmware); oversubscribing SMT threads can cause crippling performance loss.
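One way to pick a starting value is to count physical cores directly. This is a hedged, Linux-only sketch: `count_physical_cores` is a hypothetical helper assuming util-linux `lscpu` (falling back to `nproc` from coreutils, which counts logical CPUs and so overestimates on SMT machines).

```shell
# Hedged sketch: count physical cores, excluding HyperThreading/SMT siblings,
# as a starting value for OLLAMA_NUM_THREADS.
count_physical_cores() {
    if command -v lscpu >/dev/null 2>&1; then
        # -b: online CPUs only; -p=Core,Socket: one "core,socket" pair per
        # logical CPU; sort -u collapses SMT siblings on the same core.
        lscpu -b -p=Core,Socket | grep -v '^#' | sort -u | wc -l
    else
        # Fallback: logical CPU count (includes SMT siblings).
        nproc
    fi
}

export OLLAMA_NUM_THREADS="$(count_physical_cores)"
```

Treat the result as a starting point; benchmark and adjust, since the ideal thread count also depends on memory bandwidth and background load.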

CAUTION - Preservation

Pulling the model this way relies on the ollama registry and, more generally, on the reliability of internet services, which has proven significantly fragile.

If possible, use the "Llama-3-virtuoso" project instead, which automatically caches an automatically installable backup copy.

https://github.com/mirage335-colossus/Llama-3-virtuoso
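If the Llama-3-virtuoso tooling is not an option, archiving the local Ollama model store is a rough manual substitute. This is a hedged sketch; `ollama_backup_store` is a hypothetical helper, not part of either project, and it assumes the default store at ~/.ollama/models (override via the OLLAMA_MODELS environment variable).

```shell
# Hedged sketch: archive the local Ollama model store (manifests and blobs)
# so a registry or network outage does not strand the model.
ollama_backup_store() {
    local store="${OLLAMA_MODELS:-$HOME/.ollama/models}"
    local out_dir="${1:-$HOME/ollama-model-backups}"
    mkdir -p "$out_dir"
    local archive="$out_dir/ollama-models-$(date +%Y%m%d).tar.gz"
    # -C: archive store contents with paths relative to the store root.
    tar -czf "$archive" -C "$store" . && printf '%s\n' "$archive"
}
```

Restoring is the inverse: extract the archive into the (stopped) Ollama instance's model directory with `tar -xzf`.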