With the plug-and-play nature of Ollama models on limited systems, why should some models be left out of reach, and why should others be missing because of porting quirks when the base models are already here?
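Any of the entries below can be fetched and run with the standard Ollama CLI. A minimal sketch (the model name and size tag shown are placeholders; substitute any model and tag from the list):

```shell
# Pull a model from the library (replace name:tag with any entry below)
ollama pull qwen3.5-opus:4b

# Chat with it interactively
ollama run qwen3.5-opus:4b

# Or send a one-shot prompt
ollama run qwen3.5-opus:4b "Explain what quantization does to a model."
```

Smaller size tags (e.g. 4b) are the practical choice on limited systems; the larger ones need correspondingly more RAM or VRAM.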
-
qwen3.5-opus
Qwen3.5-Claude-4.6-Opus-Reasoning-Distilled-v2 (source: https://huggingface.co/Jackrong/), with vision properly merged and efficiently quantized.
vision, tools, thinking · 4b, 9b, 27b · 2,028 Pulls · 4 Tags · Updated 4 days ago
-
unsloth-deepseek-r1
Unsloth's DeepSeek-R1, converted to Ollama format from the repo: https://huggingface.co/unsloth
8b, 14b · 656 Pulls · 2 Tags · Updated 1 year ago
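Conversions like this are typically built by pointing an Ollama Modelfile at a GGUF export of the Hugging Face repo and running `ollama create`. A minimal sketch, assuming a locally downloaded GGUF (the filename and parameter value below are illustrative, not the exact ones used for this entry):

```
# Modelfile: wrap a local GGUF file as an Ollama model
FROM ./DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf

# Optional sampling default (illustrative value)
PARAMETER temperature 0.6
```

Then build and tag it with: `ollama create unsloth-deepseek-r1:8b -f Modelfile`.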
-
mistral3-unsloth
Mistral Small 3 sets a new benchmark in the "small" large language model category, below 70B parameters. This is the Unsloth version of that model.
tools · 177 Pulls · 1 Tag · Updated 1 year ago
-
Jan-code
Built on top of Jan-v3-4B-base-instruct; Jan-code is designed to be a practical coding model you can run locally and iterate on quickly—useful for everyday code tasks and as a lightweight “worker” model in agentic workflows.
tools · 81 Pulls · 4 Tags · Updated 1 week ago
-
Jan-v2-VL
Jan-v2-VL is a family of 8B-parameter vision–language models for long-horizon, multi-step tasks in real software environments (e.g., browsers and desktop apps).
vision, tools, thinking · 56 Pulls · 10 Tags · Updated 2 weeks ago
-
Jan-v3
Jan-v3 is a compact 4B-parameter model that leverages distillation from a larger teacher to maintain strong general performance and broad applicability while avoiding typical capacity limitations.
tools · 35 Pulls · 4 Tags · Updated 3 weeks ago
-
Mistral-Small-3.1
The Unsloth version of Mistral Small 3.1.
tools · 35 Pulls · 1 Tag · Updated 11 months ago