https://deepneuro.ai/richard
-
qwen3-coder
The most powerful open-source coding AI - 480B parameters with Mixture of Experts architecture for exceptional code generation and understanding.
1,646 Pulls 8 Tags Updated 2 weeks ago
-
olmocr2
State-of-the-art OCR (Optical Character Recognition) vision language model based on [allenai/olmOCR-2-7B-1025](https://huggingface.co/allenai/olmOCR-2-7B-1025).
vision · 929 Pulls 1 Tag Updated 1 month ago
-
deepseek-r1-32b-uncensored
Advanced reasoning model with uncensored capabilities, suited to complex problem-solving and unrestricted conversations without refusal behavior.
816 Pulls 1 Tag Updated 2 weeks ago
-
qwen2.5-14b-1m-heretic
Ultra long-context model supporting 1M tokens with uncensored outputs, ideal for analyzing entire books, codebases, and extensive documents.
317 Pulls 1 Tag Updated 2 weeks ago
-
qwen3-14b-abliterated
Abliterated Qwen3-14B with 80% reduced refusals while preserving coherence (KL 0.98).
255 Pulls 5 Tags Updated 1 week ago
-
uigen-x-30b-moe
Unsloth-tuned Qwen3 30B mixture‑of‑experts model built for heavy coding, reasoning, and agentic workflows.
232 Pulls 6 Tags Updated 2 months ago
-
deepseek-coder-33b-heretic
State-of-the-art coding AI trained on 2T tokens with project-level understanding and no content restrictions for unrestricted code generation.
155 Pulls 1 Tag Updated 2 weeks ago
-
kat-dev-72b
A 72B parameter coding model optimized for software engineering tasks, based on the Qwen2.5-72B architecture.
98 Pulls 6 Tags Updated 1 month ago
-
openbiollm
Biomedical large language model fine-tuned from Llama 3 for medical and life-sciences question answering.
91 Pulls 1 Tag Updated 1 year ago
-
smolvlm2-2.2b-instruct
SmolVLM2-2.2B-Instruct is a lightweight yet powerful vision-language model that can understand images, read documents, and analyze video frames. At just 2.2B parameters, it runs efficiently on consumer hardware including laptops and smartphones, making it well suited for on-device applications.
73 Pulls 7 Tags Updated 1 week ago
-
calme-3.2
Calme 3.2 Instruct 78B - GGUF Q8_0 quantization of MaziyarPanahi's powerful Qwen2.5-based model
68 Pulls 1 Tag Updated 5 months ago
-
llama-medx_v32
https://huggingface.co/skumar9/Llama-medx_v3.2
tools · 65 Pulls 1 Tag Updated 1 year ago
-
kimi-vl-a3b-thinking
Kimi-VL-A3B-Thinking is a powerful vision-language model from Moonshot AI featuring extended thinking capabilities. Built on the DeepSeek2 architecture with Mixture of Experts (MoE), it excels at complex visual reasoning tasks, mathematical problem-solving, and multi-step inference.
59 Pulls 7 Tags Updated 1 week ago
-
dolphin-yi-34b-heretic
Exceptional conversational AI with 77.4 MMLU score, offering natural dialogue and multi-domain expertise without any content filtering.
52 Pulls 1 Tag Updated 2 weeks ago
-
qwen3-32b
Revolutionary model with unique thinking/non-thinking modes, delivering superior reasoning performance with seamless mode switching for any task.
21 Pulls 1 Tag Updated 2 weeks ago
-
olmo-3-7b-rlzero-math
A 7B math reasoning model from Allen AI, trained with RL-Zero to solve problems step-by-step like a skilled tutor. Supports 65K context for complex multi-step problems - runs on any laptop.
20 Pulls 7 Tags Updated 2 weeks ago
-
bfs-prover-v2-32b
ByteDance Seed’s BFS-Prover-V2 is a 32B Qwen2.5-based Lean4 tactic generator trained with multi-turn off-policy RL plus multi-agent best-first search on Mathlib, Lean GitHub, and NuminaMath.
16 Pulls 3 Tags Updated 2 months ago
-
kimi-k2
Kimi-K2 is a large language model built with a Mixture-of-Experts (MoE) architecture: sparse activation means only a subset of its roughly 1 trillion total parameters is used per input.