
Founder and CEO of aquif AI, where we tinker with models
-
aquif-moe-400m
Our smallest model is also a Mixture of Experts. With 0.4B active and 1.3B total parameters, it outperforms far larger models in efficiency and performance; a parameter-accounting sketch follows this entry.
tools · 96 Pulls · 1 Tag · Updated 4 months ago
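For context on the active-vs-total split above, here is a minimal sketch of MoE parameter accounting. The shared size, expert count, and per-expert size below are illustrative assumptions chosen to land near 0.4B/1.3B, not the actual aquif-moe-400m configuration.

```python
# Illustrative MoE parameter accounting (assumed numbers, not the real
# aquif-moe-400m config): only the experts routed to per token count as
# "active" parameters, so active can be far below total.

def moe_params(shared: float, n_experts: int, top_k: int, per_expert: float):
    """Return (total, active) parameter counts in billions."""
    total = shared + n_experts * per_expert  # everything stored on disk
    active = shared + top_k * per_expert     # what each token actually uses
    return total, active

# Hypothetical split that lands near 1.3B total / 0.4B active:
total, active = moe_params(shared=0.25, n_experts=8, top_k=1, per_expert=0.13)
print(f"total ≈ {total:.2f}B, active ≈ {active:.2f}B")
```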
-
aquif-3.0-preview-1
Our latest state of the art for sub-3B models, built on a completely new architecture and an all-new training stack, now with vision (a usage sketch follows this entry).
vision · tools · 82 Pulls · 1 Tag · Updated 6 months ago
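A minimal sketch of calling a vision-tagged model through the ollama Python client. The model name (including the namespace) is an assumption, and the image path is a placeholder.

```python
# Minimal vision call via the ollama Python client (pip install ollama).
# The model name/namespace is assumed, not confirmed by this listing.
import ollama

response = ollama.chat(
    model='aquif-ai/aquif-3.0-preview-1',  # hypothetical namespace/tag
    messages=[{
        'role': 'user',
        'content': 'Describe this image.',
        'images': ['photo.jpg'],  # placeholder local path
    }],
)
print(response['message']['content'])
```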
-
aquif-3.0-preview-8b
Our largest and most capable model, with GPT-4o-level performance.
tools · 78 Pulls · 1 Tag · Updated 5 months ago
-
aquif-moe-800m
Our first Mixture of Experts model, with 800 million active parameters, reaches state-of-the-art performance in the sub-1B range.
tools · 71 Pulls · 1 Tag · Updated 4 months ago
-
aquif-3.5
State-of-the-art compact LLMs from aquif, with two MoE models (2.6B-A0.6B and 12B-A4B-Think) and three dense models (3B, 7B, and 8B-Think), released on August 30, 2025; a tag-selection sketch follows this entry.
3b · 7b · 8b · 44 Pulls · 3 Tags · Updated 2 weeks ago
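Since this release ships several tags, here is a sketch of pulling and querying one of them via the ollama Python client. The exact tag string and namespace are assumptions based on the tags shown above.

```python
# Pull and query one tag of a multi-tag release via the ollama client.
# The tag 'aquif-ai/aquif-3.5:3b' is an assumption based on the tags above.
import ollama

ollama.pull('aquif-ai/aquif-3.5:3b')  # download the 3B dense variant
reply = ollama.chat(
    model='aquif-ai/aquif-3.5:3b',
    messages=[{'role': 'user', 'content': 'Summarize MoE in one sentence.'}],
)
print(reply['message']['content'])
```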
-
aqui-vl-24b
The first-ever open-weights model from Aqui Solutions, the company behind AquiGPT, and the first Mistral-based Aqui-VL model.
43 Pulls · 1 Tag · Updated 2 months ago
-
aquif-3.0-preview-2
The second iteration of our most popular model, fine-tuned for coding, reasoning, and instruction following, now with thinking capabilities (a tool-calling sketch follows this entry).
tools · 34 Pulls · 1 Tag · Updated 4 months ago
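Because this entry carries the tools tag, here is a hedged sketch of function calling through the ollama Python client. The model name, namespace, and the weather tool schema are all hypothetical.

```python
# Hedged tool-calling sketch via the ollama Python client.
# The model name/namespace and the get_weather tool are hypothetical.
import ollama

weather_tool = {
    'type': 'function',
    'function': {
        'name': 'get_weather',
        'description': 'Get the current weather for a city',
        'parameters': {
            'type': 'object',
            'properties': {'city': {'type': 'string'}},
            'required': ['city'],
        },
    },
}

response = ollama.chat(
    model='aquif-ai/aquif-3.0-preview-2',  # assumed namespace/tag
    messages=[{'role': 'user', 'content': 'What is the weather in Lisbon?'}],
    tools=[weather_tool],
)
# If the model decides to call the tool, the calls appear on the message.
print(response['message'])
```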
-
aquif-3.0-cosmos
Our first full preview release, moving us past the research-preview stage. It brings our lowest hallucination rates, state-of-the-art performance across different evals, and an efficient architecture.
tools · 27 Pulls · 1 Tag · Updated 4 months ago
-
aquif-2
The newest model by aquif AI, based on the qwen2 architecture.
tools · 10 Pulls · 1 Tag · Updated 9 months ago
-
aquif-3-preview
A lightweight, efficient, and highly capable MoE model, setting a new standard for aquif AI, available only on Hugging Face.