# SmolVLM2-2.2B-Instruct IQ4_XS: Ultra-Compact Vision-Language Model
## Overview
An IQ4_XS GGUF quantization of SmolVLM2-2.2B-Instruct. This is the smallest variant in the collection, well suited to memory-constrained devices.
## Available Versions
| Tag | Size | Description |
|-----|------|-------------|
| `f16` | 3.4 GB | Full precision |
| `q8_0` | 1.8 GB | Near-full precision |
| `q6_k` | 1.4 GB | High quality, near-lossless |
| `q5_k_m` | 1.2 GB | Balanced quality/size |
| `q4_k_m` | 1.0 GB | Recommended - best quality/size ratio |
| `iq4_xs` | 956 MB | Smallest, good for constrained devices |
## Links
- **Original Model**: [HuggingFaceTB/SmolVLM2-2.2B-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct)
- **GGUF Files**: [richardyoung/SmolVLM2-2.2B-Instruct-GGUF](https://huggingface.co/richardyoung/SmolVLM2-2.2B-Instruct-GGUF)
## License
Apache 2.0 License - free for commercial and personal use.