
Blazingly fast chat model for conversations and tool use

ollama run LiquidAI/lfm2.5-1.2b-instruct:q8_0
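
Beyond the CLI, a minimal sketch of a tool-use call through the Ollama Python client, assuming the `ollama` package is installed; the `get_weather` function and its schema are hypothetical, purely for illustration:

```python
import ollama

# Hypothetical tool schema for illustration; swap in your own functions.
tools = [{
    'type': 'function',
    'function': {
        'name': 'get_weather',
        'description': 'Get the current weather for a city',
        'parameters': {
            'type': 'object',
            'properties': {'city': {'type': 'string'}},
            'required': ['city'],
        },
    },
}]

response = ollama.chat(
    model='LiquidAI/lfm2.5-1.2b-instruct:q8_0',
    messages=[{'role': 'user', 'content': 'What is the weather in Paris right now?'}],
    tools=tools,
)

# The assistant message includes tool_calls when the model decides to call a tool.
print(response['message'])
```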

Details

2286ca7057af · 1.2GB

  • Architecture: lfm2
  • Parameters: 1.17B
  • Quantization: Q8_0

Template:

<|startoftext|>{{ if .System }}<|im_start|>system {{ .System }}<|im_end|> {{ end }}{{ if .Prompt }}…

System:

You are a helpful assistant trained by Liquid AI.

Parameters:

{ "repeat_penalty": 1.05, "stop": [ "<|im_start|>", "<|im_end|>", … ] }
Readme

Liquid AI

LFM2.5-1.2B-Instruct

LFM2.5 is a new family of hybrid models designed for on-device deployment. It builds on the LFM2 architecture with extended pre-training and reinforcement learning.

  • Best-in-class performance: A 1.2B model rivaling much larger models, bringing high-quality AI to your pocket.
  • Fast edge inference: 239 tok/s decode on AMD CPU and 82 tok/s on mobile NPU. Runs in under 1 GB of memory, with day-one support for llama.cpp, MLX, and vLLM.
  • Scaled training: Extended pre-training from 10T to 28T tokens and large-scale multi-stage reinforcement learning.


Find more information about LFM2.5 in our blog post.

🗒️ Model Details

LFM2.5-1.2B-Instruct is a general-purpose text-only model with the following features:

  • Number of parameters: 1.17B
  • Number of layers: 16 (10 double-gated LIV convolution blocks + 6 GQA blocks)
  • Training budget: 28T tokens
  • Context length: 32,768 tokens
  • Vocabulary size: 65,536
  • Knowledge cutoff: Mid-2024
  • Languages: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish
  • Generation parameters:
    • temperature: 0.1
    • top_k: 50
    • repetition_penalty: 1.05
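
A minimal sketch of setting the recommended generation parameters above explicitly via the Ollama Python client's `options` (the prompt is illustrative):

```python
import ollama

response = ollama.chat(
    model='LiquidAI/lfm2.5-1.2b-instruct:q8_0',
    messages=[{'role': 'user', 'content': 'List three uses of on-device language models.'}],
    options={
        'temperature': 0.1,      # recommended default
        'top_k': 50,             # recommended default
        'repeat_penalty': 1.05,  # Ollama's name for repetition_penalty
    },
)
print(response['message']['content'])
```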

We recommend using it for agentic tasks, data extraction, and RAG. It is not recommended for knowledge-intensive tasks or programming.
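
Since data extraction is a recommended use case, here is a minimal sketch using Ollama's JSON output mode; the invoice text and field names are made up for illustration:

```python
import json
import ollama

text = 'Invoice #1042 from Acme Corp, dated 2024-05-12, total $318.50.'

response = ollama.chat(
    model='LiquidAI/lfm2.5-1.2b-instruct:q8_0',
    messages=[{
        'role': 'user',
        'content': 'Extract invoice_number, vendor, date, and total as a JSON object.\n\n' + text,
    }],
    format='json',  # constrain the output to valid JSON
    options={'temperature': 0.1},
)
print(json.loads(response['message']['content']))
```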

📊 Performance

Benchmarks

We compared LFM2.5-1.2B-Instruct with relevant sub-2B models on a diverse suite of benchmarks.

| Model | GPQA | MMLU-Pro | IFEval | IFBench | Multi-IF | AIME25 | BFCLv3 |
|---|---|---|---|---|---|---|---|
| LFM2.5-1.2B-Instruct | 38.89 | 44.35 | 86.23 | 47.33 | 60.98 | 14.00 | 49.12 |
| Qwen3-1.7B (instruct) | 34.85 | 42.91 | 73.68 | 21.33 | 56.48 | 9.33 | 46.30 |
| Granite 4.0-1B | 24.24 | 33.53 | 79.61 | 21.00 | 43.65 | 3.33 | 52.43 |
| Llama 3.2 1B Instruct | 16.57 | 20.80 | 52.37 | 15.93 | 30.16 | 0.33 | 21.44 |
| Gemma 3 1B IT | 24.24 | 14.04 | 63.25 | 20.47 | 44.31 | 1.00 | 16.64 |

GPQA, MMLU-Pro, IFBench, and AIME25 follow ArtificialAnalysis’s methodology. For IFEval and Multi-IF, we report the average score across strict and loose prompt and instruction accuracies. For BFCLv3, we report the final weighted average score with a custom Liquid handler to support our tool use template.

Inference speed

LFM2.5-1.2B-Instruct delivers extremely fast CPU inference with a low memory footprint compared to similar-sized models.


In addition, we are partnering with AMD, Qualcomm, and Nexa AI to bring the LFM2.5 family to NPUs. These optimized models are available through our partners, enabling highly efficient on-device inference. The following numbers were measured with a 1K-token prefill and 100 decode tokens:

| Device | Inference Framework | Model | Prefill (tok/s) | Decode (tok/s) | Memory |
|---|---|---|---|---|---|
| Qualcomm Snapdragon® X Elite (NPU) | NexaML | LFM2.5-1.2B-Instruct | 2591 | 63 | 0.9 GB |
| Qualcomm Snapdragon® Gen4, ROG Phone9 Pro (NPU) | NexaML | LFM2.5-1.2B-Instruct | 4391 | 82 | 0.9 GB |
| Qualcomm Snapdragon® Gen4, Samsung Galaxy S25 Ultra (CPU) | llama.cpp (Q4_0) | LFM2.5-1.2B-Instruct | 335 | 70 | 719 MB |
| Qualcomm Snapdragon® Gen4, Samsung Galaxy S25 Ultra (CPU) | llama.cpp (Q4_0) | Qwen3-1.7B | 181 | 40 | 1306 MB |
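
To get a comparable local measurement, a minimal sketch using the per-request timings Ollama reports (`prompt_eval_count`, `eval_count`, and their durations); the repeated-word prompt only approximates a 1K-token prefill, and the result reflects your own hardware rather than the partner NPU figures above:

```python
import ollama

prompt = 'word ' * 1000  # rough stand-in for a ~1K-token prefill

response = ollama.chat(
    model='LiquidAI/lfm2.5-1.2b-instruct:q8_0',
    messages=[{'role': 'user', 'content': prompt + '\nContinue this text.'}],
    options={'num_predict': 100},  # cap decoding at 100 tokens
)

# Ollama reports per-phase timings in nanoseconds alongside the response.
prefill_rate = response['prompt_eval_count'] / (response['prompt_eval_duration'] / 1e9)
decode_rate = response['eval_count'] / (response['eval_duration'] / 1e9)
print(f'prefill: {prefill_rate:.0f} tok/s, decode: {decode_rate:.0f} tok/s')
```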

These capabilities unlock new deployment scenarios across various devices, including vehicles, mobile devices, laptops, IoT devices, and embedded systems.

📬 Contact