ollama run maternion/Qianfan-OCR:4b-q4_K_M
Qianfan-OCR is a 4B-parameter end-to-end document intelligence model developed by the Baidu Qianfan Team. It unifies document parsing, layout analysis, and document understanding within a single vision-language architecture.
Unlike traditional multi-stage OCR pipelines that chain separate layout detection, text recognition, and language comprehension modules, Qianfan-OCR performs direct image-to-Markdown conversion and supports a broad range of prompt-driven tasks — from structured document parsing and table extraction to chart understanding, document question answering, and key information extraction — all within one model.
Qianfan-OCR adopts the multimodal bridging architecture from Qianfan-VL, consisting of three core components:
| Component | Details |
|---|---|
| Vision Encoder | Qianfan-ViT, 24 Transformer layers, AnyResolution design (up to 4K), 256 visual tokens per 448×448 tile, max 4,096 tokens per image |
| Language Model | Qwen3-4B (3.6B non-embedding), 36 layers, 2560 hidden dim, GQA (32 query / 8 KV heads), 32K context (extendable to 131K) |
| Cross-Modal Adapter | 2-layer MLP with GELU activation, projecting from 1024-dim to 2560-dim |
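For intuition, the cross-modal adapter described above can be sketched as a small NumPy forward pass. The shapes (256 tokens per tile, 1024-dim vision features, 2560-dim language hidden size) come from the table; the intermediate width, weight initialization, and the tanh GELU variant are assumptions for illustration, not the released implementation:

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU, a common choice in vision-language adapters
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def adapter_forward(visual_tokens, w1, b1, w2, b2):
    """2-layer MLP bridging vision features (1024-dim) to the LLM space (2560-dim)."""
    h = gelu(visual_tokens @ w1 + b1)
    return h @ w2 + b2

rng = np.random.default_rng(0)
# 256 visual tokens per 448x448 tile, each a 1024-dim vision feature
tokens = rng.standard_normal((256, 1024)).astype(np.float32)
w1 = (rng.standard_normal((1024, 2560)) * 0.02).astype(np.float32)
b1 = np.zeros(2560, dtype=np.float32)
w2 = (rng.standard_normal((2560, 2560)) * 0.02).astype(np.float32)
b2 = np.zeros(2560, dtype=np.float32)

out = adapter_forward(tokens, w1, b1, w2, b2)
print(out.shape)  # (256, 2560): tokens now match the language model's hidden size
```

After projection, the 256 tokens per tile are consumed by the language model like ordinary text embeddings, which is what makes the prompt-driven task control below possible.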
A key innovation is Layout-as-Thought: an optional thinking phase triggered by ⟨think⟩ tokens, where the model generates structured layout representations (bounding boxes, element types, reading order) before producing final outputs.
This mechanism serves two purposes:

1. Functional: it recovers layout analysis capability within the end-to-end paradigm, so users obtain structured layout results directly.
2. Enhancement: it provides targeted accuracy improvements on documents with complex layouts, cluttered elements, or non-standard reading orders.
When to use: Enable thinking for heterogeneous pages with mixed element types (exam papers, technical reports, newspapers). Disable for homogeneous documents (single-column text, simple forms) for better results and lower latency.
| Model | Type | Overall ↑ | Text Edit ↓ | Formula CDM ↑ | Table TEDS ↑ | Table TEDS-S ↑ | Reading-order Edit ↓ |
|---|---|---|---|---|---|---|---|
| Qianfan-OCR (Ours) | End-to-end | 93.12 | 0.041 | 92.43 | 91.02 | 93.85 | 0.049 |
| DeepSeek-OCR-v2 | End-to-end | 91.09 | 0.048 | 90.31 | 87.75 | 92.06 | 0.057 |
| Gemini-3 Pro | End-to-end | 90.33 | 0.065 | 89.18 | 88.28 | 90.29 | 0.071 |
| Qwen3-VL-235B | End-to-end | 89.15 | 0.069 | 88.14 | 86.21 | 90.55 | 0.068 |
| dots.ocr | End-to-end | 88.41 | 0.048 | 83.22 | 86.78 | 90.62 | 0.053 |
| PaddleOCR-VL 1.5 | Pipeline | 94.50 | 0.035 | 94.21 | 92.76 | 95.79 | 0.042 |
| Model | OCRBench | OCRBench v2 (en/zh) | CC-OCR (multi-lang) | CC-OCR (overall) |
|---|---|---|---|---|
| Qianfan-OCR (Ours) | 880 | 56.0 / 60.77 | 76.7 | 79.3 |
| Qwen3-VL-4B | 873 | 60.68 / 59.13 | 74.2 | 76.5 |
| MonkeyOCR | 655 | 21.78 / 38.91 | 43.8 | 35.2 |
| DeepSeek-OCR | 459 | 15.98 / 38.31 | 32.5 | 27.6 |
| Benchmark | Qianfan-OCR | Qwen3-VL-4B | Qwen3-VL-2B |
|---|---|---|---|
| DocVQA | 92.8 | 94.9 | 92.7 |
| CharXiv_DQ | 94.0 | 81.8 | 69.7 |
| CharXiv_RQ | 85.2 | 48.5 | 41.3 |
| ChartQA | 88.1 | 83.3 | 78.3 |
| ChartQAPro | 42.9 | 36.2 | 24.5 |
| ChartBench | 85.9 | 74.9 | 73.2 |
| TextVQA | 80.0 | 81.8 | 79.9 |
| OCRVQA | 66.8 | 64.7 | 59.3 |
💡 Two-stage OCR+LLM systems score 0.0 on CharXiv (both DQ and RQ), demonstrating that chart structures discarded during text extraction are essential for reasoning.
| Model | Overall | OCRBench KIE | OCRBench v2 KIE (en) | OCRBench v2 KIE (zh) | CC-OCR KIE | Nanonets KIE (F1) |
|---|---|---|---|---|---|---|
| Qianfan-OCR (Ours) | 87.9 | 95.0 | 82.8 | 82.3 | 92.8 | 86.5 |
| Qwen3-VL-235B-A22B | 84.2 | 94.0 | 85.6 | 62.9 | 95.1 | 83.8 |
| Qwen3-4B-VL | 83.5 | 89.0 | 82.1 | 71.3 | 91.6 | 83.3 |
| Gemini-3.1-Pro | 79.2 | 96.0 | 87.8 | 63.4 | 72.5 | 76.1 |
| Model | PPS (pages/sec) |
|---|---|
| Qianfan-OCR (W8A8) | 1.024 |
| Qianfan-OCR (W16A16) | 0.503 |
| MinerU 2.5 | 1.057 |
| MonkeyOCR-pro-1.2B | 0.673 |
| Dots OCR | 0.352 |
All throughput measurements were taken on a single NVIDIA A100 GPU with vLLM 0.10.2.
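Per-page latency is simply the reciprocal of PPS, which makes the quantization gain concrete: the W8A8 build processes a page in roughly half the time of W16A16.

```python
# Throughput figures from the table above (pages/sec on a single A100)
pps = {"W8A8": 1.024, "W16A16": 0.503}

# Convert to seconds per page
seconds_per_page = {name: 1.0 / rate for name, rate in pps.items()}
for name, secs in seconds_per_page.items():
    print(f"{name}: {secs:.2f} s/page")

speedup = pps["W8A8"] / pps["W16A16"]
print(f"speedup: {speedup:.2f}x")  # roughly a 2x speedup from W8A8 quantization
```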
Qianfan-OCR supports a comprehensive set of document intelligence tasks through prompt-driven control:
| Task Category | Specific Tasks |
|---|---|
| Document Parsing | Image-to-Markdown conversion, multi-page parsing, structured output (JSON/HTML) |
| Layout Analysis | Bounding box detection, element type classification (25 categories), reading order |
| Table Recognition | Complex table extraction (merged cells, rotated tables), HTML output |
| Formula Recognition | Inline and display math formulas, LaTeX output |
| Chart Understanding | Chart QA, trend analysis, data extraction from various chart types |
| Key Information Extraction | Receipts, invoices, certificates, medical records, ID cards |
| Handwriting Recognition | Chinese and English handwritten text |
| Scene Text Recognition | Street signs, product labels, natural scene text |
| Multilingual OCR | 192 languages including Latin, Cyrillic, Arabic, South/Southeast Asian, CJK scripts |
ollama run maternion/Qianfan-OCR:4b "/example1.jpg OCR the image and output in Markdown."
ollama run maternion/Qianfan-OCR:4b "/example2.png OCR the image and output in Markdown.<think>"
ollama run maternion/Qianfan-OCR:4b "/example3.png OCR the image and extract the Name, Location and Height columns."
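Beyond the CLI, the same prompts can be sent programmatically through Ollama's HTTP API (`POST /api/generate`), which accepts base64-encoded images. A minimal sketch of building the request payload; the image bytes here are a placeholder, and appending the `<think>` token is how the optional Layout-as-Thought phase is triggered:

```python
import base64
import json

def build_ocr_request(image_bytes: bytes, prompt: str, think: bool = False) -> dict:
    """Build a payload for Ollama's /api/generate endpoint.

    Appending <think> to the prompt enables the Layout-as-Thought phase,
    as in the CLI example above.
    """
    if think:
        prompt = prompt + "<think>"
    return {
        "model": "maternion/Qianfan-OCR:4b-q4_K_M",
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

# Placeholder bytes standing in for a real PNG file read from disk
payload = build_ocr_request(b"\x89PNG...", "OCR the image and output in Markdown.", think=True)
print(json.dumps(payload)[:60])
# Send with e.g. requests.post("http://localhost:11434/api/generate", json=payload)
```

Disabling `think` for simple single-column pages avoids the extra thinking tokens and keeps latency down, matching the guidance above.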
We provide a Qianfan OCR Document Intelligence skill for image and PDF understanding workflows.
It can be used by users of OpenClaw, Claude Code, Codex, and other assistants that support this skill format.
This skill packages reusable instructions, scripts, and references so the agent can automatically apply Qianfan-powered document intelligence to its document understanding tasks.
The skill is designed for visual understanding tasks over images and PDFs, and includes the execution flow needed to prepare inputs, choose the right analysis mode, and call the bundled CLI tools.
@misc{dong2026qianfanocrunifiedendtoendmodel,
title={Qianfan-OCR: A Unified End-to-End Model for Document Intelligence},
author={Daxiang Dong and Mingming Zheng and Dong Xu and Chunhua Luo and Bairong Zhuang and Yuxuan Li and Ruoyun He and Haoran Wang and Wenyu Zhang and Wenbo Wang and Yicheng Wang and Xue Xiong and Ayong Zheng and Xiaoying Zuo and Ziwei Ou and Jingnan Gu and Quanhao Guo and Jianmin Wu and Dawei Yin and Dou Shen},
year={2026},
eprint={2603.13398},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2603.13398},
}
We thank the Baidu AI Cloud team for infrastructure support, the Baige and Kunlun teams for AI infrastructure assistance, and all contributors to the Qianfan platform.
This project is licensed under the Apache License 2.0. See LICENSE for the full license text.
Some bundled third-party source files are licensed under the MIT License. See NOTICE for the file list and corresponding attribution details.