ollama run baichuan-inc/baichuan-m2-32b:q4_k_m
59b23717719f · 22GB
This repository contains the model presented in Baichuan-M2: Scaling Medical Capability with Large Verifier System.
Baichuan-M2-32B is Baichuan AI's medical-enhanced reasoning model, the second medical model released by Baichuan. Designed for real-world medical reasoning tasks, this model builds upon Qwen2.5-32B with an innovative Large Verifier System. Through domain-specific fine-tuning on real-world medical questions, it achieves breakthrough medical performance while maintaining strong general capabilities.
Model Features:
Baichuan-M2 incorporates three core technical innovations:
- Large Verifier System: a comprehensive medical verification framework designed around the characteristics of medical scenarios, including patient simulators and multi-dimensional verification mechanisms.
- Medical domain adaptation via Mid-Training: lightweight, efficient adaptation to the medical domain while preserving general capabilities.
- Multi-stage reinforcement learning: complex RL tasks decomposed into hierarchical training stages that progressively enhance the model's medical knowledge, reasoning, and patient interaction capabilities.
Core Highlights:
- World's Leading Open-Source Medical Model: outperforms all open-source models and many proprietary models on HealthBench, achieving medical capabilities closest to GPT-5
- Doctor-Thinking Alignment: trained on real clinical cases and patient simulators, with clinical diagnostic thinking and robust patient interaction capabilities
- Efficient Deployment: supports 4-bit quantization for deployment on a single RTX 4090, with 58.5% higher token throughput in the MTP version for single-user scenarios
Medical performance on HealthBench:

| Model Name | HealthBench | HealthBench-Hard | HealthBench-Consensus |
|---|---|---|---|
| Baichuan-M2 | 60.1 | 34.7 | 91.5 |
| gpt-oss-120b | 57.6 | 30.0 | 90.0 |
| Qwen3-235B-A22B-Thinking-2507 | 55.2 | 25.9 | 90.6 |
| DeepSeek-R1-0528 | 53.6 | 22.6 | 91.5 |
| GLM-4.5 | 47.8 | 18.7 | 85.3 |
| Kimi-K2 | 43.0 | 10.7 | 90.9 |
| gpt-oss-20b | 42.5 | 10.8 | 82.6 |
General performance compared with Qwen3-32B (Thinking):

| Benchmark | Baichuan-M2-32B | Qwen3-32B (Thinking) |
|---|---|---|
| AIME24 | 83.4 | 81.4 |
| AIME25 | 72.9 | 72.9 |
| Arena-Hard-v2.0 | 45.8 | 44.5 |
| CFBench | 77.6 | 75.7 |
| WritingBench | 8.56 | 7.90 |
Note: AIME uses max_tokens=64k, others use 32k; temperature=0.6 for all tests.
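For reference, these sampling settings can be reproduced against a locally served copy of this model through Ollama's standard HTTP API. This is a minimal sketch assuming a default Ollama server on localhost:11434; the prompt is purely illustrative:

```python
import requests

# Minimal sketch: query the locally served Q4_K_M model through Ollama's
# HTTP API, mirroring the evaluation settings above. num_predict caps
# generation at 32k tokens (64k was used for AIME).
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "baichuan-inc/baichuan-m2-32b:q4_k_m",
        "messages": [
            {"role": "user", "content": "Explain first-line management of type 2 diabetes."}
        ],
        "options": {"temperature": 0.6, "num_predict": 32768},
        "stream": False,
    },
)
print(response.json()["message"]["content"])
```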
Technical Blog: Blog - Baichuan-M2
Technical Report: arXiv - Baichuan-M2
- Patient Simulator: virtual patient system based on real clinical cases
- Multi-Dimensional Verification: 8 dimensions including medical accuracy, response completeness, and follow-up awareness (a toy aggregation sketch follows this list)
- Dynamic Scoring: real-time generation of adaptive evaluation criteria for complex clinical scenarios
- Mid-Training: medical knowledge injection while preserving general capabilities
- Reinforcement Learning: multi-stage RL strategy optimization
- General-Specialized Balance: carefully balanced medical, general, and mathematical composite training data
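To make the verification framework concrete, here is a hypothetical sketch of how per-dimension scores might be collapsed into a single reward signal. The dimension names, weights, and weighted-sum aggregation are illustrative assumptions, not Baichuan's released verifier:

```python
from dataclasses import dataclass

# Hypothetical illustration of multi-dimensional verification. Three of the
# eight dimensions are shown; names, weights, and the weighted-sum rule are
# assumptions for illustration only.
@dataclass
class VerifierScore:
    medical_accuracy: float       # 0..1, factual correctness of the response
    response_completeness: float  # 0..1, coverage of the clinical question
    followup_awareness: float     # 0..1, asks for missing patient information

def aggregate(score: VerifierScore, weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Collapse per-dimension scores into one scalar reward for RL training."""
    dims = (
        score.medical_accuracy,
        score.response_completeness,
        score.followup_awareness,
    )
    return sum(w * d for w, d in zip(weights, dims))

# Example: a factually solid answer that fails to ask follow-up questions.
print(round(aggregate(VerifierScore(0.9, 0.8, 0.3)), 2))  # 0.75
```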
To deploy the Q4_K_M quantized model, you can use llama.cpp or Ollama; see their documentation for detailed deployment steps.
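As a minimal usage sketch once the model has been pulled with the `ollama run` command above (this assumes the official `ollama` Python client and a running local Ollama server; the prompt is illustrative):

```python
# Requires `pip install ollama` and the model pulled via the command above.
import ollama

# Stream a response from the locally deployed Q4_K_M model.
stream = ollama.chat(
    model="baichuan-inc/baichuan-m2-32b:q4_k_m",
    messages=[{"role": "user", "content": "What are common causes of chronic fatigue?"}],
    stream=True,
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
```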
Licensed under the Apache License 2.0. Research and commercial use permitted.
Empowering Healthcare with AI, Making Health Accessible to All