ollama run baichuan-inc/baichuan-m2-32b:q4_k_m

Details

59b23717719f · 22GB · qwen2 · 32.8B · Q4_K_M

Template (excerpt):

{{- if .System -}}<|im_start|>system {{ .System }}<|im_end|> {{- end -}} {{- range .Messages -}}

Parameters (excerpt):

{ "stop": [ "<|im_end|>", "<|im_start|>" ], "temperature": 0.6,

Readme

Baichuan-M2-32B

This repository contains the model presented in Baichuan-M2: Scaling Medical Capability with Large Verifier System.

Links: License · Hugging Face · M2 GPTQ-4bit · Huawei Ascend 8bit

🌟 Model Overview

Baichuan-M2-32B is Baichuan AI's medically enhanced reasoning model and the second medical model released by Baichuan. Designed for real-world medical reasoning tasks, it builds on Qwen2.5-32B with an innovative Large Verifier System. Through domain-specific fine-tuning on real-world medical questions, it achieves breakthrough medical performance while maintaining strong general capabilities.

Model Features:

Baichuan-M2 incorporates three core technical innovations. First, the Large Verifier System draws on the characteristics of medical scenarios to build a comprehensive medical verification framework, including a patient simulator and multi-dimensional verification mechanisms. Second, medical domain adaptation via Mid-Training injects medical knowledge in a lightweight, efficient way while preserving general capabilities. Finally, a multi-stage reinforcement learning strategy decomposes the complex RL task into hierarchical training stages that progressively strengthen the model's medical knowledge, reasoning, and patient-interaction capabilities.

Core Highlights:

  • 🏆 World's Leading Open-Source Medical Model: Outperforms all open-source models and many proprietary models on HealthBench, achieving the medical capabilities closest to GPT-5
  • 🧠 Doctor-Thinking Alignment: Trained on real clinical cases and patient simulators, with clinical diagnostic thinking and robust patient-interaction capabilities
  • ⚡ Efficient Deployment: Supports 4-bit quantization for deployment on a single RTX 4090; the MTP version delivers 58.5% higher token throughput in single-user scenarios

📊 Performance Metrics

HealthBench Scores

| Model Name | HealthBench | HealthBench-Hard | HealthBench-Consensus |
|---|---|---|---|
| Baichuan-M2 | 60.1 | 34.7 | 91.5 |
| gpt-oss-120b | 57.6 | 30 | 90 |
| Qwen3-235B-A22B-Thinking-2507 | 55.2 | 25.9 | 90.6 |
| Deepseek-R1-0528 | 53.6 | 22.6 | 91.5 |
| GLM-4.5 | 47.8 | 18.7 | 85.3 |
| Kimi-K2 | 43 | 10.7 | 90.9 |
| gpt-oss-20b | 42.5 | 10.8 | 82.6 |

General Performance

| Benchmark | Baichuan-M2-32B | Qwen3-32B (Thinking) |
|---|---|---|
| AIME24 | 83.4 | 81.4 |
| AIME25 | 72.9 | 72.9 |
| Arena-Hard-v2.0 | 45.8 | 44.5 |
| CFBench | 77.6 | 75.7 |
| WritingBench | 8.56 | 7.90 |

Note: AIME uses max_tokens=64k, others use 32k; temperature=0.6 for all tests.
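
As a concrete illustration of these evaluation settings, here is a minimal sketch using vLLM (one of the inference engines credited below). The Hugging Face model identifier and the prompt are assumptions made for the example, not part of the benchmark harness.

```python
from vllm import LLM, SamplingParams

# Assumed Hugging Face identifier for the full-precision checkpoint;
# replace with a local path if you have the weights downloaded.
llm = LLM(model="baichuan-inc/Baichuan-M2-32B")

# Settings from the note above: temperature 0.6 and a 32k generation budget
# (raise max_tokens to 64k for AIME-style long reasoning traces).
params = SamplingParams(temperature=0.6, max_tokens=32768)

outputs = llm.generate(
    ["A 45-year-old presents with acute chest pain. Outline a differential diagnosis."],
    params,
)
print(outputs[0].outputs[0].text)
```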

🔧 Technical Features

📗 Technical Blog: Blog - Baichuan-M2

📑 Technical Report: arXiv - Baichuan-M2

Large Verifier System

  • Patient Simulator: Virtual patient system based on real clinical cases

  • Multi-Dimensional Verification: 8 dimensions including medical accuracy, response completeness, and follow-up awareness (see the sketch after this list)

  • Dynamic Scoring: Real-time generation of adaptive evaluation criteria for complex clinical scenarios
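
To make the multi-dimensional verification idea concrete, here is a hypothetical sketch of how per-dimension verifier scores could be collapsed into a single reward. The dimension names come from the list above; the scale, weights, and aggregation rule are illustrative assumptions, not Baichuan's published implementation.

```python
from dataclasses import dataclass

# Hypothetical illustration only: dimension names follow the README,
# everything else (scores in [0, 1], weighted-mean aggregation) is assumed.
@dataclass
class VerifierResult:
    scores: dict[str, float]  # one entry per rubric dimension, each in [0, 1]

    def reward(self, weights: dict[str, float] | None = None) -> float:
        """Collapse the multi-dimensional verification into one scalar reward."""
        weights = weights or {d: 1.0 for d in self.scores}
        total = sum(weights[d] * s for d, s in self.scores.items())
        return total / sum(weights[d] for d in self.scores)

# Example with three of the eight dimensions named above.
result = VerifierResult(scores={
    "medical_accuracy": 0.9,
    "response_completeness": 0.7,
    "follow_up_awareness": 0.8,
})
print(round(result.reward(), 3))  # 0.8
```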

Medical Domain Adaptation

  • Mid-Training: Medical knowledge injection while preserving general capabilities

  • Reinforcement Learning: Multi-stage RL strategy optimization

  • General-Specialized Balance: Carefully balanced medical, general, and mathematical composite training data

⚙️ Quick Start

To deploy the Q4_K_M quantized model, you can use llama.cpp or Ollama; please refer to their documentation for the specific deployment steps.
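
Once the model is being served by Ollama (for example via the `ollama run` command at the top of this page), it can also be queried programmatically. Below is a minimal sketch that uses Ollama's OpenAI-compatible endpoint; the default local port 11434 is assumed, the model tag is the one shown above, and the prompt is only an illustration.

```python
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API on localhost:11434;
# the API key is not checked but the client requires one.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="baichuan-inc/baichuan-m2-32b:q4_k_m",  # model tag from this page
    messages=[
        {"role": "user", "content": "A 62-year-old with sudden chest pain and diaphoresis: what should be ruled out first?"},
    ],
    temperature=0.6,  # recommended setting from the parameters above
)
print(resp.choices[0].message.content)
```

A llama.cpp server also exposes an OpenAI-compatible /v1/chat/completions endpoint, so the same client code can be pointed at it by changing the base URL.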

⚠️ Usage Notices

  1. Medical Disclaimer: For research and reference only; cannot replace professional medical diagnosis or treatment
  2. Intended Use Cases: Medical education, health consultation, clinical decision support
  3. Safe Use: Recommended under guidance of medical professionals

📄 License

Licensed under the Apache License 2.0. Research and commercial use permitted.

🤝 Acknowledgements

  • Base Model: Qwen2.5-32B
  • Training Framework: verl
  • Inference Engines: vLLM, SGLang
  • Quantization: AutoRound, GPTQ

Thank you to the open-source community. We are committed to continuing to contribute to and advance healthcare AI.

📞 Contact Us


Empowering Healthcare with AI, Making Health Accessible to All