This is a 32B reasoning model fine-tuned from Qwen2.5-32B-Instruct on 17K training samples. Its performance is on par with the o1-preview model on both math and coding benchmarks.

243 Pulls • Updated 3 months ago

2 Tags (digest • size • updated)
ba1e6161ae00 • 20GB • 3 months ago
217d0bed168a • 35GB • 3 months ago
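
Below is a minimal sketch of chatting with one of these tags through the Ollama Python client (`pip install ollama`). The model tag used here is a placeholder, not the published tag; substitute the actual tag or one of the digests listed above after pulling it with `ollama pull`.

```python
# Minimal sketch: send a prompt to this model via the Ollama Python client.
# MODEL_TAG is a placeholder -- replace it with the tag or digest you pulled.
import ollama

MODEL_TAG = "reasoning-32b:latest"  # placeholder name, not the real published tag

response = ollama.chat(
    model=MODEL_TAG,
    messages=[
        {"role": "user", "content": "What is the remainder when 2^10 is divided by 7?"}
    ],
)

# The assistant message contains the model's reasoning and final answer.
print(response["message"]["content"])
```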