35 downloads · updated 1 month ago

Available tags (5 models):

| Tag | Size | Context window | Input | Updated |
|---|---|---|---|---|
| alpie-core:latest | 20GB | 128K | Text | 1 month ago |
| alpie-core:code | 20GB | 128K | Text | 1 month ago |
| alpie-core:reasoning | 20GB | 128K | Text | 1 month ago |
| alpie-core:research | 20GB | 128K | Text | 1 month ago |
| alpie-core:tools | 20GB | 128K | Text | 1 month ago |
Welcome to 169Pi’s Alpie-Core
Alpie-Core is one of the first 4-bit quantized reasoning models: a 32B-parameter system developed by the 169Pi team that matches or outperforms several full-precision frontier models. Built on the DeepSeek-R1-Distill-Qwen-32B backbone, it represents a major leap in efficient reasoning, sustainable AI, and democratized intelligence, and was trained on just 8 NVIDIA Hopper GPUs.
Alpie-Core redefines what is possible under limited resources by combining LoRA/QLoRA fine-tuning, groupwise-blockwise quantization, and synthetic-data distillation, achieving state-of-the-art results on reasoning, coding, and math benchmarks while reducing memory footprint by over 75%. Designed for researchers, developers, and enterprises, Alpie-Core brings frontier-level reasoning to accessible, low-compute environments.
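To make the groupwise quantization idea concrete, here is a toy sketch of symmetric 4-bit groupwise quantization. It illustrates the general technique only, not 169Pi's actual kernels; the group size and symmetric rounding scheme are assumptions for the example. Storing 4-bit integers plus one scale per group in place of 16-bit floats is where the roughly 75% memory reduction comes from.

```python
# Toy sketch of symmetric groupwise quantization (illustrative only; the
# group size and rounding policy are assumptions, not Alpie-Core's exact scheme).

def quantize_groupwise(weights, group_size=4, bits=4):
    """Quantize a flat list of floats into per-group 4-bit integers + scales."""
    qmax = 2 ** (bits - 1) - 1  # 7 for symmetric 4-bit
    scales, qgroups = [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        # One scale per group, chosen so the largest magnitude maps to qmax.
        scale = max(abs(w) for w in group) / qmax or 1.0
        scales.append(scale)
        qgroups.append([round(w / scale) for w in group])
    return scales, qgroups

def dequantize_groupwise(scales, qgroups):
    """Reconstruct approximate floats from integers and per-group scales."""
    out = []
    for scale, group in zip(scales, qgroups):
        out.extend(q * scale for q in group)
    return out
```

Each group's worst-case reconstruction error is half a quantization step (`scale / 2`), which is why small groups with their own scales preserve accuracy far better than one scale for the whole tensor.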
Get started
You can get started by downloading or running Alpie-Core with Ollama:
To pull the model:

```
ollama pull 169pi/alpie-core
```

To run it instantly:

```
ollama run 169pi/alpie-core
```
Alpie-Core can also be integrated programmatically for local or API-based workflows.
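For programmatic use, a minimal sketch against Ollama's local REST API is shown below. The `/api/generate` endpoint, default port 11434, and the `model`/`prompt`/`stream` payload fields follow Ollama's documented API; error handling and streaming are omitted for brevity.

```python
# Minimal sketch of calling a locally running Ollama server over its REST API.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port

def build_generate_request(model: str, prompt: str):
    """Build the URL and JSON payload for a non-streaming /api/generate call."""
    url = f"{OLLAMA_URL}/api/generate"
    payload = {"model": model, "prompt": prompt, "stream": False}
    return url, payload

def generate(model: str, prompt: str) -> str:
    """POST the prompt to the local Ollama server and return the response text."""
    url, payload = build_generate_request(model, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model pulled):
#   print(generate("169pi/alpie-core", "Explain 4-bit quantization in one sentence."))
```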
Benchmarks
Alpie-Core is built for structured reasoning, step-by-step logic, and factual responses. It achieves the following benchmark scores:

| Benchmark | Score |
|---|---|
| MMLU | 81.28% |
| GSM8K | 92.75% |
| BBH | 85.12% |
| SWE-Bench Verified | 57.8% |
| SciQ | 98.0% |
| HumanEval | 57.23% |
Feature Highlights
1. Technical Advancements
2. API & Integration Ready
3. Sustainable and Accessible
Quantization
License: Apache 2.0
Use it freely for research, customisation, and commercial deployment without copyleft restrictions. It is ideal for experimentation, extension, and open collaboration.
More about 169Pi