The IBM Granite 1B and 3B models are long-context mixture of experts (MoE) models designed for low-latency usage.
Granite mixture of experts models
Trained on over 10 trillion tokens of data, the Granite MoE models are ideal for deployment in on-device applications or situations requiring instantaneous inference.
Parameter Sizes
1B:
ollama run granite3.1-moe:1b
3B:
ollama run granite3.1-moe:3b
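Beyond the `ollama run` CLI, a running Ollama instance also exposes an HTTP API. The sketch below, using only the Python standard library, assumes the default local server at `http://localhost:11434` and sends a one-shot request to the `/api/generate` endpoint:

```python
import json
import urllib.request


def build_generate_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")


def generate(prompt: str, model: str = "granite3.1-moe:1b",
             host: str = "http://localhost:11434") -> str:
    """Send a non-streaming generation request to a local Ollama server
    and return the model's text response."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `generate("Summarize this meeting transcript: ...")` returns the completion as a plain string; setting `"stream": False` asks the server for a single JSON object rather than a stream of chunks.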
Supported Languages
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)
Capabilities
- Summarization
- Text classification
- Text extraction
- Question-answering
- Retrieval Augmented Generation (RAG)
- Code related tasks
- Function-calling tasks
- Multilingual dialog use cases
- Long-context tasks including long document/meeting summarization, long document QA, etc.
Granite dense models
The Granite dense models, available in 2B and 8B parameter sizes, are designed to support tool-based use cases and retrieval-augmented generation (RAG), streamlining code generation, translation, and bug fixing.
Learn more
- Developers: IBM Research
- GitHub Repository: ibm-granite/granite-language-models
- Website: Granite Docs
- Release Date: December 18th, 2024
- License: Apache 2.0