Ollama Model: yasserrmd/GLM4.7-Distill-LFM2.5-1.2B
Base Model: LiquidAI LFM2.5-1.2B
Distilled From: GLM-4.7
Quantization: 8-bit (Ollama)
License: Apache-2.0
This is an 8-bit quantized Ollama build of GLM4.7-Distill-LFM2.5-1.2B, a compact instruction-following model distilled from GLM-4.7 into the LiquidAI LFM2.5 architecture.
The goal of this model is practical local usage: fast startup, low memory footprint, and stable instruction adherence on CPU-only or constrained environments.
Original model and training details are available on Hugging Face: https://huggingface.co/yasserrmd/GLM4.7-Distill-LFM2.5-1.2B
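The 8-bit quantization mentioned above trades a little precision for a much smaller memory footprint. A minimal sketch of the underlying idea (symmetric per-tensor quantization; illustrative only, not Ollama's exact scheme):

```python
def quantize_int8(values):
    """Symmetric 8-bit quantization: map floats to integers in [-127, 127]."""
    scale = max(abs(v) for v in values) / 127  # one float stored per tensor
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Recover approximate floats from the 8-bit integers."""
    return [v * scale for v in quantized]

weights = [0.42, -1.3, 0.07, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)  # close to the originals, at roughly 1/4 the storage of float32
```

Each weight is stored as a single byte plus a shared scale factor, which is why an 8-bit build loads faster and fits in far less RAM than the full-precision original.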
ollama pull yasserrmd/GLM4.7-Distill-LFM2.5-1.2B
ollama run yasserrmd/GLM4.7-Distill-LFM2.5-1.2B
ollama run yasserrmd/GLM4.7-Distill-LFM2.5-1.2B "Explain quantization in simple terms."
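Beyond the CLI, Ollama also serves a local REST API (by default at http://localhost:11434). A minimal sketch of building a request body for the `/api/generate` endpoint with this model; actually sending it requires a running Ollama server, and the temperature value here is illustrative:

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(prompt: str, temperature: float = 0.2) -> str:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    payload = {
        "model": "yasserrmd/GLM4.7-Distill-LFM2.5-1.2B",
        "prompt": prompt,
        "stream": False,  # return a single JSON object instead of a token stream
        "options": {"temperature": temperature},
    }
    return json.dumps(payload)

body = build_request("Explain quantization in simple terms.")
# POST `body` to OLLAMA_URL with any HTTP client, e.g.:
#   curl http://localhost:11434/api/generate -d "$body"
```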
The model's default sampling settings work well for most instruction and reasoning tasks. Lower temperatures are recommended when deterministic, concise outputs are needed.
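One way to pin a lower temperature is to derive a local variant with a Modelfile (parameter values here are illustrative, not tuned recommendations):

```
FROM yasserrmd/GLM4.7-Distill-LFM2.5-1.2B
PARAMETER temperature 0.2
PARAMETER top_p 0.9
```

Build and run it with `ollama create my-glm-distill -f Modelfile` followed by `ollama run my-glm-distill`.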
This model is not designed for safety-critical or medical/legal decision-making.
Apache-2.0. See the Hugging Face model card for full license details.