ollama run sam860/lfm2.5:1.2b-Q8_0
Uploaded in fp16 and Q8_0
Temperature: The model was tuned for very deterministic output, so start with 0.1 – 0.2. Raise to ≈0.6 only if you need more creative or exploratory answers.
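If you drive the model through Ollama's REST API instead of the CLI, the temperature can be passed per request via the `options` field. A minimal Python sketch, assuming Ollama is running on its default port (11434) and the tag matches the one pulled above:

```python
import requests

# Minimal sketch: chat with the model through the local Ollama REST API,
# starting at the recommended low temperature.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "sam860/lfm2.5:1.2b-Q8_0",
        "messages": [{"role": "user", "content": "Summarize what an edge LLM is."}],
        "stream": False,
        # Start low (0.1–0.2); raise toward ~0.6 only for more creative output.
        "options": {"temperature": 0.2},
    },
)
print(resp.json()["message"]["content"])
```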
LFM2.5‑1.2B‑Instruct – a 1.2B‑parameter model with a hybrid Liquid architecture, built for edge deployment. I’ve only uploaded the Instruct variant.
Key points:
- Tool calling: tool calls in the model’s output are wrapped in <|tool_call_start|> / <|tool_call_end|> tokens (a parsing sketch follows below).
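A minimal parsing sketch for those wrappers, assuming the raw completion still contains the special tokens; the exact payload format between them depends on the chat template, so the example payload below is hypothetical:

```python
import re

# Matches everything between the tool-call wrapper tokens (non-greedy).
TOOL_CALL_RE = re.compile(r"<\|tool_call_start\|>(.*?)<\|tool_call_end\|>", re.DOTALL)

def extract_tool_calls(raw_output: str) -> list[str]:
    """Return the text of each tool call found in the model output."""
    return [m.strip() for m in TOOL_CALL_RE.findall(raw_output)]

# Hypothetical example output from the model:
sample = 'Sure.<|tool_call_start|>[get_weather(city="Paris")]<|tool_call_end|>'
print(extract_tool_calls(sample))  # ['[get_weather(city="Paris")]']
```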