118 Pulls · Updated 1 month ago

Source: https://huggingface.co/LiquidAI/LFM2.5-350M-GGUF

ollama run jewelzufo/LFM2.5-350M-GGUF

Details

87fa81987516 · 267MB · 1 month ago

lfm2 · 422M · Q4_K_M
license
LICENSE TEXT LFM Open License v1.0 TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. D

params
{ "stop": [ "<|startoftext|>", "<|im_start|>", "<|im_end|>", "<|

template
{{ if .System }}<|startoftext|><|im_start|>system {{ .System }}<|im_end|> {{ end }}{{ if .Prompt }}<

Readme


LFM2.5-350M-GGUF

LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.

Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2.5-350M

🏃 How to run LFM2

Example usage with llama.cpp:

llama-cli -hf LiquidAI/LFM2.5-350M-GGUF
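Once the model has been pulled with the `ollama run` command above, it can also be queried programmatically through Ollama's local REST API (`/api/generate` on port 11434 by default). A minimal sketch, assuming a running Ollama server; the prompt text is illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "jewelzufo/LFM2.5-350M-GGUF"                # model name from this page

def build_payload(prompt: str) -> dict:
    # stream=False asks Ollama for a single JSON object
    # instead of a stream of per-token chunks
    return {"model": MODEL, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # the completed text comes back in the "response" field
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server):
# print(generate("Explain edge AI in one sentence."))
```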