enstazao/qalb:8b-instruct-fp16

ollama run enstazao/qalb:8b-instruct-fp16

Details

f6da7aeb8be2 · 16GB · llama · 8.03B parameters · F16

System prompt (Urdu, translated; truncated on the model page): "You are a helpful and harmless artificial intelligence assistant. You …"

Parameters (truncated): { "repeat_penalty": 1.1, "stop": [ "<|start_header_id|>", "<|end_header_id|> …

Template (truncated): {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Pr…

Readme

Qalb-1.0-8B-Instruct (FP16)

Qalb is the state-of-the-art Urdu Large Language Model (LLM), designed for 230M+ speakers. It was developed through systematic continued pre-training on 1.97 billion Urdu tokens, followed by rigorous instruction fine-tuning to ensure high performance in native Urdu script.

Performance Benchmarks

Qalb-1.0-8B outperforms existing models on most linguistic and reasoning tasks:

| Task | Qalb (Ours) | Alif-1.0-Instruct | LLaMA-3.1-8B-Instruct |
|---|---|---|---|
| Overall Score | 90.34 | 87.1 | 45.7 |
| Translation | 94.41 | 89.3 | 58.9 |
| Classification | 96.38 | 93.9 | 61.4 |
| Sentiment Analysis | 95.79 | 94.3 | 54.3 |
| Ethics | 90.83 | 85.7 | 27.3 |
| Reasoning | 88.59 | 83.5 | 45.6 |
| QA (Question Answering) | 80.40 | 73.8 | 30.5 |
| Generation | 85.97 | 90.2 | 42.8 |

Usage Guide

Terminal (Bash)

To run the model directly in your terminal:

```shell
ollama run enstazao/qalb:8b-instruct-fp16
```

Python API

Use the ollama Python library for integration:

```python
import ollama

response = ollama.chat(model='enstazao/qalb:8b-instruct-fp16', messages=[
  {
    'role': 'user',
    'content': 'پاکستان کا قومی کھیل کیا ہے؟',
  },
])

print(response['message']['content'])
```
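Each `ollama.chat` call is stateless, so a multi-turn conversation must resend the full message history every turn. A minimal sketch (the follow-up question is illustrative, and a running Ollama server with the model pulled is assumed):

```python
def build_history(turns):
    """Convert (role, content) pairs into the messages list ollama.chat expects."""
    return [{'role': role, 'content': content} for role, content in turns]

history = build_history([
    ('user', 'پاکستان کا قومی کھیل کیا ہے؟'),
])

try:
    import ollama

    # First turn: ask the question.
    reply = ollama.chat(model='enstazao/qalb:8b-instruct-fp16', messages=history)
    # Append the model's answer to the history before asking a follow-up.
    history.append(reply['message'])
    history.append({'role': 'user', 'content': 'اس کے بارے میں مزید بتائیں۔'})
    reply = ollama.chat(model='enstazao/qalb:8b-instruct-fp16', messages=history)
    print(reply['message']['content'])
except Exception as exc:
    # Reached when the ollama library is missing or the server is not running.
    print(f'Ollama not reachable: {exc}')
```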

Curl (REST API)

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "enstazao/qalb:8b-instruct-fp16",
  "prompt": "پاکستان کا قومی کھیل کیا ہے؟",
  "stream": false
}'
```
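The same endpoint can also be called from Python's standard library with no extra dependencies. A sketch assuming the default Ollama port 11434; `generate_payload` is a hypothetical helper that builds the request body shown in the curl example:

```python
import json
import urllib.request

def generate_payload(model, prompt, stream=False):
    """Build the JSON body the /api/generate endpoint expects."""
    return json.dumps({'model': model, 'prompt': prompt, 'stream': stream}).encode('utf-8')

payload = generate_payload('enstazao/qalb:8b-instruct-fp16', 'پاکستان کا قومی کھیل کیا ہے؟')

try:
    req = urllib.request.Request(
        'http://localhost:11434/api/generate',
        data=payload,
        headers={'Content-Type': 'application/json'},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        # With "stream": false the server returns one JSON object;
        # the generated text is in its "response" field.
        print(json.loads(resp.read())['response'])
except OSError as exc:
    # Reached when no Ollama server is listening on localhost:11434.
    print(f'Ollama not reachable: {exc}')
```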

Limitations & Bias

While Qalb has been trained to be helpful and harmless, it may still reflect biases present in the training data. Users should fact-check critical information, especially in medical, legal, or religious contexts.

Citation

If you use Qalb in your research, please cite:

@article{qalb2025,
  title={Qalb: Largest State-of-the-Art Urdu Large Language Model for 230M Speakers with Systematic Continued Pre-training},
  author={Hassan, Muhammad Taimoor and Ahmed, Jawad and Awais, Muhammad},
  journal={arXiv preprint arXiv:2601.08141},
  year={2026},
  eprint={2601.08141},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2601.08141},
  doi={10.48550/arXiv.2601.08141}
}