PRIHLOP/PLLuM:12B-nc-instruct-Q8_0

999fde72cd3d · 13GB · llama · 12.2B · Q8_0


PLLuM: A Family of Polish Large Language Models

Overview

PLLuM is a family of large language models (LLMs) specialized in Polish and other Slavic/Baltic languages, with additional English data incorporated for broader generalization. Developed through an extensive collaboration with various data providers, PLLuM models are built on high-quality text corpora and refined through instruction tuning, preference learning, and advanced alignment techniques. These models are intended to generate contextually coherent text, offer assistance in various tasks (e.g., question answering, summarization), and serve as a foundation for specialized applications such as domain-specific intelligent assistants.
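As a quick illustration of the intended instruct-style usage, below is a minimal sketch of loading one of the instruct variants with Hugging Face transformers. The repository id used here is an assumption based on the family's naming; check the Hugging Face repository linked under References for the exact identifiers.

```python
# Minimal usage sketch with Hugging Face transformers. The repo id is an
# assumption; see the Hugging Face repository in References for exact names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CYFRAGOVPL/PLLuM-12B-nc-instruct"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Polish prompt: "Summarize in two sentences what photosynthesis is."
messages = [{"role": "user", "content": "Streść w dwóch zdaniach, czym jest fotosynteza."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```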

Key Highlights

Extensive Data Collection

We gathered large-scale, high-quality text data in Polish (around 150B tokens after cleaning and deduplication) and additional text in Slavic, Baltic, and English languages. Part of these tokens (28B) can be used in fully open-source models, including for commercial use (in compliance with relevant legal regulations).

Organic Instruction Dataset

We curated the largest Polish collection of manually created “organic instructions” (~40k prompt-response pairs, including ~3.5k multi-turn dialogs). This human-authored instruction set is based on an extensive typology of human-model interactions and covers a range of subtle aspects of supervised fine-tuning (SFT) that might be overlooked by automated approaches (including large-scale distillation of ‘strong LLMs’). It was also designed to mitigate negative linguistic transfer from non-Polish textual data used in the pre-training phase.
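To make the shape of such data concrete, here is a hypothetical record layout for one organic-instruction example. The actual PLLuM dataset schema is not given in this document, so all field names below are illustrative assumptions.

```python
# Hypothetical record layout for an "organic instruction" example.
# Field names are assumptions; the released dataset may differ.
from typing import TypedDict

class DialogTurn(TypedDict):
    role: str      # "user" or "assistant"
    content: str   # Polish-language text authored by annotators

class InstructionExample(TypedDict):
    turns: list[DialogTurn]  # single-turn pairs have exactly two turns
    category: str            # interaction type from the typology

example: InstructionExample = {
    "turns": [
        {"role": "user", "content": "Wyjaśnij, czym jest fotosynteza."},
        {"role": "assistant", "content": "Fotosynteza to proces, w którym rośliny..."},
    ],
    "category": "explanation",
}
```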

Polish Preference Corpus

We created the first Polish-language preference corpus, featuring prompts and multiple model responses manually assessed by a demographically diverse team of annotators. This dataset teaches the model not only correctness (factual and linguistic) but also balance and safety, especially for potentially controversial or adversarial topics.
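A preference corpus of this kind pairs one prompt with several ranked responses, which can then be flattened into (prompt, chosen, rejected) triples for pairwise preference learning such as DPO. The sketch below shows one possible record shape; the field names and the example content are illustrative assumptions, not the published schema.

```python
# Hypothetical shape of a preference-corpus record: one prompt, several
# model responses, and annotator rankings. Schema is an assumption.
preference_record = {
    "prompt": "Czy szczepionki są bezpieczne?",  # a potentially sensitive topic
    "responses": [
        {"text": "Tak, badania kliniczne pokazują...", "rank": 1},  # factual, balanced
        {"text": "Trudno jednoznacznie powiedzieć...", "rank": 2},  # evasive
        {"text": "Nie, ponieważ...",                   "rank": 3},  # unsafe / incorrect
    ],
}

def to_pairs(rec):
    """Flatten a ranked record into (prompt, chosen, rejected) training triples."""
    ordered = sorted(rec["responses"], key=lambda r: r["rank"])
    for better, worse in zip(ordered, ordered[1:]):
        yield rec["prompt"], better["text"], worse["text"]
```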

Evaluation Benchmarks

We developed custom benchmarks to evaluate our models on tasks relevant to Polish public administration, where PLLuM achieved top scores among all tested models. In broader Polish-language tasks, PLLuM models also attain state-of-the-art results.

Model Description

Below is a summary of the main PLLuM models, including their licenses, base models, and parameter sizes. Each model name links to a specific Hugging Face resource, while the base models and licenses link to their respective sources or license references. Note that all -nc- models are intended for non-commercial use.

Due to copyright considerations, the models with fully open licenses were continuously pretrained on approximately 30 billion tokens of Polish text, while the models under the CC-BY-NC-4.0 license used approximately 150 billion tokens. The models with the -nc and -chat suffixes were aligned with human preferences and are generally safer and more effective in dialog and general-purpose scenarios.
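Since this particular distribution is a Q8_0 quantization served through Ollama, a minimal sketch of querying it over Ollama's local HTTP API (default port 11434) might look as follows. It assumes the model has already been pulled with the tag shown at the top of this page; the prompt text is illustrative.

```python
# Minimal sketch: query this Q8_0 quantization via a local Ollama server.
# Assumes `ollama pull PRIHLOP/PLLuM:12B-nc-instruct-Q8_0` has been run.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "PRIHLOP/PLLuM:12B-nc-instruct-Q8_0",
        "prompt": "Napisz krótkie powitanie po polsku.",  # "Write a short greeting in Polish."
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```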

References

Website

Hugging Face repository