
Bulgarian Language Models for Function Calling 🇧🇬

📄 Full methodology, dataset details, and evaluation results coming in the upcoming paper

Overview 🚀

TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use.

These models can interact with external tools, APIs, and databases, making them well suited for building AI agents and Model Context Protocol (MCP) applications.

They are built on top of the BgGPT models from the INSAIT Institute and extend them with function-calling capabilities.
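
For illustration, below is a minimal sketch of querying one of these models through Hugging Face transformers. The repository id, tool schema, and prompt layout are assumptions made for the example; the exact function-calling format Tucan expects is defined in the model cards and the upcoming paper.

```python
# Sketch: function-calling prompt with a Tucan model via transformers.
# Repo id and tool format below are assumptions; check the model card.
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "s-emanuilov/Tucan-2.6B-v1.0"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical tool definition passed via the system prompt.
tools = [{
    "name": "get_weather",
    "description": "Returns the current weather for a given city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

messages = [
    {"role": "system",
     "content": "You are an assistant with access to these tools:\n"
                + json.dumps(tools, ensure_ascii=False)},
    {"role": "user", "content": "Какво е времето в София?"},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)

# The model is expected to emit a structured tool call for the weather query.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```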

Motivation 🎯

Although the BgGPT models demonstrate strong Bulgarian language comprehension, they struggle to maintain the precise output formatting that consistent function calling requires; even with detailed system prompts, their performance on this task remains unreliable.

This project addresses that gap by fine-tuning BgGPT, providing the Bulgarian AI community with proper tool-use capabilities in their native language.

Models and variants 📦

Available in three sizes with full models, LoRA adapters, and quantized GGUF variants:

| Model Size | Full Model | LoRA Adapter | GGUF (Quantized) |
|------------|-----------------|--------------|------------------|
| 2.6B | Tucan-2.6B-v1.0 | LoRA | GGUF |
| 9B | Tucan-9B-v1.0 | LoRA | GGUF |
| 27B | Tucan-27B-v1.0 | LoRA | GGUF |
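
The LoRA adapters apply on top of the corresponding BgGPT base models. Here is a minimal loading sketch with peft; both repository ids are assumptions for the example, so take the exact base/adapter pairing from the model cards.

```python
# Sketch: attaching a Tucan LoRA adapter to its BgGPT base with peft.
# Repo ids are assumptions; verify them against the model cards.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "INSAIT-Institute/BgGPT-Gemma-2-9B-IT-v1.0"  # assumed base model id
adapter_id = "s-emanuilov/Tucan-9B-v1.0-LoRA"          # assumed adapter id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned function-calling adapter to the base weights.
model = PeftModel.from_pretrained(base, adapter_id)

# Optional: merge the adapter into the base weights for faster inference.
model = model.merge_and_unload()
```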

The GGUF variants are provided in q4_k_m, q5_k_m, q6_k, q8_0, and q4_0 quantizations.
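
The quantized files run locally without a GPU, for example through llama-cpp-python. The file name below is an assumption; substitute whichever quantization you downloaded.

```python
# Sketch: running a quantized Tucan GGUF locally with llama-cpp-python.
# The model_path is an assumption; point it at your downloaded file.
from llama_cpp import Llama

llm = Llama(model_path="tucan-9b-v1.0.q4_k_m.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Какво е времето в София?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

As a general rule for GGUF quantizations, q4_k_m gives the smallest footprint at some accuracy cost, while q8_0 stays closest to the full-precision weights.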