Turkcell-LLM-7b-v1 is a Turkish-optimized large language model based on the Mistral architecture. Fine-tuned with the DoRA and LoRA methods on over 5 billion tokens of Turkish data, it delivers robust natural language understanding and generation tailored to Turkish.
This model has been uploaded by RefinedNeuro and is available on the Ollama platform for local deployment across macOS, Linux, and Windows.
ollama run RefinedNeuro/Turkcell-LLM-7b-v1
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "RefinedNeuro/Turkcell-LLM-7b-v1",
  "prompt": "Türkiye'\''nin başkenti neresidir?"
}'
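For programmatic access, the same local endpoint can be called from Python. The sketch below is a minimal example, assuming the Ollama server is running on the default port 11434 and the `requests` package is installed; the prompt is the same Turkish question as above ("What is the capital of Turkey?").

```python
import requests

# Assumes a local Ollama server on the default port.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "RefinedNeuro/Turkcell-LLM-7b-v1",
    "prompt": "Türkiye'nin başkenti neresidir?",  # "What is the capital of Turkey?"
    "stream": False,  # return one JSON object instead of a token stream
}

response = requests.post(OLLAMA_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["response"])
```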
Licensed under the Apache License 2.0.
Uploaded to Ollama by RefinedNeuro.
https://huggingface.co/TURKCELL/Turkcell-LLM-7b-v1
This model is an extended version of a Mistral-based Large Language Model (LLM) for Turkish. It was trained on a cleaned Turkish raw dataset containing 5 billion tokens. Training began by applying the DoRA method to this raw dataset; Turkish instruction sets compiled from various open-source and internal resources were then used for instruction fine-tuning with the LoRA method. The adapter configurations for the two stages are listed below.
DoRA stage:
- lora_alpha: 128
- lora_dropout: 0.05
- r: 64
- target_modules: "all-linear"

LoRA (instruction fine-tuning) stage:
- lora_alpha: 128
- lora_dropout: 0.05
- r: 256
- target_modules: "all-linear"
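As an illustration only, the two configurations above could be expressed with Hugging Face's `peft` library, which supports DoRA as an option on its LoRA adapter (`use_dora=True`). This is a sketch that mirrors the reported hyperparameters; it is not the authors' published training code, and the choice of library is an assumption.

```python
from peft import LoraConfig

# Stage 1: DoRA on the raw Turkish corpus (DoRA enabled via use_dora=True).
dora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules="all-linear",
    use_dora=True,
)

# Stage 2: LoRA instruction fine-tuning with a higher adapter rank.
lora_config = LoraConfig(
    r=256,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules="all-linear",
)
```

Either config could then be attached to a base model with `peft.get_peft_model(model, config)` before training.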