A specialized AI model fine-tuned for expertise in Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and AI Agents.
ollama pull jiakai/qwen3-14b-ai-expert-250819
ollama run jiakai/qwen3-14b-ai-expert-250819
ollama run jiakai/qwen3-14b-ai-expert-250819 "What is the difference between RAG and fine-tuning?"
ollama run jiakai/qwen3-14b-ai-expert-250819
>>> What is the MCP Protocol?
>>> How do AI agents work with tool calling?
>>> Explain the concept of retrieval-augmented generation
curl http://localhost:11434/api/generate -d '{
"model": "jiakai/qwen3-14b-ai-expert-250819",
"prompt": "Explain the architecture of transformer models",
"stream": false
}'
import requests

def chat_with_model(prompt):
    """Send a prompt to the local Ollama server and return the model's reply."""
    response = requests.post(
        'http://localhost:11434/api/generate',
        json={
            'model': 'jiakai/qwen3-14b-ai-expert-250819',
            'prompt': prompt,
            'stream': False,
        },
    )
    response.raise_for_status()
    return response.json()['response']
# Example usage
result = chat_with_model("What are the key components of a RAG system?")
print(result)
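For long generations you may prefer streaming: with `'stream': True`, Ollama's `/api/generate` endpoint emits one JSON object per line, each carrying a `response` fragment and a `done` flag on the final line. A minimal sketch of reassembling such a stream (the helper name `collect_stream` is illustrative, not part of the model or the Ollama API):

```python
import json

def collect_stream(lines):
    """Reassemble a streamed Ollama response from newline-delimited JSON.

    Each line is a JSON object with a 'response' text fragment; the final
    line sets 'done' to true.
    """
    parts = []
    for line in lines:
        if not line:
            continue  # skip keep-alive blank lines
        chunk = json.loads(line)
        parts.append(chunk.get('response', ''))
        if chunk.get('done'):
            break
    return ''.join(parts)

# With requests, pass response.iter_lines() from a POST made with
# stream=True and json={'model': ..., 'prompt': ..., 'stream': True}.
```

Printing each fragment as it arrives instead of joining them gives a token-by-token display in the terminal.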
This model excels at providing detailed, accurate responses on topics such as LLM architectures, Retrieval-Augmented Generation (RAG), and AI agents.
@misc{Qwen3-14B-AI-Expert-250819,
author = {real-jiakai},
title = {Qwen3-14B-AI-Expert-250819},
year = 2025,
url = {https://huggingface.co/GXMZU/Qwen3-14B-ai-expert-250819},
publisher = {Hugging Face}
}