Updated 3 months ago
20247f89c0bc · 4.9GB

model
arch llama · parameters 8.03B · quantization Q4_K_M · 4.9GB
params (87B, truncated)
{
  "stop": ["<|im_start|>", "<|im_end|>"],
  "temperature": 0,
  "top_p
template (161B, truncated)
{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>us
system (233B, truncated)
You are a helpful AI assistant. Please answer the user's queries kindly and accura
Readme
Information
- Original model: MLP-KTLim/llama-3-Korean-Bllossom-8B-gguf-Q4_K_M
- Site: https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B-gguf-Q4_K_M/tree/main
- File: 4.92 GB gguf file (download)
- Tool calls: No
Process
- 1. Download: huggingface-cli download MLP-KTLim/llama-3-Korean-Bllossom-8B-gguf-Q4_K_M --local-dir /home/sguser/ollamaModels/MLP-KTLim
- 2. Make a Modelfile
- 3. Create: ollama create llama3korean8B4QKM -f Modelfile
- 4. Run: ollama run llama3korean8B4QKM
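The Modelfile from step 2 is not shown on this page. Below is a minimal sketch reconstructed from the truncated params/template/system snippets above; the gguf filename and the untruncated parts of the template and system prompt are assumptions, so adjust them to match the downloaded file and the model's actual chat format:

```
# Sketch of a Modelfile for step 2 (filename below is an assumption --
# point FROM at the gguf file actually downloaded in step 1)
FROM /home/sguser/ollamaModels/MLP-KTLim/llama-3-Korean-Bllossom-8B-Q4_K_M.gguf

# Template mirrors the truncated snippet shown above; the user/assistant
# turns are assumed to follow the same <|im_start|>/<|im_end|> pattern
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""

# English paraphrase of the (truncated) Korean system prompt shown above
SYSTEM """You are a helpful AI assistant. Please answer the user's queries kindly and accurately."""

# Matches the params shown above; top_p is truncated on this page, so it
# is left out here rather than guessed
PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>
PARAMETER temperature 0
```

With this file saved as `Modelfile` in the current directory, steps 3 and 4 (`ollama create llama3korean8B4QKM -f Modelfile`, then `ollama run llama3korean8B4QKM`) work as written.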