miti99/gte-qwen2:latest


https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct

ollama run miti99/gte-qwen2

Details

13 hours ago

e3a4005a7764 · 3.6GB · qwen2 · 1.78B · F16

Template:
{{- range .Messages }}<|im_start|>{{ .Role }} {{ .Content }}<|im_end|> {{ end }}<|im_start|>assistant

Parameters:
{ "num_ctx": 8192, "stop": [ "<|im_start|>", "<|im_end|>" ] }

Readme

A customized build of Alibaba's gte-Qwen2-1.5B-instruct embedding model, converted to GGUF using llama.cpp.

How I built it

# 1. Convert the Hugging Face model to GGUF (F16)
python convert_hf_to_gguf.py ./gte-qwen2-1.5b --outfile ./gte-qwen2-1.5b-f16.gguf --outtype f16
# 2. Remove the qwen2.pooling_type and qwen2.attention.causal metadata keys
python -m gguf.scripts.gguf_new_metadata ./gte-qwen2-1.5b-f16.gguf ./gte-qwen2-1.5b-f16-fixed.gguf --remove-metadata qwen2.pooling_type --remove-metadata qwen2.attention.causal
# 3. Create the Ollama model and push it to the registry
ollama create miti99/gte-qwen2 -f Modelfile
ollama push miti99/gte-qwen2

With this Modelfile:

FROM ./gte-qwen2-1.5b-f16-fixed.gguf

# Required for embedding models
PARAMETER num_ctx 8192
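
Once the model is pulled, embeddings can be requested over Ollama's HTTP API. The sketch below is an illustrative example, not part of the original build steps: it assumes Ollama's default `/api/embeddings` endpoint on port 11434, and `cosine_similarity` is a generic helper (my addition) for comparing the returned vectors.

```python
import json
import math
import urllib.request

# Ollama's default local endpoint (assumed; adjust host/port if yours differs)
OLLAMA_URL = "http://localhost:11434/api/embeddings"

def build_request(text, model="miti99/gte-qwen2"):
    """Payload shape expected by Ollama's embeddings API."""
    return {"model": model, "prompt": text}

def embed(text, model="miti99/gte-qwen2", url=OLLAMA_URL):
    """POST a prompt and return the embedding vector (needs a running Ollama server)."""
    body = json.dumps(build_request(text, model)).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
```

For example, `cosine_similarity(embed("hello"), embed("hi"))` should score semantically close texts near 1.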